arXiv:2307.11952 | Pathology-and-genomics Multimodal Transformer for Survival Outcome Prediction
Authors: Kexin Ding, Mu Zhou, Dimitris N. Metaxas, Shaoting Zhang
Published: 2023-07-22T00:59:26Z
Link: http://arxiv.org/abs/2307.11952v1

# Pathology-and-genomics Multimodal Transformer for Survival Outcome Prediction
###### Abstract
Survival outcome assessment is challenging and inherently associated with multiple clinical factors (e.g., imaging and genomics biomarkers) in cancer. Enabling multimodal analytics promises to reveal novel predictive patterns of patient outcomes. In this study, we propose a multimodal transformer (**PathOmics**) integrating pathology and genomics insights into colon-related cancer survival prediction. We emphasize the unsupervised pretraining to capture the intrinsic interaction between tissue microenvironments in gigapixel whole slide images (WSIs) and a wide range of genomics data (e.g., mRNA-sequence, copy number variant, and methylation). After the multimodal knowledge aggregation in pretraining, our task-specific model finetuning could expand the scope of data utility applicable to both multi- and single-modal data (e.g., image- or genomics-only). We evaluate our approach on both TCGA colon and rectum cancer cohorts, showing that the proposed approach is competitive and outperforms state-of-the-art studies. Finally, our approach can utilize a limited number of finetuning samples, enabling data-efficient analytics for survival outcome prediction. The code is available at [https://github.com/Cassie07/PathOmics](https://github.com/Cassie07/PathOmics).
Keywords: Histopathological image analysis, Multimodal learning, Cancer diagnosis, Survival prediction
## 1 Introduction
Cancers are a group of heterogeneous diseases reflecting deep interactions between pathological and genomics variants in tumor tissue environments [24]. Different cancer genotypes are translated into pathological phenotypes that can be assessed by pathologists [24]. High-resolution pathological images have proven uniquely beneficial for improving prognostic biomarker prediction by exploring tissue microenvironmental features [18, 10, 1, 12, 25, 13]. Meanwhile, genomics data (e.g., mRNA sequencing) are highly relevant to the regulation of cancer progression [29, 3]. For instance, genome-wide molecular portraits are crucial for cancer prognostic stratification and targeted therapy [16]. Despite their importance, few efforts jointly exploit the multimodal value between cancer image morphology and molecular biomarkers. In a broader context, assessing cancer prognosis is essentially a multimodal task in association with pathological and genomics findings. Therefore, synergizing multimodal data could deepen a cross-scale understanding towards improved patient prognostication.
The major goal of multimodal data learning is to extract complementary contextual information across modalities [4]. Supervised studies [5, 7, 6] have allowed multimodal data fusion between image and non-image biomarkers. For instance, the Kronecker product is able to capture the interactions between WSIs and genomic features for survival outcome prediction [5, 7]. Alternatively, the co-attention transformer [6] can capture genotype-phenotype interactions for prognostic prediction. Yet these supervised approaches are limited in feature generalizability and depend heavily on data labeling. To alleviate the labeling requirement, unsupervised learning evaluates the intrinsic similarity among multimodal representations for data fusion. For example, integrating image, genomics, and clinical information can be achieved via a predefined unsupervised similarity evaluation [4]. To broaden the data utility, the study [28] leverages the pathology and genomics knowledge of a teacher model to guide a pathology-only student model for glioma grading. From these analyses, it is increasingly recognized that the lack of flexibility in model finetuning limits the data utility of multimodal learning. Meanwhile, the size of multimodal medical datasets is not as large as that of natural vision-language datasets, which necessitates data-efficient analytics to address the training difficulty.
To tackle the above challenges, we propose a pathology-and-genomics multimodal framework (i.e., **PathOmics**) for survival prediction (Fig 1). We summarize our contributions as follows. **(1) Unsupervised multimodal data fusion.** Our unsupervised pretraining exploits the intrinsic interaction between morphological and molecular biomarkers (Fig 1a). To overcome the modality heterogeneity gap between images and genomics, we project the multimodal embeddings into the same latent space by evaluating the similarity among them. In particular, the pretrained model offers a unique means of extracting cross-modal patterns through similarity-guided modality fusion. **(2) Flexible modality finetuning.** A key contribution of our multimodal framework is that it combines the benefits of unsupervised pretraining and supervised finetuning data fusion (Fig 1b). As a result, the task-specific finetuning broadens the dataset usage (Fig 1b and c), which is not limited by data modality (e.g., both single- and multi-modal data). **(3) Data efficiency with limited data size.** Our approach can achieve comparable performance even with less finetuning data (e.g., using only 50% of the finetuning data) when compared with using the entire finetuning dataset.
## 2 Methodology
**Overview.** Fig 1 illustrates our multimodal transformer framework. Our method includes an unsupervised multimodal data fusion pretraining and a supervised flexible-modal finetuning. From Fig 1a, in the pretraining, our unsupervised data fusion aims to capture the interaction pattern of image and genomics features.
Overall, we formulate the objective of multimodal feature learning as converting image patches and tabular genomics data into group-wise embeddings and then extracting multimodal patient-wise embeddings. More specifically, we construct group-wise representations for both the image and genomics modalities. For the image feature representation, we randomly divide image patches into groups; meanwhile, for each type of genomics data, we construct groups of genes depending on their clinical relevance [22]. Next, as shown in Fig 1b and c, our approach enables three finetuning modes (i.e., multimodal, image-only, and genomics-only) for prognostic prediction, expanding the downstream data utility of the pretrained model.
#### 2.0.1 Group-wise Image and Genomics Embedding.
Figure 1: Workflow overview of the pathology-and-genomics multimodal transformer (**PathOmics**) for survival prediction. In (a), we show the pipeline of extracting image and genomics feature embeddings via unsupervised pretraining towards multimodal data fusion. In (b) and (c), our supervised finetuning scheme can flexibly handle multiple types of data for prognostic prediction. With the multimodal pretrained model backbones, both multi- and single-modal data are applicable for model finetuning.

We define the group-wise genomics representation by referring to \(N=8\) major functional groups obtained from [22]. Each group contains a list of well-defined molecular features related to cancer biology, including transcription factors, tumor suppression, cytokines and growth factors, cell differentiation markers, homeodomain proteins, translocated cancer genes, and protein kinases. The group-wise genomics representation is defined as \(G_{n}\in\mathbb{R}^{1\times d_{g}}\), where \(n\in N\) and \(d_{g}\) is the attribute dimension of each group, which may vary across groups. To better extract the high-dimensional group-wise genomics representation, we use a Self-Normalizing Network (SNN) together with scaled exponential linear units (SeLU) and Alpha Dropout for feature extraction, generating the group-wise embedding \(G_{n}\in\mathbb{R}^{1\times 256}\) for each group.
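As a concrete illustration, the following is a minimal sketch of such a group-wise genomics encoder; the layer sizes, depth, and dropout rate are our assumptions rather than the released PathOmics configuration.

```python
# A minimal sketch (assumed layer sizes, not the authors' released code) of a
# group-wise genomics encoder: an SNN-style MLP with SELU activations and
# Alpha Dropout that maps one gene group's d_g attributes to a 256-d embedding.
import torch
import torch.nn as nn

def make_genomics_encoder(d_g: int, hidden: int = 256, p_drop: float = 0.25) -> nn.Module:
    return nn.Sequential(
        nn.Linear(d_g, hidden), nn.SELU(), nn.AlphaDropout(p_drop),
        nn.Linear(hidden, hidden), nn.SELU(), nn.AlphaDropout(p_drop),
    )

# Example: one functional group with d_g = 512 gene-level attributes.
encoder = make_genomics_encoder(d_g=512)
group_embedding = encoder(torch.randn(1, 512))   # shape: (1, 256)
```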
For the group-wise WSI representation, we first crop all tissue-region image tiles from the entire WSI and extract CNN-based (e.g., ResNet50) \(d_{i}\)-dimensional features for each image tile \(k\) as \(h_{k}\in\mathbb{R}^{1\times d_{i}}\), where \(d_{i}=1,024\), \(k\in K\), and \(K\) is the number of image patches. We construct the group-wise WSI representation by randomly splitting the image tile features into \(N\) groups (i.e., the same number as the genomics categories). The group-wise image representation is thus defined as \(I_{n}\in\mathbb{R}^{k_{n}\times 1024}\), where \(n\in N\) and \(k_{n}\) is the number of tiles in group \(n\). We then apply an attention-based refiner (ABR) [17], which weights the feature embeddings within each group, together with a dimension reduction (e.g., fully-connected layers) to obtain the group-wise embedding. The ABR and the group-wise embedding \(I_{n}\in\mathbb{R}^{1\times 256}\) are defined as:
\[a_{k}=\frac{\exp\{w^{T}(\tanh(V_{1}h_{k})\odot\mathrm{sigm}(V_{2}h_{k}))\}}{\sum_{j=1}^{K}\exp\{w^{T}(\tanh(V_{1}h_{j})\odot\mathrm{sigm}(V_{2}h_{j}))\}} \tag{1}\]
where \(w\), \(V_{1}\), and \(V_{2}\) are learnable parameters.
\[I_{n}=\sum_{k=1}^{K}a_{k}h_{k} \tag{2}\]
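The gated attention pooling of Eqs. (1)-(2) can be sketched as follows; the module name and the use of bias terms in the linear layers are illustrative assumptions.

```python
# A minimal sketch of the attention-based refiner (ABR) in Eqs. (1)-(2),
# following the gated attention formulation of AB-MIL [17]. The 1024-d tile
# features and 256-d hidden size follow the text; bias terms are an assumption.
import torch
import torch.nn as nn

class GatedAttentionPooling(nn.Module):
    def __init__(self, in_dim: int = 1024, hidden_dim: int = 256):
        super().__init__()
        self.V1 = nn.Linear(in_dim, hidden_dim)   # tanh branch
        self.V2 = nn.Linear(in_dim, hidden_dim)   # sigmoid (gating) branch
        self.w = nn.Linear(hidden_dim, 1)         # attention score

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (k_n, in_dim) tile features of one group
        scores = self.w(torch.tanh(self.V1(h)) * torch.sigmoid(self.V2(h)))
        a = torch.softmax(scores, dim=0)          # Eq. (1): attention weights a_k
        return (a * h).sum(dim=0)                 # Eq. (2): weighted sum, (in_dim,)

# Example: pool 57 tile features of one group; a fully-connected layer would
# then reduce the pooled 1024-d vector to the 256-d group-wise embedding I_n.
pooled = GatedAttentionPooling()(torch.randn(57, 1024))
```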
#### 2.0.2 Patient-wise Multimodal Feature Embedding.
To aggregate the patient-wise multimodal feature embedding from the group-wise representations, as shown in Fig 1a, we propose a pathology-and-genomics multimodal model containing two model streams: a pathological image stream and a genomics data stream. Each stream uses the same architecture with different weights, which are updated separately for each modality. In the pathological image stream, the patient-wise image representation is aggregated from the \(N\) group representations as \(I_{p}\in\mathbb{R}^{N\times 256}\), where \(p\in P\) and \(P\) is the number of patients. Similarly, the patient-wise genomics representation is aggregated as \(G_{p}\in\mathbb{R}^{N\times 256}\). After generating the patient-wise representations, we utilize two transformer layers [27] to extract feature embeddings for each modality as follows:
\[H_{p}^{l}=MSA(H_{p}) \tag{3}\]
where MSA denotes Multi-head Self-attention [27] (see Appendix 1), \(l\) denotes the layer index of the transformer, and \(H_{p}\) can be either \(I_{p}\) or \(G_{p}\). Then, we construct a global attention pooling [17], as in Eq. 1, to adaptively compute a weighted sum of each modality's feature embeddings, finally constructing the patient-wise embeddings \(I_{embedding}^{p}\in\mathbb{R}^{1\times 256}\) and \(G_{embedding}^{p}\in\mathbb{R}^{1\times 256}\) for each modality.
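A sketch of one modality stream is shown below; the number of attention heads, the feed-forward width, and the simplified (non-gated) global attention pooling are our assumptions.

```python
# A minimal sketch (assumed hyperparameters) of one modality stream: N = 8
# group embeddings are contextualized by two self-attention transformer layers,
# Eq. (3), then collapsed into one patient-wise 256-d embedding by a global
# attention pooling (a simplified, non-gated variant of Eq. (1)).
import torch
import torch.nn as nn

class ModalityStream(nn.Module):
    def __init__(self, dim: int = 256, n_layers: int = 2, n_heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.attn = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))

    def forward(self, groups: torch.Tensor) -> torch.Tensor:
        h = self.encoder(groups)                  # Eq. (3): MSA layers, (B, N, dim)
        a = torch.softmax(self.attn(h), dim=1)    # global attention pooling weights
        return (a * h).sum(dim=1)                 # patient-wise embedding, (B, dim)

patient_image_embedding = ModalityStream()(torch.randn(1, 8, 256))
```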
#### 2.0.3 Multimodal Fusion in Pretraining and Finetuning.
Due to the domain gap between image and molecular feature heterogeneity, a proper design of
multimodal fusion is crucial to advance integrative analysis. In the pretraining stage, we develop an unsupervised data fusion strategy by minimizing a mean squared error (MSE) loss to map image and genomics embeddings into the same space. Ideally, the image and genomics embeddings belonging to the same patient should have a higher relevance to each other. The MSE measures the average squared difference between the multimodal embeddings. In this way, the pretrained model learns to map the paired image and genomics embeddings closer in the latent space, strengthening the interaction between the different modalities.
\[\mathcal{L}_{fusion}=\frac{1}{P}\sum_{p=1}^{P}\left(I_{embedding}^{p}-G_{embedding}^{p}\right)^{2} \tag{4}\]
In the single-modality finetuning, even if we use image-only data, the model is able to produce genomics-related image feature embeddings thanks to the multimodal knowledge aggregation already obtained during model pretraining. As a result, our cross-modal information aggregation relaxes the modality requirement in the finetuning stage. As shown in Fig 1b, for multimodal finetuning, we deploy a concatenation layer to obtain the fused multimodal feature representation and implement a risk classifier (FC layer) to achieve the final survival stratification (see Appendix 2). For the single-modality finetuning mode in Fig 1c, we simply feed \(I_{embedding}^{p}\) or \(G_{embedding}^{p}\) into the risk classifier for the final prognosis prediction. During finetuning, we update the model parameters using a log-likelihood loss for discrete-time survival model training [6] (see Appendix 2).
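The multimodal finetuning head and the survival loss can be sketched as below; the class and function names are illustrative, and the loss follows the discrete-time negative log-likelihood formulation commonly used with [6], not necessarily the exact released implementation.

```python
# A minimal sketch (illustrative names, not the released PathOmics code) of the
# multimodal finetuning head: concatenate the two 256-d patient embeddings,
# predict per-interval hazards for the four discretized survival bins, and train
# with a discrete-time survival negative log-likelihood in the spirit of [6].
import torch
import torch.nn as nn

class RiskClassifier(nn.Module):
    def __init__(self, embed_dim: int = 256, n_bins: int = 4):
        super().__init__()
        self.fc = nn.Linear(2 * embed_dim, n_bins)

    def forward(self, img_emb, gen_emb):
        fused = torch.cat([img_emb, gen_emb], dim=-1)   # concatenation fusion
        return torch.sigmoid(self.fc(fused))            # hazards for each interval

def discrete_survival_nll(hazards, label, censored, eps=1e-7):
    # hazards: (B, n_bins); label: (B,) interval index; censored: (B,) 1.0 if censored
    S = torch.cumprod(1.0 - hazards, dim=1)                      # survival S(t)
    S_pad = torch.cat([torch.ones_like(S[:, :1]), S], dim=1)     # S(0) = 1
    idx = label.unsqueeze(1)
    uncens = -(1.0 - censored) * (
        torch.log(S_pad.gather(1, idx).clamp(min=eps)).squeeze(1)
        + torch.log(hazards.gather(1, idx).clamp(min=eps)).squeeze(1))
    cens = -censored * torch.log(S_pad.gather(1, idx + 1).clamp(min=eps)).squeeze(1)
    return (uncens + cens).mean()

hazards = RiskClassifier()(torch.randn(2, 256), torch.randn(2, 256))
loss = discrete_survival_nll(hazards, torch.tensor([1, 3]), torch.tensor([0.0, 1.0]))
```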
## 3 Experiments and Results
**Datasets.** All image and genomics data are publicly available. We collected WSIs from The Cancer Genome Atlas Colon Adenocarcinoma (TCGA-COAD) dataset (CC-BY-3.0) [21, 8] and Rectum Adenocarcinoma (TCGA-READ) dataset (CC-BY-3.0) [20, 8], which contain 440 and 153 patients, respectively. We cropped each WSI into 512 \(\times\) 512 non-overlapping patches. We also collected the corresponding tabular genomics data (e.g., mRNA sequence, copy number alteration, and methylation) with overall survival (OS) times and censorship statuses from cBioPortal [2, 14]. We removed samples without the corresponding genomics data or ground-truth survival outcomes. Finally, we included 426 patients from TCGA-COAD and 145 patients from TCGA-READ.
### 3.0.1 Experimental Settings and Implementations.
We implement two types of settings that involve internal and external datasets for model pretraining and finetuning. As shown in Fig 2a, in the internal setting we pretrain and finetune the model on the same dataset. We split TCGA-COAD into a training set (80%) and a holdout testing set (20%). Then, we implement four-fold cross-validation on the training set for pretraining, finetuning, and hyperparameter tuning. The test set is only used for evaluating the best finetuned models from each cross-validation split. For the external setting, we implement pretraining and finetuning on different datasets, as shown in Fig 2b: we use TCGA-COAD for pretraining, and we then use TCGA-READ only for finetuning and final evaluation. We implement five-fold cross-validation for pretraining, and the best pretrained models are used for finetuning. We split TCGA-READ into finetuning (60%), validation (20%), and evaluation (20%) sets. For all experiments, we calculate the average performance on the evaluation set across the best models.
The number of epochs for pretraining and finetuning is 25, the batch size is 1, the optimizer is Adam [19], and the learning rate is 1e-4 for pretraining and 5e-5 for finetuning. We used one 32GB Tesla V100 SXM2 GPU and PyTorch. The concordance index (C-index) is used to measure survival prediction performance. We followed previous studies [6, 5, 7] in partitioning the overall survival (OS) months into four non-overlapping intervals using the quartiles of event times of uncensored patients for the discretized-survival C-index calculation (see Appendix 2). For each experiment, we report the average C-index over three repeated runs. Conceptually, our method shares a similar idea with multiple instance learning (MIL) [9, 23]. Therefore, we include two types of baseline models: MIL-based models (DeepSet [30], AB-MIL [17], and TransMIL [26]) and MIL multimodal-based models (MCAT [6], PORPOISE [7]). We follow the same data split and processing, as well as the identical training hyperparameters and supervised fusion as above. Notably, there is no need for supervised finetuning of the baselines when using TCGA-COAD (Table 1), because supervised pretraining is already applied to the training set.
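For reference, the concordance index can be computed directly from predicted risks, observed times, and censorship statuses; the sketch below uses the standard pairwise definition with ties given half credit, which may differ slightly from the exact implementation used in the experiments.

```python
# A small self-contained sketch of the concordance index (C-index): the fraction
# of comparable patient pairs whose predicted risks are ordered consistently with
# their observed survival times (ties in risk receive half credit).
import numpy as np

def concordance_index(times, events, risks):
    # times: observed OS times; events: 1 = death observed, 0 = censored;
    # risks: higher value = higher predicted risk (shorter predicted survival).
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue                      # a pair is comparable only if the
        for j in range(n):                # earlier time corresponds to an event
            if times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

print(concordance_index(np.array([2.0, 5.0, 7.0]),
                        np.array([1, 1, 0]),
                        np.array([0.9, 0.4, 0.1])))   # -> 1.0
```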
#### 3.0.2 Results.
In Table 1, our approach shows improved survival prediction performance on both the TCGA-COAD and TCGA-READ datasets. Compared with supervised baselines, our unsupervised data fusion is able to extract phenotype-genotype interaction features, enabling flexible finetuning for different data settings. With multimodal pretraining and finetuning, our method outperforms state-of-the-art models by about 2% on TCGA-COAD and 4% on TCGA-READ. We observe that the combination of image and mRNA sequencing data best distinguishes survival outcomes. Remarkably, our model achieves positive results even with single-modal finetuning when compared with the baselines (more results in Appendix 3.1). Meanwhile, on TCGA-READ, our single-modality finetuned models achieve better performance than the multimodal finetuned baseline models (e.g., with model pretraining on image and methylation data, we used only the image data for finetuning and achieved a C-index of 74.85%, about 4% higher than the best baseline models).

Figure 2: Dataset usage. In (a), we use the TCGA-COAD dataset for model pretraining, finetuning, and evaluation. In (b), we use the TCGA-COAD dataset for model pretraining; we then use the TCGA-READ dataset to finetune and evaluate the pretrained models.
| Model | Pretrain modality | Finetune (TCGA-COAD) | C-index (TCGA-COAD) | Finetune (TCGA-READ) | C-index (TCGA-READ) |
|---|---|---|---|---|---|
| DeepSets [30] | image+mRNA | - | 58.70 ± 1.10 | image+mRNA | 70.19 ± 1.45 |
| DeepSets [30] | image+CNA | - | 51.50 ± 2.60 | image+CNA | 62.50 ± 2.52 |
| DeepSets [30] | image+Methyl | - | 65.61 ± 1.86 | image+Methyl | 55.78 ± 1.22 |
| AB-MIL [17] | image+mRNA | - | 54.12 ± 2.88 | image+mRNA | 68.79 ± 1.44 |
| AB-MIL [17] | image+CNA | - | 54.68 ± 2.44 | image+CNA | 66.72 ± 0.81 |
| AB-MIL [17] | image+Methyl | - | 49.66 ± 1.58 | image+Methyl | 55.78 ± 1.22 |
| TransMIL [26] | image+mRNA | - | 54.15 ± 1.02 | image+mRNA | 67.91 ± 2.35 |
| TransMIL [26] | image+CNA | - | 59.80 ± 0.98 | image+CNA | 62.75 ± 1.92 |
| TransMIL [26] | image+Methyl | - | 53.35 ± 1.78 | image+Methyl | 53.09 ± 1.46 |
| MCAT [6] | image+mRNA | - | 65.02 ± 3.10 | image+mRNA | 70.27 ± 2.75 |
| MCAT [6] | image+CNA | - | 64.66 ± 2.31 | image+CNA | 60.50 ± 1.25 |
| MCAT [6] | image+Methyl | - | 60.98 ± 2.43 | image+Methyl | 59.78 ± 1.20 |
| PORPOISE [7] | image+mRNA | - | 65.31 ± 1.26 | image+mRNA | 68.18 ± 1.62 |
| PORPOISE [7] | image+CNA | - | 57.32 ± 1.78 | image+CNA | 60.19 ± 1.48 |
| PORPOISE [7] | image+Methyl | - | 61.84 ± 1.10 | image+Methyl | 68.80 ± 0.92 |

Table 1: The comparison of C-index performance on the TCGA-COAD and TCGA-READ datasets. "Methyl" is used as the abbreviation of methylation. No separate finetuning is applied to the baselines on TCGA-COAD (see text).
We show that, with a single-modal finetuning strategy, the model can generate meaningful embeddings that combine image- and genomics-related patterns. In addition, our model demonstrates its efficiency with limited finetuning data (e.g., 75 patients are used for finetuning on TCGA-READ, only 22% of the TCGA-COAD finetuning data). In Table 1, our method yields better performance than the baselines on this small dataset across combinations of images and multiple types of genomics data.
#### 3.0.3 Ablation Analysis.
We verify the model's data efficiency by using smaller amounts of finetuning data. For the TCGA-COAD dataset, we include 50%, 25%, and 10% of the finetuning data. For the TCGA-READ dataset, as the number of uncensored patients is limited, we use 75%, 50%, and 25% of the finetuning data so that at least one uncensored patient is included for finetuning. As shown in Fig 3a, using 50% of the TCGA-COAD finetuning data, our approach achieves a C-index of 64.80%, which is higher than the average performance of the baselines for several modalities. Similarly, in Fig 3b, our model retains good performance using 50% or 75% of the TCGA-READ finetuning data compared with the average C-index across baselines (e.g., 72.32% versus 64.23%). To evaluate the effect of cross-modality information extraction in pretraining, we kept the supervised model training (i.e., the finetuning stage) while removing the unsupervised pretraining; the resulting performance is 2%-10% lower than ours on multi- and single-modality data. To evaluate the genomics data usage, we designed two settings: (1) combining all types of genomics data and categorizing them by groups; (2) removing the category information while still using the different types of genomics data separately. Our approach outperforms these ablation settings by 3%-7% on TCGA-READ and performs similarly on TCGA-COAD. In addition, we replaced our unsupervised loss with a cosine similarity loss; our approach outperforms the cosine similarity setting by 3%-6%.
Figure 3: Ablation study. In (a) and (b), we evaluate the model efficiency by using less data for model finetuning on TCGA-COAD and TCGA-READ. We show the average C-index of the baselines; detailed results are provided in Appendix 3.2.
## 4 Conclusion
Developing data-efficient multimodal learning is crucial to advance the survival assessment of cancer patients in a variety of clinical data scenarios. We demonstrated that the proposed PathOmics framework improves the survival prediction of colon and rectum cancer patients. Importantly, our approach opens up perspectives for exploring the key insights of intrinsic genotype-phenotype interactions in complex cancer data across modalities. Our finetuning approach broadens the scope of dataset inclusion, particularly for model finetuning and evaluation, while enhancing model efficiency in analyzing multimodal clinical data in real-world settings. In addition, the use of synthetic data and foundation model training will be helpful to improve the robustness of multimodal data fusion [11, 15].
#### Acknowledgements.
The results of this study are based on the data collected from the public TCGA Research Network: [https://www.cancer.gov/tcga](https://www.cancer.gov/tcga).
---

arXiv:2301.13213 | Evolution of binary systems accompanying axion clouds in extreme mass ratio inspirals
Authors: Takuya Takahashi, Hidetoshi Omiya, Takahiro Tanaka
Published: 2023-01-30T19:00:02Z
Link: http://arxiv.org/abs/2301.13213v2

# Evolution of binary systems accompanying axion clouds in extreme mass ratio inspirals
###### Abstract
Superradiant instability of rotating black holes (BHs) leads to the formation of a cloud of ultralight bosons, such as axions. When the BH with the cloud belongs to a binary system and is in an inspiraling orbit, the resonant transition between the axion's bound states can occur. We study the history of the evolution of the binary system accompanying the cloud composed of the fastest growing mode, and its impact on the observational signatures, especially for small mass ratio cases. In this case, the hyperfine resonance, which has a very small resonance frequency, is relevant. Therefore, due to the long timescale, we should take into account the decaying process of axions in the transition destination mode, the backreaction to the orbital motion and the central BH, and gravitational emission from the cloud. We present a formulation to examine the evolution of the system around the resonance and useful expressions for the analysis. As a result, we found the mass of the cloud that can remain after the resonance is, at most, about \(10^{-5}\) of the central BH. The maximum remaining cloud mass is achieved when the mass ratio of the binary is \(q\sim 10^{-3}\). In addition, we show that the resonant transition hardly changes the BH mass and spin distribution, while the associated modification of the gravitational wave frequency evolution when the binary passes through the resonance can be a signature of the presence of the cloud.
## I Introduction
Ultralight bosons, such as axions or axion-like particles, can cause various phenomena in the universe. Such particles are universally predicted by string theory [1; 2] and can be a candidate for dark matter [3; 4; 5; 6]. They can be weakly coupled to the Standard Model particles, but even in such a case the gravitational interaction with black holes (BHs) and related gravitational waves (GWs) can provide a new avenue to explore them observationally.
The existence of massive bosonic fields induces the superradiant instability around rotating BHs [7; 8]. Bosons with mass in the range \(10^{-20}\sim 10^{-10}\) eV have the Compton wavelength comparable to the size of astrophysical BHs, and extract energy and angular momentum efficiently to form a condensate [9; 10]. We refer to the condensate as an axion cloud and the composing particles simply as axions. The cloud formation makes astrophysical observable imprints, such as a forbidden region in the distribution of mass and spin of BHs [11; 12; 13; 14] and continuous GW emission [15; 16; 17; 18; 19; 20; 21].
In this paper, we focus on the cases where BHs with clouds belong to binary systems. GWs from the binary inspiral can be a signature to examine the environment around BHs including the cloud [22; 23; 24; 25; 26; 27]. Axion clouds occupy a quasi-bound state of axions, which is usually the fastest growing mode. During the inspiral phase, the tidal interaction from the companion acts as an oscillating tidal field. It induces the resonant transition to another mode when the orbital frequency coincides with the phase velocity difference between the original mode of the cloud and the other [28; 29]. The change of the orbital motion of the binary and the associated GW frequency due to the backreaction can also be a signature of the presence of the cloud [30; 31; 32; 29]. To clarify the impact on the observational signatures, it is important to understand the history of the evolution during the inspiral phase.
If the separation of the binary is sufficiently small, the cloud configuration is tidally disrupted [33; 34], and the transition to unbound states occurs [35; 33]. However, for binary systems formed with a sufficiently large separation, the resonant transition should first occur with the smallest possible resonance frequency. The frequency spectrum of axion eigenmodes possesses the structure of hyperfine splittings due to the rotation of the central BH [36], and the resonance frequency associated with the hyperfine splitting is the smallest one. In Ref. [33], we showed that, for nearly equal mass binaries, this hyperfine resonance can be neglected since the resonance condition is not maintained long enough because of the decrease of the angular momentum of the cloud itself. We also showed that, before the transition caused by the leading quadrupole moment of the tidal potential occurs, the cloud is disrupted by the effects of higher multipole moments, and finally the cloud is depleted as a result of transitions to unbound states.
In contrast to nearly equal mass binaries, for small mass ratio binaries, the hyperfine resonance should be considered because of a large backreaction to the orbital motion, which maintains the orbital frequency within the
resonance band for a long period. It is of great importance to examine the dynamics of small mass ratio binaries, because they are one of the main targets for future GW observations, such as LISA [37]. In this case, because of the very long timescale of the binary evolution due to the radiation reaction, some effects that can be neglected in the transition for nearly equal mass binaries become relevant.
First, the decay of non-superradiant transition destination modes and the backreaction to the central BH mass and spin become relevant. Since the resonance band is broadened in accordance with the imaginary part of the frequencies of the decaying destination modes, the time spent within the resonance band becomes even longer. Therefore, we should also take into account the GW emission from the cloud during the transition. We develop a formulation that includes all of these effects within the adiabatic approximation. It is difficult to solve the originally obtained set of equations throughout the whole period across the resonance band, since the solution oscillates rapidly. To overcome this difficulty, we also present a method to obtain an approximate solution with sufficient accuracy.
In this paper, we consider axion clouds in a non-relativistic regime, and neglect the self-interaction of axions, for simplicity. For a relativistic regime, the energy spectrum deviates significantly from the one obtained by non-relativistic approximation, and the transition to be considered can change [38; 39]. In addition, the self-interaction can play an important role during the formation of the cloud [40; 41; 42; 43; 44; 45; 46; 47]. Here, we leave considering these effects as future work, to focus on the tidal effect in binary systems.
This paper is organized as follows. In Sec. II, we review the elements involved in the evolution of axion clouds in binary systems. In Sec. III, we present a formulation for examining the hyperfine resonance in small mass ratio binaries. In Sec. IV, we discuss the results obtained using our formulation. Finally, we give a summary and conclusion in Sec. V. Throughout this paper, we use the unit with \(c=\hbar=G=1\).
## II Elements involved in the evolution of axion clouds
In this section, we summarize the elements involved in describing the evolution of axion clouds, especially during the binary inspirals. Consider a scalar field (axion) of mass \(\mu\) around a rotating BH belonging to a binary system. We denote the central BH mass by \(M\) and angular momentum by \(J=aM=\chi M^{2}\). Formally, we can write the equation of motion for axion on a spacetime with the metric \(\tilde{g}_{\mu\nu}=g_{\mu\nu}+h_{\mu\nu}\) as
\[(\tilde{g}^{\mu\nu}\tilde{\nabla}_{\mu}\tilde{\nabla}_{\nu}-\mu^{2})\phi=0\, \tag{1}\]
where \(g_{\mu\nu}\) is the Kerr metric. We consider the tidal field from the binary companion and the decay due to the gravitational wave emission from the cloud as contributions to the perturbation. As we will see later, since there is a hierarchy of frequencies between them, we can treat them separately. We first review the features of axion clouds in the unperturbed background, and later the effects of the tidal interaction and the GW emission.
### Energy spectrum and superradiance
In the non-relativistic regime, it is appropriate to introduce a new complex scalar field variable \(\psi\) by
\[\phi=\frac{1}{\sqrt{2\mu}}\left(e^{-i\mu t}\psi+e^{i\mu t}\psi^{*}\right). \tag{2}\]
We assume that \(\psi\) changes slowly in time compared to the timescale determined by \(\mu^{-1}\). Then, we can ignore the \(\partial_{t}^{2}\psi\) term and rewrite the background equation of motion (1) as
\[i\frac{\partial}{\partial t}\psi=H_{0}\psi\,\quad H_{0}=-\frac{1}{2\mu} \nabla^{2}-\frac{\alpha}{r}+\mathcal{O}(\alpha^{2})\, \tag{3}\]
where we have introduced the gravitational fine structure constant \(\alpha\equiv M\mu\), and this approximation is well justified for \(\alpha\ll 1\). Solving this equation with the ingoing boundary condition at the BH horizon and the exponentially decaying boundary condition at infinity, we have the quasi-bound eigenstate \(\varphi_{nlm}(\mathbf{r})\) that satisfies \(H_{0}\varphi_{nlm}=(\omega_{nlm}-\mu)\varphi_{nlm}\). They are labeled by the principal, azimuthal and magnetic quantum numbers like a hydrogen atom. The eigenfrequency is approximately given by
\[\omega_{nlm}=(\omega_{R})_{nlm}+i(\omega_{I})_{nlm}\, \tag{4}\]
with [28; 36]
\[(\omega_{R})_{nlm}=\mu\left(1-\frac{\alpha^{2}}{2n^{2}}-\frac{ \alpha^{4}}{8n^{4}}+\frac{(2l-3n+1)\alpha^{4}}{n^{4}(l+1/2)}\right.\\ \left.+\frac{2m\chi\alpha^{5}}{n^{3}l(l+1/2)(l+1)}\right)\,, \tag{5}\]
\[(\omega_{I})_{nlm}=2(r_{+}/M)C_{nlm}(a,\alpha)(m\Omega_{H}-\omega_{nlm}) \alpha^{4l+5}\, \tag{6}\]
where \(r_{+}=M+\sqrt{M^{2}-a^{2}}\) is the horizon radius, \(\Omega_{H}=a/2Mr_{+}\) is the angular velocity of the BH horizon and the explicit form of \(C_{nlm}(a,\alpha)\) can be found in Ref. [28]1.
Footnote 1: It was first derived in Ref. [9], and corrected by a factor of \(1/2\)[48; 49].
As one can see from Eq. (6), the eigenfrequency of a mode satisfying \(\omega_{R}<m\Omega_{H}\) has a positive imaginary part, and the cloud grows exponentially by the superradiance. The mode \(|nlm\rangle=|211\rangle\) is the fastest growing
mode for \(\alpha\lesssim 0.45\). The BH spin decreases as the cloud grows until the superradiance condition is saturated. The critical spin at which the superradiance terminates is approximately given by
\[\chi_{\rm crit}=\frac{4m\alpha}{m^{2}+4\alpha^{2}}. \tag{7}\]
The real part of the eigenfrequency can be regarded as the eigenenergy, and its degeneracy among modes that differ only in \(m\) is lifted by the rotation of the BH at order \(\mathcal{O}(\alpha^{5})\), which is called the "hyperfine" splitting.
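As a quick numerical illustration (a sketch assuming Eqs. (5) and (7) as written, with \(G=c=\hbar=1\) and \(\mu=1\)), the critical spin and the hyperfine splitting between \(|211\rangle\) and \(|21-1\rangle\) can be evaluated as:

```python
# A quick numerical sketch of Eq. (5) and Eq. (7): the critical spin for the
# m = 1 mode and the hyperfine splitting between |211> and |21-1>, which equals
# mu*chi*alpha^5/6 (i.e., Delta m = 2 times the resonance frequency of Eq. (16)).
def omega_R(n, l, m, alpha, chi, mu=1.0):
    return mu * (1 - alpha**2 / (2 * n**2) - alpha**4 / (8 * n**4)
                 + (2 * l - 3 * n + 1) * alpha**4 / (n**4 * (l + 0.5))
                 + 2 * m * chi * alpha**5 / (n**3 * l * (l + 0.5) * (l + 1)))

def chi_crit(m, alpha):
    return 4 * m * alpha / (m**2 + 4 * alpha**2)        # Eq. (7)

alpha = 0.1
chi = chi_crit(1, alpha)                                 # ~ 0.385
splitting = omega_R(2, 1, 1, alpha, chi) - omega_R(2, 1, -1, alpha, chi)
print(chi, splitting, chi * alpha**5 / 6)                # the last two agree
```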
### Tidal interaction
When a BH accompanied by an axion cloud belongs to a binary system, the tidal field from the companion introduces a perturbation. The general state of the cloud can be expressed by
\[\psi=\sum_{i}c_{i}(t)\varphi_{i}\,, \tag{8}\]
as a superposition of orthonormal eigenfunctions \(\varphi_{i}\). Under the same approximation taken in the preceding subsection, the equation of motion with the perturbation is given by
\[i\frac{dc_{i}}{dt}=\sum_{j}\left((\omega_{j}-\mu)\delta_{ij}+\int d^{3}x\ \varphi_{i}^{*}V_{*}\varphi_{j}\right)c_{j}. \tag{9}\]
For simplicity, we assume that the binary orbit is quasi-circular and on the plane perpendicular to the central BH spin. By multipole expansion, we can write the tidal field from the companion of mass \(M_{*}\) at \(\mathbf{r}(t)=(R_{*}(t),\Theta_{*}(=\pi/2),\Phi_{*}(t))\) as
\[V_{*} =\frac{1}{2}\mu h_{\rm tidal}^{tt}\] \[=-q\alpha\sum_{l_{*}m_{*}}\frac{4\pi}{2l_{*}+1}\frac{r_{<}^{l_{*} }}{r_{>}^{l_{*}+1}}Y_{l_{*}m_{*}}^{*}(\Theta_{*},\Phi_{*})Y_{l_{*}m_{*}}( \theta,\phi)\, \tag{10}\]
where \(q\equiv M_{*}/M\) is the mass ratio, \(r_{>}(r_{<})\) is the larger (smaller) of \(r\) and \(R_{*}\), and \(Y_{lm}\) are the spherical harmonics. The angular velocity of the binary is defined by \(\dot{\Phi}_{*}(t)=\pm\Omega(t)\), and the upper (lower) sign represents the case of co-rotating (counter-rotating) orbits. Since this interaction oscillates quasi-periodically, it works efficiently only when the orbital angular velocity is close to the difference between the phase velocities of the two modes. Therefore, it is sufficient to consider a two-mode subspace [29]. The tidal field mixes the two modes, and the time evolution of the particle number in each mode is, from Eq.(9), given by
\[i\dot{\mathbf{c}}=\mathcal{H}\mathbf{c} \tag{11}\]
with
\[\mathcal{H}=\begin{pmatrix}-\Delta E/2+i\omega_{I}^{(1)}&\eta e^{i\Delta m \Phi_{*}}\\ \eta e^{-i\Delta m\Phi_{*}}&\Delta E/2+i\omega_{I}^{(2)}\end{pmatrix}\, \tag{12}\]
where \(\Delta E=\omega_{R}^{(2)}-\omega_{R}^{(1)}\), \(\Delta m=m_{2}-m_{1}\), and \(\eta(t)=\left|\int d^{3}x\ \varphi_{2}^{*}V_{*}\varphi_{1}\right|\). To remove the rapidly oscillating term, we perform the unitary transformation as \(c\rightarrow\mathcal{U}^{-1}c\) and \(\mathcal{H}\rightarrow\mathcal{U}^{\dagger}\mathcal{H}\,\mathcal{U}-i\mathcal{ U}^{\dagger}\dot{\mathcal{U}}\) with the matrix \(\mathcal{U}(t)=\mathrm{diag}(e^{i\Delta m\Phi_{*}/2},e^{-i\Delta m\Phi_{*}/2})\). As a result, we can describe the level transition due to the tidal field by
\[\mathcal{H}=\begin{pmatrix}\pm\frac{\Delta m}{2}(\Omega-\Omega_{\rm res})+i \omega_{I}^{(1)}&\eta\\ \eta&\mp\frac{\Delta m}{2}(\Omega-\Omega_{\rm res})+i\omega_{I}^{(2)}\end{pmatrix}\, \tag{13}\]
where we defined the "resonance" frequency by \(\Omega_{\rm res}=\pm\Delta E/\Delta m\). Now, we are interested in the time evolution of the occupation number of each state, \(|c_{i}(t)|^{2}\).
### Gravitational wave emission
After an axion cloud forms, it dissipates through the emission of GWs. Here, we assume that the cloud is composed of a single mode as \(\psi=c_{1}\varphi_{1}\). In this case, we can neglect the GW emission due to the spontaneous level transition, and GWs are sourced by the pair-annihilation of axions. The frequency of GWs is given by \(\omega_{\rm GW}=2\omega_{R}\sim 2\mu\). The energy flux of GWs from the \(l=m=1\) cloud is given by [17]
\[\frac{dE_{\rm GW}}{dt}=C\left(\frac{M_{\rm c}}{M}\right)^{2}\alpha^{14}\, \tag{14}\]
where \(C\) is a numerical factor. In our analysis, we adopt \(C=(484+9\pi^{2})/23040\) calculated in Ref. [12]. Here, \(M_{\rm c}\) is the mass of the cloud defined by \(M_{\rm c}=-\int d^{3}x\ T^{t}{}_{t}\), where \(T^{t}{}_{t}\) is the \(t\)-\(t\) component of the energy momentum tensor. According to this, the wave function \(\psi\) is normalized as \(|c_{1}|^{2}(=\int d^{3}x|\psi|^{2})=M_{\rm c}/\mu\) at the leading order in \(\alpha\).
When we consider only the effect of GW emission, energy conservation implies that \(\dot{M}_{\rm c}=-\dot{E}_{\rm GW}\). We set the initial mass of the cloud to \(M_{\rm c,0}\) at \(t=t_{0}\). Here, we define the normalized particle number by \(n_{1}(t)=\mu|c_{1}(t)|^{2}/M_{\rm c,0}\), and write \(M_{\rm c}(t)=M_{\rm c,0}n_{1}(t)\). Energy conservation reads
\[\frac{dn_{1}}{dt}=-\frac{C}{M}\left(\frac{M_{\rm c,0}}{M}\right)n_{1}^{2} \alpha^{14}. \tag{15}\]
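Equation (15) integrates in closed form; a minimal sketch (with illustrative parameters) is given below.

```python
# A minimal sketch of the GW-only decay, Eq. (15): with fixed M and alpha it
# integrates to n_1(t) = 1 / (1 + C*(M_c0/M)*alpha^14*(t - t0)/M).
import numpy as np

C = (484 + 9 * np.pi**2) / 23040          # numerical factor from Ref. [12]

def n1_gw_only(t, t0=0.0, M=1.0, alpha=0.1, Mc0_over_M=1e-3):
    k = C * Mc0_over_M * alpha**14 / M    # decay coefficient in Eq. (15)
    return 1.0 / (1.0 + k * (t - t0))

# Time for the cloud to lose half its mass, in units of M (illustrative values):
print(1.0 / (C * 1e-3 * 0.1**14))          # ~ 4e18
```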
## III Formulation
In this section, we first explain the setup of the problem that we consider and then give a formulation to investigate it.
### Setup
We focus on the fastest growing mode \(|nlm\rangle=|211\rangle\). We consider the situation in which the cloud is initially composed of the single mode \(|211\rangle\), and the hyperfine level transition between \(|211\rangle\) and \(|21-1\rangle\) subsequently occurs. Note that this transition occurs only for co-rotating orbit. In Ref. [33], we found that when the binary mass ratio \(q\) is not too small, this transition does not significantly contribute to the dissipation of the cloud because of the reduction of the hyperfine splitting associated with the transfer of the angular momentum of the cloud to the orbital motion. However, when the mass ratio is somewhat small, the resonant tidal interaction at this hyperfine splitting frequency would largely affect the dynamics of the system. We show the parameter region where we should consider the hyperfine resonance as a process that contributes to the cloud dissipation in Fig. 1. We investigate the latter case.
For the transition between \(|211\rangle\) and \(|21-1\rangle\), from Eq.(5), the resonance frequency is given by2
Footnote 2: We do not include the contribution from the angular momentum of the cloud itself, focusing on the case where it is negligible.
\[\Omega_{\rm res}=\frac{\mu}{12}\chi\alpha^{5}. \tag{16}\]
This is smaller by a factor of \(\alpha^{3}\) than that of the "Bohr" transition between modes with different values of \(n\). When we study the Bohr transition, \(\omega_{I}\) in Eq.(13) and GW flux are so small in the timescale for passing through the resonance band that we can usually neglect them3. However, for hyperfine transition, binary evolution around the resonance frequency is very slow and the timescale for passing through the resonance band can be large, especially for \(q\ll 1\). In addition, since the angular momentum of the cloud is transferred to the orbital motion, the timescale becomes even larger. As a result, we should take into account not only the backreaction to the orbital motion, but also the backreaction to the mass and spin of the central BH and the effect of the GW emission from the cloud. We summarize the timescales involved in the current problem in Appendix A.
Footnote 3: When we consider a higher \(l\) mode, the transition to the mode with smaller \(l\) is allowed by the selection rule. In that case, the decay rate of the second mode can be large, and it would be important [50].
In the following, we label the quantities associated with the mode \(|211\rangle\) by 1, and those with \(|21-1\rangle\) by 2. For these modes, the imaginary parts of the eigenfrequencies are given by
\[\omega_{I}^{(i)}=\frac{1}{24}\frac{r_{+}}{M}\left\{\left(1-\chi^ {2}\right)+4r_{+}^{2}(m_{i}\Omega_{H}-\omega_{R})^{2}\right\}\] \[\times(m_{i}\Omega_{H}-\omega_{R})\alpha^{9}\, \tag{17}\]
where \(i\) is 1 or 2, and \(m_{1}=1\) and \(m_{2}=-1\) represent the magnetic quantum number. The mixing term in the Hamiltonian (13) is given by
\[\eta=9.0\ \frac{q}{1+q}\frac{M\Omega^{2}}{\alpha^{3}}. \tag{18}\]
### Evolution of the system
The dynamical timescale of the cloud can be estimated by \(\omega_{R}^{-1}\simeq\mu^{-1}\). It is always short compared to the growth/decay timescale of the cloud, _i.e._, \((\omega_{I}^{(i)})^{-1}\gg\mu^{-1}\). Thus, we describe the evolution of the cloud and the central BH within the adiabatic approximation. The local energy and angular momentum conservation at the BH horizon reads
\[\frac{dM}{dt}+2\omega_{I}^{(1)}M_{\rm c}^{(1)}+2\omega_{I}^{(2)} M_{\rm c}^{(2)}=0\, \tag{19}\] \[\frac{dJ}{dt}+\frac{2\omega_{I}^{(1)}}{\mu}M_{\rm c}^{(1)}-\frac {2\omega_{I}^{(2)}}{\mu}M_{\rm c}^{(2)}=0\, \tag{20}\]
with \(M_{\rm c}^{(i)}=M_{\rm c,0}n_{i}(t)\). Here, we used the relation between the energy flux and the angular momentum flux for each mode \(\hat{J}_{\rm c}^{(i)}=(m_{i}/\omega_{R}^{(i)})\hat{E}_{\rm c}^{(i)}\) and the approximation \(\omega_{R}=\mu\). We denote the initial mass and angular momentum of the BH just before entering the resonance band by \(M_{0}\) and \(J_{0}\), and accordingly \(\alpha_{0}=M_{0}\mu\).
Figure 1: Parameter region where the hyperfine resonance is relevant to dissipate the cloud. In the shaded region, the resonance sustains longer because the effect of the backreaction to the orbital motion is stronger than the effect of the reduction of the hyperfine splitting. The initial angular momentum of the cloud is set to \(J_{\rm c,0}\to 0\). See Ref. [33] for the detail.
Next, we consider the evolution of the binary system at the leading post-Newtonian order. In clean binary systems, angular momentum conservation implies \(\dot{J}_{\rm orb}=-\mathcal{T}_{\rm GW}\), where \(J_{\rm orb}=q(1+q)^{-1/3}M_{0}^{5/3}\Omega^{-1/3}\) is the orbital angular momentum and \(\mathcal{T}_{\rm GW}\) is the torque caused by the radiation reaction due to the GW emission. It can be rewritten as [51, 52]
\[\frac{d\Omega}{dt} =\gamma\left(\frac{\Omega}{\Omega_{0}}\right)^{11/3}\, \tag{21}\] \[\frac{\gamma}{\Omega_{0}^{2}} =\frac{96}{5}\frac{q}{(1+q)^{1/3}}(M_{0}\Omega_{0})^{5/3}\, \tag{22}\]
where the reference frequency is chosen as \(\Omega_{0}=(\mu/12)(J_{0}/M_{0}^{2})\alpha_{0}^{5}\) (which is the "initial" resonance frequency). Here, we add the cloud and the BH contributions to the total angular momentum conservation as \(\dot{J}_{\rm orb}+\dot{J}+\dot{J}_{\rm c}^{(1)}+\dot{J}_{\rm c}^{(2)}+\dot{J}_ {\rm GW}=-\mathcal{T}_{\rm GW}\), where \(\dot{J}_{\rm GW}=(1/\mu)\dot{E}_{\rm GW}\) is the angular momentum flux of the GW from the cloud in Eq. (14). Note that we consider GW emission only from the first mode \(|211\rangle\). (As we will see later, the particle number occupying the second mode, which is non-superradiant, is always tiny and does not contribute to the GW emission.) Then, we obtain4
Footnote 4: Strictly speaking, we should take the mass of the one paired with the companion as \(M+M_{\rm c}\). However, since the cloud mass is small compared to the central BH mass, we approximated it as \(M_{0}\).
\[\frac{d\Omega}{dt}= \gamma\left(\frac{\Omega}{\Omega_{0}}\right)^{11/3}+R\left(\frac {\Omega}{\Omega_{0}}\right)^{4/3}\frac{\Omega_{0}}{M_{0}^{2}}\] \[\times\left[\frac{d}{dt}\left(J+J_{\rm c}^{(1)}+J_{\rm c}^{(2)} \right)+\frac{1}{\mu}\frac{dE_{\rm GW}}{dt}\right]\, \tag{23}\]
with \(R=3(1+q)^{1/3}q^{-1}(M_{0}\Omega_{0})^{1/3}\). We take \(\Omega(t_{0})=\Omega_{0}(1+(8/3)(\gamma/\Omega_{0})|t_{0}|)^{-3/8}\) as the initial value so that \(\Omega=\Omega_{0}\) at \(t=0\) when there are no clouds.
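As a quick check in illustrative units, the closed-form frequency evolution of a clean binary used for this initial condition indeed solves Eq. (21):

```python
# A small sketch (clean binary, no cloud, illustrative units) verifying that
# Omega(t) = Omega_0*(1 - (8/3)*(gamma/Omega_0)*t)**(-3/8) solves Eq. (21),
# dOmega/dt = gamma*(Omega/Omega_0)**(11/3), with Omega(0) = Omega_0.
def omega_clean(t, Omega0=1.0, gamma=1e-3):
    return Omega0 * (1.0 - (8.0 / 3.0) * (gamma / Omega0) * t) ** (-3.0 / 8.0)

t, dt = -5.0, 1e-6
lhs = (omega_clean(t + dt) - omega_clean(t - dt)) / (2 * dt)   # numerical dOmega/dt
rhs = 1e-3 * (omega_clean(t) / 1.0) ** (11.0 / 3.0)            # right-hand side
print(lhs, rhs)                                                # agree to numerical precision
```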
Finally, we describe the level transition between the two modes. It is described by the Schrödinger equation with the Hamiltonian (13). Note that the particle number occupying the first mode decreases due to the GW emission by pair annihilation. Since the frequency of the emitted GW (\(\omega_{\rm GW}\sim 2\mu\)) is much larger than that of the tidal field (\(\Omega_{\rm res}\sim\mu\alpha\)6), we can treat them separately. Thus, we add the effect of the GW emission into the Schrödinger equation as
Footnote 6: Here, we approximate \(|\omega_{I}^{(2)}|\simeq\frac{1}{48}\mu\chi\alpha^{8}\) and \(\chi=\chi_{\rm crit}\simeq 4\alpha\).
\[i\frac{dc_{1}}{dt} =\left(-(\Omega-\Omega_{\rm res})+i\omega_{I}^{(1)}-i\Gamma_{\rm GW }\right)c_{1}+\eta c_{2}\, \tag{24}\] \[i\frac{dc_{2}}{dt} =\eta c_{1}+\left((\Omega-\Omega_{\rm res})+i\omega_{I}^{(2)} \right)c_{2}\, \tag{25}\]
where \(|c_{i}(t)|^{2}=M_{\rm c,0}n_{i}(t)/\mu\). Here, \(\Gamma_{\rm GW}\) represents the decay rate through the GW emission, whose explicit expression does not become necessary below.
From the above, the variables in this problem are \(\{M,J,\Omega,c_{1},c_{2}\}\), and we should solve the Eqs. (19), (20), (23), (24), and (25). However, because of the highly oscillatory behavior of the solutions for Eqs. (24) and (25), it is difficult to solve these equations for a long time with sufficient accuracy. To overcome this difficulty, we derive a set of approximate equations that can be solved easily.
### Adiabatic elimination
Here, we take advantage of the fact that the decay rate of the second mode \(|\omega_{I}^{(2)}|\) is large compared to the transition rate due to the mixing term \(\eta\) around the resonance frequency. Indeed, their ratio is estimated as7
Footnote 7: Here, we approximate \(|\omega_{I}^{(2)}|\simeq\frac{1}{48}\mu\chi\alpha^{8}\) and \(\chi=\chi_{\rm crit}\simeq 4\alpha\).
\[\frac{|\omega_{I}^{(2)}|}{\eta}\sim 8\times 10^{2}\left(\frac{10^{-3}}{q} \right)\left(\frac{0.1}{\alpha}\right). \tag{26}\]
In this case, we can carry out an adiabatic elimination of the second mode and describe the evolution using only the particle number of the first mode.
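The estimate in Eq. (26) can be reproduced numerically; the sketch below uses the approximations stated in the footnote (\(|\omega_{I}^{(2)}|\simeq\mu\chi\alpha^{8}/48\) with \(\chi=\chi_{\rm crit}\simeq 4\alpha\)) and evaluates \(\eta\) at \(\Omega=\Omega_{\rm res}\).

```python
# A rough order-of-magnitude check of Eq. (26), using the footnote's
# approximations |omega_I^(2)| ~ mu*chi*alpha^8/48 with chi = chi_crit ~ 4*alpha,
# and the mixing term eta of Eq. (18) evaluated at Omega = Omega_res (Eq. (16)).
def ratio_decay_to_mixing(q, alpha, mu=1.0):
    omega_I2 = mu * (4 * alpha) * alpha**8 / 48            # decay rate of |21-1>
    M = alpha / mu                                          # since alpha = M*mu
    Omega_res = mu * (4 * alpha) * alpha**5 / 12            # Eq. (16), chi ~ 4*alpha
    eta = 9.0 * q / (1 + q) * M * Omega_res**2 / alpha**3   # Eq. (18)
    return omega_I2 / eta

print(ratio_decay_to_mixing(q=1e-3, alpha=0.1))             # ~ 8e2, as in Eq. (26)
```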
First, we redefine the variables as
\[\tilde{c}_{i}(t)=e^{-i\int^{t}dt^{\prime}\left\{(\Omega-\Omega_{\rm res})-i \omega_{I}^{(1)}+i\Gamma_{\rm GW}\right\}}c_{i}(t)\, \tag{27}\]
for \(i=1,2\). Then, we can rewrite Eqs.(24) and (25) as
\[i\frac{d\tilde{c}_{i}}{dt}=\sum_{j=1,2}\tilde{\mathcal{H}}_{ij}\tilde{c}_{j}\,\quad \tilde{\mathcal{H}}=\begin{pmatrix}0&\eta(t)\\ \eta(t)&\Delta(t)+i\Gamma(t)\end{pmatrix}\, \tag{28}\]
with
\[\Delta(t) =2\left(\Omega(t)-\Omega_{\rm res}(t)\right)\, \tag{29}\] \[\Gamma(t) =\omega_{I}^{(2)}(t)-\omega_{I}^{(1)}(t)+\Gamma_{\rm GW}(t). \tag{30}\]
Redefined particle numbers \(|\tilde{c}_{i}|^{2}\) are related to \(|c_{i}|^{2}\) as
\[|\tilde{c}_{i}(t)|^{2} =e^{-2\int^{t}dt^{\prime}(\omega_{I}^{(1)}-\Gamma_{\rm GW})}|c_{i} (t)|^{2}. \tag{31}\]
Now, we write
\[\tilde{c}_{2}(t)=y(t)e^{-i\int_{-\infty}^{t}dt^{\prime}(\Delta+i\Gamma)}. \tag{32}\]
Substituting this into Eq.(28), we have
\[\frac{dy}{dt}=-i\eta\tilde{c}_{1}e^{i\int_{-\infty}^{t}dt^{\prime}(\Delta+i \Gamma)}. \tag{33}\]
By integrating this, we formally obtain
\[y(t)=-i\int_{-\infty}^{t}dt^{\prime}\ \eta\tilde{c}_{1}e^{i\int_{-\infty}^{t^{ \prime}}dt^{\prime\prime}(\Delta+i\Gamma)}. \tag{34}\]
If \(|\Delta+i\Gamma|\gg\eta\), we can assume that the change rate of \(\tilde{c}_{1}(t)\) is much slower than \(|\Delta+i\Gamma|\). Then, we can carry out repeated integration by parts of the integral in Eq. (34), to obtain an expansion in the inverse power of \(|\Delta+i\Gamma|\). At the leading order of this expansion, we have
\[y(t)=-\frac{\eta}{\Delta+i\Gamma}\tilde{c}_{1}e^{i\int_{-\infty}^{t}dt^{\prime }(\Delta+i\Gamma)}. \tag{35}\]
Then, substituting this expression for \(y(t)\) into \(\tilde{c}_{2}\) in the equation for \(d\tilde{c}_{1}/dt\), Eq. (28), and integrating it, we obtain
\[\tilde{c}_{1}(t)=\exp\left(i\int_{-\infty}^{t}dt^{\prime}\frac{\eta^{2}}{ \Delta+i\Gamma}\right). \tag{36}\]
From the above expressions, we find that the change rate of the amplitude \(\tilde{c}_{1}\) is much smaller than \(|\Gamma|\). Thus, from Eq. (26), the assumed conditions are all satisfied. Finally, we can write the redefined particle number for each mode as
\[|\tilde{c}_{1}(t)|^{2} =\exp\left(2\int_{-\infty}^{t}dt^{\prime}\frac{\Gamma\eta^{2}}{ \Delta^{2}+\Gamma^{2}}\right)\, \tag{37}\] \[|\tilde{c}_{2}(t)|^{2} =\frac{\eta^{2}}{\Delta^{2}+\Gamma^{2}}|\tilde{c}_{1}(t)|^{2}. \tag{38}\]
Under this approximation, the equations that we need to solve are Eqs. (19), (20), (23), and
\[\frac{dn_{1}}{dt}=2\omega_{I}^{(1)}n_{1}+\frac{2\Gamma\eta^{2}}{\Delta^{2}+ \Gamma^{2}}n_{1}-\frac{1}{M_{\rm c,0}}\frac{dE_{\rm GW}}{dt}\, \tag{39}\]
with
\[n_{2}=\frac{\eta^{2}}{\Delta^{2}+\Gamma^{2}}n_{1}. \tag{40}\]
The last term of Eq. (39) comes from the \(i\Gamma_{\rm GW}\) term in the exponential of Eq. (31), and can be identified with the right-hand side of Eq. (15). In practical calculations, \(\Gamma_{\rm GW}\) is so small compared to \(|\omega_{I}^{(2)}|\) that we can neglect it in \(\Gamma\) (Eq. (30)). We also neglect the time derivative of \(\eta\) and the higher-order terms in \(|\Delta+i\Gamma|^{-1}\). Now, the set of variables to be solved is \(\{M,J,\Omega,n_{1}\}\), and we can easily solve the equations numerically for a wide range of parameters.
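As a consistency check of the reduced equations, integrating only the resonant-loss term of Eq. (39) across a linear frequency sweep with constant \(\eta\) and \(\Gamma\) reproduces the survival probability \(\exp(-\pi\eta^{2}/\gamma)\) quoted in Sec. IV; the numbers in the sketch below are illustrative, not physical.

```python
# A small consistency check (illustrative numbers, not physical values): with
# Delta = 2*(Omega - Omega_res) = 2*gamma*t, constant eta, and Gamma < 0, the
# resonant-loss term of Eq. (39) integrates to ln(n_f/n_i) = -pi*eta^2/gamma,
# i.e., the survival probability exp(-pi*eta^2/gamma) of Ref. [29].
import numpy as np
from scipy.integrate import quad

def log_survival(eta, gamma, Gamma):
    integrand = lambda t: 2.0 * Gamma * eta**2 / ((2.0 * gamma * t)**2 + Gamma**2)
    val, _ = quad(integrand, -np.inf, np.inf)
    return val

eta, gamma, Gamma = 0.5, 1.0, -30.0        # satisfies |Gamma| >> eta, cf. Eq. (26)
print(log_survival(eta, gamma, Gamma), -np.pi * eta**2 / gamma)   # both ~ -0.785
```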
## IV Results
In this section, we show the evolution of the system obtained by solving the equations formulated in the preceding section. In addition, we discuss their implications for observable signatures.

Figure 2: Evolution of the normalized particle number of the first mode \(n_{1}(t)\) (left) and the orbital frequency \(\Omega(t)\) (right) around the resonance frequency for \(q=10^{-4}\), \(\alpha_{0}=0.1\) and \(M_{\rm c,0}=10^{-3}M_{0}\). Blue solid lines show the results of solving all equations, and orange dashed lines show the results without taking into account the backreaction to the orbital motion and the mass and spin of the central BH. The green dashed line, in the left panel, shows the evolution of \(n_{1}(t)\) considering only the effect of the GW emission.

Figure 3: Evolution of the central BH mass \(M(t)\) (left), the angular momentum \(J(t)\) (middle) and the deviation from the critical spin \(\chi(t)-\chi_{\rm crit}\) (right) for \(q=10^{-4}\), \(\alpha_{0}=0.1\) and \(M_{\rm c,0}=10^{-3}M_{0}\). Due to the absorption of particles belonging to the primary cloud, the mass and the angular momentum of the BH increase slightly, but the BH spin parameter remains slightly below the threshold value of the superradiance condition.
### Time evolution
We first discuss the initial conditions. To form a somewhat large cloud, the BH must have a large spin when it is formed. However, the growth timescale of the cloud is much shorter than the timescale of the binary evolution, and hence the BH spin will be quickly reduced to the threshold value for the superradiance of the dominant cloud. Thus, we set the initial BH spin to the threshold value, \(J_{0}=a_{\rm crit}M_{0}\). Also, the choice of the initial time is not trivial because of the decay of the cloud through the GW emission. From the analysis of the simplified toy model in Appendix B, we can estimate the "start time" at which the tidal field begins to be relevant as
\[t\sim-\left(1+\frac{\eta^{2}}{\gamma}\right)\frac{|\omega_{I}^{(2)}|}{2\gamma} \equiv-t_{s}\,. \tag{41}\]
We adopt \(t_{0}=-30t_{s}\) evaluated with \(\alpha=\alpha_{0}\), \(\Omega=\Omega_{0}\) and \(a=a_{\rm crit}\) as the initial time.
Now, the initial condition of this system is parameterized by \(\{q,\alpha_{0},M_{c,0}\}\). First, let us discuss the results for the fiducial set of parameters: \(\{q=10^{-4},\alpha_{0}=0.1,M_{c,0}=10^{-3}M_{0}\}\). The time evolution of the normalized particle number of the primary cloud \(n_{1}(t)\) and that of the binary's orbital frequency \(\Omega(t)\) are shown in Fig. 2. Before reaching the resonance frequency, the particle number decreases mainly through the GW emission. However, since the resonance band is widened due to the presence of rapid decay of the secondary mode, characterized by \(\omega_{I}^{(2)}\), the orbital frequency is slightly modified by the effect of transition, even in this stage.
Then, when the orbital frequency gets close to the resonance frequency, the tidal interaction works more efficiently. The particles in the first mode are transferred to the second mode, and the number \(n_{1}\) decreases dramatically. With the transition, the angular momentum of the cloud is transferred to the binary orbital motion, and the orbital frequency stagnates around the resonance frequency. Here, we should note that, because of this stagnation, the duration to pass through the resonance band becomes much longer and the net transition rate is much larger than the case when the backreaction is neglected.
After the resonance, the particle number is exponentially reduced owing to the backreaction to the central BH shown in Fig. 3. Let us explain the reason why it can give such a large influence on the cloud decay after the resonance. Initially, the superradiance condition of the primary cloud is saturated, _i.e._, \(\omega_{I}^{(1)}=0\). However, once even a small number of particles are transferred to the second mode, which has an angular momentum in the opposite direction to the central BH spin, and is absorbed by the BH, the BH spin decreases slightly. Then, the first mode becomes a non-superradiant mode, and the particles belonging to the primary cloud also begin to be absorbed by the BH. Thus, the BH mass and angular momentum gradually increase maintaining the spin parameter slightly below the threshold value until the resonant transition becomes more efficient.
At around the peak of the resonance, the particle number of the second mode increases, and the flux to the BH of the second mode with negative angular momentum dominates that of the first mode with a positive spin. After passing the resonance frequency, the flux of the first mode dominates again, but at that time there are not enough particles left to spin-up the BH beyond the superradiance threshold. As a result, the BH spin settles to a value slightly below the threshold for the first mode to be superradiant. Although the deviation from the critical spin is tiny, \(|\omega_{I}^{(1)}|\) is sufficiently large to eliminate the cloud within the timescale of the binary inspiral.
In summary, the cloud, first, dissipates through the GW emission. Then, the particle number of the first mode decreases dramatically with the resonant transition, and the transferred particles to the second mode are absorbed by the BH immediately. After that, the primary cloud decreases exponentially due to the BH spin-down below the superradiance threshold.
Figure 4: Dependence of the evolution of the cloud on the gravitational fine structure constant \(\alpha_{0}\). Each line shows the evolution of the cloud mass for \(q=10^{-4}\), \(M_{c,0}=10^{-3}M_{0}\) and various \(\alpha_{0}\). The cloud mass at a late epoch monotonically increases, as \(\alpha_{0}\) increases.
Figure 5: Dependence of the cloud on the mass ratio \(q\). Each line shows the evolution of the cloud mass for \(\alpha_{0}=0.1\), \(M_{c,0}=10^{-3}M_{0}\) and various \(q\). As \(q\) becomes smaller, the timescale of the binary evolution becomes longer, and thus the decay due to the GW emission becomes dominant.
We also show the parameter dependence of this system. In Fig. 4 and Fig. 5, we show the evolution of the particle number for the same parameters but varying \(\alpha_{0}\) and \(q\), respectively. If we neglect the backreaction and GW emission, and approximate the binary orbital frequency evolution by a linear function of \(t\), the survival probability of the primary cloud is analytically evaluated as \(\exp(-\pi\eta^{2}/\gamma)\) [29] (footnote 6). This means that the efficiency of the tidal effect is determined by the product of the amplitude of the tidal perturbation \(\eta\) and the timescale passing through the resonance band \(\eta/\gamma\). This measure of the tidal effect \(\eta^{2}/\gamma\) is proportional to \(q\alpha_{0}^{-11/3}\) for \(q\ll 1\). Thus, the cloud mass after the resonance becomes tiny when \(\alpha_{0}\) is small and \(q\) is somewhat large.
Footnote 6: Surprisingly, this result is not changed by the presence of \(\omega_{I}^{(2)}\).
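As a rough numerical illustration of this scaling, the sketch below evaluates the survival probability \(\exp(-\pi\eta^{2}/\gamma)\) for a few parameter choices. Only the proportionality \(\eta^{2}/\gamma\propto q\,\alpha_{0}^{-11/3}\) is taken from the text; the normalization constant `K` is a purely hypothetical placeholder, so the numbers indicate trends only.

```python
import numpy as np

K = 1.0e2  # hypothetical normalization of eta^2/gamma; NOT a value from the paper

def survival_probability(q, alpha0):
    """Landau-Zener-like survival probability exp(-pi * eta^2 / gamma),
    with eta^2/gamma ~ K * q * (alpha0/0.1)^(-11/3) as quoted in the text."""
    eta2_over_gamma = K * q * (alpha0 / 0.1) ** (-11.0 / 3.0)
    return np.exp(-np.pi * eta2_over_gamma)

for alpha0 in (0.05, 0.1, 0.2):
    for q in (1e-5, 1e-4, 1e-3):
        p = survival_probability(q, alpha0)
        print(f"alpha0={alpha0:.2f}, q={q:.0e} -> survival ~ {p:.2e}")
```

Small \(\alpha_{0}\) and large \(q\) give a vanishing survival probability, consistent with the trends seen in Figs. 4 and 5.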
### Initial and final cloud mass
In terms of observation, it is interesting to clarify how much of the cloud can remain after the satellite passes through the resonance frequency. The evolution of this system and the fate of the cloud also depend on the initial cloud mass. If there are no processes that prevent the cloud's growth and the BH has nearly extremal spin when it forms, the cloud mass can be estimated as \(\sim\alpha M\)[18]. In reality, however, there can be other dissipation processes besides GW emission, such as dissipation due to axion's self-interaction [42; 45]. Therefore, it is worth discussing the dependence on the initial cloud mass.
We find that the initial value of the cloud mass that maximizes the final cloud mass is mainly determined by the value of the BH spin after the resonance. It can be classified into two cases, which we describe below. We show examples of the cloud mass and BH spin evolution for initial cloud masses from \(10^{-10}M_{0}\) to \(10^{-1}M_{0}\) in Fig. 6 (case 1) and Fig. 7 (case 2). Here, we take the final time as \(\tau_{\rm bin}/4\), where \(\tau_{\rm bin}=\Omega_{0}/\gamma\) is the timescale of the binary evolution (see also Appendix A).
In case 1 (Fig. 6), for a large initial cloud mass, the cloud mass decreases exponentially due to the BH spin-down. In this case, since the transition rate is large, there are not enough particles left to spin-up the BH after the resonance. On the other hand, for a somewhat small initial cloud mass, the particle number is too small to spin-down the BH efficiently from the beginning. In this case, the absorption to the BH can be neglected, and the final mass is determined only by the transition due to the tidal interaction.
Figure 7: Case 2. The same plot as Fig. 6, but for \(q=10^{-4}\). In this case, since the transition rate is not large, there is enough particle number left to spin-up the BH after the transition for a large initial cloud mass. Thus, the cloud mass does not decrease at the late epoch, and the largest initial cloud mass gives the largest final cloud mass.
Figure 6: Case 1. Evolution of the cloud mass (top) and the BH spin (below) for \(\alpha_{0}=0.2\), \(q=10^{-3}\) and various initial cloud mass \(M_{c,0}\). Black dotted line in the below panel shows the minimum value of the spin estimated in Sec. IV.3. In this case, the particle number after the transition is too small to spin-up the BH after the transition for a large initial cloud mass. The small initial cloud mass such that the BH spin-down is negligible gives the largest final cloud mass.
Thus, the case with such a small initial cloud mass gives the maximum final cloud mass, for example, \(M_{\rm c,0}=10^{-9}M_{0}\) in Fig. 6.
In case 2 (Fig. 7), for the largest initial cloud mass (\(M_{\rm c,0}=10^{-1}M_{0}\)), the cloud mass does not decrease at the late epoch on this timescale. This is because the transition rate is small and there are enough particles left to spin the BH up to almost the threshold value of the superradiance condition after the resonance. Thus, in this case, the largest initial cloud mass simply gives the maximum final cloud mass.
We summarize the possible maximum final mass \(M_{\rm c,fin}\) of the cloud after the resonance in the parameter space \((\alpha_{0},q)\) in Fig. 8. We take \(10^{-1}M_{0}\) as the largest initial cloud mass, and contours below \(10^{-15}M_{0}\) are not shown. The area above the red boundary belongs to case 1, and the area below it belongs to case 2. In the case-1 region, the final cloud mass is mainly determined by the transition rate, _i.e._, the strength of the tidal interaction characterized by \(\eta^{2}/\gamma\). Thus, for small \(\alpha_{0}\) and somewhat large \(q\), the cloud hardly remains. In the case-2 region, the final cloud mass is mainly determined by the GW emission. For small \(\alpha_{0}\) and \(q\), the timescale of the binary evolution becomes large, and thus the cloud already has a small mass by the time the orbital frequency reaches the resonance. As a result, we find that the largest final mass of the cloud is \(\sim 10^{-5}M_{0}\), which is achieved at \(\alpha_{0}\gtrsim 0.2\) and \(q\sim 10^{-3}\).
### BH spin-down
In this subsection, we discuss the impact on the statistical distribution of BH spin. If axions exist, most BHs which experienced sufficiently large spin-up in the past are expected to remain at the critical spin corresponding to the threshold for the superradiance (Eq. (7)). Such an accumulation in the spin distribution can be an observational signature of the existence of axions [11; 12]. However, as we saw in the preceding subsections, axions transferred to the mode with \(m=-1\) by the tidal interaction make the BH spin smaller than the critical spin. Then, the question is, how small can the BH spin be?
To answer it, we analyze the evolution of the BH spin parameter. From Eqs. (19) and (20), we have
\[\frac{d\chi}{dt} =-2\frac{\chi}{M}\frac{dM}{dt}+\frac{1}{M^{2}}\frac{dJ}{dt}\] \[=4\chi\frac{M_{\rm c,0}}{M}(\omega_{I}^{(1)}n_{1}+\omega_{I}^{(2 )}n_{2})\] \[\quad+\frac{2}{\alpha}\frac{M_{\rm c,0}}{M}(-\omega_{I}^{(1)}n_{1 }+\omega_{I}^{(2)}n_{2}). \tag{42}\]
For \(\chi<\chi_{\rm crit}\) (_i.e._, \(\omega_{I}^{(1,2)}<0\)), the first term on the right-hand side is always negative. Near the resonance, the flux of the second mode can be dominant, at which point the second term is also negative. On the other hand, when \(n_{2}\) decreases and the flux of the first mode becomes dominant, the second term becomes positive. Thus, we can estimate the minimum value of the BH spin parameter achieved by the reabsorption of transferred axions as \(\chi_{\rm min}\) satisfying \(d\chi/dt=0\) around the resonance.
Here, we use the approximation obtained in Eq. (40) for \(n_{2}\). In particular, near the resonance, we can write
\[n_{2}\simeq\frac{\eta^{2}}{\Gamma^{2}}n_{1}. \tag{43}\]
Figure 8: Possible maximum final mass of the cloud after the hyperfine resonance. The area above the red boundary belongs to case 1 (_e.g._, Fig. 6), and the area below it belongs to case 2 (_e.g._, Fig. 7).
Figure 9: Approximate minimum value of the spin parameter of the central BH obtained by \(d\chi/dt=0\) in Eq. (42). It gives an estimation of the upper limit of the deviation from the critical spin, which can be reached by the BH spin-down. Blue solid line shows the boundary below which the hyperfine transition is relevant as Fig. 1.
Substituting it in Eq. (42) and approximating \(M\simeq M_{0}\) and \(\alpha\simeq\alpha_{0}\), we can find the root of \(d\chi/dt=0\) numerically. In Fig. 9, we show the deviation of \(\chi_{\rm min}\) obtained in this way from the critical spin \(\chi_{\rm crit}\) for the parameter space \((\alpha_{0},q)\). However, it is important to stress that the deviation obtained here is only an approximate upper bound. In fact, if the cloud mass is too small, \(\chi_{\rm crit}(d\chi/dt)^{-1}\) can become larger than the timescale of binary evolution as the cloud mass decreases. In that case, the BH spin-down stops before reaching \(\chi_{\rm min}\). In Fig. 6 and Fig. 7, we show the evolution of the BH spin, with the dotted line corresponding to \(\chi_{\rm min}\). When the cloud mass is somewhat large, the BH spin can only go down to about \(\chi_{\rm min}\) at most. On the other hand, if the cloud mass is too small, the BH spin-down terminates before reaching \(\chi_{\rm min}\), and the absorption to the BH is negligible. In particular, although it seems from Fig. 9 that the deviation of \(\chi_{\rm min}\) from the critical spin for large \(\alpha_{0}\) and small \(q\) can be \(\mathcal{O}(0.1)\), in that region the timescale of the binary evolution becomes small and there is not enough time to spin-down the BH. Therefore, although the spin-down due to the absorption can be sufficiently large to deplete the cloud, it would not affect the constraints on axions from the BH spin measurements.
### Modification of the orbital frequency
Next, we discuss the modification of the GW frequency evolution at around the resonance. The GW frequency at which resonance occurs is given by [28]
\[f_{\rm res}=\frac{\Omega_{0}}{\pi}=2.2\ {\rm mHz}\ \frac{1}{1+4\alpha_{0}^{2}} \left(\frac{\alpha_{0}}{0.1}\right)^{7}\left(\frac{10M_{\odot}}{M}\right). \tag{44}\]
For typical binary systems with a supermassive BH having an extreme mass ratio companion, the resonance frequency is too low to detect. However, GWs at around the resonance frequency from an intermediate mass BH accompanied by a stellar mass or an even smaller mass exotic compact object could be observed by space-based GW detectors, such as LISA.
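As a quick numerical check of Eq. (44), the following sketch evaluates the resonance frequency for two representative systems; the parameter choices are illustrative only.

```python
def f_res_mHz(alpha0, M_in_Msun):
    """Resonance GW frequency in mHz, evaluating Eq. (44)."""
    return 2.2 / (1.0 + 4.0 * alpha0 ** 2) * (alpha0 / 0.1) ** 7 * (10.0 / M_in_Msun)

# A 10 Msun BH with alpha0 = 0.1 resonates near 2 mHz;
# an intermediate-mass BH of 1000 Msun with alpha0 = 0.2 sits near 2.4 mHz,
# both within the band targeted by space-based detectors such as LISA.
for alpha0, M in [(0.1, 10.0), (0.2, 1000.0)]:
    print(f"alpha0={alpha0}, M={M:.0f} Msun -> f_res ~ {f_res_mHz(alpha0, M):.2f} mHz")
```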
Around the resonance frequency, the orbital frequency stagnates due to the angular momentum transfer associated with the transition. This backreaction effect also causes the delay of the rapid decrease of the cloud and enhances the transition rate. In Fig. 10, we show the evolution of the cloud mass and the orbital frequency for \(\alpha_{0}=0.1,q=10^{-5}\) and various initial cloud mass. When the cloud mass is large enough, this backreaction greatly affects the evolution. We can estimate the threshold value of the cloud mass before the transition for the backreaction to work effectively from Eq. (23). For simplicity, neglecting the GW emission from the cloud and considering only the primary cloud, the orbital evolution around the resonance is approximated as
\[\frac{d\Omega}{dt}\simeq\gamma+R\,\frac{\Omega_{0}}{M_{0}^{2}} \frac{dJ_{\rm c}^{(1)}}{dt}. \tag{45}\]
Here, from Eq. (39), the time derivative of \(J_{\rm c}^{(1)}\) is given by
\[\frac{dJ_{\rm c}^{(1)}}{dt}\simeq\frac{M_{\rm c}}{\mu}\frac{2\eta ^{2}}{\omega_{I}^{(2)}}. \tag{46}\]
For the orbital frequency to stagnate, in the right-hand side of Eq. (45), the first term \(\gamma\) (GW radiation reaction) and the second term must be comparable. Thus, we can estimate the threshold value of the cloud mass required for the backreaction to work by equating these terms. We denote it as \(M_{\rm c,float}\), and it is given as
\[M_{\rm c,float} = \frac{\gamma|\omega_{I}^{(2)}|\alpha}{2R\Omega_{0}\eta^{2}}M \tag{47}\] \[\simeq 9.5\times 10^{-8}M_{0}\,(1+q)^{4/3}\left(\frac{\alpha_{0}}{0.1}\right)^{16/3}\.\]
Therefore, even with a small mass of the cloud, we can expect that this modification can be a clear signature of the presence of an axion condensate.
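To get a feel for the numbers in Eq. (47), a minimal sketch evaluating the threshold cloud mass (in units of \(M_{0}\)) follows; the parameter values are illustrative.

```python
def M_c_float_over_M0(alpha0, q):
    """Threshold cloud mass for the orbital frequency to stagnate, Eq. (47),
    in units of the central BH mass M0."""
    return 9.5e-8 * (1.0 + q) ** (4.0 / 3.0) * (alpha0 / 0.1) ** (16.0 / 3.0)

print(M_c_float_over_M0(0.1, 1e-5))   # ~ 9.5e-8
print(M_c_float_over_M0(0.2, 1e-3))   # ~ 3.8e-6
```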
Unfortunately, the timescale of the binary evolution \(\tau_{\rm bin}\) is typically much longer than the observation time (\(\lesssim 10\) yr). At first glance, it seems difficult to resolve the degeneracy with the uncertainties in the chirp mass and the mass ratio by observing the time derivatives of the GW frequency \(\dot{f}\) and \(\ddot{f}\). However, we point out that \(f\ddot{f}/\dot{f}^{2}\) can be a good indicator of deviation from clean binaries. If the binary system is clean and the mass ratio is sufficiently small, \(q\ll 1\), this non-dimensional quantity becomes a model-independent constant, _i.e._, \(f\ddot{f}/\dot{f}^{2}=11/3\) in the early stage of the inspiral.
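For reference, the value \(11/3\) follows directly from the leading-order (quadrupole-driven) chirp relation \(\dot{f}\propto f^{11/3}\) for a clean quasi-circular inspiral:

\[\dot{f}\propto f^{11/3}\quad\Rightarrow\quad\ddot{f}=\frac{11}{3}\,\frac{\dot{f}^{2}}{f}\quad\Rightarrow\quad\frac{f\ddot{f}}{\dot{f}^{2}}=\frac{11}{3}\,.\]

Any persistent deviation of this combination from \(11/3\) therefore signals an additional torque on the orbit, such as the angular momentum transfer discussed here.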
It would be natural to assume that the cloud mass is bounded from above by the mass where the GW emission timescale \(\tau_{\rm GW}\) equals the timescale of the binary evolution \(\tau_{\rm bin}\) (see Appendix. A). Then, around the resonance, the cloud mass, reduced only by the GW emission, is bounded by
\[M_{\rm c,GW} = \frac{M^{2}}{\tau_{\rm bin}}\frac{\alpha^{-14}}{C} \tag{48}\] \[\simeq 8.9\times 10^{-4}M_{0}\frac{q}{(1+q)^{1/3}}\left(\frac{ \alpha_{0}}{0.1}\right)^{14/3}\.\]
In Fig. 11, we show the value of the indicator \(f\ddot{f}/\dot{f}^{2}\) in the presence of a cloud with \(M_{\rm c,0}=M_{\rm c,GW}\), for \(\alpha_{0}=0.1\) and \(\alpha_{0}=0.2\). They show that the deviation from clean binaries can be larger than \(\mathcal{O}(1)\), even if the axion cloud has only a tiny fraction of the mass of the central BH. While the \(q\) dependence of the indicator for the same cloud mass is weak (footnote 7), \(M_{\rm c,GW}\) is approximately linearly proportional to \(q\). Thus, when the mass ratio \(q\) is too small, the effect of the angular momentum transfer due to the tidal interaction also becomes small.
Footnote 7: In Eq. (23), the main contribution to the square brackets in the second term of the right-hand side is \(j_{\rm c}^{(1)}\). From Eq. (39), \(j_{\rm c}^{(1)}\) is roughly proportional to \(q^{2}\) around the resonance. Thus, \(\Omega\simeq\mathcal{O}(q)\). One can find that \(\dot{\Omega}\simeq\mathcal{O}(q^{2})\) by differentiating Eq. (23).
## V Summary and discussion
In this paper, we have investigated the evolution of inspiralling binary systems accompanying an axion cloud before and after the orbital frequency crosses the hyperfine resonance frequency, focusing on small mass ratio (\(q\ll 1\)) cases. Our main interest is how the hyperfine level transition proceeds and affects the observational signatures. From the comparison of timescales, we found it necessary to take into account the following components; the decaying process of the axion in the destination mode of the hyperfine transition (imaginary part of the eigenfrequency), the GW emission from the cloud, and the backreaction to the orbital motion and that to the mass and spin of the central BH. We presented a formulation to examine the evolution of the cloud, the central BH, and the orbital motion including all these effects. In particular, carrying out the adiabatic elimination of the degree of freedom of the amplitude of the second mode allows us to examine a wide parameter region numerically, and gives useful expressions for analyzing the behavior of the system.
Our results show that the cloud mass is typically significantly reduced by the GW emission before the resonant transition occurs. If \(q\) is sufficiently large or \(\alpha\) is sufficiently small, axions in the \(m=1\) fastest growing mode are almost completely transferred to the \(m=-1\) mode, which has angular momentum in the opposite direction to the BH spin and is easily absorbed by the BH. Then, the primary cloud becomes non-superradiant and can fall into the BH, which results in the increase of the BH angular momentum, counter-intuitively. However, the increase of the BH mass dominates to keep the first mode non-superradiant. As a result, the cloud almost completely disappears by the absorption to the BH. On the other hand, if \(q\) is extremely small or \(\alpha\) is sufficiently large, the transition rate due to tidal interaction is small. In such cases, since there are enough particles left to spin-up the BH again after the transition, the absorption to the BH at the late epoch can be neglected, and the cloud does not disappear completely. However, it dissipates
Figure 11: Indicator in the GW frequency of the presence of the cloud, \(f\ddot{f}/\dot{f}^{2}\), around the resonance \(t=0\) for various \(q\), \(\alpha_{0}=0.1\) (left) and \(\alpha_{0}=0.2\) (right). The initial cloud mass is set where the timescale of the GW emission and the binary evolution are equal, _i.e._, \(M_{c,0}=M_{c,\mathrm{GW}}\) in Eq. (48). If the binary system is clean, \(f\ddot{f}/\dot{f}^{2}=11/3\) model-independently. Around the resonance, this quantity can be largely changed with the level transition of the cloud.
Figure 10: Evolution of the cloud mass (left) and the orbital frequency (right) for \(\alpha_{0}=0.1,q=10^{-5}\) and various initial cloud mass \(M_{c,0}\). Black dotted line in the left panel shows the threshold value of the cloud mass required for the backreaction to work effectively, obtained in Eq. (47). Black dashed line in the right panel shows the evolution of the orbital frequency in the clean binary.
mainly owing to the GW emission before the transition, and the maximum mass of the cloud that can remain after the resonance is \(\sim 10^{-5}M_{0}\) at most. How much of the axion cloud can remain after the resonance might have implications for surveys of the cloud as an environment around the BH, such as [23; 24; 26].
We also discussed the implications for observational signatures. First, we confirmed that the time variation of the BH spin around the transition is tiny, although this tiny variation can be important to determine the evolution of the cloud. This result makes the constraint on the existence of an axion field, obtained through the BH parameter distribution measured by GWs from binary systems, robust. Second, we studied the influence of the transition on the inspiral GW waveform. We found that even for extremely small cloud mass, the backreaction to the orbital motion works effectively, and the frequency stagnates around the resonance frequency. In particular, the combination \(f\ddot{f}/\dot{f}^{2}\) is affected by the transition to a detectable level. Therefore, for example, the GWs from an intermediate mass BH associated with a small mass satellite can be a good target for the axion search. We need more extensive analysis to conclude the observability of axion clouds with the modification of the waveform. Furthermore, the generalization of the inspiral orbit and the discrimination from other environmental effects would be important. We leave them as future work.
###### Acknowledgements.
T. Takahashi was supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2123. TT is supported by JSPS KAKENHI Grant Number JP17H06358 (and also JP17H06357), _A01: Testing gravity theories using gravitational waves_, as a part of the innovative research area, "Gravitational wave physics and astronomy: Genesis", and also by JP20K03928. HO is supported by Grant-in-Aid for JSPS Fellows JP22J14159.
## Appendix A Timescales
In this appendix, we summarize the timescales involved in our problem.
**Binary evolution:**:
The timescale of the binary evolution due to the GW radiation at the resonance frequency \(\Omega_{0}\) is given by
\[\tau_{\rm bin}=\frac{\Omega_{0}}{\gamma}=\frac{5}{96}M\frac{(1+q)^{1/3}}{q}(M \Omega_{0})^{-8/3}\, \tag{10}\]
where \(\gamma\) is defined by Eq. (22).
**Transition:**:
The resonance bandwidth can be estimated as \(\Delta\Omega\sim 2\eta\). Hence, if one can neglect the instability of the mode of the transition destination and linearize the orbital evolution, the timescale for passing through the resonance band is given by
\[\tau_{\rm trans}=\frac{2\eta}{\gamma}. \tag{11}\]
**Decay of the secondary cloud:**:
The secondary cloud decreases as \(\sim e^{-2|\omega_{I}^{(2)}|t}\), and the timescale is given by
\[\tau_{\rm inst}=|\omega_{I}^{(2)}|^{-1}. \tag{12}\]
**GW emission of the primary cloud:**:
From the energy conservation \(\dot{M}_{\rm c}=-\dot{E}_{\rm GW}\) (see Eq. (14)), one can obtain
\[M_{\rm c}(t)=\frac{M_{\rm c,0}}{1+(t-t_{0})/\tau_{\rm GW}}. \tag{13}\]
Here, the timescale is given by
\[\tau_{\rm GW}=\frac{1}{C}\frac{M^{2}}{M_{\rm c,0}}\alpha^{-14}. \tag{14}\]
Parameter dependencies of the timescales mentioned above are summarized in Fig. 12 and Table 1.
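For convenience, the scalings collected in Table 1 can be evaluated directly; the following sketch simply transcribes those expressions (all times in seconds, \(M\) in solar masses), with illustrative parameter values.

```python
def tau_bin(M, q, chi, alpha):
    """Binary evolution timescale at the resonance, from Table 1 (seconds)."""
    return 2.2e13 * (1 + q) ** (1 / 3) / q * M * (chi / 0.4) ** (-8 / 3) * (alpha / 0.1) ** (-16)

def tau_trans(M, q, chi, alpha):
    """Timescale to pass through the hyperfine resonance band, from Table 1."""
    return 1.3e10 / (1 + q) ** (2 / 3) * M * (chi / 0.4) ** (-5 / 3) * (alpha / 0.1) ** (-13)

def tau_inst(M, chi, alpha):
    """Decay timescale of the secondary (|21-1>) mode, from Table 1."""
    return 5.9e5 * M * (chi / 0.4) ** (-1) * (alpha / 0.1) ** (-9)

def tau_GW(M, Mc0_over_M, alpha):
    """GW-emission timescale of the primary (|211>) cloud, from Table 1."""
    return 2.0e11 * M * (Mc0_over_M / 0.1) ** (-1) * (alpha / 0.1) ** (-14)

# Example: M = 1 Msun, q = 1e-4, chi = 0.4, alpha = 0.1, M_c,0 = 1e-3 M
print(tau_bin(1, 1e-4, 0.4, 0.1), tau_trans(1, 1e-4, 0.4, 0.1),
      tau_inst(1, 0.4, 0.1), tau_GW(1, 1e-3, 0.1))
```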
## Appendix B Toy model for adiabatic elimination
In this appendix, we discuss the approximation used in Sec. III.3 with a simplified toy model. Consider the two-level transition described by the Schrödinger equation
\[i\frac{d}{dt}\begin{pmatrix}c_{1}\\ c_{2}\end{pmatrix}=\begin{pmatrix}0&\eta\\ \eta&\Delta(t)-i\omega_{I}\end{pmatrix}\begin{pmatrix}c_{1}\\ c_{2}\end{pmatrix}. \tag{15}\]
Let \(\eta\) and \(\omega_{I}\) be constants and \(\Delta(t)=2\gamma t\) (\(\gamma\) is constant). This model is a simplification of the problem we investigate, ignoring all backreactions and the GW emission, and linearizing the binary evolution. If \(\omega_{I}=0\), this model is known as the Landau–Zener problem [29; 53; 54]. Now, we want to study the level transition to the decaying mode (\(\omega_{I}>0\)). For this problem, we have an exact analytic solution with the initial conditions \(c_{1}(-\infty)=1\) and \(c_{2}(-\infty)=0\) as [55; 56]
\[|c_{1}(t)|^{2} = e^{-\omega_{I}t-\frac{\pi}{2}\frac{\eta^{2}}{2\gamma}}\left|D_{i \eta^{2}/2\gamma}\left(e^{i\frac{3\pi}{4}}(\sqrt{2\gamma}t-i\omega_{I}/\sqrt{2 \gamma})\right)\right|^{2}\, \tag{30}\] \[|c_{2}(t)|^{2} = e^{-\omega_{I}t-\frac{\pi}{4}\frac{\eta^{2}}{2\gamma}}\frac{\eta ^{2}}{2\gamma}\left|D_{i\eta^{2}/2\gamma-1}\left(e^{i\frac{3\pi}{4}}(\sqrt{2 \gamma}t-i\omega_{I}/\sqrt{2\gamma})\right)\right|^{2}\, \tag{31}\]
where \(D_{\nu}(z)\) is the parabolic cylinder function.
Carrying out the adiabatic elimination as Sec. III.3, we obtain the approximate solution for the particle number as
\[|c_{1}(t)|^{2} \simeq \exp\left(-2\int_{-\infty}^{t}dt^{\prime}\frac{\omega_{I}\eta^{2}}{4 \gamma^{2}t^{\prime 2}+\omega_{I}^{2}}\right) \tag{32}\] \[= \exp\left[-\frac{\eta^{2}}{\gamma}\left(\arctan\frac{2\gamma t}{ \omega_{I}}+\frac{\pi}{2}\right)\right]\,\]
and
\[|c_{2}(t)|^{2}\!\simeq\frac{\eta^{2}}{4\gamma^{2}t^{2}+\omega_{I}^{2}}|c_{1}( t)|^{2}. \tag{33}\]
In Fig. 13, we compare the approximate solution obtained by the adiabatic elimination with the exact one. As one can confirm from the figure, the two solutions agree quite well when \(\omega_{I}/\eta\) is sufficiently large.
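The agreement can also be checked numerically. The sketch below integrates the toy-model Schrödinger equation directly and compares the final survival probability \(|c_{1}|^{2}\) with the adiabatic-elimination formula of Eq. (32); the parameter values (and the finite starting time, which stands in for \(t=-\infty\)) are illustrative choices only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model: i d/dt (c1, c2) = [[0, eta], [eta, 2*gamma*t - i*omega_I]] (c1, c2)
eta, gamma, omega_I = 0.3, 0.1, 3.0   # illustrative values with omega_I/eta >> 1
t0, t1 = -500.0, 500.0                # finite stand-in for t = -infinity

def rhs(t, y):
    c1, c2 = y[0] + 1j * y[1], y[2] + 1j * y[3]
    dc1 = -1j * eta * c2
    dc2 = -1j * (eta * c1 + (2.0 * gamma * t - 1j * omega_I) * c2)
    return [dc1.real, dc1.imag, dc2.real, dc2.imag]

sol = solve_ivp(rhs, (t0, t1), [1.0, 0.0, 0.0, 0.0], rtol=1e-6, atol=1e-9)
c1 = sol.y[0][-1] + 1j * sol.y[1][-1]

# Adiabatic-elimination approximation, Eq. (32), evaluated at t1:
approx = np.exp(-(eta**2 / gamma) * (np.arctan(2.0 * gamma * t1 / omega_I) + np.pi / 2.0))
print(abs(c1) ** 2, approx)   # agree to a few per cent; both close to exp(-pi*eta^2/gamma)
```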
We can estimate the time when the perturbation starts to work from Eq. (32). For \(|t|\!\!\gg\omega_{I}/2\gamma\) (\(t<0\)), one can expand \(|c_{1}(t)|^{2}\) with respect to \(1/|t|\) as
\[|c_{1}(t)|^{2}\!\sim\exp\left(-\frac{\eta^{2}\omega_{I}}{2\gamma^{2}|t|} \right). \tag{34}\]
If \(\eta^{2}/\gamma\gg 1\), the exponent can be \(\mathcal{O}(1)\), even for \(|t|\!\!\gg\omega_{I}/2\gamma\). In this case, the proper choice of the time for the onset of the perturbation would be \(t\sim-\eta^{2}\omega_{I}/2\gamma^{2}\). On the other hand, if \(\eta^{2}/\gamma\lesssim 1\), the exponent in Eq. (32) vanishes for \(|t|\!\!\gg\omega_{I}/2\gamma\). In this case, it is enough to choose the starting time at \(t\sim-\omega_{I}/2\gamma\). Combining them, we have the estimation of the time when the perturbation starts to work as \(t\sim-(1+\eta^{2}/\gamma)(\omega_{I}/2\gamma)\).
Figure 12: Timescales involved in the resonant transition of axion clouds in binary systems for \(\alpha=0.1\) and \(M=M_{\odot}\). Blue and yellow solid lines show the timescale of the binary evolution and the transition at the hyperfine resonance, respectively. Blue and yellow dashed lines show the same quantities, but for the typical Bohr transition (\(|211\rangle\to|31-1\rangle\)). Green and red lines show the timescales of decay of the secondary cloud (\(|21-1\rangle\)) and of the GW emission of the primary cloud (\(|211\rangle\)) for \(M_{c,0}=0.1M\), respectively.
\begin{table}
\begin{tabular}{|c|c|} \hline process & time \\ \hline Binary evolution & \(\tau_{\rm bin}=2.2\times 10^{13}\ {\rm s}\frac{\left(1+q\right)^{1/3}}{q} \left(\frac{M}{M_{\odot}}\right)\left(\frac{\chi}{0.4}\right)^{-8/3}\left( \frac{\alpha}{0.1}\right)^{-16}\) \\ \hline Transition & \(\tau_{\rm trans}=1.3\times 10^{10}\ {\rm s}\frac{1}{(1+q)^{2/3}} \left(\frac{M}{M_{\odot}}\right)\left(\frac{\chi}{0.4}\right)^{-5/3}\left( \frac{\alpha}{0.1}\right)^{-13}\) \\ \hline Decay of the secondary mode & \(\tau_{\rm inst}\simeq 5.9\times 10^{5}\ {\rm s}\left(\frac{M}{M_{\odot}} \right)\left(\frac{\chi}{0.4}\right)^{-1}\left(\frac{\alpha}{0.1}\right)^{-9}\) \\ \hline GW emission & \(\tau_{\rm GW}=2.0\times 10^{11}\ {\rm s}\left(\frac{M}{M_{\odot}} \right)\left(\frac{M_{c,0}/M}{0.1}\right)^{-1}\left(\frac{\alpha}{0.1}\right)^ {-14}\) \\ \hline \end{tabular}
\end{table}
Table 1: Timescales involved in the hyperfine resonance of axion clouds. |
2303.05208 | Geometry of Language | In this article, we present a fresh perspective on language, combining ideas
from various sources, but mixed in a new synthesis. As in the minimalist
program, the question is whether we can formulate an elegant formalism, a
universal grammar or a mechanism which explains significant aspects of the
human faculty of language, which in turn can be considered a natural
disposition for the evolution and deployment of the diverse human languages. We
describe such a mechanism, which differs from existing logical and grammatical
approaches by its geometric nature. Our main contribution is to explore the
assumption that sentence recognition takes place by forming chains of tokens
representing words, followed by matching these chains with pre-existing chains
representing grammatical word orders. The aligned chains of tokens give rise to
two- and three-dimensional complexes. The resulting model gives an alternative
presentation for subtle rules, traditionally formalized using categorial
grammar. | Loe Feijs | 2023-03-09T12:22:28Z | http://arxiv.org/abs/2303.05208v1 | # Geometry of Language
###### Abstract
In this article, we present a fresh perspective on language, combining ideas from various sources, but mixed in a new synthesis. As in the minimalist program, the question is whether we can formulate an elegant formalism, a universal grammar or a mechanism which explains significant aspects of the human faculty of language, which in turn can be considered a natural disposition for the evolution and deployment of the diverse human languages. We describe such a mechanism, which differs from existing logical and grammatical approaches by its geometric nature. Our main contribution is to explore the assumption that sentence recognition takes place by forming chains of tokens representing words, followed by matching these chains with pre-existing chains representing grammatical word orders. The aligned chains of tokens give rise to two- and three-dimensional complexes. The resulting model gives an alternative presentation for subtle rules, traditionally formalized using categorial grammar.
## 1 Introduction
The quest for a kind of fundamental understanding of language is an important intellectual undertaking. Already in antiquity, scholars understood that language is not random, but that each language is governed by a precise set of rules. Grammar books have a long history, for example, Panini's Sanskrit grammar dates to the 6th to 5th century BCE. In the early 19th century, Grimm and others discovered that language evolution is also not random but follows precise rules. Although a variety of formalisms for describing languages have been proposed, some of the fundamental questions have not been adequately answered. Key questions are: is there a unique language instinct built into the human brain, and if so, how does it work? Is there a formalism which is minimal, yet capable of describing each human language?
The introduction of production-style grammar rules by Noam Chomsky was an important step forward [1]. Later Chomsky introduced the concept of UG, universal grammar [1]. The idea is that, although different human languages have different grammars, there could be a set of structural rules, innate to humans. In the Minimalist program, the hypothesis was that there could be a kind of minimal kernel which supports the quick learning of any language offered to a child. But it was hard to identify this minimalist meta-grammar, and in more recent years Chomsky holds that perhaps the only faculty of language in the narrow sense (FLN) is a recursive computational system [1].
Another significant contribution was the introduction of type theory in the form of categorial grammars [22, 23]. The study of language analysis has been mostly driven forward by efforts to let computers do analysis work, and a large portion of present-day grammatical theory is influenced by, or geared towards, the languages of computers and mathematical logic.
In this article, while being informed by production-rules grammars and categorial grammars, we take a slightly different approach, which is new to the best of our knowledge. We acknowledge the inspiration from early writings on "denksoep" (Dutch, i.e. thinking soup) by N.G. de Bruijn (1996), next to inspiration from DNA computing [1]. We work from omnipresent phenomena of human language, trying to find simple rules about two- and three-dimensional complexes which explain the language phenomena. The author lacks the knowledge and the tools to search or check for the actual molecules and neuron networks in our brains. The result is a kind of model which explains aspects of language analysis. Focus is
on the syntactic aspects of language, and although the model contains clues and handles for dealing with meaning, we leave semantics as an option for future research.
## 2 Subject-Verb-Object Sentences
In this section, we limit ourselves to simple sentences of the subject-verb and subject-verb-object types. These correspond to the word order in English phrases and in Dutch non-complementizer phrases. Extensions to more complicated examples will be provided later, in Sections 3-7.
The most obvious phenomenon of human language is the sequential nature of its form (not necessarily of its meaning). Words are produced one after the other, both in speaking and in writing. Speech understanding thus starts from the same spoken word order, and so does reading.
The **first assumption** we adopt in our model is that the words are converted into compact tokens, each token representing the spoken occurrence of a word. Here we write them as strings, such as "cows", "eat", and "grass". Later we also depict them as coloured balls, which is convenient for modelling and showing 3D complexes. Additional tokens represent grammatical categories, such as _NP_ (noun phrase), \(V_{1}\) (non-transitive verb), \(V_{2}\) (transitive verb), _Adj_ (adjective) and \(S\) (sentence).
The **second assumption** is that tokens are connected into linear chains such that the token order in the chain corresponds to the word order of the spoken or written sentence. We write them from left to right and add hyphens to connect them, as in "cows"--"eat"--"grass". In addition to such chains of words, it is also possible to have chains which contain grammatical categories, for example, a mixed chain "cows"--\(V_{2}\)--"grass", or a completely abstract chain _NP_--\(V_{2}\)--_NP_.
The **third assumption** is that every now and then when a chain is heard and is confirmed to belong to a certain grammatical category, a token representing that category is added. We write it at the right-hand side end of the chain after an arrow sign. For example, "cows"--"eat"--"grass"\(\rightarrow\)\(S\) which represents the knowledge that _cows eat grass_ is a sentence, or "cows"\(\rightarrow\)_NP_, classifying one specific word, or _NP_--\(V_{2}\)--_NP_\(\rightarrow\)\(S\) which represents the abstract knowledge of the word order in this type of sentence. This lastly added category token is said to be the _conclusion_. Knowledge of language is represented by a set of such chains. Everyone has a different set of chains, depending on what he or she has heard and memorized (it is even possible that there are many areas or vessels within one person with different such sets, but for the time being we work with one set).
In a grammar based on production rules, such as proposed by Chomsky (1957), similar knowledge is coded by production rules, in particular the rule \(S\Rightarrow\mathit{NP}\ V_{2}\ \mathit{NP}\) for the non-terminal \(S\) and the rules for the terminal symbols \(\mathit{NP}\Rightarrow\) "cows", \(\mathit{NP}\Rightarrow\) "grass", and of course \(V_{2}\Rightarrow\) "eat".
The **fourth assumption** is that during the processing of a new sentence, the newly heard chain is matched against the pre-existing chains. The chains are aligned in the same direction (head-to-head and end-to-end). This matching works such that equal tokens attach to each other in pairs and turn the aligned chains into more complex structures we call _complexes_. The complex is _complete_ if it forms a closed mesh with precisely one conclusion dangling. This conclusion is what is picked up by the environment of the matching processes (perhaps the dangling conclusion drops off the complex and triggers the action-upon-recognition). So if an \(S\) is the only loose end of the complex, the owner of this brain knows he or she recognized a sentence.
We show the above four assumptions in action. Let the pre-existing chains be "cows" \(\rightarrow\)_NP_, "grass"\(\rightarrow\)_NP_, "eat"\(\rightarrow\)_V\({}_{2}\)_, and _NP_--_V\({}_{2}\)_--_NP_\(\rightarrow\)_S_. Next, let us assume that in this context the chain "cows"--"eat"--"grass" enters, after which the complex shown in Figure 1 forms.
The fresh chain is usually written underneath and the complex formation proceeds upwards. We allow for one exception to the rule that the chains must be aligned parallel: the conclusions of chains such as "cows"\(\rightarrow\)_NP_ are allowed to be bent upwards. As the example shows, the entire complex becomes a kind of commuting diagram with two inner loops and an outer loop, and the \(S\) is the only dangling end. This \(S\) triggers the environment, in other words, "cows"--"eat"--"grass" represents a correct sentence. Of course, we need not use all pre-existing chains. On the other hand, for each pre-existing chain we assume that enough copies are available (otherwise we could run into trouble with recursion). We show the ingredients which are sufficient for successful recognition of the sentence in Figure 2.
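To make the four assumptions concrete, here is a minimal computational sketch. It abstracts away the 3D geometry and approximates complex formation by repeatedly replacing a sub-chain that matches the body of a pre-existing chain with that chain's conclusion; a sentence is recognized when only a dangling \(S\) remains. The function and variable names are ours, purely for illustration.

```python
# Pre-existing chains as (body, conclusion) pairs, as in Figure 2.
CHAINS = [
    (("cows",), "NP"),
    (("grass",), "NP"),
    (("eat",), "V2"),
    (("NP", "V2", "NP"), "S"),
]

def recognize(tokens, target="S"):
    """Return True if the chain of tokens can be reduced to a single target token."""
    tokens = list(tokens)
    if tokens == [target]:
        return True
    for body, conclusion in CHAINS:
        n = len(body)
        for i in range(len(tokens) - n + 1):
            if tuple(tokens[i:i + n]) == body:
                if recognize(tokens[:i] + [conclusion] + tokens[i + n:], target):
                    return True
    return False

print(recognize(["cows", "eat", "grass"]))   # True: the complex closes with a dangling S
print(recognize(["grass", "cows", "eat"]))   # False: no complete complex forms
```

Adding the chains "big"\(\rightarrow\)_Adj_, "brown"\(\rightarrow\)_Adj_ and _Adj_--_NP_\(\rightarrow\)_NP_ to `CHAINS` is enough to also recognize _brown cows eat grass_, mirroring Section 3.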
We show the complexes as drawn by the molecular visualization program Molekel version 5.4.0.8 (ugovaretto.github.io), which was designed primarily for showing molecules, but whose stick-and-ball pictures also work for the complexes studied here. In Molekel we can rotate them in 3D space and see them in perspective, but in this paper, we show flattened renderings. We use white balls for raw word tokens, blue for nouns or noun phrases (_NP_), red for verbs (here _V\({}_{2}\)_), and a big yellow ball for the \(S\). We use long sticks for the horizontal links, corresponding to the temporal ordering, from left to right. We use shorter sticks for the connection between a chain and its conclusion (either horizontal or bent upwards), and no stick when equal tokens are attached to each other (vertically), see Figure 3.
Figure 1: Complex formation of _cows eat grass_.
Figure 2: Ingredients for _cows eat grass_ to be recognized as \(S\).
## 3 Leaving the 2D Plane
Now we address grammatical sub-constructs, for example, adjectives. As a classical production rule, an adjective would be expressed by a rule such as _NP_\(\Rightarrow\)_Adj NP_ for the non-terminal _NP_ and rules for terminal symbols _Adj_\(\Rightarrow\)"big", _Adj_\(\Rightarrow\)"brown", and so on. In our own model, we assume corresponding pre-existing chains "big"\(\rightarrow\)_Adj_, "brown"\(\rightarrow\)_Adj_, and _Adj_--_NP_\(\rightarrow\)_NP_. Now assume that in this context the chain "brown"--"cows"--"eat"--"grass" enters. If we analyze this in a two-dimensional format we must allow the conclusions of chains not only to be bent upward but also being stretched, see Figure 4.
But if we allow the same construction to happen in 3D space, a little bending and rotating suffices, see Figure 5. We show adjectives as grey balls. Although one could think that this freedom allows for any mix of ingredients to yield an \(S\), this is not the case. Note that there is the rule that alignment follows the original temporal ordering, in other words, each chain has a beginning and an end. In the stick-and-ball figures, this ordering is not shown, except for the fact that we use renderings where this ordering roughly follows the horizontal \(x\)-direction. In other words, the beginning of a chain is to the left, and the end is to the right. We could add arrow signs in the sticks, but for the time being, the rendering condition works fine (Figure 5).
Figure 4: _Brown cows eat grass_ recognized as \(S\).
Figure 3: _Cows eat grass_ recognised as \(S\).
## 4 Mixed Chains
It is not necessary that all words are classified into grammatical categories. There can be consolidated chains which still have fragments at a word level, such as idiomatic phrases or phrases not yet well understood. In the next example, consider sentences of the form "it's"--"okay"--"to".... In the model, let the context contain pre-existing chains including "eat"\(\rightarrow\)\(V_{2}\), "grass"\(\rightarrow\)\(NP\), and "it's"--"okay"--"to"--\(V_{2}\)--\(NP\)\(\rightarrow\)\(S\). Assume that in this context the chain "it's"--"okay"--"to"--"eat"--"grass" enters. In 2D notation, we get the complex of Figure 6.
The complex is complete since it forms a closed mesh with precisely one dangling conclusion, viz. \(S\). This is recognized as a sentence. Presenting the geometry of the complex in 3D, we see that the problem of the long sticks between the words disappears. Equal tokens attach in pairs easily, see Figure 7.
Figure 5: _Brown cows eat grass_ recognized as \(S\).
Figure 6: _It’s okay to eat grass_ recognized as \(S\).
Figure 7: _It’s okay to eat grass_ recognized as \(S\).
So far, the reader could object that the new model was only used to replicate analyses which are well within reach of traditional production rule grammars. But now we present examples where the model works with specific chains containing mostly real words, not only abstract grammatical categories, and where we show how a kind of inductive learning takes place, just by building complexes.
In the next example, the model works with a mix of grammatical categories and actual words. In the example, there is one word which does not have a grammatical category yet, but the recognition succeeds by means of analogy. Assume the following chain has been heard before and is known to be a sentence: _pigs eat beans_. Let "cows", "pigs", "eat" and "hate" be known, but assume "beans" is not. Let the context thus contain pre-existing chains "cows"\(\rightarrow\)_NP_, "pigs"\(\rightarrow\)_NP_, "eat"\(\rightarrow\)_V\({}_{2}\)_, "hate"\(\rightarrow\)_V\({}_{2}\)_ and finally "pigs"--"eat"--"beans"\(\rightarrow\)_S_. Now let us assume that, in this context, the chain "cows"--"hate"--"beans" comes in. Even without assigning _NP_ to "beans", it is induced by analogy that "beans" works as an _NP_ (it plays the same role in one context as it does in the other).
In 3D representation, the chains can be aligned without stretched sticks. Again, here is one dangling end, an \(S\), which therefore is the conclusion of the complex shown in Figure 9.
In the next example, which is also a kind of analogy, the geometry of the complex is even more exciting. Assume the following chains are heard and are confirmed as being sentences: _birds fly_, _bats fly_, and _birds sing_. Let this knowledge be memorized, which in the model means that the context contains pre-existing chains "birds"--"fly"\(\rightarrow\)_S_, "bats"--"fly"\(\rightarrow\)_S_, and "birds"--"sing"\(\rightarrow\)_S_. Next, let us assume that, in this context, the chain "bats"--"sing" enters.
Even without the introduction of formal grammatical categories such as _NP_ and _V\({}_{1}\)_ one can induce by analogy that "bats" can do (grammatically) what "birds" do and so if "birds" followed by "sing" makes an \(S\) then "bats" followed by "sing" make an \(S\) too. It is hard to do
Figure 8: _Cows hate beans_ recognized as \(S\).
Figure 9: _Cows hate beans_ recognised as \(S\).
this in the 2D plane; a best effort is shown in Figure 10.
Figure 10: _Bats sing_ recognised as \(S\) by analogy.
However, in 3D this works perfectly. Note that the \(S\) conclusions attach to each other as well. The complex is complete, see Figure 11. There is one dangling end, an \(S\), which therefore is the conclusion of the complex. In other words, "bats"--"sing" is a correct sentence (syntactically, here we do not care whether bats can really sing).
## 5 Cases
Next, we address a linguistic phenomenon which is not elegantly captured by the formalism of traditional Chomsky-style production rule grammars. It is the phenomenon of specific noun phrases which are meant for a subject position or a direct object position, but not for both. Whereas the sentence _he loves Mary_ is correct, it is not okay to use "he" in the direct object position, as in _he loves he_ (wrong). Instead, the special form "him" has to be used for that. In older languages such as Latin, Sanskrit and old versions of Dutch and German the phenomenon is part of an elaborate system of _cases_. The form "he" is said to be the first case and "him" the fourth case, or nominative case and accusative case, respectively.
If we try to do this with a production rule grammar we could have \(\mathit{NP}_{1}\Rightarrow\) "he" and \(\mathit{NP}_{4}\Rightarrow\) "him". Next to the common rule for sentence \(S\Rightarrow\mathit{NP}\ V_{2}\ \mathit{NP}\) one has to add two special versions \(S\Rightarrow\mathit{NP}_{1}\ V_{2}\ \mathit{NP}\) and \(S\Rightarrow\mathit{NP}\ V_{2}\ \mathit{NP}_{4}\). But it is ugly that _he loves him_ still is not accepted unless yet another special rule \(S\Rightarrow\mathit{NP}_{1}\ V_{2}\ \mathit{NP}_{4}\) is added.
Moortgat used grammatical categories which are similar to the types used in mathematical type theory [11, 12] and thus solved this problem. The resulting grammars are known as categorial grammars. For example, a grammatical category such as \(\mathit{NP}\backslash\mathit{S}\) is meant for words which leftwardly look for something of type \(\mathit{NP}\) and then result in something of type \(S\). This is precisely what an intransitive verb does: when there is a noun phrase left of it, the result is a sentence. Similarly, \(\mathit{S}/\mathit{NP}\) is looking to its right for something of type \(\mathit{NP}\). Bracketing occurs inside types, so for example, \((\mathit{NP}\backslash\mathit{S})/\mathit{NP}\) is a type too (it is the type of
transitive verbs). A colon is used to indicate typing, so for example "eat" : \(V_{2}\) means that "eat" has type \(V_{2}\). We present the most important rules used by Moortgat next. Each rule comes in two versions.
\[\begin{array}{llll}\bullet&\mbox{(R1) if }\,f\colon B/A\mbox{ and }a:A\mbox{ then }f\,a:B&\mbox{(right version)}\\ \bullet&\mbox{(R1) if }\,f\colon A\backslash B\mbox{ and }a:A\mbox{ then }a\,f\colon B&\mbox{(left version)}\\ \bullet&\mbox{(R2) if }\,f\colon A/B\mbox{ and }g:B/C\mbox{ then }f\,g:A/C&\mbox{(right version)}\\ \bullet&\mbox{(R2) if }\,g:C\backslash B\mbox{ and }f\colon B\backslash A\mbox{ then }g\,f\colon C\backslash A&\mbox{(left version)}\\ \bullet&\mbox{(R3) if }\,f\colon(C\backslash A)/B\mbox{ then also }f\colon C\backslash(A/B)&\mbox{(right version)}\\ \bullet&\mbox{(R3) if }\,f\colon C\backslash(A/B)\mbox{ then also }f\colon(C\backslash A)/B&\mbox{(left version)}\\ \bullet&\mbox{(R4) if }\,a:A\mbox{ then also }a:B/(A\backslash B)&\mbox{(right version)}\\ \bullet&\mbox{(R4) if }\,a:A\mbox{ then also }a:(B/A)\backslash B&\mbox{(left version)}\\ \end{array}\]
Rule R1 is called _application_, R2 _composition_, R3 _associativity_ and R4 is _lifting_. A sequence is a correct sentence if there is a derivation which assigns type \(S\) to it.
We show a few examples of derivations in Moortgat-style categorial grammar. The first example is _birds sing_. Let "birds" : _NP_ and "sing" : \(V_{1}\) where \(V_{1}\) is considered an abbreviation of \(\mathit{NP}\backslash\mathit{S}\). Figure 12 shows that _birds sing_ has type \(S\).
Now we present an example with a transitive verb and an adjective, viz. _brown cows eat grass_. Of course "cows" : _NP_. Let "eat" : \(V_{2}\) where \(V_{2}\) is nothing but an abbreviation of \((\mathit{NP}\backslash\mathit{S})/\mathit{NP}\). Let "brown" : _NP_/_NP_, so the adjective looks for an _NP_ to its right and if it finds it, the composition of the adjective and noun phrase has type _NP_ again. The type _NP\({}_{1}\)_ is defined as \(\mathit{S}/(\mathit{NP}\backslash\mathit{S})\), whereas the type _NP\({}_{4}\)_ can be defined as \((\mathit{S}/\mathit{NP})\backslash\mathit{S}\). Figure 13 shows one possible derivation.
In fact, this is not the only derivation, since by rule R3 the word "eat" also has type \(\mathit{NP}\backslash(\mathit{S}/\mathit{NP})\), so "eat" can be combined with "brown cows" first. These ambiguities, which turn out harmless, are typical of Moortgat-style categorial grammars.
Now we are ready to address the "he" and "him" issue, which is the third example. The type _NP\({}_{1}\)_ was already defined as \(\mathit{S}/(\mathit{NP}\backslash\mathit{S})\), whereas the type _NP\({}_{4}\)_ can be defined as \((\mathit{S}/\mathit{NP})\backslash\mathit{S}\). Let "he" : _NP\({}_{1}\)_ and "him" : _NP\({}_{4}\)_. The sentence _he loves Mary_ is derived easily, as shown in Figure 14.
Figure 12: _Birds sing_ derived as having type \(S\).
Figure 13: _Brown cows eat grass_ derived as having type \(S\), Moortgat style.
In the same way, _Mary loves him_ is derived easily. Now it is a remarkable property of the Moortgat-style categorial grammar that the recognition of _he loves him_ comes for free. This is because of rule R2, composition. We show this in Figure 15, using the right version of rule R2.
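The following is a small programmatic sketch of these types and of rules R1 (application) and R2 (composition); it reproduces the derivations of Figures 14 and 15 under the type assignments given above. The encoding (class and function names) is ours, purely for illustration.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Basic:          # a basic category such as NP or S
    name: str

@dataclass(frozen=True)
class Right:          # A/B: looks to its right for a B, yields an A
    res: "Type"
    arg: "Type"

@dataclass(frozen=True)
class Left:           # A\B: looks to its left for an A, yields a B
    arg: "Type"
    res: "Type"

Type = Union[Basic, Right, Left]

NP, S = Basic("NP"), Basic("S")
V2  = Right(Left(NP, S), NP)      # (NP\S)/NP, transitive verb ("loves")
NP1 = Right(S, Left(NP, S))       # S/(NP\S), nominative ("he")
NP4 = Left(Right(S, NP), S)       # (S/NP)\S, accusative ("him")

def apply_right(f, a):            # R1, right: f:B/A and a:A  =>  f a : B
    return f.res if isinstance(f, Right) and f.arg == a else None

def apply_left(a, f):             # R1, left:  a:A and f:A\B  =>  a f : B
    return f.res if isinstance(f, Left) and f.arg == a else None

def compose_right(f, g):          # R2, right: f:A/B and g:B/C  =>  f g : A/C
    if isinstance(f, Right) and isinstance(g, Right) and f.arg == g.res:
        return Right(f.res, g.arg)
    return None

# "he loves Mary": loves+Mary by R1 gives NP\S, then he by R1 gives S (Figure 14).
print(apply_right(NP1, apply_right(V2, NP)))        # Basic(name='S')
# "he loves him": he+loves by R2 gives S/NP, then him by R1 (left) gives S (Figure 15).
print(apply_left(compose_right(NP1, V2), NP4))      # Basic(name='S')
```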
After this excursion into Moortgat's approach, we show how the same sentence is addressed using the model of 3D complexes. Let the context contain pre-existing chains "he"\(\rightarrow\)_NP\({}_{1}\)_, "loves"\(\rightarrow\)_V\({}_{2}\)_, "him"\(\rightarrow\)_NP\({}_{4}\)_ which assign abstract tokens to specific words. Let the context also contain pre-existing chains _NP_--_V\({}_{2}\)_--_NP_\(\rightarrow\)_S_ describing the word order with transitive verbs, _NP\({}_{1}\)_--_V\({}_{2}\)_--_NP_\(\rightarrow\)_S_ describing nominative case noun phrases, and _NP_--_V\({}_{2}\)_--_NP\({}_{4}\)_\(\rightarrow\)_S_ describing accusative case noun phrases. Just as in Moortgat's approach, no special provisions for the "he"-"him" combination have to be made, see Figure 16.
In 3D presentation, the corresponding complex is shown in Figure 17. Note that this complex is somewhat similar to the Moortgat derivation in the sense that first the _he loves_ fragment is processed and found to be something which combines with an _NP\({}_{4}\)_ to its right in order to become a sentence.
Figure 16: _He loves him recognized as \(S\)._
Figure 14: _He loves Mary derived as having type \(S\), Moortgat style._
Figure 15: _He loves him_ derived as having type \(S\), Moortgat style, right version of rule R2.
The first Moortgat derivation of _he loves him_, as shown above, is only one of several possible derivations, as was to be expected. The next derivation has the same result but this time "loves" combines with "him" first. In order to make this happen, the brackets in the type of "loves" have to be moved, but because of rule R3, associativity, that is no problem, as shown in Figure 18.
Just like there are multiple Moortgat derivations, there are multiple complexes possible. In Figure 19, we show another one.
Figure 19: _He loves him_ recognized as \(S\), different complex.
Figure 17: _He loves him_ recognized as \(S\), similar to the Moortgat style derivation.
Figure 18: _He loves him_ derived as having type \(S\) (alternative derivation).
In 3D presentation, the same complex is shown in Figure 20. It is remarkable that the same tetraeder-like geometric configuration of Figure 11 reappears in Figure 17 and (somewhat modified) in Figure 20.
## 6 Inflexion and Congruence
Besides word order, there are other interesting grammatical features, notably inflexion, cases, and congruence. Many of our contemporary languages seem to be losing those features, but in many classical languages, they are even more important than word order. However, if there is something like universal grammar, or if there is some truth in the ideas behind the minimalist program, then some aspects of inflexion, cases, and congruence must be related to our innate language capacity too.
Inflexion means that the words are modified depending on the grammatical role they have in the sentence. Verbs have variable vowels to reflect tense (for example _lesen/lasen_ is present/past tense in German, to read) or elaborate systems of endings. Nouns have different endings to reflect cases (_rosa/rosae/rosam_ for nominative/genitive and dative/accusative in Latin, rose). Also adjectives have cases (nominative/genitive/dative/accusative). It is most remarkable that the sounds used for the noun cases are the same or almost the same as the corresponding adjective cases. Congruence means that noun and verb, if connected, have the same case and number. Therefore, the words that are connected sound the same. If they are connected by syntax, they are connected in meaning. For example, _aurea fibula_ in Latin means the golden brooch in nominative whereas _auream fibulam_ means golden brooch in accusative. This observation, that similar-sounding things belong together, was, in fact, a decisive inspiration for our matching mechanism: equal tokens attach in pairs (the fourth assumption in Section 2).
What has our model to offer for modeling inflexion and congruence? We investigate this by analyzing a sentence in Latin from the book Aeneid (IV.139) written by Vergil (70BC-19BC), viz. _aurea purpuream subnectit fibula vestem_. This type of sentence is called a golden line. The nominatives, characterized by their "a" ending, belong together. Similarly, the accusatives, characterized by their "m" ending, belong together. Word order hardly matters in Latin, hence the meaning: _a golden brooch fastens her purple dress_.
Figure 20: _He loves him_ recognized as \(S\) (alternative complex).
In Figure 21 this is modelled by assuming that two "a" tokens combine into another "a" and similarly for the "m"-type tokens. The \(V_{2}\) verb _subnectit_ needs an "a"-type token and an "m"-type token. We have two types of noun phrases now and therefore we gave the tokens two different colours: blue for the "a" (nominative), purple for the "m" (accusative).
We conjecture that the phenomenon that our language faculty connects similar-sounding tokens is related to rhyme. With rhyme we refer to poetry, such as by Baudelaire (1821-1867): _L'un t'eclaire avec son ardeur/ L'autre en toi met son deuil, Nature! / Ce qui dit a l'un : Sepulture! / Dit a l'autre : Vie et splendeur!_ In Germanic languages, with their emphasis on the beginning of words, it is also about similar beginnings; _heerlijk helder Heineken_ (Dutch advertisement for beer). Modern languages, notably English, have less inflexion and thus modern grammar theory has focused on word order, and our computer languages, inspired by production rule grammars (Naur et al. 1976) and type theory (Milner 1978), have no inflexion whatsoever. But as Vergilius' example and the poems show, the human innate language capacity is not about word order only.
## 7 Recursion
Recursion is the ability to place one component inside another component of the same kind. Quoting from (DeWitt 2013): "In linguistics, the core application of recursion is phrase embedding. Chomsky posits an operation, _unbounded Merge_, that recursively merges words to create larger phrases."
In this section, we show a possible analysis of the (Dutch) sentence _geiten haten kinderen die lawaai maken_ (goats hate children who make noise). The relative pronoun _die_ marks the subordinate sentence, but also plays the role of a noun phrase in the latter sentence. In Dutch, the word order is subject-verb-object, but in this complementizer phrase, as in other subordinate sentences, there is a reversed word order, viz. subject-object-verb. In a production rule grammar, the following rules would describe the generation of this sentence: \(S\Rightarrow NP\ V_{2}\ NP\) and \(CP\Rightarrow\) "die" \(NP\ V_{2}\) and finally \(NP\Rightarrow NP\ CP\) where _geiten_, _kinderen_ and _lawaai_ are \(NP\) and _haten_, _maken_ are \(V_{2}\). The complementizer phrase (\(CP\)) works as a postfix adjective. We show the complex in Figure 22.
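The recognizer sketch from Section 2 extends directly to this recursive example; the sketch below simply adds the chains corresponding to the rules above (the function is repeated here so the fragment is self-contained, and the encoding is again ours, purely for illustration).

```python
CHAINS = [
    (("geiten",), "NP"), (("kinderen",), "NP"), (("lawaai",), "NP"),
    (("haten",), "V2"), (("maken",), "V2"),
    (("die", "NP", "V2"), "CP"),   # subordinate clause with reversed word order
    (("NP", "CP"), "NP"),          # the CP works as a postfix adjective
    (("NP", "V2", "NP"), "S"),
]

def recognize(tokens, target="S"):
    tokens = list(tokens)
    if tokens == [target]:
        return True
    for body, conclusion in CHAINS:
        n = len(body)
        for i in range(len(tokens) - n + 1):
            if tuple(tokens[i:i + n]) == body:
                if recognize(tokens[:i] + [conclusion] + tokens[i + n:], target):
                    return True
    return False

print(recognize("geiten haten kinderen die lawaai maken".split()))   # True
print(recognize("die lawaai maken".split(), target="CP"))            # True: the CP on its own
```

Splitting `CHAINS` over two such recognizers, one concluding _CP_ and one concluding \(S\), corresponds to the two-container variant sketched in Figure 23.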
Figure 21: _Aurea purpuream subnectit fibula vestem_ (left) and idem recognized as \(S\) (right).
The token complex in Figure 22 is rather complicated. Therefore we like to point out another possibility, namely that there are several containers of tokens, where the recognized output of one container is fed into the input of another. This is sketched in Figure 23, where the leftmost container triggers upon recognition of a _CP_ and the rightmost container triggers upon recognition of an \(S\) (Sentence).
Although we can only speculate on the implementation of such mechanisms in the human brain, we like to point out that there is a potential advantage of the mechanism with multiple containers. The number of pre-existing chains in each container can thus be limited, and a kind of work-division takes place. The pre-existing chains represent knowledge of the language which has been acquired before, such as "lawaai"\(\rightarrow\)_NP_ and "maken"\(\rightarrow\)_V\({}_{2}\)_ and "die"--_NP_--_V\({}_{2}\)_\(\rightarrow\)_CP_, in the leftmost container. All kinds of other and incomplete tokens may be present in the containers, but in Figure 23, only the ingredients needed for the successful recognition are shown. Even many types of containers are possible, corresponding to the various grammatical categories.
Figure 23: _Die lawaai maken_ recognized as Complementizer Phrase and _geiten haten kinderen CP_ recognized as \(S\) in two distinct token containers.
Figure 22: Complementizer phrase _die lawaai maken_ recognized as _CP_ and _sentence geiten haten kinderen die lawaai maken_ recognized as \(S\).
## 8 Related Work
The goal of finding a simple yet powerful mechanism or set of rules that characterises the human language faculty has been the driving force behind Chomsky's multi-decade quest for grammatical theory (Chomsky 1995), (Higginbotham 1998). Chomsky's _minimalist program_ embodies the search for a minimal set of principles valid for all languages, a kind of set of possibilities that children have innate and use when learning any natural language offered to them. The underlying idea is that the human language faculty has a kind of optimal and computationally efficient design. Similarly, Pinker's writings on the _language instinct_ (Pinker 1994) defend the thesis that language is innate and that humans have a kind of common "universal grammar".
The author started his explorations around 1995 and was inspired by De Bruijn's "denksoep" (thinking soup), e.g. found in (De Bruijn 1996). De Bruijn writes: In the simplest case such reactions are about a compound \(A\) and a compound \(B\) giving rise to a compound \(C\), expressed by \(A+B\to C\). This is not necessarily an ordinary chemical equation where \(C\) is entirely composed out of \(A\) and \(B\): it is also possible that \(C\) is being built from fragments floating around in the soup and that this synthesis is brought about with the cooperation of \(A\) and \(B\) [...] There may be many such reactions \(A_{i}+B_{i}\to C_{i}\) where the index \(i\) can take a large number of values. (end quote)
We found the first version of the he-loves-him complex (cf. Figure 17) around 2000, but it was only picked up again later, when we found a more satisfactory way to write things down.
We also noticed Maiya Sershen (2004) mentioning the concept of _sentence molecules_ in Speculative Grammarian, but we were not able to find other proper sources. Unlike the chains and complexes proposed here, Sershen says the sentence molecules are (quote) large protein molecules in neural cells which almost precisely mimic the binary-branched tree structures already familiar to linguists worldwide. (end quote).
Cedric Boeckx (2017) discusses chunking devices in the brain, one of which is the so-called "global workspace", another is the fronto-thalamo-basal ganglia loop, originally dedicated to motor sequences.
Our work also resembles the work of Glyn Morrill (1997), likewise entitled _geometry of language_. Just as in the present article, Morrill's work is inspired by Lambek Calculus (Lambek 1958) and the way Moortgat applies it to natural language. As in other so-called resource logics, Morrill's formalism gets rid of the sequential ordering of traditional sequent calculi and considers the assumptions in the context as a set or a multiset rather than a sequence. To make sure that the word order is not completely lost, Morrill works with a non-planarity condition for the network that shows how the given words fulfil the needs of the various antecedent-succedent pairs. Example (37) from (Morrill 1997) gives the idea, see Figure 24.
Thus Morrill sticks to 2D representation. The idea that the network could be a 3D complex is not made explicit (nor are the rules about closing the complex as embodied in our assumptions 1-4 of Section 1). Of course, Morrill is right to preserve word order, but in our approach, this is achieved by the sequential order in the various chains and the condition that they are aligned more or less alongside each other. Another distinction is that Morrill's tokens still carry markings for antecedent and succedent, e.g. N\({}^{+}\) and S\({}^{-}\), respectively, whereas in our fourth assumption we just let equal tokens attach to each other, without such markings. Actually, the idea that equals attach to each other is also inspired by observations about rhyme and congruence, as discussed in Section 6.
In the meantime, Moortgat (2002) has worked along another line, integrating syntax and semantics by exploiting the Curry/Howard/De Bruijn propositions-as-types analogy. The idea is that an analysis of types (grammatical categories) is done and at the same time a lambda-calculus term is constructed around the original lexical tokens. Thus logical aspects of meaning are embodied in the resulting lambda-calculus term, somewhat as in Montague's proper treatment of quantifiers (Montague 1974).
## 9 Concluding Remarks
Clearly, the work presented here is exploratory, and somewhat outside the lines of contemporary mainstream linguistic research. We consider the analysis of the examples in Section 4 (analogical reasoning) and Section 5 (cases) particularly elegant, and we conclude that a meaningful geometric view on language is possible. We hope it is a valuable contribution to the quest for an understanding of what is the essence of the human language faculty. Many questions remain, such as:
* Is it possible that there are actual biological mechanisms in our brains, which can process chains and form complexes? Is it possible that the tokens are receptor-type molecules forming chains, somewhat like mRNA and that neuron cells play the role of containers (as suggested in Figure 23)? Or is there another neural computation mechanism which simulates the token-chains, in which case various brain areas play the role of containers?
* What are the limits to bending chains and stretching connections to let the complexes recognise good sentences, yet avoiding that "anything goes"? Can we fully formalise
Figure 24: _John finds Mary recognised as \(S\) in Morrill’s formalism._
the proposed model of tokens, chains and complexes and compare its expressive power in a mathematical sense to known grammar formalisms such as production-rule based grammars, X-bar theory or categorial grammars?
* How to validate the model? The best approach seems to be to build a simulator in software and feed such a simulator with input from books, mail-boxes or chat-boxes by way of training. This approach is a new project, an option for later research.
|
2304.09495 | An algorithm for constructing and classifying the space of small integer
weighing matrices | In this paper we describe an algorithm for generating all the possible
$PIW(m,n,k)$ - integer $m\times n$ Weighing matrices of weight $k$ up to
Hadamard equivalence. Our method is efficient on a personal computer for small
size matrices, up to $m\le n=12$, and $k\le 50$. As a by product we also
improved the \textit{\textbf{nsoks}} \cite{riel2006nsoks} algorithm to find all
possible representations of an integer $k$ as a sum of $n$ integer squares.
We have implemented our algorithm in \texttt{Sagemath} and as an example we
provide a complete classification for \ $n=m=7$ and $k=25$. Our list of
$IW(7,25)$ can serve as a step towards finding the open classical weighing
matrix $W(35,25)$. | Radel Ben-Av, Giora Dula, Assaf Goldberger, Yoseph Strassler | 2023-04-19T08:33:11Z | http://arxiv.org/abs/2304.09495v1 | # An algorithm for constructing and classifying the space of small integer weighing matrices
###### Abstract.
In this paper we describe an algorithm for generating all the possible \(PIW(m,n,k)\) - integer \(m\times n\) weighing matrices of weight \(k\) up to Hadamard equivalence. Our method is efficient on a personal computer for small size matrices, up to \(m\leq n=12\), and \(k\leq 50\). As a by-product we also improved the _nsoks_[10] algorithm to find all possible representations of an integer \(k\) as a sum of \(n\) integer squares. We have implemented our algorithm in Sagemath and as an example we provide a complete classification for \(n=m=7\) and \(k=25\). Our list of \(IW(7,25)\) can serve as a step towards finding the open classical weighing matrix \(W(35,25)\).
## 1. Introduction
Let us define \(PIW(m,n,k)=\{\ P\ |\ P\in\mathbb{Z}^{m\times n}\,\ PP^{\top}=kI\}\), where \(\mathbb{Z}\) is the ring of integers, and let \(IW(n,k)=PIW(n,n,k)\). Classical weighing matrices \(W(n,k)\) are the subset of \(IW(n,k)\) of matrices over \(\{-1,0,1\}\). Weighing matrices \(W(n,k)\) have been extensively investigated over the past few decades [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. A particularly interesting subcase are Hadamard matrices \(H(n)=W(n,n)\). Hadamard proved that \(H(n)\) is nonempty for \(n>2\) only if \(n\) is divisible by \(4\). The Hadamard conjecture is that \(H(n)\) is nonempty for all \(n\) divisible by \(4\). A conjecture on weighing matrices extending the Hadamard conjecture is that \(\,W(4l,k)\) is nonempty for every \(l\) and every \(k\leq 4l\). For a (somewhat outdated) summary of methods and existence tables, see [13, Chap. V].
A circulant matrix is a square \(n\times n\) matrix \(C\) such that \(C_{i,j}=f(i-j)\) where \(f:\mathbb{Z}/n\to\mathbb{C}\) and the indices and operations thereof are taken from the abelian group \((\mathbb{Z}/n,+)\). Circulant weighing matrices, denoted by \(CW(n,k)\), exist only if \(k\) is a perfect square, and under this assumption, there is a variety of solutions. For example, the weight \(k=9\) has been fully classified, see [1]. Integral circulant weighing matrices, denoted by \(ICW(n,k)\), are a stepping stone for the construction of circulant or general classical weighing matrices, see e.g. [10]. In more detail, we may try to construct a weighing matrix \(M=(C_{ij})\in IW(n_{1}n_{2},k)\), where the blocks \(C_{i,j}\) are circulant of size \(n_{2}\times n_{2}\). If we replace each circulant block \(C_{i,j}\) with its row sum \(c_{i,j}=sum(C_{i,j})\), the resulting matrix \(S=(c_{i,j})\) is in \(IW(n_{1},k)\). So it might be easier to first construct \(S\), and knowing \(S\) will give us a clue as to how to construct the matrix \(M\). Some of the authors have used this method to construct \(W(25,16)\) from a certain \(IW(5,16)\) (see [27] for the matrix). A special instance of this method applies to the doubling [10] and Williamson [28]
In this paper we study the _row-lex_ ordering on integer matrices of a given size. This ordering has some key properties which allow us to efficiently keep track of the H-equivalence class at any stage of the algorithm. For efficiency reasons we add an extra parameter \(mindepth\) which controls the running time. This comes at the cost of producing more than one matrix in some H-equivalence classes. To overcome this multiplicity, we use the _code-invariant_ (for definition see §4.1 below) to help us separate some non H-equivalent matrices from each other. In the past some of the authors have used that invariant to find a symmetric \(W(23,16)\). Then we use a procedure (not described in this paper) to establish an explicit H-equivalence between matrices not separated by the code invariant. This gives us the full classification to H-equivalence classes. To proceed to TH-equivalence classification, we transpose the matrices in that list, compute again the code-invariants, and prove H-equivalences if needed.
An important building block is the NSOKS algorithm to find all representations of an integer \(k\) as a sum of \(r\) integer squares. An implementation of this already exists [10] and is used in [1] and other papers to represent integers as sums of four squares. Our needs are more extensive, and below (§2) we give an improved version.
This entire procedure has been implemented for \(IW(7,25)\), and the full list of all 44 TH-inequivalent solutions is given.
## 2. The Nsoks algorithm
The NSOKS algorithm computes the collection of all representations of a positive integer \(n\) as a sum of \(r\) nonnegative squares. The input is the number \(n\), an integer \(r\) (the number of required squares) and an optional argument \(maxsq\) which is an
upper bound for the integers \(s\) in the representation. The output is a list \([S_{1},\ldots,S_{t}]\), where each \(S_{i}\) is a list \([(s_{1},m_{1}),\ldots,(s_{l},m_{l})]\) with \(s_{1}<s_{2}<\ldots s_{l}\leq maxsq\) such that \(\sum m_{i}=r\) and \(\sum m_{i}s_{i}^{2}=n\). Each \(S_{i}\) is a representation of \(n\) as a sum of \(r\) nonnegative squares, and \(S\) is the full list of all possible such representations, up to ordering the squares.
A Maple implementation [10] exists on the web. Nevertheless, our SageMath implementation runs faster. For example, our NSOKS\((200,200)\) has 27482 representations and runs on our SageMath machine in 0.3s. In comparison, the Maple code adapted to SageMath, on our machine, runs in 13s. Both codes have been checked to give the same answer. Our algorithm advances by recursion, from the largest square down to zero. The algorithm loops on the largest square \(s^{2}\) and its multiplicity \(m_{s}\). Then we descend to \(n\to n-m_{s}s^{2}\), and call NSOKS by recursion, this time by setting \(maxsq=s-1\). The main point of improvement over [10] is that we work with multiplicities, thus reducing the recursion depth. The second point is that once we get down to \(maxsq=1\), we do not need to recurse any more, and the answer is determined immediately (line 2).
```
1:  procedure NSOKS\((n,r,maxsq=False)\)
2:      if \(maxsq=1\) then return \([[(1,n),(0,r-n)]]\)   \(\triangleright\) No need for recursion.
3:      end if
4:      \(M\leftarrow\lfloor\sqrt{n}\rfloor\)
5:      if \(maxsq\) then
6:          \(M\leftarrow\min(maxsq,M)\)
7:      end if
8:      \(L\leftarrow\lceil\sqrt{n/r}\rceil\)
9:      \(SquaresList\leftarrow[]\)
10:     for \(s\in[L,M]\) do   \(\triangleright\) Loop on the square.
11:         for \(i\in[1,\lfloor n/s^{2}\rfloor]\) do   \(\triangleright\) Loop on the multiplicity.
12:             \(n^{\prime}\gets n-i\cdot s^{2}\)
13:             if \(i=r\) then
14:                 Append \([(s,r),]\) to \(SquaresList\).
15:             else
16:                 \(rem\leftarrow\) NSOKS\((n^{\prime},r-i,maxsq=s-1)\)   \(\triangleright\) The recursion step.
17:                 for \(SubSquaresList\in rem\) do
18:                     Append \([(s,i),*SubSquaresList]\) to \(SquaresList\)
19:                 end for
20:             end if
21:         end for
22:     end for
23:     return \(SquaresList\)
24: end procedure
```
**Algorithm 1** Find all representations of \(n\) as a sum of \(r\) nonnegative squares
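The following is a small, self-contained Python sketch of the recursion in Algorithm 1 (not the authors' SageMath code; two guards, for \(n=0\) and for dead-end branches, are added so that it runs as-is):

```python
from math import isqrt, ceil, sqrt

def nsoks(n, r, maxsq=None):
    """All representations of n as a sum of r nonnegative squares,
    returned as lists of (root, multiplicity) pairs with decreasing roots."""
    if n == 0:
        return [[(0, r)]]
    if maxsq == 1:                      # only 1's and 0's remain
        return [[(1, n), (0, r - n)]] if n <= r else []
    M = isqrt(n)
    if maxsq is not None:
        M = min(maxsq, M)
    L = ceil(sqrt(n / r))               # the largest root s must satisfy r*s^2 >= n
    reps = []
    for s in range(L, M + 1):           # loop on the largest root
        for i in range(1, n // (s * s) + 1):       # loop on its multiplicity
            n_rem = n - i * s * s
            if i == r:
                if n_rem == 0:
                    reps.append([(s, r)])
            else:
                for sub in nsoks(n_rem, r - i, maxsq=s - 1):   # recursion step
                    reps.append([(s, i)] + sub)
    return reps
```

For instance, `nsoks(25, 7)` enumerates the possible entry patterns of a single row of an \(IW(7,25)\), up to order and signs.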
## 3. The row-lex ordering and the search algorithm
In this section we define the row-lex ordering on the set of integer matrices of a given size \(m\times n\), and prove some basic properties of this ordering. Using these properties, we design a search algorithm to find an exhaustive list of all \(PIW(m,n,k)\) up to Hadamard equivalence. The output of our algorithm may contain more than one candidate in a single Hadamard class, and in the next section we will discuss a post processing procedure towards a correction of this flaw.
### The row-lex ordering
The discussion here is not limited to (partial) weighing matrices, and we consider all integer matrices of a given size \(m\times n\). We denote this set by \(\mathbb{Z}^{m\times n}\). The set \(\mathbb{Z}\) of all integers carries its natural ordering \(\leq\). We extend first this ordering to the set of \(n\)-vectors, \(\mathbb{Z}^{n}\) by the lexicographic extension of \(\leq\), still denoted \(\leq\). This means that
\[(v_{1},\ldots,v_{n})<(w_{1},\ldots,w_{n}),\;\text{iff}\;\exists j\;(v_{1}, \ldots,v_{j-1})=(w_{1},\ldots,w_{j-1})\text{ and }v_{j}<w_{j}.\]
Next we extend this ordering to \(m\times n\)-matrices by lexicographic extension of \(\leq\) on the rows (i.e. viewing the matrix as a vector of rows). We write this ordering as \(M\leq_{R}N\). This is called the _row-lex ordering_. Similarly we can consider the _column-lex ordering_, by extending \(\leq\) on columns, viewing the matrix as a vector of columns. We shall write \(M\leq_{C}N\) for this ordering. The two orderings are not equal, and for our algorithm which looks at situations where \(m\leq n\) it will be more appropriate to use the row-lex ordering.
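Incidentally, nested tuples in Python are compared in exactly this lexicographic way, so the row-lex order can be used directly in code (a one-line illustration, not taken from the paper):

```python
def row_lex_key(M):
    """View a matrix (list of rows) as a tuple of row tuples; Python's built-in
    comparison of nested tuples is then precisely the row-lex ordering."""
    return tuple(map(tuple, M))

A = [[-1, 0], [0, 2]]
B = [[-1, 0], [1, -5]]
assert row_lex_key(A) < row_lex_key(B)   # first rows equal, second row of A is smaller
```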
Some notation is in order: For any matrix \(M\), let \(M_{i}\) denote its \(i\)th row and let \(M^{j}\) denote its \(j\)th column. Let \(M_{i:k}\) denote the submatrix whose rows are \(M_{i},M_{i+1},\ldots,M_{k-1}\) given in this order. We denote \(M^{j:l}\) analogously for columns. Let \((-1)_{i}M\) denote the matrix \(M\) with \(M_{i}\) replaced by \(-M_{i}\). More generally we denote \((-1)_{S}M\), for a set of indices \(S\), as the matrix \(M\) with \(M_{i}\) replaced by \(-M_{i}\) for all \(i\in S\). Similarly we denote \((-1)^{j}M\) and \((-1)^{S}M\) for columns. For each matrix \(M\), let \([M]\) denote its Hadamard equivalence class. Let
\[\text{Min}(M)=\min\{A\ |\ A\in[M]\},\]
the minimum is taken with respect to the row-lex ordering. We say that \(M\) is _minimal_ if \(M=\text{Min}(M)\). In each Hadamard class there exists a unique minimal matrix. We now study some properties of minimal matrices. We say that a vector \(v\) _begins with a positive (resp. negative) entry_ if for some \(j\), \(v_{1}=\cdots=v_{j-1}=0\) and \(v_{j}>0\) (resp. \(v_{j}<0\)).
**Lemma 3.1**.: _In a minimal matrix each nonzero row and each nonzero column begins with a negative entry._
Proof.: Let \(M\) be minimal. Suppose that a row \(M_{i}\) begins with a positive entry. Then \(-M_{i}<M_{i}\), and by definition \((-1)_{i}M<M\), in contradiction to minimality.
Now suppose that \(M^{j}\) begins with a positive entry, sitting at the position \((i,j)\). Then in \((-1)^{j}M\), the first \(i-1\) rows remain unchanged, while \(((-1)^{j}M)_{i}<M_{i}\), which in turn implies that \((-1)^{j}M<M\), again contradicting the minimality of \(M\).
One consequence of the proof, that we shall not use in this paper, is the following statement.
**Theorem 3.2**.: _Each matrix \(M\) can be brought, using only row and column negations, to a form where each nonzero row and column begins with a negative entry._
Proof.: Sequences of row and column negations define an equivalence relation on matrices. The proof of the above lemma shows that the minimal representative in a class has the desired property.
**Lemma 3.3**.: _In a minimal matrix \(M\in\mathbb{Z}^{m\times n}\), the columns are in increasing order: \(M^{1}\leq M^{2}\leq\cdots\leq M^{n}\)._
Proof.: Suppose by contradiction that \(M^{j-1}>M^{j}\). Let \(j\) be smallest one with this property. Then for some \(i\), \(M_{s,j-1}=M_{s,j}\) for all \(s<i\) and \(M_{i,j-1}>M_{i,j}\). By swapping columns \(j,j-1\) we obtain a matrix \(M^{\prime}\) in which rows \(1,2,\ldots,i-1\) did not change, while row \(i\) has decreased. Thus \(M^{\prime}<_{R}M\), a contradiction.
The following is a key property in our algorithm.
**Proposition 3.4**.: _For the row-lex ordering, a matrix \(M\) is minimal, if and only if for all \(i\), \(M_{1:i}\) is minimal._
Proof.: Clearly if \(M_{1:i}<_{R}M^{\prime}_{1:i}\) then \(M<_{R}M^{\prime}\). If \(M_{1:i}\) is not minimal, then we can perform Hadamard operations on \(M\) involving all columns and only the first \(i\) rows, to decrease \(M_{1:i}\). The resulting matrix \(M^{\prime}<_{R}M\), in contradiction to the minimality of \(M\).
We remark that in general the initial column submatrices \(M^{1:j}\) of a minimal matrix \(M\) need not be minimal. Our algorithm will build matrices row by row, and this explains why we prefer to use the row-lex ordering rather than the column-lex counterpart.
### Minimizing a class
In this short section we describe the algorithm MINCLASS to find the minimal representative in a Hadamard class. Suppose that we are given a matrix \(M\in\mathbb{Z}^{m\times n}\). Let \(Mon(m)\) denote the set of all monomial \(m\times m\) matrices with values in \(\{0,-1,1\}\). Let \(Neg(M)\) denote the matrix obtained from \(M\) by negating each column that begins with a positive entry. Let \(Ord(M)\) be the matrix obtained from \(M\) by permuting its columns to be written from left to right in increasing column order. Consider the following algorithm:
```
1:  procedure MINCLASS\((M)\)
2:      \(m\leftarrow\) height, \(n\leftarrow\) width
3:      \(Min\gets M\)
4:      for \(P\in Mon(m)\) do   \(\triangleright\) go over all row negations and permutations
5:          \(N\gets PM\)
6:          \(N\gets Neg(N)\)
7:          \(N\gets Ord(N)\)
8:          if \(N<_{R}Min\) then
9:              \(Min\gets N\)
10:         end if
11:     end for
12:     return \(Min\)
13: end procedure
```
**Algorithm 2** Minimizing a Hadamard class
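A direct Python transcription of Algorithm 2 could look as follows (an illustrative brute-force sketch with cost \(2^{m}m!\), not the authors' implementation; `neg_cols` and `ord_cols` play the roles of \(Neg\) and \(Ord\)):

```python
from itertools import permutations, product

def neg_cols(M):
    """Neg: negate every column whose first nonzero entry is positive."""
    M = [list(row) for row in M]
    for j in range(len(M[0])):
        lead = next((M[i][j] for i in range(len(M)) if M[i][j] != 0), 0)
        if lead > 0:
            for i in range(len(M)):
                M[i][j] = -M[i][j]
    return M

def ord_cols(M):
    """Ord: permute the columns into increasing (lexicographic) order."""
    cols = sorted(zip(*M))
    return [list(row) for row in zip(*cols)]

def min_class(M):
    """Minimal representative of the Hadamard class of M in the row-lex order."""
    m, best = len(M), None
    for perm in permutations(range(m)):              # row permutations
        for signs in product((1, -1), repeat=m):     # row negations
            N = [[signs[i] * x for x in M[perm[i]]] for i in range(m)]
            key = tuple(map(tuple, ord_cols(neg_cols(N))))
            if best is None or key < best:
                best = key
    return [list(row) for row in best]
```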
**Proposition 3.5**.: _The procedure \(\mathtt{MINCLASS}(M)\) returns the minimal matrix in the class of \(M\)._
Proof.: Let \(M_{0}=PMQ\) be the minimal representative in the Hadamard class of \(M\), \(P,Q\) are monomial. The algorithm enumerates over \(P\in Mon(m)\) and for the correct \(P\) we have \(N:=PM=M_{0}Q^{-1}\). It suffices to show that \(M_{0}=Ord(Neg(N))\). The nonzero columns of \(Neg(N)\) and of \(M_{0}\) all begin with a negative entry, so both matrices have the same multiset of columns, which means that \(M_{0}=Neg(N)\Pi\) for a permutation matrix \(\Pi\). Since the columns of \(M_{0}\) are in increasing order, then necessarily \(Ord(Neg(N))=M_{0}\).
### The main search algorithm
Now we turn to the main algorithm RepPIW which outputs a list of representatives of all Hadamard classes in \(PIW(m,n,k)\). In its default implementation the program outputs exactly one matrix per class; however, it contains an optional parameter, 'mindepth', which can improve the running time at the cost of listing more than one matrix for a single class. Before stating the algorithm we give a concise description.
The algorithm relies on Proposition 3.4 that initial submatrices of a minimal matrix are minimal. The starting point is a list of all minimal integral vectors of weight \(k\), which is in bijection with the output of \(\mathtt{NSOKS}(n,k)\). This gives the list for \(PIW(1,n,k)\). At each stage the algorithm holds a list \(MinPIW(p,n,k)\) of all minimal representatives of the \(PIW(p,n,k)\). To each member \(X_{p}\in MinPIW(p,n,k)\), we produce the list \(LV(X_{p})\) of all integral vectors of weight \(k\) that are (i) larger than the last row of \(X_{p}\), and (ii) are orthogonal to all rows of \(X_{p}\). Then for each \(v\in LV(X_{p})\) we obtain the matrix \(X_{p+1}=[X_{p},v]\) by adding a new row below \(X_{p}\). Using \(\mathtt{MINCLASS}\), we test if \(X_{p+1}\) is minimal. We add it to the new list \(MinPIW(p+1,n,k)\) iff it is minimal. Stopping at \(p=m\), Proposition 3.4 guarantees that we have correctly created a list of representatives for all Hadamard classes of \(PIW(m,n,k)\).
One improvement that we add, which greatly affects the performance, is the parameter 'mindepth', which tells the algorithm to stop using \(\mathtt{MINCLASS}\) if \(p>mindepth\). When \(p\) is greater we just add any vector \(v\) satisfying (i) and (ii). The assumption here is that there are not too many vectors left, and thus the final list is not too large. On the positive side we save a lot of minimization time. We will potentially get more representatives than necessary, but as will be discussed below, we have an effective way to tell which are isomorphic to which, eventually yielding the list we want. Following is the pseudo-code. In this algorithm we use the following notation: '\(\mathtt{SignedPerms}(v)\)' is the set of all permutations and element negations of a vector \(v\). For a matrix \(M\), recall that \(M_{i}\) denotes its \(i\)th row. Let \(M^{-}\) denote the matrix without its last row. Let \([M,v]\) denote the matrix \(M\), augmented by the additional row \(v\).
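Based on the description above, an illustrative Python sketch of this row-by-row search might look as follows. It reuses `nsoks` and `min_class` from the earlier sketches, generates candidate rows by brute force, and is written for clarity rather than speed; it is not the authors' SageMath implementation.

```python
from itertools import permutations, product

def weight_k_rows(n, k):
    """All integer vectors of length n and squared norm k (brute force via nsoks)."""
    rows = set()
    for rep in nsoks(k, n):
        base = []
        for s, mult in rep:
            base += [s] * mult
        for perm in set(permutations(base)):
            nz = [j for j, x in enumerate(perm) if x]
            for signs in product((1, -1), repeat=len(nz)):
                v = list(perm)
                for sgn, j in zip(signs, nz):
                    v[j] = sgn * v[j]
                rows.add(tuple(v))
    return sorted(rows)

def rep_piw(m, n, k, mindepth=None):
    """Grow PIW(p,n,k) row by row, keeping only row-lex-minimal prefixes
    (the minimality test is skipped once p exceeds mindepth)."""
    rows = weight_k_rows(n, k)
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    level = [[list(v)] for v in rows if [list(v)] == min_class([list(v)])]
    for p in range(1, m):
        nxt = []
        for X in level:
            last = tuple(X[-1])
            for v in rows:
                if v <= last or any(dot(v, r) != 0 for r in X):
                    continue              # rows must increase in row-lex order and stay orthogonal
                Y = X + [list(v)]
                if (mindepth is not None and p + 1 > mindepth) or Y == min_class(Y):
                    nxt.append(Y)
        level = nxt
    return level
```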
**Theorem 3.6**.: _The function \(\mathtt{RepPIW}(m,n,k)\) outputs the list \(MPIW(m,n,k)\) of all minimal Hadamard representatives of \(PIW(m,n,k)\). The function \(\mathtt{RepPIW}(m,n,k,mindepth=d)\) outputs a larger list of \(PIW(m,n,k)\) containing all minimal elements._
Proof.: The proof of the first part is by induction on \(m\). For \(m=1\) this is clear, as \(MPIW(1,n,k)\) is the list of all minimal vectors in \(SOKS\). Assuming validity for \(m-1\), we enter the for loop at \(p=m-1\) (line 11) with \(MinPIW(m-1)=MPIW(m-1,n,k)\) by the induction hypothesis. Suppose that \(M\in MPIW(m,n,k)\). Then by Proposition 3.4, \(M_{1:m-1}\in MinPIW(m-1)\). The list \(R(M_{1:m-1})\) holds all vectors that are orthogonal to \(M_{1:m-1}\). Thus the vector \(M_{m}\) enters the list \(R(M_{1:m-1})\) (line 13) and passes the minimality test (line 17), allowing \(M\) to enter the list \(MinPIW(m)\) (line 18). This proves that \(MinPIW(m)\supseteq MPIW(m,n,k)\). The opposite inclusion is clear as line 18 allows only minimal matrices. This proves the first assertion. The second assertion follows easily, as we do not always perform the minimality test, but yet the minimal matrices pass all tests.
### Improving Minclass
The procedure MINCLASS\((M)\) becomes impractical as the number of rows \(m\) becomes large, since we have a factor of \(2^{m}m!\) which is the size of \(Mon(m)\). We suggest an improvement which can greatly reduce complexity, however, so far we have not implemented this, and we are not able to estimate the worst case complexity. It looks like the 'average' case complexity is low (again we
find it difficult to define what is 'average').
The idea is simple. We first minimize the individual rows of \(M\in\mathbb{Z}^{m\times n}\). Only the smallest row(s) can be selected as the first row of \(Min(M)\). Having chosen the first row, we now adjoin all remaining vectors as candidates to the second row of \(Min(M)\). Then we minimize the resulting \(2\times n\) matrices. Again we only choose the smallest one(s). We proceed similarly with the third row(s) and so on. Crucially, note that for the minimization we do not need to go over \(Mon(p)\). We only need to add the new row and its negation, and then minimize by columns. This minimization will not alter the first \(p-1\) rows (as they form a minimal matrix). Also note that our choice of the first \(p-1\) rows is the smallest possible, which is necessary for them to be the matrix \(Min(M)_{1:p-1}\). Below is the pseudocode.
```
1:  procedure FastMinClass\((M,Init=(\ ),RIndList=\{[1,2,\dots,m]\})\)
2:      \(m\leftarrow\) height, \(n\leftarrow\) width.
3:      if height\((Init)==m\) then   \(\triangleright\) This is if \(Init\) is the full matrix.
4:          return \(Init\)
5:      end if
6:      for \(Inds\in RIndList\) do   \(\triangleright\) This is a list of unused row numbers.
7:          \(Ns\leftarrow[\ ]\)
8:          for \(i\in Inds\) do
9:              \(v\gets M_{i}\)
10:             \(N_{1}\leftarrow[Init,v]\)
11:             \(N_{2}\leftarrow[Init,-v]\)   \(\triangleright\) We test if \(v\) or \(-v\) is to be added.
12:             \(N[i]\leftarrow\min(Ord(Neg(N_{1})),Ord(Neg(N_{2})))\).
13:             Append \(N[i]\) to \(Ns\)
14:         end for
15:         \(NewInit[Inds]\leftarrow\min(Ns)\).
16:         \(NewRIndList[Inds]\leftarrow[Inds\setminus\{i\}\) for all \(i\) if \(N[i]==\min(Ns)]\)
17:         \(\triangleright\) Remove index \(i\) if this gave a minimum.
18:     end for
19:     \(MinN\leftarrow\min(NewInit[Inds]\) for all \(Inds)\)   \(\triangleright\) Pick the absolute minimum.
20:     \(MIndList\leftarrow\{NewRIndList[Inds]\) for all \(Inds\) if \(NewInit[Inds]=MinN\}\)
21:     \(MinM\leftarrow\) FastMinClass\((M,MinN,MIndList)\)   \(\triangleright\) Recursion on new initials.
22:     return \(MinM\)
23: end procedure
```
**Algorithm 4** Fast Minimizing a Hadamard class
In this algorithm the input is a matrix \(M\), an initial matrix \(Init\) which is supposed to be \(Min(M)_{1:p}\), and a set \(RIndList\) of lists of indices, where each list contains the row indices not used in \(Init\) (there might be a few options due to branching). The algorithm constructs the minimal matrix in the class of \(M\) subject to the constraint that its \(1:p\) part equals \(Init\).
**Remark 3.7**.: _If there is no branching, i.e. there is just one candidate added to an initial at each time, the algorithm finishes quickly. Otherwise we will suffer from
branching. There are cases with vast branching, such as scalar matrices, but we feel that on 'average' there will be only small branching. We find it hard to estimate the average effect._
**Remark 3.8**.: _Some of the branching is caused by matrix automorphisms (i.e. Hadamard self equivalences). If this were the only cause, we could just settle for a greedy algorithm, picking up the first candidate row each time, thus avoiding branching. We know, however, that in classical Hadamard matrices every 3-tuple of rows minimizes to the same matrix, giving way to massive branching, regardless of automorphisms._
## 4. Results for \(IW(7,25)\)
In this section we report on the performance of our algorithm to classify \(IW(7,25)\) up to Hadamard equivalence. We ran the main algorithm RepPIW\((7,7,25,mindepth=4)\). The implementation was programmed on SageMath [DSJ\({}^{+}\)20] on a Dell laptop with Core i7 and 8 GB RAM. The running time was 2 minutes. The output was a list of 420 matrices in \(IW(7,25)\).
Next we have computed the _code invariant_ on each matrix, which we now define.
### The code invariant
For an integral matrix \(D\in[-L,L]^{d\times n}\), we compute the value \(Code(D)\) which is the vector \(\mathbf{b}D\), where \(\mathbf{b}=[b^{d-1},\ldots,b^{2},b,1]\), \(b=2L+1\). The correspondence \(D\to Code(D)\) is an injection. We write \(D\prec_{d}M\) if \(D\) is a \(d\times n\) submatrix of \(M\). For any matrix \(M\in[-L,L]^{r\times n}\), we define the _code invariant_ to be
\[CodeInv(M,d)\ :=\ \text{Multiset}\{Code(Min(D))\ |\ D\prec_{d}M\}.\]
This is clearly a Hadamard invariant of \(M\).
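In code, the invariant can be computed directly from the definition (a sketch reusing `min_class` from above; not the authors' implementation):

```python
from itertools import combinations
from collections import Counter

def code(D, L):
    """Code(D) = b.D with b = [b^(d-1), ..., b, 1] and b = 2L+1; injective on [-L, L]^(d x n)."""
    b, d = 2 * L + 1, len(D)
    w = [b ** (d - 1 - i) for i in range(d)]
    return tuple(sum(w[i] * D[i][j] for i in range(d)) for j in range(len(D[0])))

def code_invariant(M, d, L):
    """CodeInv(M, d): multiset of Code(Min(D)) over all d-row submatrices D of M."""
    inv = Counter()
    for rows in combinations(range(len(M)), d):
        inv[code(min_class([M[i] for i in rows]), L)] += 1
    return inv
```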
We have computed \(CodeInv(M,3)\) for all \(M\) in our list of 420 matrices, and discovered that this invariant breaks our list into 49 sublists, \(L_{1},\ldots,L_{49}\). The members of each list \(L_{j}\) have the same code invariant, and the members of different lists have different code invariants.
At this point we have verified that in each list \(L_{j}\), all elements are Hadamard equivalent, by producing the monomial transformations. Finally, we have reduced our list to 44 elements, no two of which are Hadamard equivalent, nor equivalent to a transpose. To summarize,
**Theorem 4.1**.: _Up to Hadamard equivalence, there are \(49\) nonequivalent matrices in \(IW(7,25)\). Allowing transposition, these reduce to only \(44\) classes._
### All \(IW(7,25)\) up to TH equivalence
**Definition 4.2**.: A matrix \(M\in IW(n,k)\) is _imprimitive_ if it is H-equivalent to a block sum of smaller \(IW(n_{i},k)\). Otherwise we say that \(M\) is _primitive_.
As it turns out, exactly 19 matrices of our list of 44 are primitive. The rest are H equivalent to block sums of \(IW(1,25),IW(2,25),IW(4,25),IW(5,25)\) and \(IW(6,25)\). We shall use the notation \(n_{1}A_{1}\oplus n_{2}A_{2}\cdots\oplus n_{r}A_{r}\) to denote a block sum of \(n_{1}\) copies of \(A_{1}\), \(n_{2}\) copies of \(A_{2}\) and so on. We first list the primitive \(IW(r,25)\) for \(r\leq 7\).
\(-IW(1,25)\)**:**:
\[A_{1}=[5].\]
\(-IW(2,25)\)**:**:
\[B_{1}=\begin{bmatrix}3&4\\ 4&-3\end{bmatrix}.\]
\(-IW(3,25)\)**:**:
\[\emptyset.\]
\(-IW(4,25)\)**:**:
\[C_{1}=\left[\begin{array}{rr|rr}1&4&2&-2\\ 4&1&-2&2\\ \hline 2&-2&4&1\\ -2&2&1&4\end{array}\right],C_{2}=\left[\begin{array}{rr|rr}1&4&2&-2\\ 4&-1&-2&-2\\ \hline 2&-2&4&1\\ 2&2&-1&4\end{array}\right]\]
\(-IW(5,25)\)**:**:
\[D_{1}=\left[\begin{array}{rrrr}3&-2&-2&-2&-2\\ \hline 2&-2&0&1&4\\ 2&4&-2&0&1\\ 2&1&4&-2&0\\ 2&0&1&4&-2\end{array}\right],D_{2}=\left[\begin{array}{rrrr}3&2&2&2&2\\ \hline 2&3&-2&-2&-2\\ 2&-2&3&-2&-2\\ 2&-2&-2&3&-2\\ 2&-2&-2&-2&3\end{array}\right]\]
\(-IW(6,25)\)**:**: \(E_{i}=\)
\[\left[\begin{array}{rrrr}4&2&2&1&0&0\\ 2&0&-4&0&2&1\\ 2&-4&0&0&-1\\ 2&-2&0&1&-4&0\\ 2&0&1&-4&0\\ 2&0&1&-4&0\\ \end{array}\right]\left[\begin{array}{rrrr}3&3&2&1&1&1\\ 3&-1&-1&-2&-3\\ 2&-1&-3&-1&2&-3\\ 1&-1&-3&2&-1&-1\\ 1&-1&-3&2&3\\ 1&-3&1&-2&3\\ 1&-1&3&2&3\\ \end{array}\right]\left[\begin{array}{rrrr}4&2&2&1&0&0\\ 2&0&-3&-2&2&2\\ 2&-2&-2&-3\\ 1&-2&-2&0&0&2\\ 0&0&1&-2&-2&4\end{array}\right],\left[\begin{array}{rrrr}4&2&2&1&0&0\\ 2&0&-3&-2&2&2\\ 2&-2&-2&-3\\ 1&-4&2&0&0&2\\ 0&0&2&-4&1&-2\\ 2&-2&-2&-2&3\end{array}\right]\]
\[\left[\begin{array}{rrrr}4&2&2&1&0&0\\ 2&0&-4&0&2&1\\ 1&0&0&-4&-2&-1\\ 1&0&0&-4&-2&-1\\ 1&-1&-2&-3&-1\\ 1&-1&3&2&-3\\ 1&-1&-3&2&3\\ \end{array}\right],\left[\begin{array}{rrrr}3&3&2&1&1&1\\ 3&-1&1&-2\\ 2&-3&1&-1&-3\\ 1&-1&-3&2&3\\ 1&-1&-3&2&3\\ \end{array}\right],\left[\begin{array}{rrrr}3&3&2&1&1&1\\ 3&-3&1&1&-2\\ 2&-3&1&-1&-3\\ 1&-1&-3&2&3\\ \end{array}\right],\left[\begin{array}{rrrr}3&3&2&1&1&1\\ 3&-3&1&1&-2\\ 2&-3&1&-3&-1\\ 2&-3&1&-1&-3&2\\ 1&-1&-3&2&3&1\\ \end{array}\right]\]
\(-IW(7,25)\)**:**: \(F_{i}=\)
Some of these matrices have nice structure, and we have reorganized some of them to show that structure. We did not fully analyze this. Below is the table giving all \(44\) matrices. The notation in the left column, e.g. \(A\oplus B\oplus C\) stands for taking all possible sums \(A_{i}\oplus B_{j}\oplus C_{k}\). We write \(mA\) for \(A\oplus A\cdots\oplus A\) (\(m\) times). In the right column we write the multiplicity of this type in the list, due to different choices of indices. For example the entry \(3A\oplus C\) consists of \(2\) types: \(3A\oplus C_{1}\) and \(3A\oplus C_{2}\). All multiplicities sum up to \(44\), and it has been verified by the code-invariant that all of these are TH-inequivalent.
|
2303.02567 | Minimize Web Applications vulnerabilities through the early Detection of
CRLF Injection | Carriage return (CR) and line feed (LF), also known as CRLF injection is a
type of vulnerability that allows a hacker to enter special characters into a
web application, altering its operation or confusing the administrator. Log
poisoning and HTTP response splitting are two prominent harmful uses of this
technique. Additionally, CRLF injection can be used by an attacker to exploit
other vulnerabilities, such as cross-site scripting (XSS). According to Open
Web Application Security Project (OWASP), CRLF vulnerabilities are among the
top 10 vulnerabilities and are a type of injection attack. Automated testing
can help to quickly identify CRLF vulnerabilities, and is particularly useful
for companies to test their applications before releasing them. However, CRLF
vulnerabilities foster a better approach to mitigate CRLF vulnerabilities in
the early stage and help secure applications against high-risk known
vulnerabilities. There has been less research on CRLF vulnerabilities and how
to detect them with automated testing. There is room for further research to be
done on this subject matter in order to develop creative solutions to problems.
It will also help to reduce false positive alerts by checking the header
response of each request. Security automation is an important issue for
companies trying to protect themselves against security threats. Automated
alerts from security systems can provide a quicker and more accurate
understanding of potential vulnerabilities and can help to reduce false
positive alerts. Despite the extensive research on various types of
vulnerabilities in web applications, CRLF vulnerabilities have only recently
been included in the research. Utilizing automated testing as a recurring task
can assist companies in receiving consistent updates about their systems and
enhance their security. | MD Asibul Hasan, Md. Mijanur Rahman | 2023-03-05T03:28:33Z | http://arxiv.org/abs/2303.02567v1 | # Minimize Web Applications vulnerabilities through the early Detection of CRLF Injection
###### Abstract
Carriage return (CR) and line feed (LF) injection, also known as CRLF injection, is a type of vulnerability that allows a hacker to enter special characters into a web application, altering its operation or confusing the administrator. Log poisoning and HTTP response splitting are two prominent harmful uses of this technique. Additionally, CRLF injection can be used by an attacker to exploit other vulnerabilities, such as cross-site scripting (XSS). Email injection, also known as email header injection, is another way that can be used to modify the behavior of emails. The Open Web Application Security Project (OWASP) is an organization that studies vulnerabilities and ranks them based on their level of risk. According to OWASP, CRLF vulnerabilities are among the top 10 vulnerabilities and are a type of injection attack. However, CRLF vulnerabilities can also lead to the discovery of other high-risk vulnerabilities, and this fosters a better approach to mitigating CRLF vulnerabilities at an early stage and helps secure applications against known vulnerabilities. Although there has been a significant amount of research on other types of injection attacks, such as Structured Query Language (SQL) injection, there has been less research on CRLF vulnerabilities and how to detect them with automated testing. There is room for further research on this subject in order to develop creative solutions to these problems. It will also help to reduce false positive alerts by checking the header response of each request. Automated alerts from security systems can provide a quicker and more accurate understanding of potential vulnerabilities and can help to reduce false positive alerts. Despite the extensive research on various types of vulnerabilities in web applications, CRLF vulnerabilities have only recently been included in the research. Utilizing automated testing as a recurring task can assist companies in receiving consistent updates about their systems and enhance their security.
Cyber Security, OWASP vulnerabilities, Security Detection, CRLF Injection, Injection Attack
## I Introduction
Cyber security is primarily concerned with the protection of anything that is connected to the internet, whether an application or other software, a network, or a device. There are numerous types of vulnerabilities in applications, such as SQL injection, cross-site scripting (XSS), and local file inclusion (LFI), while network vulnerabilities may include denial of service (DoS) attacks, sniffing, and spoofing [3][4]. To ensure cyber security, engineers must prioritize confidentiality, integrity, and availability, the three pillars of the CIA triad. The goal of this research is to identify a specific application vulnerability. The cyber security industry is large and consists primarily of two sides: one works for the company, while the other, typically the intruder, works against it. It is crucial that everyone in the Software Development Life Cycle (SDLC) maintains the process, but due to a lack of understanding or high cost, some organizations skip security testing. Security testing checks whether the software is vulnerable to cyber attacks, tests the impact of malicious activities, and helps determine the long-term success of the software.
Most prior work has covered other types of vulnerabilities that are also dangerous for web applications and other software, but there is less research on the CRLF injection vulnerability, which has become one of the more dangerous vulnerabilities in recent years. Because this vulnerability was discovered relatively recently, there is little detailed research about it, so there is scope to improve CRLF detection. Researchers have mostly investigated other types of vulnerabilities, where the attacks are mainly HTTP-based [1]. This research covers only web application vulnerabilities, which are a major issue in developing a secure application [5]. CRLF is not as common a vulnerability as cross-site scripting or SQL injection [5][6], but it can lead to other vulnerabilities and expose critical system information.
Any vulnerability that discloses a company's internal information or exposes user or customer data is a serious problem for that company. The most widespread flaw in web applications is injection, which includes SQL, HTML, CRLF, and other types of injection. Other flaws include XSS, broken access control, security misconfiguration, exposed sensitive data, inadequate attack protection, using components with known vulnerabilities, using unprotected APIs, local file inclusion, and broken authentication and session management [1][7]. Many researchers have tried several methods to address these problems, but less research has been carried out on CRLF vulnerability detection or discovery, and it remains one of the more critical vulnerabilities. It can expose system information, and attackers can use it to steal confidential data from applications. CRLF is not only a single vulnerability; it can also lead to other, mainly injection-type, vulnerabilities.
Most organizations that are concerned about their security hire a security specialist to prevent security breaches and a security engineer to check for vulnerabilities manually, but this process takes a great deal of time and reduces productivity. As cyber threats evolve, security engineers are increasingly tasked with threat modeling, penetration testing, and automation to proactively determine the level of vulnerability. This work focuses on a critical injection vulnerability, CRLF. CRLF refers to Carriage Return and Line Feed. It is an injection attack that can lead to an XSS attack, by which an attacker can grab the user session and in some cases escalate privileges [8][9]. XSS attacks are a type of injection in which malicious scripts are injected into otherwise benign and trusted websites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser-side script, to a different end user through such an online application. The code for a web browser often takes the form of a JavaScript segment, but it can also be HTML, Flash, or any other type of code that the browser is capable of executing [10][11]. XSS vulnerabilities normally allow an attacker to masquerade as a victim user, to carry out any actions that the user is able to perform, and to access any of the user's data [6]. Much research has been done on web application vulnerabilities as well as network vulnerabilities and threats, most of it concerning dangerous attacks that can take over a full system. CRLF, however, is a newer kind of vulnerability that has not been explored in depth by researchers. Some software detects CRLF vulnerabilities effectively, but these are paid applications. To address this problem, this research provides insight and a logical approach for those who want to work in this area. The first section of this study presented its abstract, the second section the introduction, and the third section the literature review, followed by the methodology and finally the conclusion of the research.
## II Literature Review
According to our review, a fair amount of research has been done on vulnerability management. Some of it has focused on injection-based attacks, including SQL injection, HTML injection, and code injection. One study implemented three major SQLi techniques on educational and financial websites of Bangladesh and analysed those web applications to assess their security condition [1], but there was no mention of any CRLF vulnerability. Some case studies have been conducted on various types of vulnerabilities in websites in Bangladesh. Additionally, some papers have explored automated and manual penetration testing in a range of domains. One case study uses the online application Tunestore to carry out security testing; it provides an example of tool-assisted and manual web application security testing, with testing on Tunestore done using Paros, WebScarab, JBroFuzz, Fortify, and Acunetix [5].
This paper aims to eliminate CRLF vulnerabilities in web applications and to help security testers detect the vulnerability before a product is released. Addressing this vulnerability also helps secure an application against XSS attacks, because CRLF injection can lead to XSS. Both are major vulnerabilities according to OWASP.
## III Methodology
CRLF vulnerability in web applications is a major security concern that can have serious consequences. This vulnerability allows attackers to insert malicious code into a web page or application, which can then be executed by the web browser or program. This can result in the exposure of sensitive information, the execution of arbitrary code, or the launch of a denial of service attack.
CRLF vulnerabilities are often exploited through CRLF injection attacks, in which malicious code is injected into a web page or application. To prevent CRLF injection attacks, it is essential to properly validate and sanitize all input. Any user input that will be placed in an HTTP response header or used in a Structured Query Language (SQL) query should be properly encoded and checked for disallowed characters. It is also critical to keep all web servers and applications up to date with the latest security fixes.
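As a simple illustration of such sanitisation (a generic sketch, not the framework proposed in this paper), CR and LF characters can be stripped from any user-supplied value before it is written into a response header:

```python
import re

_CRLF = re.compile(r"[\r\n]")

def safe_header_value(value: str) -> str:
    """Remove CR/LF so a user-supplied value cannot split the HTTP response."""
    return _CRLF.sub("", value)

# e.g. response.headers["Location"] = safe_header_value(user_supplied_url)
```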
This CRLF attack enables more harmful attacks, including XSS, page injection, web cache poisoning, and many others. Log poisoning and HTTP response splitting are the two popular uses of this attack. By adding a line end and an extra line, the attacker adds false log file entries. This could be done to deceive system administrators or cover up other attacks [10]. LF, CR, #, and ! are common ASCII characters used in creating server-side attacks; by including them in the feature set, these assaults can be detected [2].
A CRLF vulnerability can be exploited by injecting CRLF characters into a web application, for example in order to trigger a buffer overflow or an XSS vulnerability. The proposed framework to find CRLF vulnerabilities is as follows:
Figure 01: How CRLF attacks occurred
In this figure, the user will give a list of website links or a single link when running CRLF. After that, the application will check for header responses if there is CR or LF signs based on that application and will make sure whether it is vulnerable or not vulnerable. This framework will give fewer false positive alerts than other applications. An extract of the complete HTTP GET request is shown below:[1]
This figure shows an extract of a request header in which the CR and LF characters can be found.
CR and LF are special characters (ASCII 13 and 10 respectively, also referred to as \(\backslash\)r and \(\backslash\)n) that are used to signify the end of a line (EOL). They mark the termination of a line, but are handled differently by today's popular operating systems.
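A minimal sketch of the header-response check described above is shown below (the payload, parameter name and marker header are illustrative only, and such a scan should only be run against targets one is authorised to test):

```python
import requests  # third-party HTTP library

# CR/LF followed by a marker header we try to smuggle into the response.
PAYLOAD = "%0d%0aX-CRLF-Test:%20injected"

def check_crlf(url, param="q", timeout=10):
    """Send one request with a CRLF payload and inspect the response headers.
    The URL is flagged only if the injected header actually comes back,
    which keeps false positives low."""
    try:
        resp = requests.get(f"{url}?{param}={PAYLOAD}",
                            timeout=timeout, allow_redirects=False)
    except requests.RequestException:
        return False
    return any(name.lower() == "x-crlf-test" for name in resp.headers)

def scan(urls):
    """Run the check over a list of links, as in the proposed framework."""
    return {u: check_crlf(u) for u in urls}
```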
## IV Result and Discussions
Our study concentrated on determining the presence and consequences of CRLF vulnerabilities in the wild as well as investigating potential remedies and the most effective methods for avoiding and overcoming these problems. Our research shows that CRLF vulnerabilities affect a large number of websites and online apps and that they are rather widespread in web applications. These flaws might have detrimental effects, such as allowing hackers to insert malicious code into a website or application, which could result in data breaches, identity theft, and other security breaches. We advise using a number of quality standards, such as input validation, sanitization, and encoding of user input, as well as routine testing and monitoring of web applications to discover and resolve any vulnerabilities, in order to mitigate these issues. Overall, our research emphasizes how critical it is to handle online security in a proactive manner, including routinely identifying and patching possible vulnerabilities like CRLF issues. By doing this, businesses can defend themselves against security flaws, guarantee the security of their users, and safeguard their websites and apps.
The findings of this investigation showed that a total of 40 websites were examined using the suggested framework for locating CRLF vulnerabilities; the results are shown in Figure 04. Three out of the forty websites were determined to be vulnerable to CRLF injection attacks, according to the data graph. For legal reasons, it is not possible to disclose the target websites' names or addresses.
The findings of the research showed that 40 websites were examined for CRLF vulnerabilities using three distinct frameworks which are depicted in Figure 5. According to the statistics in the figure, the suggested framework was able to find more susceptible websites than Acunetix and Metasploit Pro combined.
## V Conclusion
This study's objective was to better our knowledge about CRLF vulnerabilities in web applications. This study examined the characteristics and potential repercussions of CRLF vulnerabilities as well as techniques for spotting and reducing these dangers. The suggested framework was more effective than the already available tools and had fewer false positives. Our research has shown the significance of taking CRLF vulnerabilities into account during the software development lifecycle and highlighted how they could affect the security of online applications. By offering practical knowledge that
Figure 03: CRLF in the header
Figure 02: Proposed Framework
Figure 04: Vulnerability chart
Figure 05: Comparison chart
may assist people and organizations in defending against the continuously changing threats in the digital world, this study has also contributed to the larger area of cyber security.
|
2308.09998 | A Study of Six Extreme Low Mass Ratio Contact Binary Systems | Multi-band (B, V and R) photometric and spectroscopic observations of six
poorly studied contact binaries carried out at the Western Sydney University
and Las Cumbres Observatory were analysed using a recent version of the
Wilson-Devenney code. All six were found to be of extreme low mass ratio
ranging from 0.073 to 0.149. All are of F spectral class with the mass of the
primary component ranging from 1.05Msun to 1.48Msun. None show light curve
features of enhanced choromospheric activity (O'Connell Effect) however five of
the six do have significant ultraviolet excess indicating presence of increased
magnetic and chromospheric activity. Period analysis based on available survey
data suggests two systems have a slowly increasing period suggesting mass
transfer from the secondary to the primary, two have a slow declining period
with likely mass transfer from primary to the secondary while one shows a
steady period and one undergoing transition from a declining to increasing
period suggesting possible mass transfer reversal. We also compare light curve
solutions against theoretical markers of orbital stability and show that three
of six systems have mass ratios within the theoretical instability limit and
maybe regarded as potential merger candidates. | Surjit S. Wadhwa, Bojan Arbutina, Jelena Petrovic, Miroslav D. Filipovic, Ain Y. De Horta, Nick F. H. Tothill, Gojko Djuravsevic | 2023-08-19T12:31:55Z | http://arxiv.org/abs/2308.09998v1 | # A Study of Six Extreme Low Mass Ratio Contact Binary Systems
###### Abstract
Multi-band (B, V and R) photometric and spectroscopic observations of six poorly studied contact binaries carried out at the Western Sydney University and Las Cumbres Observatory were analysed using a recent version of the Wilson-Devenney code. All six were found to be of extreme low mass ratio ranging from 0.073 to 0.149. All are of F spectral class with the mass of the primary component ranging from \(1.05M_{\odot}\) to \(1.48M_{\odot}\). None show light curve features of enhanced choromospheric activity (O'Connel Effect) however five of the six do have significant ultraviolet excess indicating presence of increased magnetic and chromospheric activity. Period analysis based on available survey data suggests two systems have a slowly increasing period suggesting mass transfer from the secondary to the primary, two have a slow declining period with likely mass transfer from primary to the secondary while one shows a steady period and one undergoing transition from a declining to increasing period suggesting possible mass transfer reversal. We also compare light curve solutions against theoretical markers of orbital stability and show that three of six systems have mass ratios within the theoretical instability limit and maybe regarded as potential merger candidates.
Red Nova, Contact Binary Merger, Low Mass Ratio 0000-0002-0001]Surjit S. Wadhwa
Surjit S. Wadhwa, Bojan Arbutina, Jelena Petrovic, Miroslav D. Filipovic, Ain Y. De Horta, Nick F. H. Tothill, Gojko Djurasevic
## 1 Introduction
Investigation of extreme low mass ratio contact binaries has recently seen heightened interest with a view to identifying potential merger (red nova) progenitors (Wadhwa et al., 2021; Gazeas et al., 2021; Christopoulou et al., 2022; Liu et al., 2023). It has been known for some time that merger events and orbital instability in contact binaries are most likely when the mass ratio of the components (\(q=M_{2}/M_{1}\)) is below some critical value (Rasio and Shapiro, 1995; Arbutina, 2007, 2009). We have recently introduced methods to aid in the rapid identification of potential low mass ratio contact binary systems from survey photometry data (Wadhwa et al., 2022) in addition to a theoretical framework linking the mass of the primary component and geometric elements determined through light curve analysis to orbital instability (Wadhwa et al., 2021).
We have previously reported analysis of fifteen extreme low mass ratio contact binaries with features of orbital instability (Wadhwa et al., 2022, 2023). This study reports photometric and spectroscopic observations of six extreme low mass ratio poorly studied contact binary systems selected from the All Sky Automated Survey for SuperNovae (ASAS-SN) (Shappee et al., 2014; Jayasinghe et al., 2020). The systems were selected for observations based on the techniques described in Wadhwa et al. (2022) as being likely of low mass ratio and potentially unstable. Identification details for the systems are summarised in Table 1. In addition to light curve analysis we show that at least 5 systems exhibit features of chromospheric activity without photospheric evidence for star spots.
## 2 Photometric and Spectroscopic Observations
A1044 was observed over 5 nights in April 2020 with the Western Sydney University (WSU) 0.6m telescope equipped with a cooled SBIG 8300 CCD camera and standard Johnson \(BVR\) filters. All other systems were imaged using the 0.4m telescopes from the Las Cumbres Observatory (LCO) network. The LCO network telescopes acquire images using the SBIG STL-6303 CCD camera and Bessel \(V,B\) and Sloan \(r^{\prime}\) filters. Images were acquired in \(V\) and \(R/r^{\prime}\)
bands for all systems except A0842 which was only observed in \(V\) band due to technical difficulties. To document the \(B-V\) magnitude we also acquired between 40 and 50 images during eclipses in the \(B\) band. All images of A1044 were calibrated using multiple dark, flat and bias frames. The LCO network has an automated pipeline which provided calibrated images for all other systems. Differential photometry for each system was performed with the AstroImageJ (Collins et al., 2017) package using the comparison and check stars noted in Table 1. The comparison star magnitudes were adopted from the American Association of Variable Star Observers (AAVSO) Photometric All-Sky Survey (Henden et al., 2015). The AstroImageJ package estimates photometric errors and we excluded all observations where the estimated error was greater than 0.01 magnitude. Details of observations such as dates, observation numbers, exposure times along with light curve characteristics such as amplitude, maximum brightness, and \(B-V\) colour are collated in Table 2.
Assessment of period variation, especially when small, requires high cadence long term (over many decades) observations. Given the lack of suitable historical observations no meaningful Observed-Computed (\(O-C\)) analysis could be performed for any of the systems. Instead we use the technique of employing periodic orthogonal polynomials and an analysis of variance statistic (a quality of fit marker) to fit multiple overlapping subsets, each of approximately 100 - 150 observations, of \(V\) or \(g^{\prime}\) band survey photometry data for each system to estimate any significant period variations. The methodology is described in detail by Schwarzenberg-Czerny (1996) and was used by Tylenda et al. (2011) to demonstrate the exponential decay in the period of the only confirmed contact binary merger system V1309 Sco. We find that two of our systems (A540 and A2003) have a linear trend towards a reducing period, two systems (A0842 and A1037) have a linear trend of a rising period, V565 Dra has a shallow parabolic trend indicating a shift from falling to rising period while A1044 appears to have a relatively steady period. If one considers the transfer of mass as the only contributor to period change a rising period suggests mass transfer from the secondary to the primary and visa versa for a falling period. The period trends are summarised in Table 3 and illustrated in Figure 1.
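As a rough illustration of this kind of trend analysis (not the code used for this paper, which relies on the Schwarzenberg-Czerny orthogonal-polynomial analysis-of-variance statistic), one can estimate a best period for overlapping subsets of the survey light curve and fit a low-order polynomial to the resulting period series; here the astropy Lomb-Scargle periodogram is used as a stand-in for the period statistic:

```python
import numpy as np
from astropy.timeseries import LombScargle

def period_trend(t, mag, p0, size=120, overlap=0.5):
    """Track the photometric period over overlapping subsets of a light curve.
    For a contact binary, pass p0 = P_orb / 2, since the strongest periodogram
    peak sits at half the orbital period (two eclipses per cycle)."""
    t, mag = np.asarray(t), np.asarray(mag)
    step = max(1, int(size * (1 - overlap)))
    freq = np.linspace(0.95 / p0, 1.05 / p0, 2000)   # narrow window around 1/p0
    epochs, periods = [], []
    for start in range(0, len(t) - size + 1, step):
        tt, mm = t[start:start + size], mag[start:start + size]
        power = LombScargle(tt, mm).power(freq)
        epochs.append(tt.mean())
        periods.append(1.0 / freq[np.argmax(power)])
    coeffs = np.polyfit(epochs, periods, 2)          # curvature vs. linear drift
    return np.array(epochs), np.array(periods), coeffs
```

A nearly zero linear coefficient indicates a steady period, a significant linear term a rising or falling period, and a significant quadratic term a parabolic (reversing) trend of the kind seen in V565 Dra.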
Successful light curve analysis of contact binary systems without radial velocity data is only possible if complete eclipses are present (Terrell & Wilson, 2005). During such analysis the temperature of the primary (\(T_{1}\)) is usually fixed, as the shape of contact binary light curves depends almost exclusively on the geometric parameters such as the mass ratio, inclination and degree of contact. The light curve shape places a constraint on the component temperature ratio (\(T_{2}/T_{1}\)) but not on the absolute value of the component temperatures (Rucinski, 1993, 2001). Notwithstanding the above, varied methods are used in assigning the temperature of the primary (\(T_{1}\)), with colour based estimations
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Name & Abbreviation & Comparison Star & Check Star \\ \hline ASAS J054049-5527.8 & A0540 & 2MASS 05403192-5527279 & 2MASS 05405378-5526306 \\ ASAS J084220-03034. & A0842 & TYC 4867-806-11 & TYC 4867-463-1 \\ ASAS J103737-3709.5 & A1037 & TYC 7197-1470-1 & TYC 7197-1596-1 \\ ASAS J104422-0711.2 & A1044 & TYC 4919-253-1 & 2MASS 10441404-0710597 \\ V565 Dra & V565 Dra & TYC 3897-742-1 & 2MASS 17383576+5710441 \\ ASAS J200304-0256.0 & A2003 & TYC 5164-275-1 & 2MASS 10032574-0257207 \\ \hline \end{tabular}
\end{table}
Table 1: Identifications, abbreviations, check and comparison stars for the six studied systems.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Name & Date of Obs & Obs. (\(V,R\)) & Exp. Times (\(V,R\)) & Max Bright. (\(V\)) & Ampl. (\(V\)) & \(B-V\) & \((B-V)_{o}\) & Sp. Type \\ \hline A0540 & 11/22 - 01/23 & 310, 340 & 45s,40s & 12.52 & 0.35 & 0.62 & 0.58 & F9 \\ A0842 & 02/23 - 02/23 & 375, - & 45s,- & 11.51 & 0.30 & 0.51 & 0.50 & F8 \\ A1037 & 02/23 - 05/23 & 420, 360 & 40s,35s & 10.79 & 0.34 & 0.58 & 0.55 & F9 \\ A1044 & 04/20 - 05/20 & 415, 395 & 40s,33s & 11.72 & 0.23 & 0.15 & 0.12 & F0* \\ V565 Dra & 06/22 - 05/23 & 460, 345 & 45s,40s & 11.26 & 0.32 & 0.63 & 0.60 & F7* \\ A2003 & 08/22 - 08/22 & 480, 550 & 40s,35s & 10.51 & 0.41 & 0.68 & 0.53 & F7 \\ \hline \end{tabular} \({}^{*}\)Spectral classification from LAMOST survey.
\end{table}
Table 2: Details of observations, spectral class and light curve parameters. \((B-V)_{o}\) is the distance- and extinction-corrected estimate.
being employed most often. Colour calibrated estimates, although widely used, have been shown to be inconsistent. Recently, in an analysis of four contact binaries, Hu et al. (2022) found temperature variations between \(B-V\) and \(J-K\) colour calibrations in excess of 500K for two stars and of 250K and 150K for the other two. Ma et al. (2023) reported variations in excess of 400K between spectra and space based survey colour databases. The VizieR database records a range in excess of 1000K for four of the systems reported here and of many hundreds of kelvin for the other two. Spectral classification potentially offers a more accurate and standardised method, and more recently many investigators (see e.g. Li et al., 2023; Guo et al., 2023; Chang et al., 2022; Guo et al., 2022) have adopted low resolution spectral class calibrations (where available) as an alternative way to assign the usually fixed value of \(T_{1}\).
One mechanism to overcome the wide variations that can result from various templates and colour calibrations is the investigation of the Spectral Energy Distribution (SED) constructed using photometric data from various bands collectively. Robitaille et al. (2007) and Bayo et al. (2008) compared SEDs constructed from survey photometry with synthetic theoretical spectra and found that the modelled value of the effective temperature compared favourably with the theoretical spectral value. We compared the effective temperature of the systems (and hence the temperature of the primary) determined through SED calibration against that estimated through spectral class for each system. Firstly, using the methodology described in Bayo et al. (2008), we constructed a photometric data set (SED) in different bands for each system from publicly available survey data. The constructed SEDs were then fitted to theoretical models incorporating Kurucz atmospheres using \(\chi^{2}\) minimisation, as described in Bayo et al. (2008), to determine the effective temperature. The SEDs and fitted models are illustrated in Figure 2. Two (A1044 and V565 Dra) of our six systems were observed with the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) (Luo et al., 2018), with reported spectral classes F0 and F7, respectively. For the other four systems we used the 2m telescopes of the LCO network equipped with the low resolution FLOYDS spectrograph to acquire spectra that were compared to standard main sequence star spectra from Jacoby et al. (1984) and Pickles (1998) to determine
Figure 1: Period trend based on survey photometry data. The green line represents the best fit.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Name & Epoch (HJD) & Period (d) & Period Trend (d/yr) \\ \hline A0540 & \(2459948.146810\pm 0.000250\) & \(0.2982187\pm 0.0000050\) & \(-1.87\times 10^{-7}\) \\ A0842 & \(2459984.366616\pm 0.000309\) & \(0.3335395\pm 0.0000025\) & \(1.86\times 10^{-6}\) \\ A1037 & \(2459986.138027\pm 0.000212\) & \(0.3434028\pm 0.0000015\) & \(1.15\times 10^{-6}\) \\ A1044 & \(2458944.604343\pm 0.000052\) & \(0.6117118\pm 0.0000010\) & Steady \\ V565 Dra & \(2459752.674348\pm 0.000322\) & \(0.3903187\pm 0.0000025\) & Parabolic \\ A2003 & \(2459811.109575\pm 0.000230\) & \(0.4571959\pm 0.0000030\) & \(-8.27\times 10^{-7}\) \\ \hline \end{tabular}
\end{table}
Table 3: Updated orbital elements and period trend.
the spectral class of each system, as recorded in Table 2. Selected FLOYDS spectra and matched library spectra are shown in Figure 3. We used the April 2022 update of the Pecaut & Mamajek (2013) calibration tables of spectral class and temperature for main sequence stars to determine the spectral based temperature of the primary component. The VizieR range, SED and spectral class effective temperatures are summarised in Table 4. From Table 4 we see that, in all cases except A2003, where the SED and spectral class effective temperatures differ by more than 500K, there is good agreement between the collective photometric approach and the spectral class for estimating the effective temperature. The findings are very similar to those of Panchal et al. (2022), who also found close agreement between spectral class effective temperatures and SED modelled values. The difference for A2003 most likely, although not certainly, relates to the relatively high extinction for the system, with an estimated distance corrected extinction of 0.48 magnitudes. Overall we consider the recent trend towards the use of low resolution spectra and spectral classification to assign a fixed value to the temperature of the primary to be valid, and we have used this method for the light curve analysis of the systems presented in this study.
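The \(\chi^{2}\) selection at the core of the SED fit can be sketched as follows. The model grid and its contents are hypothetical placeholders; the fits described above were performed following Bayo et al. (2008) against Kurucz atmosphere models.

```python
import numpy as np

def best_fit_teff(obs_flux, obs_err, model_grid):
    """obs_flux, obs_err: observed fluxes and errors in N bands (arrays);
    model_grid: dict mapping Teff (K) -> model fluxes in the same N bands."""
    best_teff, best_chi2 = None, np.inf
    w = 1.0 / np.asarray(obs_err) ** 2
    for teff, model_flux in model_grid.items():
        model_flux = np.asarray(model_flux)
        # Scale (dilution) factor that minimises chi^2 for this model.
        scale = np.sum(w * obs_flux * model_flux) / np.sum(w * model_flux**2)
        chi2 = np.sum(w * (obs_flux - scale * model_flux) ** 2)
        if chi2 < best_chi2:
            best_teff, best_chi2 = teff, chi2
    return best_teff, best_chi2
```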
## 3 Light curve analysis
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Name & VizieR Range & SED & Spec. Class \\ \hline A0540 & 5818 - 6874 & 6000 & 6050 \\ A0842 & 5959 - 6740 & 6250 & 6180 \\ A1037 & 5305 - 6338 & 6000 & 6050 \\ A1044 & 6632 - 7793 & 7250 & 7200 \\ V565 Dra & 5921 - 7044 & 6250 & 6280 \\ A2003 & 5006 - 5773 & 5750 & 6280 \\ \hline \end{tabular}
\end{table}
Table 4: Effective temperature (K) range as reported in the VizieR database, SED modelling and spectral class interpolation.
Figure 2: Observed and modelled SEDs for all six systems. The observed photometry is indicated in green and the fitted model in black. The flux on the vertical axes is in erg/cm\({}^{2}\)/s/Å. The wavelength is in Angstroms (Å). Both axes are in log scale.
We confirm that all systems show total eclipses, and as such determination of the mass ratio from photometry alone is possible. As there is no significant asymmetry in the maxima, only unspotted solutions were modelled. We used the Wilson-Devinney (WD) code (2013 version), which incorporates Kurucz atmospheres, to model simultaneous \(V\) and \(R\) band light curve solutions (Nelson, 2021; Kallrath et al., 1998; Wilson, 1990). Since the effective temperature of the primary is below 7500K for each system, the gravity darkening coefficients were set to \(g_{1}=g_{2}=0.32\) and the bolometric albedos to \(A_{1}=A_{2}=0.5\). We used the logarithmic limb darkening coefficients from van Hamme (1993), as advocated by Nelson and Robb (2015).
We searched for the mass ratio (\(q\)) of each system using the grid method first described by Russo and Collazzo (1982). A systematic search was made over a range of fixed mass ratios from 0.05 to 15. A coarse search was performed in increments of 0.1 up to \(q=1\) and in increments of 0.2 up to \(q=15\). The search was then refined in increments of 0.01 near the best solution. During the search procedure the temperature of the secondary component (\(T_{2}\)), the surface potential (\(\Omega\)) (i.e. the fillout \(f\)), the orbital inclination (\(i\)) and the dimensionless luminosity of the primary (\(L_{1}\)) were all treated as adjustable parameters. For each mass ratio, iterations were executed until the reported standard deviations were higher than the suggested adjustments for all parameters. To obtain the full solution the mass ratio was also made an adjustable parameter during the last iteration, and the suggested standard deviation for each parameter was recorded as its potential error. A summary of the light curve solutions is presented in Table 5. Observed and WD fitted light curves are illustrated in Figure 4.
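Schematically, the q-search reduces to the loop below. Here `fit_light_curve(q)` is a hypothetical wrapper around a Wilson-Devinney run at fixed mass ratio that returns the residual of the converged fit; the WD code itself is not called from Python in this work.

```python
import numpy as np

def q_search(fit_light_curve, refine_step=0.01):
    # Coarse grid: 0.05-1.0 in steps of 0.1, then up to 15 in steps of 0.2.
    coarse = np.concatenate([np.arange(0.05, 1.0, 0.1),
                             np.arange(1.0, 15.0 + 0.2, 0.2)])
    residuals = {q: fit_light_curve(q) for q in coarse}
    q_best = min(residuals, key=residuals.get)
    # Refine near the best coarse solution in steps of 0.01.
    fine = np.arange(max(q_best - 0.1, refine_step),
                     q_best + 0.1 + refine_step, refine_step)
    fine_res = {q: fit_light_curve(q) for q in fine}
    return min(fine_res, key=fine_res.get)
```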
## 4 Absolute Parameters and Orbital Stability
### Absolute Parameters
Full investigation of astrophysical phenomena such as orbital stability and chromospheric activity requires knowledge of the absolute physical parameters, especially the mass of the primary. Without high resolution spectroscopic observations one is reliant on indirect methods to estimate the mass of the primary component. In this study we use the mean of a distance based estimate and a colour calibration based estimate of the mass of the primary. It is accepted that the primary components of contact binaries follow a zero age main sequence profile (Yildiz and Dogan, 2013). For our colour based estimation we used the 2MASS \(J-H\) magnitudes (Skrutskie et al., 2006) for each system and the calibration tables of Pecaut and Mamajek (2013) (April 2022 update) for low mass (\(0.6M_{\odot}<M_{1}<1.6M_{\odot}\)) stars to interpolate the mass of the primary component.
The distance based estimate was interpolated from the absolute magnitude of the primary component corrected for extinction. The absolute magnitude of the primary component was determined as follows: as all systems have total eclipses and are of low mass ratio, the secondary eclipse apparent magnitude represents the apparent magnitude of the primary. We obtained the absolute magnitude of the primary (\(M_{V1}\)) using the GAIA EDR3 (Anders et al., 2022) distance and the line of sight extinction corrected for distance (\(E(B-V)_{d}\)), as described in Wadhwa et al. (2023). The absolute magnitude was obtained using the standard distance modulus. The observed \(B-V\) was also corrected for extinction, giving \((B-V)_{o}\):
\[(B-V)_{o}=(B-V)-E(B-V)_{d}. \tag{1}\]
Figure 3: Observed (low resolution) and matched library spectra for A0842 and A1037.
The extinction corrected \((B-V)_{o}\) values are summarised in Table 2 while the estimated absolute magnitudes are summarised in Table 5 along with other absolute parameters.
The distance based mass of the primary was obtained by interpolation of the absolute magnitude and mass of main sequence stars from the calibration tables of Pecaut & Mamajek (2013) (April 2022 update) for low mass (\(0.6M_{\odot}<M_{1}<1.6M_{\odot}\)) stars. The distance based estimate resulted in the largest error, and this was adopted as the error for the mass estimation. All other errors were propagated from this estimate. The mass of the secondary (\(M_{2}\)) was determined from the mass ratio, and Kepler's third law was used to derive the current separation (\(A\)) between
Figure 4: The WD model (black line) and observed \(V\) band (open green triangles) and \(R\) band (open red squares) light curves for the six reported systems. The open purple diamonds represent the check star. The flux has been arbitrarily shifted vertically for clarity.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & A0540 & A0842 & A1037 & A1044 & V565 Dra & A2003 \\ \hline \(T_{1}\) (K) (Fixed) & 6050 & 6180 & 6050 & 7200 & 6280 & 6280 \\ \(T_{2}\) (K) & \(5884\pm 16\) & \(5997\pm 27\) & \(5741\pm 34\) & \(7190\pm 27\) & \(6259\pm 11\) & \(6266\pm 10\) \\ Incl. (\({}^{\circ}\)) & \(82.3\pm 0.9\) & \(77.5\pm 1.4\) & \(68.1\pm 0.8\) & \(72.7\pm 0.5\) & \(90.0^{+0.0}_{-0.5}\) & \(82.5\pm 0.5\) \\ \(q\) & \(0.14\pm 0.001\) & \(0.100\pm 0.006\) & \(0.090\pm 0.003\) & \(0.073\pm 0.003\) & \(0.108\pm 0.002\) & \(0.149\pm 0.002\) \\ \(q_{inst}\) (\(f\)=0) & \(0.083\pm 0.006\) & \(0.084\pm 0.003\) & \(0.074\pm 0.003\) & \(0.048\pm 0.001\) & \(0.080\pm 0.013\) & \(0.093\pm 0.005\) \\ \(q_{inst}\) (\(f\)=1) & \(0.094\pm 0.008\) & \(0.096\pm 0.004\) & \(0.086\pm 0.004\) & \(0.053\pm 0.001\) & \(0.091\pm 0.017\) & \(0.108\pm 0.006\) \\ Fillout (\%) & \(69\pm 2\) & \(83\pm 4\) & \(57\pm 4\) & \(83\pm 3\) & \(71\pm 2\) & \(82\pm 2\) \\ \(r_{1}\) (mean) & 0.578 & 0.605 & 0.604 & 0.625 & 0.596 & 0.578 \\ \(r_{2}\) (mean) & 0.259 & 0.239 & 0.219 & 0.216 & 0.238 & 0.271 \\ \(M_{1}/M_{\odot}\) & \(1.13\pm 0.05\) & \(1.12\pm 0.03\) & \(1.18\pm 0.02\) & \(1.48\pm 0.05\) & \(1.15\pm 0.11\) & \(1.05\pm 0.03\) \\ \(M_{2}/M_{\odot}\) & \(0.16\pm 0.02\) & \(0.11\pm 0.02\) & \(0.11\pm 0.01\) & \(0.11\pm 0.01\) & \(0.12\pm 0.03\) & \(0.16\pm 0.02\) \\ \(M_{V1}\) & \(3.76\pm 0.20\) & \(4.16\pm 0.10\) & \(3.97\pm 0.10\) & \(2.43\pm 0.21\) & \(3.77\pm 0.48\) & \(4.41\pm 0.15\) \\ \(A/R_{\odot}\) & \(2.04\pm 0.02\) & \(2.17\pm 0.01\) & \(2.25\pm 0.02\) & \(3.53\pm 0.03\) & \(2.44\pm 0.09\) & \(2.65\pm 0.01\) \\ \(R_{1}/R_{\odot}\) & \(1.18\pm 0.02\) & \(1.31\pm 0.01\) & \(1.36\pm 0.02\) & \(2.21\pm 0.03\) & \(1.45\pm 0.05\) & \(1.53\pm 0.01\) \\ \(R_{2}/R_{\odot}\) & \(0.53\pm 0.02\) & \(0.52\pm 0.01\) & \(0.49\pm 0.02\) & \(0.76\pm 0.02\) & \(0.58\pm 0.03\) & \(0.72\pm 0.01\) \\ \(\Delta\rho\) & -0.54 & -0.43 & -0.60 & -0.15 & -0.37 & -0.18 \\ UV Excess & -1.75 & -1.53 & – & -4.17 & -2.28 & -1.43 \\ \hline \end{tabular}
\end{table}
Table 5: Light curve solution and other absolute parameters for six investigated contact binary systems.
the components. The light curve solution provides an estimate of the fractional radii of the components (\(r_{1,2}\)) for three orientations. The geometric mean of these was used to estimate the absolute radii (\(R_{1,2}\)) of the components by applying \(R_{1}=r_{1}A\) and \(R_{2}=r_{2}A\) as per Awadalla & Hanna (2005). All the absolute parameters are summarised in Table 5.
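For reference, the chain from the light-curve solution to the absolute parameters (\(M_{2}\) from the mass ratio, the separation from Kepler's third law, the absolute radii from the mean fractional radii, and \(M_{V1}\) from the standard distance modulus) can be sketched as below. The function names are illustrative; the worked example in the comment reuses the A0540 values from Table 5.

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m
DAY = 86400.0        # s

def absolute_magnitude(m_apparent, distance_pc, extinction_mag=0.0):
    # Standard distance modulus: M = m - 5 log10(d / 10 pc) - A.
    return m_apparent - 5.0 * np.log10(distance_pc / 10.0) - extinction_mag

def absolute_parameters(m1_solar, q, period_days, r1_mean, r2_mean):
    m1 = m1_solar * M_SUN
    m2 = q * m1
    p = period_days * DAY
    # Kepler's third law: A^3 = G (M1 + M2) P^2 / (4 pi^2).
    a = (G * (m1 + m2) * p**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)
    return {"M2/Msun": m2 / M_SUN,
            "A/Rsun": a / R_SUN,
            "R1/Rsun": r1_mean * a / R_SUN,
            "R2/Rsun": r2_mean * a / R_SUN}

# Example (A0540, Table 5): M1 = 1.13, q = 0.14, P = 0.2982187 d,
# r1 = 0.578, r2 = 0.259 -> A ~ 2.04 Rsun, R1 ~ 1.18 Rsun, R2 ~ 0.53 Rsun.
```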
It has been well established that the secondary components have larger radii than main sequence stars of similar mass. Wadhwa et al. (2022b) also reported that the radius of the primary may be more than 25% larger than that of corresponding main sequence stars. In addition to changes in the radii of the components, some researchers (Yildiz & Dogan, 2013) suggest that evolutionary mechanisms will likely lead to significant density variation between the components, such that the secondary will always be denser and the difference between the densities of the primary and secondary components (\(\Delta\rho\)) will always be less than zero (Kahler, 2004).
As noted above, the light curve solution provides fractional radii for each component, and one can use the geometric mean of these along with Equation (3) of Mochnacki (1981) to calculate the density difference (\(\Delta\rho\)). All our systems show that the density of the secondary is indeed higher and that \(\Delta\rho\) is negative. The results are summarised in Table 5.
### Orbital Stability
The merger potential of contact binary systems and their orbital stability has received significant attention recently (Wadhwa et al., 2021; Liu et al., 2023; Christopoulou et al., 2022). New mathematical relations linking the instability mass ratio, mass of the primary and the degree of contact have recently been reported by Wadhwa et al. (2021) who showed that for low mass primaries (\(0.6M_{\odot}<M_{1}<1.6M_{\odot}\)) the instability mass ratio (\(q_{inst}\)) is between:
\[q_{inst}=0.1269M_{1}^{2}-0.4496M_{1}+0.4403\ (f=1) \tag{2}\]
and
\[q_{inst}=0.0772M_{1}^{2}-0.3003M_{1}+0.3237\ (f=0). \tag{3}\]
The above equations represent the extremes of the instability mass ratio at marginal contact (\(f=0\)) and full over-contact (\(f=1\)).
We calculate the instability mass ratio range (\(q_{inst}\)) for each system and provide it in Table 5. Although all six systems have extreme low mass ratios, only three (A0842, A1037 and V565 Dra) can be classified as potential merger candidates, with modelled mass ratios within the error range of the instability mass ratio. A0540, A1044 and A2003 all have modelled mass ratios well above the instability mass ratio range and as such must be considered likely stable. As all the systems described are relatively bright and well within the reach of modest instruments, regular monitoring of the potential merger candidates, even by advanced amateurs, should be encouraged.
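A minimal helper evaluating the instability bounds of Equations (2) and (3) for a given primary mass (valid for \(0.6M_{\odot}<M_{1}<1.6M_{\odot}\)); the example values in the comment are taken from Table 5.

```python
def q_instability(m1):
    """Return (q_inst at f=0, q_inst at f=1) for primary mass m1 in solar units."""
    q_f0 = 0.0772 * m1**2 - 0.3003 * m1 + 0.3237   # marginal contact, Eq. (3)
    q_f1 = 0.1269 * m1**2 - 0.4496 * m1 + 0.4403   # full over-contact, Eq. (2)
    return q_f0, q_f1

# Example: A0842 with M1 = 1.12 gives q_inst ~ 0.084-0.096, while the modelled
# mass ratio is 0.100, placing the system near the instability range.
```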
## 5 High Energy Indicators of Chromospheric Activity
Contact binary systems usually have periods of less than 24 hours with synchronised rotation. Magnetic activity is thought to be high in rapidly rotating systems, including contact binaries (Gharami et al., 2019), resulting in an increased stellar magnetic wind and magnetic braking. Increased magnetic braking will eventually lead to loss of angular momentum from the system and potential orbital instability (Li et al., 2004). The only significant photospheric indicator of increased magnetic activity is the presence of star spots, usually manifesting as the O'Connell effect or asymmetric maxima of the light curve. The photosphere is dominated by high intensity, low energy emissions which obscure the less intense chromospheric emissions; therefore light curve analysis provides little indication of chromospheric activity. Direct measurement of angular momentum loss is difficult; nevertheless, secondary indicators of enhanced magnetic and chromospheric activity (Vilhu, 1983; Rucinski & Vilhu, 1983; Li et al., 2004) are potentially easier to observe. The six systems described in this report do not demonstrate photometric features of enhanced magnetic/chromospheric activity. Significant chromospheric and magnetic activity, however, is not excluded. Emissions at higher energies, such as in the far ultraviolet band, can provide a clearer indicator of such activity. The GALEX (Galaxy Evolution Explorer) satellite surveyed the sky in both the far-ultraviolet band (FUV) centered on 1539 Å and the near-ultraviolet band (NUV) centered on 2316 Å. Only the FUV band can be relied upon for the detection of chromospheric activity, as NUV emissions may also be contaminated by photospheric emissions (Smith & Redenbaugh, 2010).
As demonstrated by Noyes et al. (1984) and Henry et al. (1996), the \(R^{\prime}_{\rm HK}\) index is a characteristic indicator of chromospheric activity, with \(\log R^{\prime}_{\rm HK}\geq-4.75\) marking a more active star. Smith & Redenbaugh (2010) matched GALEX FUV magnitudes (\(m_{\rm FUV}\)) to the
\(\log R^{\prime}_{\rm HK}\) for dwarf stars to derive the \(\Delta(m_{\rm FUV-B})\) colour excess:
\[\Delta(m_{\rm FUV-B})=(m_{\rm FUV}-B)-(m_{\rm FUV}-B)_{\rm base} \tag{4}\]
where
\[(m_{\rm FUV}-B)_{\rm base}=6.73(B-V)+7.43, \tag{5}\]
They concluded that chromospherically active stars have a UV colour excess below -0.5 while those with lesser chromospheric and magnetic activity have a colour excess above -0.5.
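The colour-excess computation of Equations (4) and (5) reduces to a one-line helper; the function name is illustrative.

```python
def fuv_colour_excess(m_fuv, b_mag, b_minus_v):
    """Delta(m_FUV - B); values below -0.5 suggest enhanced chromospheric activity."""
    baseline = 6.73 * b_minus_v + 7.43      # (m_FUV - B)_base, Eq. (5)
    return (m_fuv - b_mag) - baseline       # Eq. (4)
```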
The GALEX mission observed five of our six systems with measured FUV magnitudes. The ultraviolet colour excess was calculated for all five using Equations 4 and 5. All five have an ultraviolet colour excess well below -0.5, a value suggestive of enhanced magnetic/chromospheric activity in the absence of photospheric features. It is well known that magnetically active features such as plages (Hall, 2008) may be associated with star spots; however, the reverse is not always the case - chromospheric/magnetic activity without photospheric features is possible (Mandal et al., 2017). Table 5 summarises the UV colour excesses for the five systems observed by GALEX.
## 6 Summary and Conclusion
The confirmation that the red nova V1309 Sco in 2008 was indeed a merger event between the components of a contact binary system (Tylenda et al., 2011) has significantly increased interest in the study of contact binaries. Although it has been known for some time that merger events are likely at low mass ratios (Rasio and Shapiro, 1995; Arbutina, 2007, 2009), most investigators have until recently sought to define a single minimum mass ratio at which orbital stability is likely. Recent theoretical updates (Wadhwa et al., 2021) indicate that a global minimum mass ratio is of little practical use; instead, the onset of orbital instability depends on the mass of the primary component, and for low mass systems instability can occur at mass ratios ranging from as high as 0.22 down to below 0.05.
The number of identified contact binaries is ever increasing, with new systems being continually added through various sky surveys. Large scale high resolution radial velocity observations are at present impractical, thus limiting our search for potential merger candidates to systems demonstrating total eclipses and therefore suitable for light curve analysis. To this end Wadhwa et al. (2022) introduced simplified techniques to identify potential extreme low mass ratio systems from survey photometric data. The present study continues our programme of dedicated follow-up observations of potential merger candidates identified from such data. We report photometric and spectroscopic observations of six contact binaries identified as potential extreme low mass ratio systems from the ASAS-SN survey. All six are confirmed as being of extreme low mass ratio, each having a mass ratio below 0.15. Three systems fall into the instability category based on theoretical considerations. It must be noted that the instability mass ratio is highly dependent on the mass of the primary, and a 10% change in the primary's mass can result in up to a 17% change in the instability mass ratio (Christopoulou et al., 2022), so confirmation of the mass of the primary through high resolution spectroscopic observations, in particular for the three potentially unstable systems (A0842, A1037 and V565 Dra), would be desirable.
The six systems reported here share the characteristics of other contact binaries, with significantly larger and brighter secondaries relative to their main sequence counterparts. In addition, the secondary component in all cases is significantly denser than the primary component. Extreme low mass ratio contact binaries usually show photospheric signs of increased magnetic and chromospheric activity as a variation in the two maxima of the light curve due to the presence of star spots. The current sample of six did not show significant variation in the maxima; however, non-photospheric markers such as increased high energy emissions, particularly in the far ultraviolet band, were present in all five of the systems observed by the GALEX mission.
Recent progress in the rapid identification of low mass ratio systems from survey photometry, together with theoretical considerations of orbital stability, has significantly increased the detection of potentially unstable systems. As noted above, we have already reported 15 such systems, and the current study adds a further three. The study also highlights that not all extreme low mass ratio systems will be unstable: three systems from the current study are likely stable, and several previous large surveys, each covering over 10 systems, did not detect any unstable systems even though they all reported extreme low mass ratio systems (Liu et al., 2023; Gazeas et al., 2021; Christopoulou et al., 2022; Li et al., 2022).
Acknowledgements. Based on data acquired on the Western Sydney University, Penrith Observatory Telescope. We acknowledge the traditional custodians of the land on which the Observatory stands, the Dharug people, and pay our respects to elders past and present. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This work makes use of observations from the Las Cumbres Observatory global telescope network. This publication makes use of VOSA, developed under the Spanish Virtual Observatory ([https://svo.cab.inta-csic.es](https://svo.cab.inta-csic.es)) project funded by MCIN/AEI/10.13039/501100011033/ through grant PID2020-112949GB-I00. VOSA has been partially updated by using funding from the European Union's Horizon 2020 Research and Innovation Programme, under Grant Agreement number 776403 (EXOPLANETS-A). B. Arbutina acknowledges the funding provided by the Ministry of Science, Technological Development and Innovation of the Republic of Serbia through the contract 451-03-47/2023-01/200104. During work on this paper, G. Djurasevic and J. Petrovic were financially supported by the Ministry of Science, Technological Development and Innovation of the Republic of Serbia through contract 451-03-47/2023-01/200002
|
2307.10494 | A statistical learning framework for mapping indirect measurements of
ergodic systems to emergent properties | The discovery of novel experimental techniques often lags behind contemporary
theoretical understanding. In particular, it can be difficult to establish
appropriate measurement protocols without analytic descriptions of the
underlying system-of-interest. Here we propose a statistical learning framework
that avoids the need for such descriptions for ergodic systems. We validate
this framework by using Monte Carlo simulation and deep neural networks to
learn a mapping between low-field nuclear magnetic resonance spectra and proton
exchange rates in ethanol-water mixtures. We found that trained networks
exhibited normalized-root-mean-square errors of less than 1% for exchange rates
under 150 s-1 but performed poorly for rates above this range. This
differential performance occurred because low-field measurements are
indistinguishable from one another at fast exchange. Nonetheless, where a
discoverable relationship between indirect measurements and emergent dynamics
exists, we demonstrate the possibility of approximating it without the need for
precise analytic descriptions, allowing experimental science to flourish in the
midst of ongoing theoretical work | Nicholas Hindley, Stephen J. DeVience, Ella Zhang, Leo L. Cheng, Matthew S. Rosen | 2023-07-19T23:14:09Z | http://arxiv.org/abs/2307.10494v1 | A statistical learning framework for mapping indirect measurements of ergodic systems to emergent properties
###### Abstract
The discovery of novel experimental techniques often lags behind contemporary theoretical understanding. In particular, it can be difficult to establish appropriate measurement protocols without analytic descriptions of the underlying system-of-interest. Here we propose a statistical learning framework that avoids the need for such descriptions for ergodic systems. We validate this framework by using Monte Carlo simulation and deep neural networks to learn a mapping between low-field nuclear magnetic resonance spectra and proton exchange rates in ethanol-water mixtures. We found that trained networks exhibited normalized-root-mean-square errors of less than 1% for exchange rates under 150 s-1 but performed poorly for rates above this range. This differential performance occurred because low-field measurements are indistinguishable from one another at fast exchange. Nonetheless, where a discoverable relationship between indirect measurements and emergent dynamics exists, we demonstrate the possibility of approximating it without the need for precise analytic descriptions, allowing experimental science to flourish in the midst of ongoing theoretical work.
1 Athinoula A. Martinos Center for Biomedical Engineering, Massachusetts General Hospital, Charlestown, MA 02129, USA
2 Image X Institute, University of Sydney, Sydney, NSW, Australia
3 Scalar Magnetics, LLC, Ellicott City, MD 02143, USA
4 Harvard Medical School, Boston, MA 02115, USA
5 Department of Physics, Harvard University, Cambridge, MA 02138, USA
* [email protected]
## Introduction
In experimental science we often seek to measure stable, emergent properties of complex, stochastic processes. For instance, the temperature of a room emerges out of the random thermal motion of particles. In particular, temperature reflects the average kinetic energy of these particles and can be measured by a thermometer, but this measurement procedure probes the underlying stochastic behavior indirectly (e.g. by volume changes of a fluid or the emission of thermal radiation). Therefore, simple measurements can reflect a complex interplay between unobserved stochastic interactions, emergent behavior and a third observable property that links the two (Fig. 1). In the measurement of temperature, this interplay is mediated by precise analytic descriptions (e.g. via gas laws). In fact, the SI unit of temperature, the kelvin, was redefined as recently as 2019 by setting an exact numerical value for the Boltzmann constant (1). Indeed, the meter, the ampere and the mole were similarly redefined in the same year to reflect a shift away from using tangible objects to define fundamental physical quantities toward analytic definitions based on mathematical derivation. This indicates a nexus between theory and measurement where the development of novel experimental techniques, often by necessity, lags behind the discovery of analytic descriptions. Here we propose a statistical learning framework that avoids the need for such descriptions for a particular class of stochastic systems - namely, _ergodic_ systems - thereby allowing experimental science to flourish in the midst of ongoing theoretical work.
Initially introduced by Boltzmann while exploring the kinetic theory of gases, the ergodic hypothesis states that all accessible microstates are occupied with equal probability over a long enough time horizon (2). Further developed by Birkhoff (3, 4) and von Neumann (5, 6), ergodic theory has become a mainstay of statistical mechanics and provides mathematical grounding for common-sense notions of randomness. An important property of ergodic systems is that the time-average equals the space-average. That is, for a measure-preserving transformation \(T\colon X\to X\) on some measure space \((X,\Sigma,\mu)\) with \(\mu\)-integrable function \(f\), \(\lim\limits_{n\to\infty}\frac{1}{n}\Sigma_{k=0}^{n-1}f\big{(}T^{k}x\big{)}= \frac{1}{\mu(X)}\!\int f\ d\mu\) almost everywhere. Hence, studying an appropriate cross-section of an ergodic system across space suffices to infer emergent dynamics over time and _vice versa_. Here we leverage this property to learn a mapping between indirect measurements of ergodic processes and emergent behavior. More specifically, we use neural networks to learn such mappings from training data generated via Monte Carlo simulation.
Consider an ergodic system \(S^{i}\) with configuration \(i\), where some emergent property of interest has value \(\mathbf{y}^{i}\). Now, assume that apparatus \(A\) can be used to make indirect measurements of \(S^{i}\) such that \(A\big{(}S^{i}\big{)}=x_{j}^{i}\), and further that this measurement procedure can be modelled via Monte Carlo sampling to simulate a set of \(n\) indirect measurements \(x^{i}=\big{\{}x_{1}^{i},...,x_{n}^{i}\big{\}}\). We call these measurements "indirect" because they do not probe the ergodic process directly but instead capture its effects on some other observable property. Returning to the example of temperature, the average kinetic energy of the particles of a system is not measured by observing individual particles directly but by macroscopic effects such as volume changes in an interacting fluid. By the ergodic hypothesis, emergent value \(y^{i}\) can be deduced by studying an appropriate cross-section of the phase space of \(S^{i}\). However, since we do not have direct access to this cross-section, we must instead learn a mapping from indirect measurements of microstates to emergent values. In the absence of complete analytic descriptions, we suggest that such a mapping can nonetheless be learned using neural networks. This task is achieved using a corpus of \(m\) training examples of the form \(\{(\mathbf{x}^{1},\mathbf{y}^{1}),...,(\mathbf{x}^{m},\mathbf{y}^{m})\}\) and formulated as a problem of manifold approximation (Fig. 2).
Suppose there exist diffeomorphic functions \(f\) and \(g\) such that \(f\) projects every measurement \(x_{j}^{i}\) onto some smooth manifold \(X\) and g projects every emergent value \(\mathbf{y}^{i}\) onto some smooth manifold \(Y\). Without loss of generality, here we consider the case where every measurement \(x_{j}^{i}\) is a real-valued vector with dimensions \(d\) and every emergent value \(\mathbf{y}^{i}\) is a real number. Hence we have that \(X\) and \(Y\) are embedded in the ambient spaces \(R^{d}\) and \(R\) respectively. Now, some of the information contained in each measurement \(x_{j}^{i}\) may not be useful or necessary to compute \(\mathbf{y}^{i}\) and the range of observed emergent values will lie within some bound of \(R\). We wish to take advantage of these implicit constraints by learning embedded spaces \(X<R^{d}\) and \(Y<R\) as well as a diffeomorphic
Figure 1: **Indirect measurements can be used to deduce emergent properties (A) “Fuzzy” stochastic processes can be mapped to “sharp” emergent properties via indirect measurements. (B) A familiar example involves the use of a thermometer to measure temperatures which arise due to random thermal motion. With a gas thermometer, temperature can be calculated analytically using the ideal gas law (C) Here we explore the link between stochastic proton exchange and the emergent exchange rate via NMR spectroscopy. In high-field NMR, the exchange rate can be calculated analytically using the intensities of proton signals. (D) Ergodic systems have the special property that the space-average equals the time-average. This equivalence means that the emergent dynamics of an ergodic system over time can be deduced by studying a cross-section of the phase space at a particular time.**
mapping \(\varphi\) between \(X\) and \(Y\). Formulating the problem in this manner yields the composite transformation \(\hat{y}^{i}=g^{-1}\cdot\varphi\cdot f\big(x_{j}^{i}\big)\) on the joint manifold \(M_{X\times Y}=X\times Y\). The task is to approximate manifolds and mappings that produce \(\hat{y}^{i}\) close to \(y^{i}\). Importantly, since the desired composite transformation is a continuously differentiable function on a compact subset of \(R\), it should theoretically be learnable by the universal approximation theorem of neural networks [7].
Consider neural network \(N\) with parameters \(\theta\), such that \(N\big(x_{j}^{i};\,\theta\big)=\hat{y}^{i}\). Then \(N\) can be trained via stochastic gradient descent on the corpus \(\{(x^{1},y^{1}),...,(x^{m},y^{m})\}\) to determine optimal parameters: \(\bar{\theta}=\arg\min_{\theta}L(\hat{y}^{i},y^{i})\), where \(L\) is a loss function that captures differences between \(\hat{y}^{i}\) and \(y^{i}\). Importantly, the neural network must learn a mapping that produces the same emergent value across different measurements. That is, \(N\) must project measurements corresponding to different points in the ambient coordinate system \(R^{d}\times R\) to the _same_ point in the intrinsic coordinate system of \(M_{X\times Y}\). This data topology arises because each emergent value corresponds to a distribution of measurements. Hence, for each \(\big(x^{i},y^{i}\big)\), we consider a neighborhood of points encompassing \(\big(x_{1}^{i},y^{i}\big)\), \(\big(x_{2}^{i},y^{i}\big)\), \(\ldots\), \(\big(x_{n}^{i},y^{i}\big)\) in \(R^{d}\times R\) and the mapping \(\sigma=(f,g^{-1})\) between each \(\big(x_{j}^{i},y^{i}\big)\) and \(\big(z,\,\hat{y}^{i}\big)\), where \(z=f(x_{j}^{i})\) and \(\hat{y}^{i}=g^{-1}\cdot\varphi(z)\). Then \(\sigma\colon R^{d}\times R\to M_{X\times Y}\) maps to the same coordinate \(\big(x_{j}^{i},y^{i}\big)\to\big(z,\,\hat{y}^{i}\big)\) for all \(j=1,...,n\). Topologically speaking, we say \(\sigma\) defines a local coordinate chart of \(M_{X\times Y}\) near the neighbourhood of \(\big(x^{i},y^{i}\big)\). Learning this projection mapping can be thought of as extracting the key features across every measurement in \(\big\{x_{1}^{i},...,x_{n}^{i}\big\}\) that are sufficient to compute \(\hat{y}^{i}\). By producing these low-dimensional feature representations of each \(x_{j}^{i}\) we have \(X<R^{d}\). Additionally, since the emergent values \(y^{1},...,y^{m}\) will span some finite range of \(R\) we have \(Y<R\). Lastly, defining \(\bar{\theta}\) in the manner described above encourages mappings that produce \(\hat{y}^{i}\) close to \(y^{i}\).
To demonstrate the use of our proposed framework we consider an ergodic process in ethanol-water mixtures, namely, proton exchange between water and the hydroxyl group of ethanol which occurs at a rate that depends on the relative concentration of the two compounds. This process is ergodic because the exchange rate associated with a typical exchange-pair over time is equivalent to that at a particular time for an ensemble of pairs. Nuclear magnetic resonance (NMR) spectroscopy can be used to measure the exchange rate through the effects exchange has on the spectrum. In high-field NMR, the Bloch-McConnell equations provide a simple analytical solution to the spectral form in the presence of exchange [8]. Since resonance signals originate from individual proton spins, exchange simply represents a swap of resonance frequencies for the two protons, which is incorporated into the semi-classical differential equations describing the precessing spins. These can also be adapted to measure exchange via 2D NMR spectroscopy [9]. However, high-field NMR requires large, expensive superconducting magnets. We are developing novel methods to reduce the cost and footprint of NMR by using field strengths orders of magnitude lower. At such low fields, conventional NMR techniques do not work, and we instead utilize homonuclear J-coupling spectroscopy based on the J-synchronized echo (SyncE) sequence [10, 11]. Modeling exchange in this regime presents three challenges. First, J-coupling spectroscopy measures magnetic states of strongly-coupled spins grouped together into dressed states, and their energy levels are typically calculated numerically from the Schrodinger equation rather than via perturbation theory. Second, detection occurs during and following a series of pulses which occur during and on the same timescale as the exchange process. Third, there is no known analog to the Bloch-McConnell equations that provides analytic solutions in the strong-coupling regime. Therefore, we instead use Monte Carlo simulation to model J-coupling spectra at various exchange rates via numerical simulation of the system during the SyncE pulse sequence. The DRONE network [12] was trained to determine exchange rates from these simulated J-coupling spectra.
Figure 3: **Network performance was limited to a specific range of exchange rates (A) Plot of normalized-root-mean-squared-error (NRMSE) against exchange rate in Hz for the trained DRONE network when tested on averaged simulated J-coupling spectra (B) Scatter plot and Pearson correlation (r) between predicted and ground-truth exchange rates (with black dotted line representing perfect correlation).**
Figure 2: **Mathematical intuition and workflow for our proposed learning framework (A) The task of mapping from indirect measurements of an ergodic system to emergent properties can be considered as one of manifold approximation, in which \(f\) projects every measurement \(\mathbf{x}_{j}^{i}\) onto some smooth manifold \(X\), g projects every emergent value \(\mathbf{y}^{i}\) onto some smooth manifold \(Y\), \(\varphi\) is a between-manifold mapping from \(X\) to \(Y\), and the goal is to estimate emergent values from measurements via the composite transformation \(\mathbf{g^{-1}\cdot\varphi\cdot f}\). (B) A training corpus can be generated via Monte Carlo simulation to produce a set of measurements across a set of emergent values. (C) Deep learning can be used to train a neural network to learn a mapping between indirect measurements and emergent values using an appropriate training corpus (D) A trained neural network can be deployed to estimate emergent values given indirect measurements.**
Figure 4: **Exchange rates measured with SyncE and EXSY** Black dots represent the rate of proton exchange for ethanol, \(k_{E}\), measured with EXSY. Error bars for the first three points are smaller than the symbols. The dashed curve is a best-fit to the EXSY measurements at low water concentrations (\(<14\) M water). Red diamonds represent \(k_{E}\) determined by SyncE with DRONE. An expected error of 2% from the simulated data gives error bars smaller than the symbols.
Figure 5: **Real and predicted SyncE spectra** Measured SyncE spectra were compared with the predicted spectra using Monte Carlo simulation and the DRONE network. Here we show overlapping plots of real and predicted spectra for the best-performing exchange rate at 27.9 s\({}^{\text{-1}}\) (**A**) and the worst-performing exchange rate at 2 s\({}^{\text{-1}}\) (**B**).
## Results
**Performance on data generated by Monte Carlo simulation.** We trained the DRONE network using spectra simulated for exchange rates between 0 and 199 s\({}^{-1}\) in integer steps, with eight individual spectra provided for each exchange rate. Due to the random timing of exchange events, individual spectra exhibited variations, which were more pronounced at slower exchange rates. This variability enabled robust learning without the introduction of artificial noise. We then tested the trained DRONE network on averaged simulated spectra (Fig. 3). As the system is ergodic, these averaged spectra approximate measurements for an ensemble of exchange pairs. We found good agreement between predictions and the ground truth for exchange rates between 0 and 150 s\({}^{-1}\), but network performance decreased for exchange rates beyond this range. This can be seen in the increasing normalized-root-mean-square error (NRMSE) and deviations in accuracy.
**Performance on experimental data.** We first established an independent measurement of the proton exchange rate using the EXSY sequence in a high-field (600 MHz) NMR spectrometer (Fig. S1). For water concentrations between 0 and 14 M, we found the bidirectional exchange rate as a function of water concentration to be \(k=3.03\ \pm\ 0.04\ s^{-1}M^{-1}[\mathrm{H_{2}O}]\) (Fig. S2). From this relationship and the relative fractions of exchanging water and ethanol protons, we calculated a predicted curve for the rate of proton loss from ethanol, \(k_{E}\), which is the exchange parameter measured with our low-field technique. EXSY measurements at higher water concentrations deviated from the expected linear relationship. Next, we used DRONE to determine the exchange rate from real data of ethanol-water mixtures acquired with the SyncE sequence at 6.5 mT (276 kHz NMR frequency). For the range 0 to 17 M water, our low-field measurements lie along the EXSY curve. Above 17 M, the data lie above the curve but are still in rough agreement with the values measured with EXSY (Fig. 4). Some of the discrepancies at very low exchange rates are due to self-exchange among ethanol protons, which is captured by SyncE but not by EXSY. SyncE measures a self-exchange rate of \(k_{EE}=3\ \ s^{-1}\) for anhydrous ethanol. We found good agreement between measured and predicted SyncE spectra using the DRONE predictions (Fig. 5).
## Discussion
Our proposed learning framework relies on the existence of discoverable relationships between indirect measurements and emergent behavior. At the level of neural networks, these relationships can be understood by probing activations. Where there are detectable differences between two spectra, the activations must also be different. Figure 6 shows activation maps for spectra with exchange rates between 10 s\({}^{-1}\) and 200 s\({}^{-1}\). The four simulated spectra for 10 s\({}^{-1}\) appear to have significant differences due to the natural randomness of the simulation, yet DRONE exhibited identical activation maps, showing that it is able to isolate the important features of the spectra. The spectra at 50 s\({}^{-1}\) and 100 s\({}^{-1}\) have different features and activation maps, showing that they are distinguishable from one another. However, spectra at 100 s\({}^{-1}\) and above have nearly identical activation maps, resulting in a decrease in predictive power.
Our measured value of ethanol-water exchange is higher than was found by Luz, Gill, and Meiboom, who measured \(k=0.8\ s^{-1}M^{-1}[\mathrm{H_{2}O}]\) via lineshape analysis [(13)]. However, we did not attempt to adjust the acidity/basicity of the solutions as they did, and our value is in line with their measurements before they performed adjustments. As in their experiments, it is possible there are some acid or base impurities in the ethanol sample helping catalyze higher exchange rates. The deviation of both the EXSY and SyncE data from the expected linear relationship at high water concentrations also indicates that there are likely higher order exchange processes requiring further investigation.
Learning a mapping between indirect measurements of exchange events and emergent exchange rates required an appropriately curated dataset for training. In ergodic systems such as ours, the average of a series of simulations can be considered as equivalent to the measurement of a
large collection of identical systems. Therefore, one possibility is to train the neural network on this averaged measurement for each exchange rate. However, it can take a large number of inputs to produce an averaged result that properly reflects the desired distribution. Moreover, this smoothing may dampen legitimate features that are useful in inferring emergent properties. Alternatively, we show that it is possible to train over a distribution of individual spectra for each exchange rate. Training the network in this manner has four important advantages. First, this training corpus reflects the behavior of ergodic systems, in which a _distribution_ over microscopic, stochastic interactions is observed macroscopically as a _single_ emergent value. Second, training on individual simulated measurements significantly increases the number of training examples. In the present work, a neural network was trained on 8 simulated acquisitions for each exchange rate rather than the corresponding averages, resulting in an 8-fold enlargement of the training dataset. Third, our previous work on the DRONE network required the addition of Gaussian noise during training to promote robust learning [12, 14], but this was redundant in the present work given the inherent variability of the training data. Fourth, simulated data can be generated abundantly without consuming the resources required for physical experiments. Here we leverage these advantages to learn emergent dynamics from the indirect measurement of ergodic systems. By approximating unknown yet discoverable laws we suggest that this approach could be used to overcome a lack of precise, analytical descriptions in developing novel experimental tools.
Figure 6: **Spectra become less distinguishable from one another as exchange rate increases (A, C, E, G, I)** The left-hand panel shows J-coupling spectra (zoomed-in for nutation frequencies between 6 and 21 Hz where differences between spectra are most noticeable) across different exchange rates where each simulated measurement is denoted by a different color. (B, D, F, H, J) The right-hand panel shows activations for the first connected layer of the DRONE network, where each column corresponds to different input measurements from the exchange rate in each row.
## Materials and Methods
### Experimental Design.
We consider the task of mapping from indirect measurements of an ergodic system to emergent properties as one of manifold approximation via deep learning. This task is achieved using a corpus of \(m\) training examples of the form \(\{(x^{1},y^{1}),...,(x^{m},y^{m})\}\), where each \(x^{i}\) is a set of \(n\) measurements \(x^{i}=\{x_{1}^{i},...,x_{n}^{i}\}\) for emergent value \(y^{i}\) generated by Monte Carlo simulation. A neural network is trained to learn the composite transformation \(\hat{y}^{i}=g^{-1}\cdot\varphi\cdot f\left(x_{j}^{i}\right)\) on the joint manifold \(M_{X\times Y}=X\times Y\), where \(f\) projects every measurement \(x_{j}^{i}\) onto some smooth manifold \(X\), \(g\) projects every emergent value \(y^{i}\) onto some smooth manifold \(Y\), and \(\varphi\) is a between-manifold mapping from \(X\) to \(Y\). Neural network \(N\) with parameters \(\theta\), such that \(N\left(x_{j}^{i};\,\theta\right)=\hat{y}^{i}\), can be used to approximate these manifolds and mappings via stochastic gradient descent to learn optimal parameters \(\bar{\theta}=\arg\min_{\theta}L(\hat{y}^{i},y^{i})\), where \(L\) is a loss function that captures differences between \(\hat{y}^{i}\) and \(y^{i}\). We validate this framework by using low-field NMR spectra to infer proton exchange rates in water-ethanol mixtures.
Low-field (6.5 mT) J-coupling spectra and high-field exchange spectroscopy (EXSY) measurements were acquired for mixtures prepared from anhydrous ethanol (Sigma Aldrich) and deionized water. These contained between 0% v/v water and 50% v/v water, corresponding to between 0 and 31 M concentrations of water. The DRONE neural network was used to learn a mapping between simulated J-coupling spectra and proton exchange rates. Exchange rates predicted by the trained network from experimentally acquired J-coupling spectra were then compared to exchange rates measured with EXSY.
### Low-field NMR Measurements.
Spectra at 276 kHz (6.5 mT) were measured in a custom-built high-homogeneity electromagnet-based MRI scanner with a Tecmag Redstone\({}^{\rm TM}\) console described previously [15]. For the presently described work, a solenoidal sample coil was used, designed to hold 10 mm NMR tubes, and a B\({}_{0}\) field-frequency lock was used to maintain the resonance frequency within \(\pm\)0.25 Hz. The scanner was shimmed to achieve a deionized-water linewidth of better than 0.5 Hz. RF pulses directly from the synthesizer were used, resulting in a 90\({}^{\circ}\) pulse length of 1 ms using about 4 \(\mu\)W.
Homonuclear J-coupling spectra were acquired using the multi-acquisition J-synchronized echo pulse sequence (SyncE) previously described [16]. Echo times \(\tau\) varied between 750 ms and 8.3 ms, giving a range of \(\sim\)0 to 30 Hz for the equivalent nutation frequency, \(\nu_{n}\). The actual pulse delays were adjusted appropriately for the pulse width so that the time between the 180\({}^{\circ}\) pulse centers was 2\(\tau\). We used a total pulse train time \(T=3\) s with \(n=1\) to 179 loops, giving a resolution of 0.167 Hz. All pulses were performed on-resonance with the single \({}^{1}\)H line of the conventional NMR spectrum. Pulses were calibrated with a Rabi experiment. Echo acquisitions were 8 ms long centered between 180\({}^{\circ}\) pulses (16 points with 500 \(\mu\)s dwell time). The delay between measurements was at least 5 T\({}_{1}\).
### High-Field NMR.
Exchange spectroscopy was performed at 14.1 T (600 MHz) with a Bruker Bio-Spin Avance NMR spectrometer. Bidirectional exchange rate \(k\) was calculated from the intensity of the water and ethanol hydroxyl protons, \(I_{WW}\) and \(I_{EE}\), respectively, and the cross peaks \(I_{EW}\) and \(I_{WE}\) following Perrin et al. [9] and others. Intensities were determined via integration of the appropriate peaks of the 2D NMR spectra. For each mixing time a corrected peak ratio was then calculated taking into account the relative proton concentrations of the two species:
\[r=4X_{W}X_{E}\left(\frac{I_{WW}+I_{EE}}{I_{WE}+I_{EW}}\right)-(X_{W}-X_{E})^{2}. \tag{1}\]
Here \(X_{W}\) and \(X_{E}\) are the fraction of exchanging protons coming from water and ethanol, respectively. The value \(1/r\) was plotted against mixing time \(t_{m}\) and the data were fit with the function
\[\frac{1}{r}=\ A\left(\frac{1-\exp(-kt_{m})}{1+\exp(-kt_{m})}\right)\, \tag{2}\]
where \(A\) and \(k\) are free parameters found by the fit.
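A sketch of this fitting step, assuming the peak intensities have already been integrated from the 2D spectra; `scipy.optimize.curve_fit` is used here as a stand-in for whatever fitting routine was actually employed.

```python
import numpy as np
from scipy.optimize import curve_fit

def corrected_ratio(i_ww, i_ee, i_we, i_ew, x_w, x_e):
    # Eq. (1): corrected peak ratio r at one mixing time.
    return 4 * x_w * x_e * (i_ww + i_ee) / (i_we + i_ew) - (x_w - x_e) ** 2

def eq2(t_m, a, k):
    # Eq. (2): model for 1/r as a function of mixing time.
    return a * (1 - np.exp(-k * t_m)) / (1 + np.exp(-k * t_m))

def fit_exchange_rate(t_mix, r_values):
    """t_mix: mixing times (s); r_values: corrected ratios at each mixing time."""
    popt, pcov = curve_fit(eq2, np.asarray(t_mix), 1.0 / np.asarray(r_values),
                           p0=[1.0, 10.0])
    a, k = popt
    k_err = float(np.sqrt(np.diag(pcov))[1])
    return k, k_err
```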
EXSY measures the bidirectional exchange rate \(k=k_{EW}+k_{WE}\), where \(k_{EW}\) and \(k_{WE}\) are the rates of proton exchange from ethanol to water and water to ethanol, respectively. However, SyncE measures \(k_{E}\), the rate of proton loss from ethanol. In the simplest case exchange only takes place with water and \(k_{E}=k_{EW}\). From the mass conservation relationship
\[X_{E}k_{EW}=X_{W}k_{WE}\;, \tag{3}\]
we find
\[k_{E}=k_{EW}=X_{W}k\;. \tag{4}\]
However, we find that SyncE also measures an exchange in the absence of water due to exchange between ethanol molecules, with rate \(k_{EE}=3\;s^{-1}\). The observed exchange rate is then
\[k_{E}=k_{EE}+k_{EW}=k_{EE}+X_{W}k \tag{5}\]
Rates \(k_{EE}\) and \(k\) depend on the molar concentrations of ethanol and water, respectively. Assuming linear relationships, the measurements imply that \(k_{EE}=0.18\;s^{-1}M^{-1}[\text{EtOH}]\) and \(k=3.03\;\pm 0.04\;s^{-1}M^{-1}[\text{H}_{2}\text{O}]\).
**Spectral Simulation.** Custom software was written to efficiently simulate the J-coupling spectrum measured by the multiacquisition SyncE sequence. The software propagates the time-dependent Schrodinger equation for the spin system and calculates the remaining x-axis magnetization, M\({}_{x}\), at the center of each echo. To model exchange, we divide the propagation between pulses into a number of steps and calculate the probability \(p\) of an exchange event during each step. The probability is
\[p=t_{step}\,k_{E}, \tag{6}\]
where \(t_{step}\) is the step length and \(k_{E}\) is the rate of proton loss from ethanol.
For each step, a random number between 0 and 1 is drawn from a uniform distribution, and if the number is less than p, an exchange event occurs. When an exchange occurs, we follow a modified version of the method in Barskiy et al. (17) and replace the hydroxyl proton with one whose spin state is M\({}_{x}\). All coherences with the hydroxyl proton are set to zero. (In Barskiy et al., the new hydroxyl proton state is instead assumed to be random, i.e. unpolarized, because their experiment is at zero magnetic field.) As a check, some simulations were also performed with the Spinach package in MATLAB.
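The exchange-event bookkeeping between pulses can be sketched as follows; `propagate` and `apply_exchange` are hypothetical hooks standing in for the spin-dynamics steps of the custom simulator (numerical propagation of the Schrodinger equation, and the hydroxyl-proton replacement with coherence zeroing described above).

```python
import numpy as np

def evolve_with_exchange(state, total_time, n_steps, k_e,
                         propagate, apply_exchange, rng=None):
    """Propagate `state` for `total_time`, drawing an exchange event in each
    step with probability p = t_step * k_E (Eq. 6)."""
    rng = rng or np.random.default_rng()
    t_step = total_time / n_steps
    p_exchange = t_step * k_e
    for _ in range(n_steps):
        state = propagate(state, t_step)
        if rng.random() < p_exchange:   # uniform draw on [0, 1)
            state = apply_exchange(state)
    return state
```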
**Network Architecture.** The DRONE network (12) was used to predict exchange rates from simulated homonuclear J-coupling spectra. A four layer fully-connected neural network was defined using the Pytorch machine learning framework (18). The input layer consisted of 150 nodes corresponding to the 150 nutation frequencies probed between 5 and 30 Hz during the simulated echo sequences. Data for frequencies below 5 Hz were discarded since they were not useful in determining proton exchange rates. Cropping the data in this manner encouraged the network to focus on the relevant features for inference. As each spectrum was used to compute the corresponding exchange rate, there was only one output node. Between the input and output layers were two hidden layers with 300 nodes each, hence the network required storage of 300 \(\times\) 300 = 90,000 coefficients. A hyperbolic tangent function was used for the hidden layers while sigmoid activation was used for the output layer.
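A PyTorch sketch consistent with the layer sizes and activations described above (150 inputs, two hidden layers of 300 units with tanh, a single sigmoid output); this is an illustrative reimplementation, not the authors' released DRONE code.

```python
import torch.nn as nn

class DroneLikeNet(nn.Module):
    def __init__(self, n_in=150, n_hidden=300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.Tanh(),       # hidden layer 1
            nn.Linear(n_hidden, n_hidden), nn.Tanh(),   # hidden layer 2
            nn.Linear(n_hidden, 1), nn.Sigmoid(),       # normalised exchange rate
        )

    def forward(self, x):
        return self.net(x)
```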
Simulated spectra were generated for ethanol-water mixtures with proton exchange rates between 0 and 199 s-1 in integer steps. Each neural network was then trained using 8 unique simulations for each exchange rate, yielding a training corpus of 1600 spectra and a training:validation split of 90:10 was used. Additionally, each input spectrum was standardized to have zero mean and unit standard deviation while output exchange rates were normalized between 0 and 1. These transformations were used to reflect the shapes of the hyperbolic tangent and sigmoid activation functions, respectively, and were found to drastically improve both training time and validation accuracy. Training proceeded by minimizing the mean squared error between predicted and ground-truth exchange rates using the ADAM optimization algorithm (19) with a learning rate of 0.01 and a batch size of 512. Each network was trained for 50 epochs to ensure convergence, requiring approximately 1 min on a NVIDIA GTX 1080 Ti. |
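A minimal training-loop sketch following the stated settings (standardised input spectra, exchange rates normalised to [0, 1], MSE loss, Adam with learning rate 0.01, batch size 512, 50 epochs); the data tensors and the validation split are schematic.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, spectra, rates, epochs=50, lr=0.01, batch_size=512):
    """spectra: (N, 150) float tensor; rates: (N,) float tensor in s^-1."""
    x = (spectra - spectra.mean(dim=1, keepdim=True)) / spectra.std(dim=1, keepdim=True)
    y = (rates / rates.max()).unsqueeze(1)            # normalise outputs to [0, 1]
    loader = DataLoader(TensorDataset(x, y), batch_size=batch_size, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for xb, yb in loader:
            optimiser.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            optimiser.step()
    return model
```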
2308.00002 | An Overview Of Temporal Commonsense Reasoning and Acquisition | Temporal commonsense reasoning refers to the ability to understand the
typical temporal context of phrases, actions, and events, and use it to reason
over problems requiring such knowledge. This trait is essential in temporal
natural language processing tasks, with possible applications such as timeline
summarization, temporal question answering, and temporal natural language
inference. Recent research on the performance of large language models suggests
that, although they are adept at generating syntactically correct sentences and
solving classification tasks, they often take shortcuts in their reasoning and
fall prey to simple linguistic traps. This article provides an overview of
research in the domain of temporal commonsense reasoning, particularly focusing
on enhancing language model performance through a variety of augmentations and
their evaluation across a growing number of datasets. However, these augmented
models still struggle to approach human performance on reasoning tasks over
temporal common sense properties, such as the typical occurrence times,
orderings, or durations of events. We further emphasize the need for careful
interpretation of research to guard against overpromising evaluation results in
light of the shallow reasoning present in transformers. This can be achieved by
appropriately preparing datasets and suitable evaluation metrics. | Georg Wenzel, Adam Jatowt | 2023-07-28T01:30:15Z | http://arxiv.org/abs/2308.00002v3 | # An Overview Of Temporal Commonsense Reasoning and Acquisition
###### Abstract
Temporal commonsense reasoning refers to the ability to understand the typical temporal context of phrases, actions, and events, and use it to reason over problems requiring such knowledge. This trait is essential in temporal natural language processing tasks, with possible applications such as timeline summarization, temporal question answering, and temporal natural language inference. Recent research on the performance of large language models suggests that, although they are adept at generating syntactically correct sentences and solving classification tasks, they often take shortcuts in their reasoning and fall prey to simple linguistic traps. This article provides an overview of research in the domain of temporal commonsense reasoning, particularly focusing on enhancing language model performance through a variety of augmentations and their evaluation across a growing number of datasets. However, these augmented models still struggle to approach human performance on reasoning tasks over temporal common sense properties, such as the typical occurrence times, orderings, or durations of events. We further emphasize the need for careful interpretation of research to guard against overpromising evaluation results in light of the shallow reasoning present in transformers. This can be achieved by appropriately preparing datasets and suitable evaluation metrics.
**Keywords:** Commonsense Reasoning, Temporal Common Sense, Temporal Reasoning, Transformer Architecture, Temporal Natural Language Processing
**MSC Classification:** 68T50
**ACM Classification:** I.2.7
## 1 Introduction
Humans generally perform well in interpreting implicit information in text and speech by leveraging _commonsense reasoning_. This ability is reflected in the way we communicate. For example, when we read the phrase "I couldn't get out of bed this morning.", we generally assume that this refers to a state of mind and not a physical inability to get out of bed. When we read "He had butterflies in his stomach.", we understand this as a figure of speech for an anxious or nervous feeling. Rather than specifying the literal meaning, we rely on the recipient's implicit prior understanding of certain concepts and expressions in our language.
Commonsense reasoning can manifest in different forms. Datasets such as CIDER (Ghosal et al, 2021), Cosmos QA (Huang et al, 2019), GLUCOSE (Mostafazadeh et al, 2020), and COM2SENSE (Singh et al, 2021) aim to serve as benchmarks to better understand the commonsense reasoning capabilities of current state-of-the-art machine learning models. In the process, these capabilities are often grouped into taxonomies, composed of categories such as physical common sense, social common sense, motivations, reactions, causality, and several others. Furthermore, collecting commonsense knowledge can be a primary goal for some knowledge bases, such as the ConceptNet (Speer et al, 2017) and ATOMIC\({}^{20}_{20}\) (Hwang et al, 2021) _knowledge graphs_ (KGs), which aim both to bolster the general reasoning capabilities of _language models_ (LMs) and to train them to express their implicit knowledge directly for evaluation purposes.
Historically, building machine learning systems with commonsense reasoning was a problem that was relatively difficult to tackle. One of the reasons for the first AI winter, a period of reduced funding and interest in artificial intelligence, was the lack of algorithmic problem-solving approaches, with many developers instead attempting to build systems that "think humanly" (Toosi et al, 2021). However, due to advances in computing power and neural models, these approaches have seemingly become possible in many _natural language processing_ (NLP) tasks (Radford et al, 2019). A driving force behind this change is the use of transformer models (Vaswani et al, 2017) and the LMs they enable, such as BERT (Kenton and Toutanova, 2019) and GPT (Radford et al, 2018).
This article focuses on _temporal commonsense_ (TCS) reasoning. TCS encompasses a variety of traits. For example, given the pair of sentences "Mary went to the hospital. She broke her leg.", the likely sequence of events is that Mary first broke her leg and then went to the hospital, despite this not being explicitly expressed in the text. Understanding event durations is another such property. We intuitively know that going on a walk takes less time than going on vacation, even though the structure of both phrases is very similar.
Although the specific notion of TCS is relatively new, many of its applications are not. In this survey, we first provide some background on the field of temporal reasoning, where tasks that relate directly to proposed TCS dimensions, such as event relation extraction, have been explored since the early 2000s (Pustejovsky et al, 2003; Pustejovsky, 2003; Verhagen et al, 2007).
In addition to the apparent benefit of incorporating TCS reasoning into such tasks, time-aware LMs for downstream NLP tasks are also becoming increasingly popular.
Recently, models such as TempoBERT (Rosin et al, 2022) and BiTimeBERT (Wang et al, 2022) have been proposed, which aim to temporalize the embeddings provided by LMs such as BERT via the document creation time or explicit temporal expressions in the training corpus. Another approach is to temporalize the attention mechanism of the transformer itself (Rosin and Radinsky, 2022). Generally, these approaches are evaluated in domains such as semantic change detection or document dating, where the use of explicit timestamps may not only be encouraged by the available datasets, but may even be required to perform the task in the first place. These models often outperform previous non-transformer-based state-of-the-art solutions in their respective domains.
Similarly, LMs with TCS may achieve higher performance in domains where explicit temporal expressions or document dates are not as widely available. Ghosal et al (2021) utilize the COMET transformer model (Hwang et al, 2021), which was trained on the ATOMIC\({}_{20}^{20}\) KG, to incorporate commonsense knowledge such as "I called 911 to report the accident." occurring before "The police soon arrived." into a sentence ordering task, achieving state-of-the-art results on several datasets. Zhang et al (2021) use temporal knowledge embedded in the ASER (Zhang et al, 2020) KG to enrich an audio tagging ontology. LMs with more precise world models, including an understanding of TCS properties such as typical event orderings or durations, could also be helpful in tasks such as timeline summarization (Pasquali et al, 2021), sequencing (Agrawal et al, 2016) or question answering (Wang et al, 2020). Tasks such as timeline summarization and question answering, while currently often based on and evaluated against document collections with explicit document creation times, can also rely heavily on the contextual understanding of a user's query and the temporal interaction between documents.
In the remainder of this survey, we mainly focus on recently proposed benchmark datasets and LMs incorporating TCS. From this research, we can draw various conclusions for future work, analyse the currently best-performing methods, and identify research gaps. The rest of this article is structured as follows. Section 2 describes our method to collect relevant literature for this survey and lists related work. Section 3 provides background knowledge regarding the field of temporal reasoning. Section 4 illustrates the shift from structural, rule-based reasoning to a more data-driven approach on several NLP tasks, the different types of TCS knowledge, and pre-transformer approaches. Section 5 lists recent benchmark datasets and examines proposed ways to improve them. Section 6 gives an overview of how researchers have been attempting to improve performance on TCS tasks in recent years. In Section 7, we propose possible avenues for future work and discuss the current state of the art. Finally, Section 8 summarizes the content of the article and provides an outlook for future research.
## 2 Survey Scope and Related Work
In this section, we illustrate the scope of this literature review by placing the field of TCS reasoning in its surrounding context within the NLP landscape. This allows us to
clearly define which type of research will be included in the survey. Notable differences from recent related literature are also highlighted.
### Survey Scope
The field of TCS reasoning is semantically embedded within both commonsense and temporal reasoning. Figure 1 shows some example tasks from both domains. In this survey, we focus specifically on datasets and models for TCS reasoning, in contrast to related fields, which we will discuss in this section.
#### 2.1.1 Commonsense Reasoning
As noted in Section 1, many datasets exist already to benchmark different types of common sense. Typically, such datasets focus on several dimensions of common sense. Consequently, TCS was considered just one of several categories or even completely overlooked. We choose not to survey such datasets, as more recent research provides several datasets specifically to benchmark TCS reasoning. Additionally, it is likely that many models specifically developed to reason over temporal properties would not perform well on other types of commonsense reasoning.
#### 2.1.2 Temporal Reasoning
This category encompasses a wide range of research. In this survey, we do not focus on purely algorithmic approaches for temporal reasoning, such as dependency tree parsing or logical propositions. We also differentiate between _temporal factual knowledge_, where an LM is evaluated on its knowledge of the temporal scope of certain facts (Dhingra et al, 2022), and _TCS knowledge_, which is centred around an implicit understanding of common temporal attributes. For example, knowing that a presidential term has a duration of years rather than minutes is TCS. However, knowledge of the identity of the President of the United States in 2009 is temporal factual knowledge. One possible ambiguity emerges when temporal factual knowledge tasks, such as temporal slot filling, are tackled using common sense that is not inherently temporal, such as knowledge of certain "world invariants" (Wang and Jiang, 2020; Zhou et al, 2020). However, to keep the scope of the survey reasonable, we do not explore such approaches, as they technically do not leverage TCS reasoning.
Figure 1: Temporal reasoning and commonsense reasoning both encapsulate TCS reasoning, but also contain many other tasks
#### 2.1.3 Commonsense Causality Reasoning
Commonsense causality reasoning is perhaps the most closely related field to TCS reasoning. Like TCS reasoning, it finds its roots in both temporal and commonsense reasoning. Naturally, temporal awareness is almost certainly required to reason about causality (Zhang et al, 2022). Conversely, causality can greatly inform certain TCS dimensions, such as event ordering. However, causality and the properties proposed in TCS reasoning are ultimately different. In line with our goal of keeping the scope of the survey reasonable, we thus do not study such approaches.
### Research Goals
In this survey, we aim to provide a broad overview of the field of TCS reasoning. This includes scoping TCS, as well as identifying and collecting relevant datasets, LM structures, evaluation metrics, and state-of-the-art results.
Our first major objective is to provide a full overview of datasets specifically developed to evaluate certain dimensions of TCS, as well as survey said datasets for common evaluation metrics, findings, and possible identified methods to improve the robustness of both the collection process for new datasets and the reporting process for existing ones.
We find that current research into LMs with TCS mainly revolves around data engineering of the input and output structures of the transformer architecture. Consequently, our second major goal is to summarize and categorize the attempted augmentations as well as their perceived effectiveness, and to identify possible avenues for future work.
### Literature Collection
The literature collection for this survey was conducted as follows. For the primary collection of state-of-the-art TCS models and datasets, _Google Scholar_, _Semantic Scholar_, and _dblp_ were queried using the search string "temporal commonsense". Additionally, we restricted the field of study to "computer science" on the Semantic Scholar platform.
The transformer architecture and subsequent models significantly improved the state-of-the-art performance in many NLP tasks (Radford et al, 2019). This survey will show that transformers form the basis of nearly all state-of-the-art models in TCS reasoning. Therefore, we only considered research from 2018 to 2023, in line with the
2017 release of the "Attention is all you need" paper (Vaswani et al, 2017) and the subsequent 2018 release of the BERT and GPT models.
We evaluated 21 papers from dblp (the full set of results) and the top 50 results from Google Scholar and Semantic Scholar, discarding research that was not written in English or did not mention TCS in the abstract. We then performed one iteration of backward snowballing from the result set to identify previous work. However, except for the ROCStories dataset, most previous work could not clearly be described as aiming to acquire or measure TCS understanding.
First, we provide a brief background on temporal reasoning to ground the origin of tasks and dimensions proposed within TCS reasoning. Our main objective is then the collection of modern benchmark datasets for TCS reasoning, as well as proposed models evaluated on said datasets. A summary of the most important literature is shown in Table 1. Note that we prioritize work that matches the TCS domain in this table, and out-of-domain resources (e.g., out-of-domain datasets used for evaluation) may not be fully listed in Table 1. On top of the articles highlighted in this table, we cite important work from the previously discussed related domains to highlight findings that can be applied to TCS reasoning research in the future.
We further break down the information in Table 1 by publication date and contribution type. Figure 2 shows the distribution of publication dates in the core literature. Since 2020, there appears to be a constant stream of research into TCS reasoning, and this trend does not seem to fade in 2023. Based on the publication dates, it is likely that the transformer architecture is a strong contributing factor to the large number of new publications in this field.
Figure 2: The distribution of publication dates in the surveyed core literature
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
Paper & Dataset(s) & Model & Augment & Task & Metrics \\ \hline
Mostafazadeh et al. (2016a) & **ROCStories** & DSSM & & Story Completion & Acc \\ \hline
Ning et al. (2018b) & **MATRES** & Av. Perceptron & & Event Ordering & P, R, F1 \\ \hline
Almquist and Jatowt (2019) & **Almquist2019** & SVC & **WS** & **Temporal Validity Duration** & F1 \\ \hline
Zhou et al. (2019) & **McTaco** & BERT & **ENC** & **Cloze Question Answering** & F1, EM \\ \hline
Guan et al. (2020) & ROCStories & GPT-2 & **EXT** & Story Completion & Human, PPL, BLEU, \\ & & & & & Cov., Rep., distinct-4 \\ \hline
Ning et al. (2020b) & **TORQUE** & RoBERTa & & Event Ordering & F1, EM, Consistency \\ \hline
Pereira et al. (2020) & McTaco & RoBERTa & **ADV** & Cloze Question Answering & F1, EM \\ \hline
Zhang et al. (2020b) & **WikiHow** & RoBERTa & & Step Ordering & Acc \\ \hline
Vashishtha et al. (2020) & **Vashishtha2020** & RoBERTa & & Event Duration Inference & Acc \\ & & & & Event Order Inference & Acc \\ \hline
Yang et al. (2020) & McTaco & BERT & **WS, ENC** & Event Duration Prediction & F1, EM \\ \hline
Gardner et al. (2020) & McTaco & RoBERTa & & Cloze Question Answering & **Contrast EM** \\ & & & & & **Consistency** \\ \hline
Sun et al. (2020) & SST-5 & RoBERTa & **ENC** & Natural Language Inference & Acc, F1, BLEU \\ & SNLI & & & & \\ \hline
Zhou et al. (2020a) & McTaco & BERT & **EXT, WS** & Cloze Question Answering & F1, EM \\ \hline
Qin et al. (2021) & **TIMEDIAL** & BERT & & Cloze Dialogue Completion & 2-best Acc \\ \hline
Ghosal et al. (2021b) & ROCStories & BERT & **EXT** & Sentence Ordering & Kendall’s \(\tau\), Acc \\ \hline
Pereira et al. (2021) & MATRES & & & Event Ordering & Acc, F1, EM \\ & McTaco & RoBERTa & **ADV** & Cloze Question Answering & Acc, F1, EM \\ \hline
Zhang et al. (2021) & SONYC & D-GCN & **EXT** & Audio Tagging & mAP, mAUC \\ \hline
Kimura et al. (2021) & McTaco & BERT & **ENS, EXT** & Cloze Question Answering & F1, EM \\ \hline
Zhou et al. (2021) & **TRACIE** & T5 & **LSR** & Event Ordering Inference & Acc \\ \hline
Cao and Wang (2022) & TempWrigBio & BART & **ENC** & Text Completion & Human, BLEU \\ & & & & & METEOR, BERTScore \\ \hline
Rosin and Radinsky (2022) & SemEval & BERT & **ENC** & Semantic Change Detection & Pearson’s \(r\) \\ & & & & & Spearman’s \(\rho\) \\ \hline
\multirow{3}{*}{Rosin et al. (2022)} & LiverpoolFC & \multirow{3}{*}{BERT} & \multirow{3}{*}{**ENC**} & \multirow{3}{*}{Semantic Change Detection} & Pearson’s \(r\) \\ & SemEval & & & & Spearman’s \(\rho\) \\ & NYT & & & & \\ \hline
\multirow{4}{*}{Wang et al. (2022)} & EventTime & \multirow{4}{*}{BERT} & \multirow{4}{*}{**ENC**} & Event Occurrence Time & \multirow{4}{*}{Acc, MAE, F1, EM} \\ & WOTD & & & Document Dating & \\ & NYT & & & & \\ & TDA & & & & \\ \hline
Zhou et al. (2022) & McTaco & BART & **ENC** & Cloze Question Answering & Acc, F1 \\ \hline
Cai et al. (2022) & McTaco & BERT & **LSR** & Cloze Question Answering & F1, EM \\ \hline
Yu et al. (2022) & ROCStories & \multirow{2}{*}{BERT} & \multirow{2}{*}{**EXT**} & Event Ordering & Acc, F1 \\ & MATRES & & & Story Completion & \\ \hline
Hosokawa et al. (2023) & **TNLI** & RoBERTa & **EXT** & **Temporal Validity Inference** & Acc \\ \hline
Cai et al. (2023) & TIMEDIAL & RoBERTa & **LSR** & Cloze Dialogue Completion & 2-best Acc \\ \hline
Lynden et al. (2023) & **CoTAK** & BERT & & Action Perform/Effect & Acc \\ & & & & Duration Prediction & \\ \hline \hline
\end{tabular}
\end{table}
Table 1: A summary of the most important literature to our survey. The main contribution to the field is highlighted in bold for each paper. Augmentations are introduced in the main sections of this paper and are abbreviated as follows: EXT - External Knowledge; ENC - Data Encoding; LSR - Logical or Symbolic Reasoning; ADV - Adversarial Learning; WS - Weak Supervision; ENS - Ensemble.
Figure 3 shows the distribution of the main contribution type in the surveyed core literature. Specifically, these categories correspond to the highlighted text within Table 1. They signify whether an article proposed a new model (or augmentation) for an existing dataset, a new dataset, a new task or taxonomy, or an evaluation metric. One article may have multiple such contributions. The chart shows that there is a strong focus on models and model augmentations. This is not necessarily bad, as it shows a community effort to improve performance on well-defined problems, but a lack of focus on more rigid evaluation metrics may contribute to suboptimal evaluation practices on many of the proposed model augmentations, which will be discussed in the main sections of the survey.
### Related Work
Several recent surveys are studying commonsense knowledge embedded in LMs and how it could be improved (Storks et al, 2019; Bhargava and Ng, 2022; Lymperaiou and Stamou, 2022; Yin et al, 2022). However, such surveys tend to only consider TCS as one of several possible domains of common sense, if at all, and do not provide a full spectrum of recent research.
Davis (2023) provides a comprehensive survey of benchmark datasets for different categories of commonsense reasoning, including temporal. However, their survey aimed to qualitatively analyse a large variety of common sense benchmark datasets to detect potential flaws and propose improvements. As there is no categorization by the type of common sense required and no further in-depth comparative discussion of results within the domain of TCS research, this survey does not provide a clear overview of the current state of the art.
Figure 3: The distribution of contribution types in the surveyed core literature
Helwe et al (2021) showcase shallow reasoning behaviours in transformer models on different tasks. While not all proposed behaviours are related to commonsense reasoning, some examples from the TCS domain, also discussed in this survey, are mentioned. Furthermore, some general behaviours found in LMs, such as the possibility of mis-priming and a lack of understanding for negated phrases, have substantial implications for tasks posed in the TCS domain and should be considered when training models on temporal data (Qin et al, 2021).
Ji et al (2021) survey KGs, specifically mentioning them as a possible way to empower commonsense reasoning in knowledge-aware models. However, many alternative approaches can be chosen to encode additional temporal information in LMs, which are not discussed in this survey.
Of course, surveys can also be found on downstream tasks related to time, such as temporal information retrieval (Campos et al, 2014) and temporal information extraction (Sousa et al, 2023). As mentioned in Section 1, it stands to reason that models incorporating TCS could help solve such tasks. However, because of the relative novelty of the topic, they are not often discussed.
Compared to previous work, our survey focuses specifically on the growing field of TCS reasoning. We draw parallels to long-standing temporal reasoning tasks and highlight the technologies that enabled the current state-of-the-art performance. We then survey datasets explicitly created to benchmark the TCS understanding of machine learning models, as well as models evaluated on those datasets. Finally, from both types of surveyed research and related work, we categorize the current state-of-the-art solution space and propose improvements for both datasets and models in future work.
## 3 Temporal Reasoning
We first briefly summarize the topic of _temporal reasoning_. One major evolution in temporal reasoning tasks over the years is a shift from a more syntactic, rule-based problem-solving approach to a more semantic, data-driven one. Specifically, in this paper, we consider as _syntactic_ approaches algorithms or models that focus on the structural or grammatical aspects of language. For example, syntactic parse trees are used to represent and parse information from the grammatical structure of sentences. Rule-based classifications, which rely on predefined rules for identifying features, are another form of syntactic analysis we consider. On the other hand, _semantic_ approaches describe algorithms or models that are concerned with the meaning and interpretation of language. This includes models based on word embeddings, which are typically learned through various machine learning tasks incorporating the usual local context of words, as well as other data-driven methods that capture nuances and relationships between words and phrases implicitly through patterns in the training data.
Temporal reasoning consists of "formalizing the notion of time and providing means to represent and reason about the temporal aspects of knowledge" (Vila, 1994). Much early temporal reasoning research in NLP can be linked back to Allen (1983) documenting an algebra for storing and updating temporal relations between events in the form of intervals, which were connected using a set of 13 different relations such as
_during_, _before_, _after_, or _overlaps_. This algebra stood out from previous work in that it did not require precise timestamps or orderings to be known and could be used to express facts such as "event A happens before or after event B", similar to how temporal facts can be expressed in natural language, without explicit timestamps and without the strict requirement of an ordered notion of time between events.
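As a toy illustration of the interval formalism (our own sketch, covering only a handful of the 13 relations), such relations can be read off directly from interval endpoints:

```python
# Classify a few of Allen's interval relations from interval endpoints.
# Only a subset of the 13 relations is handled; ties and the remaining
# relations fall through to "other".
def allen_relation(a_start: float, a_end: float, b_start: float, b_end: float) -> str:
    if a_end < b_start:
        return "before"     # A entirely precedes B
    if b_end < a_start:
        return "after"      # A entirely follows B
    if b_start < a_start and a_end < b_end:
        return "during"     # A lies strictly inside B
    if a_start < b_start < a_end < b_end:
        return "overlaps"   # A starts first and the two intervals overlap
    return "other"

print(allen_relation(1, 3, 5, 9))   # before
print(allen_relation(6, 8, 5, 9))   # during
print(allen_relation(4, 7, 6, 9))   # overlaps
```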
In text, the technology development program TIDES first led to annotation guidelines for explicit temporal expressions using the TIMEX standard (Ferro et al, 2001). The 2002 TERQAS workshop then led to the creation of the specification language _TimeML_(Pustejovsky et al, 2003) for connecting temporal expressions to specific events in natural language. TimeML effectively converted many of the constraints previously defined by Allen into tags for natural language. Pustejovsky et al (2003) define four fundamental problems in event-temporal identification.
1. Timestamping of events.
2. Ordering events with respect to one another.
3. Reasoning with underspecified temporal expressions (such as "last week").
4. Reasoning about the persistence of events.
TimeML defines tags such as temporal links between events and definitions for intensional temporal expressions, embeddings, and signal words. Therefore, it was suitable for annotating events and their temporal dimensions in textual content. It was subsequently used for the creation of _TimeBank_(Pustejovsky, 2003), a text corpus of 300 documents from various news-related sources, manually annotated using TimeML tags. Over the following years, TimeBank was further refined to ensure that it could be used as a gold standard for temporal relation extraction (Pustejovsky et al, 2006).
TimeML and TimeBank proved to be crucial resources for benchmarking temporal reasoning in the following years. A notable resource that promoted the development of TimeML as an annotation language was the _TempEval_ tasks proposed in several SemEval workshops between 2007 and 2013 (Verhagen et al, 2007, 2010; UzZaman et al, 2013). Over the years, the task definitions in _TempEval_ were adapted to gradually require an increasing reasoning scope in recognizing, extracting, and tagging temporal expressions and events from free-form text. In this sense, the development of these challenges and the corresponding solution space highlight a potential origin of the idea of "temporal common sense".
In 2007's TempEval challenge (Verhagen et al, 2007), participants were required to extract and provide simplified TimeML annotations for already supplied events and temporal links. Rule-based systems and syntactic analysis (such as dependency tree parsing or syntactic tree generation) were used to solve the task.
In contrast, 2010's TempEval-2 tasks (Verhagen et al, 2010) extended the initial TempEval task set with the automatic recognition of events and time expressions. In contrast to previous tasks regarding the extraction of temporal expressions, where rule-based information extraction systems such as the Edinburgh IE system (Grover et al, 2010) and HeidelTime (Strotgen and Gertz, 2010) dominated, a conditional random field (CRF) model provided the best F1 score on event extraction (Llorens et al, 2010). The authors of this CRF model show that the inclusion of semantic features, such as semantic role labels or other lexical semantics such as WordNet ontology classes, can
improve the model's ability to generalize and lead to a higher recall as a result. The success of this approach already showed a movement towards data-driven methods and the use of latent semantic information for classification purposes rather than purely syntactic parsing.
This trend continued in 2013, where the TempEval-3 task set (UzZaman et al, 2013) included an end-to-end task requiring the systems to fully extract events and their temporal links from scratch and tag all extracted data with appropriate properties. The dataset used for the previous challenges was expanded with a new platinum test set containing previously unseen text with expert annotations as well as an automatically annotated silver set, using an ensemble of best-performing methods from the previous TempEval challenge. This extended dataset effectively allowed teams to leverage precomputed weak supervision. Again, while rule-based systems dominated on pure normalization of time expressions, machine-learning-based systems performed much better on the event extraction task, with all high-performing systems using some form of machine learning, usually in the form of probability classifiers such as MaxEntropy, CRF, or support vector machines (SVM). The automatically annotated silver data and semantic features, such as WordNet synsets and semantic role labels, also proved very helpful in solving this challenge.
This difference in best-performing solutions raises the question of what distinguishes a task like temporal expression normalization from more event-centric tasks. The following example of events in text from the TimeML specification (Pustejovsky et al, 2003; Saurí et al, 2005), with events highlighted in bold, illustrates why it may be difficult for a rule-based system to perform event extraction.
**[kicked]** the ball, and it **[rose]** into the air.
The **[rains] [caused]** the **[flooding]**.
John **[caused]** the **[fire]**.
All 75 people **[on board]** the Aeroflot Airbus **[died]**.
According to the TimeML annotation guidelines, events "cover situations that happen or occur. [...] We also consider as events those predicates describing states or circumstances in which something obtains or holds true". Compared to the limited number of possible explicit temporal expressions, it is quite hard to formalize such a proposition in an algorithm, as these events are not bound to a specific syntactic form. Even prepositional phrases such as "on board" could be considered an event. Thus, given enough data and computational power, solving such tasks via data-driven models appears to be more feasible.
The problem of sentences with similar meanings being composed in a variety of different syntactic forms was also cited as a reason for the creation of various new annotation frameworks, such as AMR (Banarescu et al, 2013), which strips syntactic sugar and uses PropBank framesets with pre-defined slots to represent the meaning of a sentence, and UCCA (Abend and Rappoport, 2013), which aims to produce similar annotations for sentences with a similar meaning rather than based on the grammatical structure, by representing them as scenes of processes or states. UCCA's definition of
a _scene_ is similar to that of an event in TimeML, being composed of either a process that evolves over time, or a state that does not.
Similar to TCS reasoning, various temporal reasoning domains are also being benchmarked and evaluated using LMs in recent years. For example, Tan et al. (2023) evaluate the temporal reasoning capabilities of LMs in closed book QA, open book QA, and reasoning QA formats, finding that such models are often incapable of extrapolating their reasoning capabilities to settings outside the contemporary training period and proposing methods, such as time-sensitive reinforcement learning, to mitigate this issue. As mentioned, we make note of some findings in LM-based temporal reasoning research that can possibly be leveraged in TCS reasoning, but a complete overview is out of the scope of this survey.
## 4 Pre-Transformer Temporal Common Sense
Before providing an overview of modern TCS reasoning models and datasets, we first introduce commonly cited dimensions of TCS reasoning and connect them to previously proposed temporal reasoning tasks. In addition, we showcase some of the main technologies that enabled models to reason over TCS.
### Defining Temporal Common Sense
The eventual objective of fully automating temporal reasoning is not new. The TimeML authors note the surge in research regarding the automatic recognition of temporal and event expressions in natural language text (Pustejovsky et al, 2003), posing potential benefits in domains such as question answering. For example, the question "Did the Enron merger with Dynegy take place?" requires a model to understand whether an event mentioned in a news article has actually occurred, rather than simply finding any mention of the event.
The TimeBank authors mention that "from a practical computational perspective, it will become possible to consider training and evaluating algorithms which determine event ordering and timestamp, and to explore their utility in question answering" (Pustejovsky, 2003).
The authors of the TempEval-3 tasks state that the ultimate aim of their research is the "automatic identification of temporal expressions, events, and temporal relations within a text as specified in TimeML annotation" (UzZaman et al, 2013).
However, as introduced in Section 1, such a fully automatic end-to-end pipeline requires models to be able to reason over temporal contexts even when information is only provided implicitly or must be inferred via common sense. Hence, simple data-driven reasoning over explicit contexts is not sufficient to fulfil these visions.
To connect these previous ambitions with current work, we refer to the relatively novel dimensions of TCS as proposed by Zhou et al (2019). These five dimensions are as follows:
* _Event typical time_: At what time do we expect certain events to happen?
* _Event duration_: How long does an event typically take?
* _Event ordering_: What happens before or after a specific event?
* _Event frequency_: How frequently does a recurring event typically occur?
* _Stationarity_: Does a state hold for a long time or indefinitely?
Similar to previous temporal reasoning tasks, these dimensions are also very event-centric. However, the specific phrasing of the dimensions and the resulting reasoning tasks are more open-ended in nature, substituting reasoning over explicit temporal information with a best-effort guess based on common sense. We ask ourselves when an event typically happens or how long it typically takes, but this does not always have to be the case. For example, we may expect most people to shower in the morning, but others may shower in the evening, in the afternoon, or at night. When we talk about TCS, we talk about the average person's expectations for certain temporal properties. It is difficult to explicitly model such concepts, which is why the increased performance of data-driven methods in NLP due to new technologies has been so beneficial to this field.
Notably, the proposed TCS dimensions correspond closely to annotations within TimeML. For example, a temporal link between an event and a timestamp can denote the time at which an event occurs. A temporal link between two events can denote the order of these two events. Certain temporal expressions (e.g., "on Mondays") can also signify the frequency of recurring events. Event duration tags for TimeML were also proposed (Pan et al, 2006). The difference in TCS reasoning is that we do not seek the explicit answer for a specific temporal property in a source text, but rather use pre-existing knowledge to find a likely generalization that applies to an incomplete context.
Systems that possess TCS could thus be expected to understand one or more of these dimensions to reason over downstream tasks. For example, estimating event durations naturally requires systems to know the typical duration of events. Temporal relation extraction mainly requires systems to have knowledge of typical event orderings. Other properties, such as temporal validity (Almquist and Jatowt, 2019; Hosokawa et al, 2023; Lynden et al, 2023), may require knowledge of a combination of dimensions, such as stationarity and typical event duration.
### Early Temporal Common Sense Systems
In the last decade, due to increases in computing power and the evolution of neural models, a focus on semantics helped push the field of TCS reasoning forward. Embedding methods such as _Word2Vec_(Mikolov et al, 2013) and _GloVe_(Pennington et al, 2014) were introduced to generate semantic word embeddings. Although these methods were prone to issues such as a lack of inherent word sense disambiguation, they were used frequently in newer temporal reasoning challenge tasks such as Clinical TempEval (Bethard et al, 2016). Although the best performing methods still used CRFs and SVMs based primarily on lexical features, neural network structures and static word embeddings were used by many groups, including a proposal for a possible improvement to an RNN-based solution by using the long short-term memory (LSTM) architecture (Fries, 2016).
In the scope of TCS, ROCStories (Mostafazadeh et al, 2016) was one of the first noteworthy datasets to specifically benchmark the understanding of implicit causal and temporal relationships between events in machine learning systems. Although prior work on story comprehension and text understanding existed, for example, in the form of MCTest (Richardson et al, 2013), these datasets did not specifically focus on a temporal or causal aspect, nor on commonsense reasoning. ROCStories, on the other hand, focused on rich causal and temporal context that was not trivial to resolve, for example, via the sentence order.
Further, CaTeRS (Mostafazadeh et al, 2016) was devised as a new annotation scheme for causal and temporal relations, replacing some TimeML links with causal links based on a previous causal model (Wolff and Song, 2003). This paper also affirmed the story quality of ROCStories and its temporal properties. ROCStories thus remains an important benchmark dataset in temporal and causal commonsense reasoning to date.
## 5 Modern Temporal Common Sense Benchmarking
Since 2017, there has been a steady increase in benchmarking datasets for TCS, as well as models aiming to solve the corresponding tasks. In this section, we first briefly describe how the transformer architecture made commonsense reasoning more approachable. We then discuss emerging datasets measuring TCS understanding in LMs and summarize how such datasets may be improved in the future.
### Transformer Architecture in Natural Language Processing
In 2017, the well-known paper "Attention is all you need" was published (Vaswani et al, 2017), which first introduced the transformer architecture. Over the following years, many LMs based on this architecture would emerge, such as the previously mentioned GPT and BERT, as well as newer models, such as GPT-3 (Brown et al, 2020) and T5 (Raffel et al, 2020). The trend with these models is an ever-increasing parameter size and massive amounts of raw text as unsupervised training data, to the point where training their parameters from scratch is often no longer feasible for smaller datasets. Researchers and developers thus often use these models in their pre-trained form, only adding classification layers or extracting the generated word embeddings for downstream tasks and processing. Another option is to use smaller model sizes, which are less likely to overfit, but may not be able to provide the same reasoning capabilities.
In 2019, a BERT-based model already outperformed existing state-of-the-art systems on temporal relation extraction simply by adding a classification layer on top of the pre-trained model (Han et al, 2019). Furthermore, the largest out-of-the-box GPT-2 model outperformed state-of-the-art solutions in 7 of 8 evaluated language modelling tasks in a zero-shot setting (Radford et al, 2019). LMs have become so powerful that the largest models do not even have to be fine-tuned to perform specific tasks. For example, T5 determines and solves various tasks through a natural language prefix attached to the input, whereas GPT-3 can often reason over both the task and corresponding few-shot samples in the input itself. Recently, ChatGPT has shown how prompts, rather than fine-tuning, can be used to solve certain tasks in NLP. However, it is still outperformed by fine-tuned task-specific models on certain
tasks, such as sequence tagging (Qin et al, 2023). Naturally, this raises the question of how these models reason over TCS.
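Before turning to the benchmark datasets, the fine-tuning recipe mentioned above (adding a classification head on top of a pre-trained encoder) can be made concrete. The snippet below is a minimal sketch assuming the HuggingFace Transformers library, not a specific published baseline, applied to a McTaco-style question-answer pair:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# One McTaco-style instance: context + question paired with a candidate answer.
question = ("Ratners' chairman, Gerald Ratner, said the deal remains of substantial "
            "benefit to Ratners. How long did the chairman speak?")
candidate = "for a few minutes"
label = torch.tensor([1])  # 1 = plausible, 0 = implausible (illustrative labeling)

inputs = tokenizer(question, candidate, return_tensors="pt", truncation=True)
outputs = model(**inputs, labels=label)
outputs.loss.backward()    # a full fine-tuning loop would follow with an optimizer step
```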
### Temporal Common Sense Benchmark Datasets
Table 2 lists TCS benchmark datasets. Except for ROCStories, datasets for TCS benchmarking only began to emerge after the surge in popularity of transformer models. Notably, while the listed datasets are explicitly concerned with TCS reasoning, other temporal reasoning datasets, such as TimeQA (Chen et al, 2021), MATRES (Ning et al, 2018), or RED (O'Gorman et al, 2016) are sometimes used for benchmarking TCS models as well, although they are not surveyed in this article. We briefly describe the surveyed datasets in the following.
**ROCStories**: ROCStories (Mostafazadeh et al, 2016) is formulated as a "story cloze test", where a model reads the first four sentences of a story and has to choose the correct ending out of two possible options. A rigid crowdsourcing process aims to ensure that stories have sufficient causal and temporal context for a model to choose the correct ending.
**Almquist2019**: This dataset (Almquist and Jatowt, 2019) consists of sentences sampled from news, Wikipedia, and blog posts. The authors classify the temporal validity duration of the content. For example, a sentence like "Joe Biden is the President of the United States" contains valid information for a longer duration than "The weather is nice".
**McTaco**: McTaco (Zhou et al, 2019) is a multiple-choice question answering dataset that specifically probes all proposed TCS dimensions. Each item contains a short context, such as "Ratners' chairman, Gerald Ratner, said the deal remains of substantial benefit to Ratners.", followed by a commonsense question, such as "How long did the chairman speak?" A model then has to reason over four possible answer candidates in a binary classification format.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Year & Dataset & Task Type & Focus & Size & Context Source & Data Collection \\ \hline
2016 & ROCStories & Classification & No Focus & 50k & Crowdsourcing & Crowdsourcing \\
2019 & Almquist2019 & Classification & Duration & 1.7k & Blogs, News & Crowdsourcing \\
2019 & McTaco & Classification & No Focus & 13k & MultiRC & Crowdsourcing \\
2020 & TORQUE & Extraction & Ordering & 30.7k & TempEval3 & Crowdsourcing \\
2020 & Vashishtha2020 & Classification & Duration, Ordering & 1m & TE3, TB-D, & Recasting \\
2020 & WikiHow & Classification & Ordering & 839k & WikiHow & Crowdsourcing \\
2021 & TIMEDIAL & Classification & No Focus & 1.1k & DailyDialog & Crowdsourcing \\
2021 & TRACIE & Classification & Ordering & 5.4k & ROCStories & Crowdsourcing \\
2023 & TNLI & Classification & Duration & 10.7k & Flickr30k & Crowdsourcing \\
2023 & CoTAK & Classification & Duration & 300k & WikiHow & Crowdsourcing \\ \hline \hline \end{tabular}
\end{table}
Table 2: Temporal common sense benchmark datasets
**TORQUE**: TORQUE (Ning et al, 2020) is a reading comprehension dataset focused on temporal ordering. For each text passage, a model must determine which events in the text occur before or after some target event. They focus on a very robust evaluation process to ensure models do not end up scoring high on the dataset through trivial answers.
**Vashishtha2020**: Vashishtha et al (2020) recast several event duration and event ordering datasets into a _natural language inference_ (NLI) format, in which a given duration of an event or an ordering of a pair of events forms the hypothesis.
**WikiHow**: WikiHow (Zhang et al, 2020) is a dataset containing steps of WikiHow articles. Among others, they propose a step ordering task. For example, in an article titled "Clean Silver", a model would have to determine whether "dry the silver" occurs before or after "handwash the silver".
**TIMEDIAL**: Items in TIMEDIAL (Qin et al, 2021), similar to McTaco, consist of a context and several answer candidates. In contrast to McTaco, TIMEDIAL answer candidates are cloze-style options for a missing temporal quantifier. The contexts themselves are dialogues. An example dialogue is shown below. Models have to evaluate each option in a binary classification format.
A: May we see the wine list, please.
B: Sure. Our special wine today is a 1989 Chardonnay.
A: I'd like a bottle, please.
B: I'll need to see your ID, please.
A: Here you go.
B: Sorry about the inconvenience, you look so young. I had to make sure you are over ____.
**TRACIE**: TRACIE (Zhou et al, 2021), similar to Vashishtha2020, poses event ordering as a textual entailment task. However, their entailment instances are formulated over intervals rather than discrete points in time, for example: "event a _starts_ before event b _ends_".
**TNLI**: Hosokawa et al (2023) formulate the _temporal natural language inference_ (TNLI) task, in which a model has to determine whether a follow-up sentence supports, invalidates, or is neutral with respect to the temporal validity of actions in the target sentence. For example, the temporal validity of the sentence "A musician sings into a microphone while playing a guitar." is invalidated by the follow-up sentence "The musician eats at his favourite restaurant.", as the former action cannot still be ongoing when the latter is observed.
**CoTAK**: The **Co**mmonsense **T**emporal **A**ction **K**nowledge dataset (Lynden et al, 2023), similar to WikiHow, contains steps of WikiHow articles. Based on the heading of a given step (and possibly the title of the article itself), the goal is to predict the _action perform duration_, which is effectively the typical event duration, as well as the _action effect duration_, which the authors equate with the temporal validity duration of the action.
### Categorization of Benchmarked Datasets
Several traits listed in Table 2 can be more closely analysed. We observe two distinct categories of task types, _classification_ and _extraction_, in the surveyed datasets. We denote a task as a _classification_ task if a model is expected to reason over the likelihood of a set of provided answer candidates. This is the case in most proposed datasets. The actual specific task varies, as seen in Section 5.2. Of the proposed datasets, only the questions in TORQUE are more open-ended in the form of an extractive question answering task. However, this task can also be resolved by a binary classification of each token in the text passage.
Notably, further categories might include _regression_ or _generation_ tasks, but these are so far absent from the surveyed datasets. Only a few articles, such as Yang et al (2020), aim to model certain dimensions of existing datasets, such as event duration estimation, as regression tasks, but this can lead to mismatches between the intended task and the evaluated results. For example, by mapping McTaco answer candidates from text to a normalized duration for regression comparison, an evaluation of the model's ability to reason over different representations of the same interval (e.g., "90 minutes" versus "1.5 hours") is lost. For this reason, datasets that actively encourage different label representations could be much more effective for benchmarking such approaches. Likewise, datasets for generative approaches, in which a model would autonomously provide a rationale for a certain answer or provide its best-effort guess without answer candidates, may more closely model the current trend of foundation models and prompt engineering. In Figure 4, we visualize these trends in a graphic, highlighting the need for datasets suitable for different output formats.
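The normalization step that such a regression recasting relies on can be sketched as follows (an illustrative snippet; the unit table is our own assumption rather than one taken from the cited work):

```python
# Map textual durations onto seconds so that "90 minutes" and "1.5 hours"
# collapse onto the same regression target.
UNIT_SECONDS = {
    "second": 1, "minute": 60, "hour": 3600, "day": 86400,
    "week": 604800, "month": 2629800, "year": 31557600,
}

def duration_in_seconds(text: str) -> float:
    value, unit = text.split()
    return float(value) * UNIT_SECONDS[unit.rstrip("s")]

assert duration_in_seconds("90 minutes") == duration_in_seconds("1.5 hours")
```

It is exactly this collapse of distinct surface forms onto one number that removes the ability to test whether a model treats both representations consistently.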
Figure 4: Distribution of task types in surveyed datasets
As for the TCS focus dimension being benchmarked, it is usually event ordering or event durations. In other cases, there may be no specific categorization of dimensions, and TCS is presumed to be measured in its entirety. Given the discussed history of temporal reasoning and how it ties into newer TCS research, the focus on event ordering and -durations is unsurprising, as these dimensions most closely model previous temporal reasoning tasks. The three datasets McTaco, TIMEDIAL, and ROCStories mainly focus on contextual understanding of the temporal properties of a given background story without considering the specific commonsense dimensions too closely. However, the authors of McTaco, as their paper defines the dimensions discussed earlier, categorize each question into one of the five dimensions. Although it could be argued that temporal validity tasks, such as those proposed by Almquist and Jatowt (2019) or Hosokawa et al (2023), also require reasoning over stationarity, in existing literature, reasoning over stationarity is mostly circumvented by filtering stationary data from the dataset. However, this filtering in itself can be a difficult process, especially due to the lack of state-of-the-art models that reason over the stationarity property, which further highlights the need for new datasets that can be used to train and benchmark models on the remaining TCS dimensions. The focus dimension distribution is shown in Figure 5, for all datasets that have at least one focus dimension.
Also of note is that the proposed datasets rely heavily on crowdsourcing. Except for Vashishtha2020, all datasets use crowdsourcing during dataset construction. Authors of the WikiHow dataset use crowdsourcing to train a BERT model to predict whether the steps in articles are ordered for downstream processing, but not for determining the commonsense properties of the texts themselves. All remaining authors create their dataset at least partially via crowdsourced annotations. Several authors also source contexts from existing datasets, which in turn may also have been created by crowdsourcing. It is relatively logical to rely on crowdsourcing to create datasets for commonsense reasoning purposes, but it can be prone to errors or fraudulent activity by workers, which potentially requires manual intervention to ensure that the dataset quality remains high (Hosokawa et al, 2023). Platforms such as CrowdDAQ (Ning et al, 2020) can potentially help to properly train and vet crowdworkers for specific tasks. However, it has been shown that items in common sense datasets often do not stand up to expert vetting regardless (Davis, 2023), which can be an issue when those datasets are used to benchmark model performance.
Figure 5: Distribution of TCS focus dimension categories in surveyed datasets (only datasets that have a focus)
The size of the proposed datasets also varies significantly, and the average dataset size is relatively small. However, the authors of TORQUE show that this may not always be an issue, as the performance of their baseline approach converges before much of the available training data is ingested into the model (Ning et al, 2020). As most datasets only aim to benchmark performance rather than teaching new reasoning capabilities to a model, the model only has to be post-trained enough to understand the given task format.
There are two significant outliers with respect to size. Vashishtha et al (2020) create a considerable number of NLI pairs by recasting existing temporal relation extraction datasets into an NLI format. For example, for the phrase "We waited until 2:25 PM and then left", we can formulate a hypothesis such as "The waiting started before the leaving started", for which the answer is known from existing annotations. However, it should be considered that, in this case, not all samples are guaranteed to measure TCS understanding, as the answer may already be provided explicitly by the statements in question. Similarly, Zhang et al (2020) infer the step ordering from the WikiHow articles directly, after first using the previously mentioned BERT-model to determine whether the article contains ordered steps. CoTAK is the largest dataset with crowdsourced target labels, at over 300,000 samples.
### Lessons from Existing Benchmarking Datasets
For the remainder of this section, the goal is to summarize the dataset authors' reflections and draw parallels with work in related fields. We list important findings and proposed methods to improve the robustness of future work.
#### 5.4.1 Evaluation Metrics
When reporting on classification tasks, commonly reported quantitative metrics, such as accuracy and F1 score, are generally the most generous interpretations of model performance. A model can achieve high accuracy or F1 score simply by exploiting patterns in the data. The most straightforward example is, of course, a model that simply predicts the majority class. In binary classification settings (such as multiple choice question answering, where every question-answer pair resolves to either true or false), such a system is guaranteed to obtain an accuracy score of at least 0.5. Although the F1 score is somewhat more robust (assuming that the problem has a reasonable
class distribution), a model can still achieve a high F1 score by finding simple patterns in the dataset without actually understanding the problem statement.
Thus, using a context-level _exact match_ (EM) metric may be preferable, where applicable. The rationale is that a system that can genuinely reason over a specific property (such as TCS) should be evaluated on the number of contexts it can reason over flawlessly (e.g., the number of questions for which a system can correctly classify all possible answers as true or false). On the other hand, a metric like accuracy measures performance on a case-by-case basis and disregards consistency within the model.
For example, suppose that a system can identify "1.5 months" as a correct answer to a given question but not "6 weeks". In that case, the system is likely not using TCS to arrive at this conclusion, but is instead taking a shortcut in the reasoning, such as pattern-matching one of the proposed answer candidates. The datasets McTaco and TRACIE provide EM scoring at the context level in their baseline performance reporting. TIMEDIAL is evaluated on _2-best accuracy_, in which both correct answer candidates must be ranked as more likely than both incorrect answer candidates, which provides somewhat of a middle ground between accuracy and context-level EM.
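The difference between answer-level accuracy and context-level EM can be made concrete with a small sketch (illustrative code with made-up predictions, not an official evaluation script):

```python
# gold / pred map each question id to the labels of its four answer candidates.
def accuracy_and_context_em(gold: dict, pred: dict):
    pairs = [(g, p) for q in gold for g, p in zip(gold[q], pred[q])]
    accuracy = sum(g == p for g, p in pairs) / len(pairs)
    em = sum(gold[q] == pred[q] for q in gold) / len(gold)  # all candidates correct
    return accuracy, em

gold = {"q1": [1, 0, 0, 1], "q2": [0, 1, 1, 0]}
pred = {"q1": [1, 0, 0, 1], "q2": [0, 1, 0, 0]}  # one slip on q2
print(accuracy_and_context_em(gold, pred))       # (0.875, 0.5)
```

A single wrong candidate leaves accuracy high but halves EM, which is precisely the consistency that the stricter metric rewards.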
#### 5.4.2 Contrast Sets
To further decrease the likelihood of overvaluing incidental correct responses, Gardner et al (2020) propose creating contrast sets. These sets contain data points that are as similar as possible to existing data points in the original dataset while leading to a different classification result. They show that while humans succeed on such contrast sets, models often do not, specifically mentioning McTaco as an example, which drops from 0.38 EM to 0.14 EM on such a contrast set. In the example below (Gardner et al, 2020), the contrast instance differs only in a few words from the original context, but drastically changes the expected likelihood of the candidate answer.
**Context**: She renews in Ranchipur an acquaintance with a former lover, Tom Ransome, _now a dissolute alcoholic_.
**Contrast**: She renews in Ranchipur an acquaintance with a former lover, Tom Ransome, _who keeps very healthy habits_.
**Question**: How frequently does Tom drink?
**Candidate Answer**: Every other night.
The authors of TORQUE try to implement this measure by specifically negating their temporal ordering questions to maximize the difference in the desired output. For example, if a question in the dataset is "What happened after he ate his breakfast?", the contrast questions "What happened when he was eating his breakfast?" and "What happened before he was eating his breakfast?" should also be posed. In total, the answer to these questions should cover all possible events in the context, and each event should resolve only to the correct question. They then report EM consistency, the percentage of contrast question sets for which a model's predictions match exactly.
To highlight the impact of reporting more robust metrics, we list some evaluation metrics reported from the baseline models for McTaco and TORQUE in Table 3.
Here, the previously discussed EM consistency is denoted as C for TORQUE. These results strongly highlight that model performance decreases much more than human performance on such metrics.
#### 5.4.3 Measuring Model Understanding
Although humans tend to succeed more than TCS models when evaluation metrics are stricter, dataset authors should take care to report metrics that aim to measure the model's understanding of the problem as accurately as possible. In extractive question answering, token-level F1 and EM score are two metrics that are typically used for evaluation. However, unlike classification problems, where answers are unambiguously correct or incorrect, there is often some ambiguity when it comes to extracted text spans.
The authors of TORQUE provide token-level F1 and EM score, rather than context-level. In their dataset, this appears not to pose a problem, as their questions are effectively a natural-language recasting of known temporal relations between events. However, this does not apply to every problem. Bulian et al (2022) note that both token-level F1 and EM fail to recognize cases in which a model may remove incorrect information or add further relevant information to its response due to the symmetry of the metrics. They propose an asymmetric _answer equivalence_ metric, as well as a BERT-based estimator for said metric. Although they concede that their approach does not address a potential temporal dimension of answer candidates (e.g., "4 months ago" and "February 2022" are equivalent only in June 2022), this asymmetric type of answer scoring may provide a better solution for extractive tasks and may pave the way for eventual generative approaches, where a generated answer candidate could be reasonably compared against the reference answer.
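For reference, the token-level scores mentioned here can be computed as in the following sketch; it omits the answer normalization (lower-casing, punctuation and article stripping) that official evaluation scripts usually apply.

```python
from collections import Counter

def token_f1_and_em(prediction: str, reference: str):
    pred_tokens, ref_tokens = prediction.split(), reference.split()
    em = float(pred_tokens == ref_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0, em
    precision, recall = overlap / len(pred_tokens), overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall), em

print(token_f1_and_em("for about 4 months", "4 months"))  # partial credit from F1, EM = 0
print(token_f1_and_em("February 2022", "4 months ago"))   # (0.0, 0.0) despite possible equivalence
```

The second call illustrates the problem raised by Bulian et al (2022): two answers that may be temporally equivalent receive no credit from either symmetric metric.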
In summary, dataset authors should take care in their baseline reporting to identify and propose appropriate evaluation metrics. Where possible, these metrics should discourage models from obtaining high scores simply by finding a reasoning shortcut for a subset of the data. They should be as strict as reasonably possible while still allowing the model to be expressive in its responses, should the task format allow for it. Although classification tasks, which form the basis of many currently existing datasets, inherently do not pose a risk of restricting the expressiveness of the model, they also do not always align closely with downstream tasks, since answer candidates are not always available. For more autonomous models that do not rely on answer candidates, different metrics will thus need to be used.

\begin{table}
\begin{tabular}{l l r r r} \hline \hline Dataset & Metric & Model & Human & \(\Delta\) \\ \hline McTaco & F1 &.699 &.871 &.172 \\ TORQUE & F1 &.752 &.953 &.201 \\ McTaco & EM &.427 &.758 &.331 \\ TORQUE & EM &.511 &.845 &.334 \\ TORQUE & C &.345 &.825 &.480 \\ \hline \hline \end{tabular}
\end{table}
Table 3: A list of reported evaluation metrics in McTaco and TORQUE, sorted by performance difference between humans and the best-performing model presented in the paper (\(\Delta\))
#### 5.4.4 Linguistic Traps
The authors of TimeQA showcase "shallow pattern matching" performed by transformer models through their split of _easy_ and _hard_ questions. For example, if an athlete was on a team between 1973 and 1975, an easy question might be, "What team did [player] play for between 1973 and 1975?" A hard question might instead be "What team did [player] play for in June 1974?" or "What team did [player] play for between April 1974 and December 1974?", since the exact temporal spans are not reused. While human annotators only incur a performance decline of 2 percent in the EM metric on such questions, the decline is roughly 13.7 percent for the best-performing transformer system.
This dependence on simple pattern matching can also be seen in the TIMEDIAL dataset. Here, crowdworkers were explicitly instructed during dataset creation to try to reuse explicit temporal quantifiers from the question in incorrect answer candidates wherever possible. They show that a simple BERT model picks such incorrect options 52 percent of the time over the correct answers. For example, when the context mentions a meeting starting at "three o'clock" for which the speaker does not want to be late, models were more likely to estimate "half past three" as a possibility for the current time than "quarter to two".
Although more powerful transformer models, such as T5, are more robust to pattern matching, it remained the most common error type. It can also be linked to previously reported issues such as mispriming. For example, a BERT model may fill the mask in "Samsung. The iPhone is produced by [MASK]" with "Samsung" due to the previous mention of the in-domain phrase. Similarly, other pitfalls, such as ignoring negation and word order, have also been reported in transformer-based LMs (Helwe et al, 2021).
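Mispriming of this kind is easy to probe with a masked LM. A minimal sketch using the Hugging Face `fill-mask` pipeline is shown below; whether a given checkpoint actually falls for the misprime depends on the model and library version, so the output should be treated as illustrative rather than guaranteed.

```python
from transformers import pipeline

# probe the mispriming example from the text with an off-the-shelf masked LM
fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("Samsung. The iPhone is produced by [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```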
#### 5.4.5 Debiasing
While not TCS research, the WinoGrande (Sakaguchi et al, 2021) paper highlights the importance of debiasing in NLP. They create a crowdsourced dataset for commonsense pronoun disambiguation and evaluate fine-tuned models on the well-known Winograd Schema Challenge dataset (Levesque et al, 2012). Although their own WinoGrande dataset was crowdsourced, they achieved better performance on the original expert-crafted dataset, citing their debiasing strategy as a critical reason for this success.
Debiasing generally comes in the form of some adversarial learning, which ensures that solving the instances in the dataset is not trivial for the model. In the case of WinoGrande, this is done by using linear classifiers based on RoBERTA (Liu et al, 2019) embeddings to remove the easiest instances to classify from the dataset repeatedly.
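A minimal sketch of this style of filtering is given below. The actual AFLite procedure behind WinoGrande works on precomputed RoBERTa embeddings and uses repeated random partitions and predictability scores, so the snippet only captures the general idea of repeatedly discarding instances that a simple linear probe solves with high confidence.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def filter_easy_instances(X, y, rounds=3, keep_threshold=0.75, seed=0):
    """Repeatedly drop instances that a linear probe classifies correctly with high confidence.
    X: precomputed instance embeddings, y: integer labels."""
    rng = np.random.default_rng(seed)
    keep = np.arange(len(y))
    for _ in range(rounds):
        train = rng.choice(keep, size=max(2, len(keep) // 2), replace=False)
        probe = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        true_class_proba = probe.predict_proba(X[keep])[np.arange(len(keep)), y[keep]]
        keep = keep[true_class_proba < keep_threshold]  # discard the "too easy" instances
        if len(keep) < 10:
            break
    return keep

X = np.random.default_rng(1).normal(size=(200, 32))
y = (X[:, 0] > 0).astype(int)            # an artificially easy, linearly separable dataset
print(len(filter_easy_instances(X, y)))  # most of the trivially solvable instances are removed
```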
The authors of the CIDER common sense dataset, which also has some temporal dimensions, apply adversarial filtering to remove stylistic patterns from their confounding-option candidates.
## 6 Improving Temporal Commonsense Reasoning
In the previous section, we show how TCS datasets can be constructed, as well as some techniques that can make their evaluation more robust to typical strategies exploited by modern LMs. In this section, we discuss proposed methods to improve the current state of the art in TCS reasoning.
### Baseline Models and Human Performance
First, we examine the results that the dataset authors provide as baselines. Table 4 shows the performance of baseline models. Note that the results provided in Table 4 are not necessarily the best results reported in the respective papers, as we will examine the impact of augmentations in a later section, but they do showcase the performance of the best model structure in its base form. ROCStories was published before the transformer architecture was popularized; as such, its authors report the performance of another neural network architecture called DSSM (deep semantic similarity model). Although Almquist2019 was published in 2019, its authors do not report the performance of any transformer-based model, instead using a support vector classifier (SVC). TNLI's SelfExplain baseline uses the model by Sun et al (2020), whose embeddings are also based on RoBERTa; however, SelfExplain adds some additional layers which greatly improve performance on TNLI.
\begin{table}
\begin{tabular}{l l l r r} \hline \hline Dataset & Base Model & Metric & Performance & Human \\ \hline ROCStories & DSSM & Acc &.585 & None \\ Almquist2019 & SVC & F1 &.702 & None \\ McTaco & BERT & F1 &.699 &.871 \\ TORQUE & RoBERTa-large & F1 &.752 &.953 \\ Vashishta2020 & RoBERTa-large & Acc &.809 & None \\ WikHow & RoBERTa & Acc &.835 &.975 \\ TRACIE & RoBERTa-large & Acc &.784 &.825 \\ TIMEDIAL & T5-large & 2-best Acc &.748 &.978 \\ TNLI & SelfExplain & Acc &.873 & None \\ CoTAK & BERT-base-uncased & Acc &.775 & None \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of baseline models reported by dataset authors
In the TRACIE dataset, human performance is only reported for a "no-story" set, in which only the hypothesis of a story is known, but not its context. On the other hand, the reported model performance is based on the dataset that contains complete information. Hence, humans had to solve a more difficult task, as they had much less available information. Nevertheless, they still outperformed the best-performing baseline model.
### Proposed Augmentations
Notably, none of the baseline models proposed in Section 6.1 appear to understand temporal attributes well enough to match human performance. Consequently, their out-of-the-box reasoning over TCS dimensions can still be improved. As noted in Section 2, augmenting existing transformer-based LMs is a common approach in TCS reasoning. Figure 6 shows how often we observe specific augmentation types in the surveyed literature. Data encoding and external knowledge sources appear to be some of the most frequently observed augmentations, but other methods, such as logical or symbolic reasoning and adversarial learning, have also led to performance improvements that should not be overlooked. In the remainder of this section, we explore the proposed techniques to improve the TCS understanding of LMs in more detail.
#### 6.2.1 External Knowledge
It is likely that a significant reason for the lack of commonsense understanding in transformer models is reporting bias. Due to the nature of language, using the frequency of event occurrences in text as a baseline for commonsense knowledge is generally not
Figure 6: Distribution of observed augmentation categories in the core literature
ideal (Gordon and Van Durme, 2013). Also known as the "black sheep problem", we intuitively understand that one is much more likely to mention a "black sheep" than to specify the colour of a regular sheep, which may confuse statistical models. These artefacts can be seen in the event likelihood estimate of transformer models. For example, BERT, which was trained on Wikipedia, may overestimate the likelihood of death. Similarly, RoBERTa, which was trained on the web, overestimates the probability of newsworthy events such as being murdered (Shwartz and Choi, 2020).
Several recent papers have attempted to mitigate this bias by using KGs with LMs. Specifically, the two previously mentioned KGs ConceptNet and ATOMIC\({}_{20}^{20}\) have frequently been proposed for such methods due to their specific temporal relations (e.g., "X causes Y", or "After doing X, person Y will want to..."). KGs can be used in TCS models to provide "knowledge embeddings" of phrases (Hosokawa et al, 2023), or to directly post-train the LM on KG triples converted to natural language (Guan et al, 2020).
The CoCoLM model (Yu et al, 2022) uses the ASER KG for pre-training. Unlike the KGs mentioned above, ASER triples are automatically constructed from raw text, which means that the KG contains more instances, but may contain noise. CoCoLM shows significant gains on ROCStories using a base BERT model and random walk over ASER to generate multi-hop reasoning phrases as training instances.
Although knowledge graphs currently appear to be the main source of external commonsense information in TCS, other sources, such as script knowledge systems (DeJong, 1982), could also be bootstrapped similarly, including possibly synthetically generating such scripts via LMs.
Another source of external knowledge can also be out-of-domain tasks, including multitask training on datasets from related domains (e.g., temporal reasoning), or auxiliary tasks that provide additional information that is relevant to the main task. For example, TacoLM (Zhou et al, 2020) is a BERT-based model that predicts temporal upper bounds for events, as well as relative hierarchies between events, to improve its TCS reasoning. CoCoLM also introduces auxiliary tasks, such as discourse relation prediction, to improve performance on the ROCStories dataset.
#### 6.2.2 Weak Supervision
Weak supervision in TCS reasoning is often based on the co-occurrence of events with temporal expressions, which can be used to train an LM. Almquist and Jatowt (2019) propose using the co-occurrence of temporal expressions with subjects, verbs, objects, and their combinations as a feature in an SVM-based classifier. A similar approach was proposed in the form of TemProb, a statistical knowledge base showing common relations between events extracted from 20 years' worth of NYT articles (Ning et al, 2018).
TacoLM also uses syntactic rules to extract large quantities of event durations from text, which can then be used as labels for the previously mentioned auxiliary tasks. The resulting model can predict the duration and frequency of events much better than a standard BERT model and has considerably more TCS knowledge.
Similarly, Yang et al (2020) extract events and their corresponding duration expressions using rule-based patterns, and use them as weak supervision labels to train a regression-based model for event duration estimation.
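A much simplified version of such pattern-based extraction might look as follows; the single regular expression (an event verb followed by "for <amount> <unit>") stands in for the larger sets of high-precision patterns that these systems actually rely on.

```python
import re

DURATION_PATTERN = re.compile(
    r"\b(\w+ed|\w+ing)\b[^.]{0,40}?\bfor\s+(about\s+|around\s+)?(\d+|a|an)\s+"
    r"(second|minute|hour|day|week|month|year|decade)s?\b",
    re.IGNORECASE,
)

def extract_duration_labels(text):
    """Extract weak (event, amount, unit) supervision triples from raw text."""
    labels = []
    for match in DURATION_PATTERN.finditer(text):
        event, _, amount, unit = match.groups()
        labels.append((event.lower(), amount.lower(), unit.lower()))
    return labels

text = ("She studied for about 3 hours before the exam. "
        "They traveled for two weeks. "            # not matched: the number is written out
        "He waited for a minute and then left.")
print(extract_duration_labels(text))
# [('studied', '3', 'hour'), ('waited', 'a', 'minute')]
```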
The previously discussed CoCoLM model also uses weak supervision, as instances in ASER are automatically extracted in such a manner. Weak supervision can be powerful, as there is plenty of raw text from which large datasets can be created. However, only high-precision patterns should be used to automatically extract information from raw text, as any noise can significantly hinder the training objective. For example, the automatically generated ASER KG only outperforms ATOMIC\({}_{20}^{20}\) as a knowledge source in CoCoLM when multi-hop reasoning is used to generate training phrases, but not when the model is trained directly on its triples (Yu et al, 2022).
#### 6.2.3 Symbolic or Logical Reasoning
Another approach is the introduction of symbolic or logical reasoning into commonsense models. The SymTime model (Zhou et al, 2021) is an example of symbolic reasoning. An encoder-decoder model classifies the duration of an event and the distance between two events into a set of classes. The softmax distributions of the duration and distance estimates are then used to symbolically reason over the feature vectors to determine whether the estimated duration of event A is longer than the estimated distance between event A and event B. This information is then used to solve the event ordering task. Here, the relationship between duration and distance is explicitly modelled, rather than relying on an LM to learn it implicitly.
Another example is the SLEER model (Cai et al, 2022), which also explicitly models the relationship between temporal dimensions in the form of logical propositions. An example of such a proposition is as follows:
\[\text{DUR}(\textit{e1, year})\Rightarrow\text{FREQ}(\textit{e1,decade}) \vee\text{FREQ}(\textit{e1,century})\]
This proposition states that an event with a duration span of year(s) cannot occur more than yearly, and thus must have a frequency of either decades or centuries. The SLEER model uses probabilistic soft logic to express the truths of such propositions on a continuous scale. The distance between expected true statements and the prediction of the model can then be used as a parameter in the loss function to train the LM.
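Under Łukasiewicz semantics, which probabilistic soft logic commonly builds on, the truth value of such a proposition can be turned into a differentiable penalty as sketched below; the probability values and the handling of the disjunction are generic illustrations rather than SLEER's exact formulation.

```python
import torch

def implication_loss(antecedent, consequents):
    """Soft-logic penalty for  antecedent => (c_1 v c_2 v ...),
    where all arguments are predicted probabilities in [0, 1]."""
    disjunction = torch.clamp(torch.stack(consequents).sum(dim=0), max=1.0)  # Lukasiewicz OR
    truth = torch.clamp(1.0 - antecedent + disjunction, max=1.0)             # Lukasiewicz implication
    return 1.0 - truth                                                        # 0 when the rule is satisfied

# model outputs: P(DUR(e1, year)) and a distribution over frequency classes for e1
p_dur_year = torch.tensor(0.9)
p_freq = {"week": torch.tensor(0.60), "decade": torch.tensor(0.30), "century": torch.tensor(0.05)}
penalty = implication_loss(p_dur_year, [p_freq["decade"], p_freq["century"]])
print(penalty)  # tensor(0.5500): the rule is clearly violated by these predictions
```

Because the penalty is differentiable in the predicted probabilities, it can simply be added to the ordinary task loss, which is how such propositions steer the LM during training.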
The recently proposed LECTER model (Cai et al, 2023) similarly uses a combination of temporal expression defuzzifying together with probabilistic logic programming to greatly improve the performance on TIMEDIAL. After normalizing and embedding temporal expressions, a logic induction layer generates the probability distribution of relationships between said expressions, and DeepProbLog is used to apply logical entailment to the loss function of the model. With the symbolic temporal logic induction module, LECTER may also be more explainable than common LM-based methods.
In general, explicitly leveraging such logical relationships seems to improve the results of the corresponding LMs. This may indicate that explicitly coding our understanding of relationships between temporal dimensions into reasoning models may outperform implicitly encoding them in LMs using auxiliary tasks.
#### 6.2.4 Information Encoding
Going beyond the standard token-level text encoding that transformers usually leverage may also be helpful in some instances. For example, Zhou et al (2022) propose an approach to modelling text on an event level rather than a token level. Additionally, they propose _event optimal transport_ (EOT) as a loss function to better align texts where a regular token-level similarity may lead to poor results. For example, "Investors bought stocks" may be considered a better approximation of "Investors sold stocks" on a token level than "British investors sold stocks", but event-based encoding and event optimal transport help identify similar events and event orders even when they are not aligned. They show that this approach performs well on event ordering and event infilling tasks.
In temporal reasoning, researchers also commonly consider how time can be embedded in an LM. Methods such as prepending a time-specific token to the input (Cao and Wang, 2022), altering the transformer architecture directly to temporalize the attention mechanism (Rosin and Radinsky, 2022), or masking and predicting temporal expressions or the document timestamp (Rosin et al, 2022; Wang et al, 2022) have been proposed.
In general, the ideal encoding of a model also somewhat depends on the downstream task and the available information. For example, online content such as news articles or blog posts may contain more readily available document creation date information. On the contrary, such a model may fail on the narratives proposed in ROCStories, which do not contain explicit timestamps.
#### 6.2.5 Adversarial Learning
The adversarial augmentation proposed in Section 5.4 can also occur at the model level, as shown by the ALICE (Pereira et al, 2020) and ALICE++ (Pereira et al, 2021) models. In these models, the inputs are minimally perturbed during training to maximize the predicted change in the output. In ALICE++, this perturbation additionally occurs on layers besides the input, up to some top layer of the model. Through these engineered samples, the robustness of the model to small changes increases. In practice, this learning method appears to be very effective. For example, ALICE++ outperforms the previously mentioned SymTime model, which is based on T5, on the MATRES dataset, despite using RoBERTa for its training, which is a much smaller LM. It also outperforms models such as TacoLM in datasets like MCTaco. Overall, similar to adversarial samples in the dataset, this type of learning can enhance performance on the model level as well.
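The core mechanism can be sketched as a single-step perturbation in embedding space, as below; this is a generic simplification for illustration, not the exact ALICE/ALICE++ recipe, which, as noted above, also perturbs intermediate layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_loss(model, embeds, labels, eps=1e-3, step=1e-2):
    # find a small input perturbation that maximally changes the output distribution,
    # then penalize the model for being sensitive to it
    clean_logits = model(embeds)
    delta = torch.zeros_like(embeds).uniform_(-eps, eps).requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(embeds + delta), dim=-1),
                  F.softmax(clean_logits.detach(), dim=-1), reduction="batchmean")
    grad, = torch.autograd.grad(kl, delta)
    delta = (delta + step * grad.sign()).clamp(-eps, eps).detach()
    adv_kl = F.kl_div(F.log_softmax(model(embeds + delta), dim=-1),
                      F.softmax(clean_logits.detach(), dim=-1), reduction="batchmean")
    return F.cross_entropy(clean_logits, labels) + adv_kl

model = nn.Linear(16, 3)                       # stand-in for a transformer classification head
x, y = torch.randn(8, 16), torch.randint(0, 3, (8,))
adversarial_loss(model, x, y).backward()
```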
#### 6.2.6 Ensembling
Finally, using a combination of multiple classifiers can also enhance model performance. In the TCS domain, the performance on MCTaco was improved by constructing an ensemble of multiple BERT models, each fine-tuned on different datasets, using a majority vote to determine the final class (Kimura et al, 2021).
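In its simplest form, the vote can be implemented as below; the tie-breaking rule (falling back to the first model) is an arbitrary illustrative choice and not necessarily the one used by Kimura et al (2021).

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the class labels predicted by several fine-tuned models for one example."""
    counts = Counter(predictions).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return predictions[0]                  # tie: fall back to the first model
    return counts[0][0]

# e.g., three BERT variants fine-tuned on different TCS datasets judging one candidate answer
print(majority_vote(["plausible", "plausible", "implausible"]))  # -> "plausible"
```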
This ensemble approach is intriguing, not just because of the growing number of benchmarking datasets for TCS, but also for its potential to evaluate combinations
of the augmentations proposed in the rest of this section. Rather than performing an ablation study of the different augmentation types on a single model, evaluating ideal weights for an ensemble classifier of different models, each enhanced with a specific augmentation type, is another option to provide further insight into the value of each model type and possibly improve performance on TCS tasks.
## 7 Discussion
In this section, our goal is to highlight the results of our survey and propose possible future research opportunities.
### Defining and Benchmarking Temporal Common Sense
In this survey, we have highlighted several similarities between early temporal algebras, temporal reasoning tasks such as temporal relation extraction and event ordering, and proposed TCS reasoning dimensions. We pose that the main difference between TCS reasoning and other temporal knowledge which a model may have (such as temporal factual knowledge or reasoning capabilities over an explicit temporal context) is the inherently probabilistic nature of common sense. While we can make assumptions about the likely order, duration, or time of occurrence of individual actions, common sense does not make any guarantees. By design, the context in a TCS task does not give us a concrete answer, but we should use our prior understanding of the world to derive a likely one. We can consider TCS reasoning to be a probabilistic recasting of existing temporal reasoning tasks.
Based on our survey, we propose the following future work for training and benchmarking TCS.
* Datasets focusing specifically on _typical event times_, _stationarity_, and _event frequency_ would help improve the general TCS understanding of models. While training data for typical event order and duration can often be derived from text or existing temporal reasoning datasets, this has not been attempted as frequently for the three remaining dimensions.
* Regardless of the focus dimension, where possible, explicitly stating which type of TCS is expected (e.g., event ordering or event duration) can help other researchers better identify tasks or benchmarks that a specific model should be able to solve. While the general question of how much TCS understanding transformer models possess is interesting, many downstream tasks may not require a model to be able to reason over all proposed dimensions. For example, for a sentence ordering task, the typical order of events is much more important than the typical frequency of an event.
* Most TCS reasoning datasets pose a closed-ended QA or NLI task, which is almost always solved via binary classification (either the candidate answer fits or it does not fit). However, for downstream tasks, different task formats, such as ordinal classification, extractive question answering, regression, or text generation, could be beneficial in providing more detailed training and evaluation data. Additionally, a model can better learn different temporal properties, like how long an action is
actually expected to take, rather than simply understanding if certain predetermined answer candidates apply.
* When creating new datasets or evaluating a model on existing ones, care should be taken that relatively simple metrics (such as accuracy) do not skew the perceived capabilities of the model. Contrast sets and exact match metrics can help ensure that a machine learning model can genuinely reason over a set of items, rather than relying on a shortcut that the model may have found to distinguish between the target classes. On the other hand, extractive and generative models should have the freedom to deviate from reference answers, as long as the provided answer still solves the problem.
### Improving Temporal Commonsense Reasoning
We have discussed proposed augmentations for the transformer architecture to improve TCS reasoning. Table 5 shows an example for each of the proposed augmentation categories, as well as the resulting improvement in performance over their respective base models. We also summarize the advantages and disadvantages of the proposed augmentation types in Table 6, and show an overview of augmentation categories and specific observed implementations of said categories in Figure 7.
Figure 7: A summary of the augmentation categories and the implementations that were observed

\begin{table}
\begin{tabular}{l l l c c c l} \hline \hline Dataset & Base Model & Augmentation & Metric & Base & Augmented & Ref. \\ \hline TRACIE & RoBERTa-large & Symbolic Reasoning & F1 &.784 &.806 & Zhou et al (2021) \\ McTaco & BERT & Weak Supervision & EM &.421 &.427 & Zhou et al (2020) \\ TNLI & SelfExplain & External Knowledge & Acc &.873 &.878 & Hosokawa et al (2023) \\ McTaco & RoBERTa-large & Adversarial Learning & EM &.511 &.599 & Pereira et al (2021) \\ McTaco & BERT & Ensembling & EM &.396 &.465 & Kimura et al (2021) \\ McTaco & BART-large & Information Encoding & F1 & N/A &.623 & Zhou et al (2022) \\ TIMEDIAL & RoBERTa-base & Symbolic or Logical Reasoning & 2-best Acc &.593 &.715 & Cai et al (2023) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Sample augmentations to transformer models and their performance impact

Within the surveyed approaches, several trends can be observed. Often, the difference in performance between the different transformer architectures (especially BERT and RoBERTa in their varying sizes) is more noticeable than the impact of the proposed augmentations. The actual task being used for benchmarking and the reported performance metrics can also significantly impact how a model's performance may be perceived. This is noticeable in the substantial difference in reported values in Table 5 depending on whether accuracy, F1, or exact match score is used for evaluation.
In addition, ablation studies based on the same base model are not always part of the proposed approaches, making it more difficult to assess the impact of the augmentation itself.
\begin{table}
\begin{tabular}{l l} \hline \hline Augmentation Type & Pros \& Cons \\ \hline External Knowledge & **Advantages** \\ & - Exposes the model to a large quantity of (typically human-verified) knowledge that may be infrequently observed in text. \\ & **Disadvantages** \\ & - Requires a (usually large, often manually constructed) external knowledge source. \\ Weak Supervision & **Advantages** \\ & - Can efficiently generate a large amount of additional training data for the primary or auxiliary tasks. \\ & **Disadvantages** \\ & - Requires a high-precision approach for extracting labels. \\ & - Different methods may be more or less sensitive to noisy labels. \\ Symbolic and Logical Reasoning & **Advantages** \\ & - Resulting labels may be more well-founded and explainable than other methods. \\ & - The interplay between temporal dimensions can be exploited. \\ & **Disadvantages** \\ & - Sound rules must be defined and translated into a format that is appropriate for the architecture. \\ Information Encoding & **Advantages** \\ & - Often comparatively simple to implement. \\ & - Many different approaches that can be task-specific or task-agnostic. \\ & **Disadvantages** \\ & - The resulting performance gains may not be as explainable as other methods. \\ Adversarial Learning & **Advantages** \\ & - Can partially counteract some shallow reasoning behaviours in transformers. \\ & - May make the model less sensitive to outliers. \\ & **Disadvantages** \\ & - Requires synthetic generation of adversarial samples. \\ Ensembling & **Advantages** \\ & - Relatively simple to implement. \\ & **Disadvantages** \\ & - More computationally heavy to train and generate target labels for multiple models. \\ \hline \hline \end{tabular}
\end{table}
Table 6: Summary of advantages and disadvantages of different augmentations
It is also often apparent that the proposed augmentations' performance gains shrink as the base model's performance increases. For example, CoCoLM's implementation provides a 19.3 percent increase in performance over a base BERT model on debiased ROCStories. However, it improves only 1.3 percent over a RoBERTa-large model. These results indicate that larger models may already possess most of the reasoning capabilities that some of the proposed enhancements can offer.
The prevalence of MCTaco compared to other benchmarking datasets is also notable and may be due to its more detailed taxonomy. Approaches that focus on different TCS dimensions can use a subset of MCTaco to test performance in the corresponding dimension (e.g., Zhou et al (2022); Cai et al (2022)).
We propose the following future work to enhance TCS reasoning.
* Models are generally fine-tuned and tested on only one of the proposed benchmark datasets. Transfer learning could be explored further in several ways, including whether models trained on one dataset perform better on others in a few-shot setting or whether ensembling similar models trained on different TCS reasoning datasets improves overall reasoning capabilities.
* A more thorough investigation of the performance of commonsense LMs in downstream tasks would be interesting. For example, Wang et al (2022) apply their BERT-based model for document dating as a component in a temporal question answering system, improving overall performance on the downstream task. Possible application areas for the proposed models could be timeline summarization or question answering.
* In general, despite increased efforts, the models proposed so far do not reach human performance, even on overly forgiving metrics such as accuracy and F1-score, which do not strongly discredit shallow reasoning compared to metrics like EM or contrast set consistency. Dimensions such as _event typical time_, _stationarity_, and _event frequency_ are especially underexplored. New models aiming to reason over these properties could add a new perspective to the overall understanding of TCS and provide new possibilities for downstream applications (such as user status tracking or recommender systems).
### Foundation Models and Trade-offs
Foundation models, such as GPT-4, have become increasingly influential in recent years. With larger and more rigorously trained models such as RoBERTa outperforming BERT, a logical conclusion might be that the trend towards large foundation models would render much of the previous research obsolete. For several reasons, we do not believe this to be the case.
* Current consumer hardware quickly reaches its limits when training and prompting state-of-the-art LMs. Even if the GPT-4 weights were publicly known, it is unlikely that most people could run the model locally at a reasonable speed. For privacy reasons, among others, it is therefore unreasonable to expect individuals and businesses to rely fully on API-based prompting.
* While foundation models are trained in general language understanding, locally trained models have weights that are specifically fine-tuned on a certain task, making
them more of a "master of one" than a "jack of all trades". For downstream task applications, this is likely preferable.
* When training a task-specific machine learning model, we can reliably force the output to be of a certain shape and represent certain task-specific properties. While it is possible to prompt systems like GPT-4 to provide a certain output format, they are not constrained to this shape and can deviate, potentially leading to errors.
In addition to the arguments provided, recent research shows that ChatGPT seemingly does not fully comprehend TCS. For example, Bian et al (2023) measure ChatGPT with an accuracy of 52 percent on McTaco, far below previous state-of-the-art work. This creates a strong argument that TCS will remain a valuable research field in the near future.
## 8 Summary and Outlook
In this survey, the history of temporal reasoning and the shift to LMs for commonsense reasoning has been explored. Temporal reasoning mainly started as a purely logical proposition over specific data structures, such as Allen's interval algebra. However, as computers became more powerful, syntactic approaches started to emerge. These approaches first centred on manual annotation of events and temporal quantifiers in free-form text, and then gradually moved to how these annotations could be automated using syntactic features such as parse trees and the meaning of specific signal words.
However, this syntactic analysis lacks the TCS understanding that can be vital in reasoning over free-form text, as time is often only implicitly described in language. This first led to the implementation of semantic features such as semantic role labels and later to a shift to data-driven approaches to solve specific tasks, such as event extraction. The introduction of word embeddings and the subsequent rise of deep neural networks such as LSTMs and transformers allowed for better performance on new tasks and ones derived from previous temporal reasoning propositions. Models trained on these new tasks no longer wholly rely on explicit temporal context, making them useful in domains where such context may not usually be available.
In light of this, it is easy to say that transformers are a plug-and-play solution to TCS tasks, as they outperform previous state-of-the-art methods by a wide margin even when not fine-tuned. However, on closer inspection, it is clear that while the semantic reasoning performed by transformer models is powerful, they are prone to linguistic traps, are not always reliable in their answers, and do not reason over temporal properties as well as we would like. Specifically, mispriming and reporting bias are still significant problems in transformer models when using them for commonsense purposes. In addition, they can behave somewhat erratically when the input slightly changes or, conversely, they can ignore critical negations or contrasting data instances.
Several methods have been proposed to overcome this problem. For example, more training data specific to temporal properties can be created via crowdsourcing, rule-based extraction from the web (leading to weak supervision), or from KGs, which themselves can be either manually created (such as ATOMIC\({}^{20}_{20}\)) or constructed automatically (such as ASER). Ensembling of several neural models has also been shown
to somewhat increase performance, which is particularly interesting due to the variety of TCS datasets which are now available.
Another intriguing trend is infusing hard-coded symbolic or logical reasoning back into transformer models through methods such as probabilistic soft logic or using the neural model's output as symbolic information to explicitly reason over with traditional methods. This augmentation is especially beneficial in leveraging our innate understanding of language and how temporal traits such as duration, distance, and frequency of events interact. This explicit knowledge often seemingly outperforms attempts to implicitly infuse this understanding into models through auxiliary objectives during training.
Additionally, the transformer landscape is still evolving, and the importance of aspects such as training objectives, training data, encoding, and hyperparameters such as masking probability cannot be overstated. This can easily be observed by inspecting the difference in accuracy between base BERT and RoBERTa on many tasks. Raffel et al (2020) provide further insight on the importance of such properties. On the other hand, the architecture surrounding the transformer model also impacts performance. With models reaching sizes that make training a state-of-the-art transformer model from scratch difficult or even infeasible for the average person, it is currently much easier to try to improve the architecture surrounding the transformer model rather than the structure of the model itself.
Ultimately, the goal would be to achieve human performance on the currently proposed and future TCS tasks, allowing for more downstream task applications. To that end, the research community should strive to make transformer-based models more resilient to mispriming and linguistic traps and teach them to better recognize the temporal properties of events and their meaning. To achieve human performance, a model's understanding of explicit and implicit temporal signals must go beyond the individual meaning of each word and co-occurrence statistics learned during pre-training. The significant gap between model and human performance when using more robust metrics signals that models often arrive at the correct answer through clever guesswork, pattern recognition, and biases, rather than by truly reasoning over words in the same way humans can. Novel methods for infusing logical traits of language and symbolic knowledge into transformer models would likely go a long way towards improving this situation.
Overall, the field of TCS reasoning is far from solved, and many avenues for improvement remain unexplored. Despite preliminary results on GPT-4 indicating unprecedented performance on many reasoning tasks, its size and the lack of reporting on the model structure and weights render it unreasonable for local deployment and usage. Additionally, TCS reasoning capabilities may be somewhat limited even in models like GPT-4. Therefore, it is unlikely that foundation models will subsume research into smaller and more portable transformer models.
## Declarations
### Funding
The authors did not receive support from any organization for the submitted work.
### Competing Interests
The authors have no relevant financial or non-financial interests to disclose.
### Author Contributions
CRediT author statement:
**Georg Wenzel**: Conceptualization, Methodology, Validation, Investigation, Data Curation, Writing - Original Draft

**Adam Jatowt**: Writing - Review & Editing, Supervision
|
2301.03000 | Density estimation and regression analysis on S^d in the presence of
measurement error | This paper studies density estimation and regression analysis with
contaminated data observed on the unit hypersphere S^d. Our methodology and
theory are based on harmonic analysis on general S^d. We establish novel
nonparametric density and regression estimators, and study their asymptotic
properties including the rates of convergence and asymptotic distributions. We
also provide asymptotic confidence intervals based on the asymptotic
distributions of the estimators and on the empirical likelihood technique. We
present practical details on implementation as well as the results of numerical
studies. | Jeong Min Jeon, Ingrid Van Keilegom | 2023-01-08T08:59:06Z | http://arxiv.org/abs/2301.03000v1 | # Density estimation and regression analysis on \(\mathbb{S}^{d}\)
###### Abstract
This paper studies density estimation and regression analysis with contaminated data observed on the unit hypersphere \(\mathbb{S}^{d}\) for \(d\in\mathbb{N}\). Our methodology and theory are based on harmonic analysis on general \(\mathbb{S}^{d}\). We establish novel nonparametric density and regression estimators, and study their asymptotic properties including the rates of convergence and asymptotic distributions. We also provide asymptotic confidence intervals based on the asymptotic distributions of the estimators and on the empirical likelihood technique. We present practical details on implementation as well as the results of numerical studies.
_Key words: Hyperspherical data, Measurement errors, Nonparametric density estimation, Nonparametric regression_
## 1 Introduction
Statistical analysis with data involving measurement errors has been a challenging problem in statistics. When some variables are not precisely observed due to measurement errors, direct application of existing methods designed for error-free variables results in incorrect inference. To explain this, let us consider a simple case where both the covariate \(X\) and the response \(Y\) are real-valued. To estimate the regression function \(m(x)=\mathrm{E}(Y|X=x)\) at a point \(x\), one may apply 'local smoothing' to \(Y_{i}\) around each point \(x\). For example, the Nadaraya-Watson estimator of \(m\) is to take a weighted average of \(Y_{i}\) corresponding to \(X_{i}\) that fall in a neighborhood of each point \(x\). This makes sense since \(Y_{i}\) corresponding to \(X_{i}\) near \(x\) have 'correct' information about \(m(x)\). Now, suppose that \(X_{i}\) are not available
but \(Z_{i}=X_{i}+U_{i}\) are, where \(U_{i}\) are unobserved measurement errors. In this case, the naive approach, simply taking a weighted average of \(Y_{i}\) corresponding to \(Z_{i}\) that fall in a neighborhood of the point \(x\), should fail since the \(X_{i}\) corresponding to such \(Z_{i}\) may be located far away from \(x\) and thus the corresponding \(Y_{i}\) may not carry correct information about \(m\) at \(x\). To treat this issue, appropriate correction methods have been proposed. To list only a few, [57] introduced a deconvolution kernel density estimator, and [19] and [20] studied its rate of convergence and asymptotic distribution, respectively. Based on the deconvolution kernel, [21] investigated the rate of convergence of a Nadaraya-Watson-type regression estimator and [14] studied the asymptotic distribution of a local-polynomial-type regression estimator. For an introduction to measurement error problems, we refer to [46] and [13]. However, the aforementioned works are restricted to Euclidean data.
Analyzing non-Euclidean data is becoming an important topic in modern statistics due to rapidly emerging non-Euclidean data in various fields. It is challenging since there is no vector space structure on non-Euclidean spaces in general. For a recent review on non-Euclidean data analysis, we refer to [45]. Data observed on the unit hypersphere \(\mathbb{S}^{d}=\{x\in\mathbb{R}^{d+1}:\|x\|=1\}\) for \(d\in\mathbb{N}\), called hyperspherical data, are one of the most abundant non-Euclidean data. Hyperspherical data include circular data (\(d=1\)), spherical data (\(d=2\)) and other higher dimensional data (e.g. [55], [26]). Previous works on error-free circular, spherical or general hyperspherical data include density estimation ([27], [23]), regression analysis ([8], [51], [52], [32]) and statistical testing ([11], [5], [24]). Among them, [8] did not cover a measurement error problem and simply considered the case where both response and predictor are spherical variables and the response is symmetrically distributed around the product of an unknown orthogonal matrix and the predictor. For a recent review on hyperspherical data analysis, we refer to [49].
Some areas in which hyperspherical data arise are meteorology and astronomy. However, data from such areas are prone to contain measurement errors due to the technical limitations of measuring devices. For example, measuring the exact wind direction, the positions of sunspots on the sun or the direction from the earth to an astronomical object is not easy since these objects move very fast and/or are very far away (e.g. [2], [22]). Also, such measurements are sometimes disturbed by substances in between. In addition, each observation vector in Euclidean data is sometimes normalized to have the unit norm to ensure that data analysis is only
affected by the relative magnitudes of vector elements rather than the absolute magnitudes of vectors themselves. If the original Euclidean data contain measurement errors, then the resulting hyperspherical data also contain measurement errors.
In spite of the importance of analyzing contaminated hyperspherical data, there exist only a few works, and most of the existing works are restricted to deconvolution density estimation on either \(\mathbb{S}^{1}\) or \(\mathbb{S}^{2}\) (e.g. [18], [28], [40], [41]). Some other works for other types of contaminated non-Euclidean data include deconvolution density estimation on special orthogonal groups ([38]), compact and connected Lie groups ([42]), the Poincare upper half plane ([31]) and the 6-dimensional Euclidean motion group ([44]). All the aforementioned works on deconvolution density estimation studied only the rates of convergence of their estimators. Recently, [33] studied density estimation and regression analysis with a contaminated Lie-group-valued predictor. To the best of our knowledge, [33] is the only work that has considered regression analysis with contaminated manifold-valued variables. However, since \(\mathbb{S}^{d}\) for \(d=2\) and \(d\geq 4\) are not Lie groups, it is important to study such unexplored cases.
In this paper, our primary aim is to develop a deconvolution regression estimator on \(\mathbb{S}^{2}\) and investigate its rates of convergence. We also aim to construct the asymptotic distributions and asymptotic confidence intervals for both deconvolution density and regression estimators on \(\mathbb{S}^{2}\). Those have not been studied in the literature despite their importance. To achieve them in a more general setting, we instead study deconvolution density estimation and regression analysis on \(\mathbb{S}^{d}\) for \(d\in\mathbb{N}\). These general problems also have not been considered in the literature. Our deconvolution density estimator on \(\mathbb{S}^{d}\) generalizes the deconvolution density estimator on \(\mathbb{S}^{1}\) introduced in [18] and the one on \(\mathbb{S}^{2}\) introduced in [28]. Our deconvolution regression estimator on \(\mathbb{S}^{d}\) also generalizes the deconvolution regression estimator on \(\mathbb{S}^{1}\) introduced in [33]. We build up a theoretical foundation for those general estimators. We establish several finite-sample properties of the estimators. We also study the uniform consistency of the density estimator and the rates of convergence for both density and regression estimators. In addition, we derive the asymptotic distributions and two types of asymptotic confidence intervals for both estimators under a high-level condition. The high-level condition is verified for certain cases. Moreover, we present several numerical studies and some practical details on implementation which have received less attention in the literature in spite of their importance. We emphasize that deriving the results in this
paper is quite different from the ways in the Euclidean case since it is based on hyperspherical harmonic analysis which is less considered in statistics. Also, dealing with \(\mathbb{S}^{d}\) is more challenging than dealing with \(\mathbb{S}^{2}\) since general hyperspherical harmonic analysis is much more complex than harmonic analysis on \(\mathbb{S}^{2}\). Indeed, it leads to more complex analysis for every result and requires broader discussions.
This paper is organized as follows. In Section 2, we introduce general hyperspherical harmonic analysis with some practical examples and our estimators with some finite-sample properties. The rates of convergence and asymptotic distributions of our estimators are shown in Section 3. We construct the asymptotic confidence intervals in Section 4, and present the simulation studies and real data analysis in Section 5. The Supplementary Material contains additional practical details and all technical proofs.
## 2 Preliminaries and methodology
### Preliminaries
Our methodology is largely based on harmonic analysis on the \(d\)-dimensional unit hypersphere \(\mathbb{S}^{d}\) for \(d\in\mathbb{N}\). Here, we give a brief introduction on it. Further details can be found in [1] and [17].
A function \(f:\mathbb{R}^{d+1}\to\mathbb{C}\) is called a harmonic homogeneous polynomial of degree \(l\in\mathbb{N}_{0}:=\{0\}\cup\mathbb{N}\) in \(d+1\) variables if \(f\) takes the form
\[f(t_{1},\ldots,t_{d+1})=\sum_{\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{ d+1})\in\mathbb{N}_{0}^{d+1}:\sum_{i=1}^{d+1}\alpha_{i}=l}c_{\boldsymbol{ \alpha}}\cdot\prod_{i=1}^{d+1}t_{i}^{\alpha_{i}}\]
for \(c_{\boldsymbol{\alpha}}\in\mathbb{C}\) and satisfies \(\sum_{i=1}^{d+1}\partial^{2}f(t_{1},\ldots,t_{d+1})/\partial t_{i}^{2}\equiv 0\). For such \(f\), the domain restricted function \(f|_{\mathbb{S}^{d}}:\mathbb{S}^{d}\to\mathbb{C}\) is called a spherical harmonic of order \(l\) in \(d+1\) variables. We denote the space of all spherical harmonics of degree \(l\) in \(d+1\) variables by \(\mathfrak{B}^{l}(\mathbb{S}^{d})\) and call it the spherical harmonic space of order \(l\) in \(d+1\) variables.
It is known that \(\mathfrak{B}^{l}(\mathbb{S}^{d})\) is a vector space of dimension
\[N(d,l)=\frac{(2l+d-1)\cdot(l+d-2)!}{l!\cdot(d-1)!}.\]
Direct computations show that \(N(1,l)=2\) and \(N(2,l)=2l+1\) for \(l\in\mathbb{N}\), and \(N(d,0)=1\). It is also known that the vector space spanned by \(\{\mathfrak{B}^{l}(\mathbb{S}^{d}):l\in\mathbb{N}_{0}\}\) is a dense subspace of the \(L^{2}\) space \(L^{2}((\mathbb{S}^{d},\nu),\mathbb{C})\), where \(\nu\) is the scaled spherical measure on \(\mathbb{S}^{d}\) defined by
\[\nu(A)=\frac{\operatorname{area}(\mathbb{S}^{d})}{\operatorname{ Leb}(B(0,1))}\cdot\operatorname{Leb}(\{ta:t\in[0,1],a\in A\})\]
for any Borel subset \(A\) of \(\mathbb{S}^{d}\), where \(\operatorname{area}(\mathbb{S}^{d})\) is the surface area of \(\mathbb{S}^{d}\), Leb is the Lebesgue measure on \(\mathbb{R}^{d+1}\) and \(B(0,1)\) is the closed ball centered at zero with radius one. We note that \(\nu(\mathbb{S}^{d})=\operatorname{area}(\mathbb{S}^{d})\). We also note that \(L^{2}((\mathbb{S}^{d},\nu),\mathbb{C})\) is a separable Hilbert space with inner product \(\langle f,g\rangle_{2}=\int_{\mathbb{S}^{d}}f(x)\overline{g(x)}\,d\nu(x)\), where \(\overline{g(x)}\) is the conjugate of \(g(x)\). If \(\{B_{q}^{l}:1\leq q\leq N(d,l)\}\) is an orthonormal basis of \(\mathfrak{B}^{l}(\mathbb{S}^{d})\), then it is known that \(\{B_{q}^{l}:l\in\mathbb{N}_{0},1\leq q\leq N(d,l)\}\) forms an orthonormal basis of \(L^{2}((\mathbb{S}^{d},\nu),\mathbb{C})\). Hereafter, \(l\) that appears in \(B_{q}^{l}\) or in other superscripts does not denote an exponent but denotes an index for notational simplicity. We note that the constant function \(B_{1}^{0}\equiv(\nu(\mathbb{S}^{d}))^{-1/2}\) is the orthonormal basis of \(\mathfrak{B}^{0}(\mathbb{S}^{d})\) since every spherical harmonic of order \(0\) in \(d+1\) variables is a constant function. Below, we summarize the examples of an orthonormal basis of \(\mathfrak{B}^{l}(\mathbb{S}^{d})\) for \(l\in\mathbb{N}\).
**Example 1**.:
1. _(_\(d=1\)_) We note that each_ \(x\in\mathbb{S}^{1}\) _can be written as_ \(x=(\cos\varphi_{x},\sin\varphi_{x})^{\top}\) _for some_ \(\varphi_{x}\in[0,2\pi)\)_. We define_ \(B_{q}^{l}:\mathbb{S}^{1}\to\mathbb{C}\) _for_ \(l\in\mathbb{N}\) _by_ \(B_{1}^{l}(x)=\cos(l\varphi_{x})/\sqrt{\pi}\) _and_ \(B_{2}^{l}(x)=\sin(l\varphi_{x})/\sqrt{\pi}\)_. Then,_ \(\{B_{q}^{l}:1\leq q\leq 2\}\) _forms an orthonormal basis of_ \(\mathfrak{B}^{l}(\mathbb{S}^{1})\)_; see Chapter_ 2.2 _in_ _[_1_]_ _for more details._
2. _(_\(d=2\)_) We note that each_ \(x\in\mathbb{S}^{2}\) _can be written as_ \[x=(\cos\varphi_{x}\sin\theta_{x},\sin\varphi_{x}\sin\theta_{x},\cos\theta_{x})^ {\top}\] _for some_ \(\varphi_{x}\in[0,2\pi)\) _and_ \(\theta_{x}\in[0,\pi)\)_. We define_ \(B_{q}^{l}:\mathbb{S}^{2}\to\mathbb{C}\) _for_ \(l\in\mathbb{N}\) _by_ (2.1) \[B_{q}^{l}(x)=\sqrt{\frac{2l+1}{4\pi}}\cdot e^{\sqrt{-1}\cdot(q-l-1)\varphi_{x} }\cdot d_{q(l+1)}^{l}(\theta_{x}),\] _where_ \(d_{qr}^{l}(\theta)\in\mathbb{R}\) _for_ \(1\leq q,r\leq 2l+1\) _and_ \(\theta\in[0,\pi)\) _is defined by_ (2.2) \[c_{qr}^{l}\cdot\sum_{k=\max\{0,r-q\}}^{\min\{2l+1-q,r-1\}}\frac{(-1)^{k+q-r}( \cos(\theta/2))^{2l-2k+r-q}(\sin(\theta/2))^{2k+q-r}}{(2l+1-q-k)!(r-1-k)!(k+q-r)!k!}\]
_for_ \(c^{l}_{qr}=((2l+1-q)!(q-1)!(2l+1-r)!(r-1)!)^{1/2}\)_. Then,_ \(\{B^{l}_{q}:1\leq q\leq 2l+1\}\) _forms an orthonormal basis of_ \(\mathfrak{B}^{l}(\mathbb{S}^{2})\)_; see Theorem_ 2.1.1 _in_ _[_58_]__, Chapter 12.9 in_ _[_10_]_ _and Chapter 3.9 in_ _[_54_]_ _for more details._
3. _(_\(d\geq 3\)_) An orthonormal basis of_ \(\mathfrak{B}^{l}(\mathbb{S}^{d})\) _for_ \(l\in\mathbb{N}\) _and_ \(d\geq 3\) _can be obtained recursively using an orthonormal basis of_ \(\mathfrak{B}^{j}(\mathbb{S}^{d-1})\) _for_ \(0\leq j\leq l\)_. To describe this, we define the Legendre polynomial_ \(P_{l,d+1}:[-1,1]\to\mathbb{R}\) _of degree_ \(l\in\mathbb{N}_{0}\) _in_ \(d+1\) _variables by_ \[P_{l,d+1}(t)=l!\,\Gamma(d/2)\sum_{k=0}^{[l/2]}\frac{(-1)^{k}(1-t^{2})^{k}t^{l- 2k}}{4^{k}k!(l-2k)!\,\Gamma\left(k+\frac{d}{2}\right)}.\] _We also define the normalized associated Legendre function_ \(\tilde{P}_{l,d+1,j}:[-1,1]\to\mathbb{R}\) _for_ \(0\leq j\leq l\) _by_ \[\tilde{P}_{l,d+1,j}(t)=\frac{((2l+d-1)(l+d+j-2)!)^{1/2}(1-t^{2})^{j/2}}{2^{(d- 1)/2+j}((l-j)!)^{1/2}\Gamma\left(j+\frac{d}{2}\right)}P_{l-j,d+1+2j}(t).\] _We let_ \(\{B^{j}_{r}:1\leq r\leq N(j,d-1)\}\) _be an orthonormal basis of_ \(\mathfrak{B}^{j}(\mathbb{S}^{d-1})\) _for_ \(0\leq j\leq l\)_. We note that each_ \(x\in\mathbb{S}^{d}\) _can be written as_ (2.3) \[\begin{split} x=\bigg{(}&\cos\varphi_{x}\prod_{k=1}^ {d-1}\sin\theta_{kx},\sin\varphi_{x}\prod_{k=1}^{d-1}\sin\theta_{kx},\\ &\cos\theta_{1x}\prod_{k=2}^{d-1}\sin\theta_{kx},\cos\theta_{2x} \prod_{k=3}^{d-1}\sin\theta_{kx},\ldots,\cos\theta_{(d-1)x}\bigg{)}^{\top} \end{split}\] _for some_ \(\varphi_{x}\in[0,2\pi)\) _and_ \(\theta_{kx}\in[0,\pi)\) _for_ \(1\leq k\leq d-1\)_. We define_ \(B^{l}_{r,j}:\mathbb{S}^{d}\to\mathbb{C}\) _for_ \(l\in\mathbb{N}\) _by_ \[B^{l}_{r,j}(x)=\tilde{P}_{l,d+1,j}(\cos\theta_{(d-1)x})B^{j}_{r}(s(\varphi_{x},\theta_{1x},\ldots,\theta_{(d-2)x})),\] _where_ \(s(\varphi_{x},\theta_{1x},\ldots,\theta_{(d-2)x})\) _is the point on_ \(\mathbb{S}^{d-1}\) _defined as the right hand side of (_2.3_) with_ \(d-1\) _being replaced by_ \(d-2\)_. Then,_ \(\{B^{l}_{r,j}:1\leq r\leq N(j,d-1),0\leq j\leq l\}\) _forms an orthonormal basis of_ \(\mathfrak{B}^{l}(\mathbb{S}^{d})\)_; see Chapter_ 2.11 _in_ _[_1_]_ _for more details._
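As a quick numerical sanity check of the dimension formula for \(N(d,l)\) and of the circular basis in the first part of Example 1, one may run the following short script; the quadrature grid and tolerances are arbitrary implementation choices.

```python
import numpy as np
from math import factorial

def N(d, l):
    # dimension of the spherical harmonic space of degree l in d+1 variables
    return 1 if l == 0 else (2 * l + d - 1) * factorial(l + d - 2) // (factorial(l) * factorial(d - 1))

assert N(1, 3) == 2 and N(2, 3) == 7 and N(5, 0) == 1

# orthonormality of B_1^l = cos(l phi)/sqrt(pi) and B_2^l = sin(l phi)/sqrt(pi) in L^2((S^1, nu), C)
phi = np.linspace(0.0, 2.0 * np.pi, 20001)
inner = lambda f, g: np.trapz(f * g, phi)          # the basis functions considered here are real-valued
B = lambda l, q: (np.cos(l * phi) if q == 1 else np.sin(l * phi)) / np.sqrt(np.pi)
for l in (1, 2, 3):
    assert abs(inner(B(l, 1), B(l, 1)) - 1.0) < 1e-6
    assert abs(inner(B(l, 1), B(l, 2))) < 1e-6
    assert abs(inner(B(l, 1), B(l + 1, 1))) < 1e-6
print("orthonormality checks passed")
```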
### Methodology
In this section, we introduce our methodology. We let \(X\) be a random vector taking values in \(\mathbb{S}^{d}\) and \(f_{X}\) be its density with respect to \(\nu\). We assume that \(f_{X}\) is square integrable.
We let \(SO(d+1)\) denote the space of all \((d+1)\times(d+1)\) real special orthogonal matrices. We recall that a \((d+1)\times(d+1)\) real matrix \(A\) is called a special orthogonal matrix if \(A^{\top}A=AA^{\top}=I_{d+1}\) and \(\det(A)=1\), where \(I_{d+1}\) is the \((d+1)\times(d+1)\) identity matrix. We suppose that we do not observe \(X\) but we only observe \(Z=UX\), where \(U\) is an unobservable measurement error taking values in \(SO(d+1)\) and \(UX\) is the matrix multiplication between the matrix \(U\) and vector \(X\). We note that \(Z\in\mathbb{S}^{d}\) since \(\|UX\|^{2}=X^{\top}U^{\top}UX=X^{\top}X=1\). This measurement error is also natural since every matrix in \(SO(d+1)\) rotates each point in \(\mathbb{S}^{d}\) in a certain direction. For example, every matrix in \(SO(2)\) can be written as
\[\left(\begin{smallmatrix}\cos\varphi&-\sin\varphi\\ \sin\varphi&\cos\varphi\end{smallmatrix}\right) \tag{2.4}\]
for some \(\varphi\in[0,2\pi)\) and it rotates each point in \(\mathbb{S}^{1}\) in the counter-clockwise direction by the angle \(\varphi\). In addition, \(SO(d+1)\) acts transitively on \(\mathbb{S}^{d}\), i.e., for any \(x_{1},x_{2}\in\mathbb{S}^{d}\), there exists \(u\in SO(d+1)\) such that \(ux_{1}=x_{2}\). Our first aim is to estimate \(f_{X}\) based on \(n\) i.i.d. observations \(\{Z_{i}:1\leq i\leq n\}\). Our second aim is to estimate the regression function \(m:\mathbb{S}^{d}\rightarrow\mathbb{R}\) in the model
\[Y=m(X)+\epsilon \tag{2.5}\]
based on \(n\) i.i.d. observations \(\{(Y_{i},Z_{i}):1\leq i\leq n\}\), where \(Y\) is a real-valued response and \(\epsilon\) is an error term satisfying \(\mathrm{E}(\epsilon|X)=0\). We note that this regression problem has not been covered for \(d=2\) and \(d\geq 4\). Throughout this paper, we assume that \(U\) is independent of \((X,\epsilon)\). This type of assumption is common in the literature of measurement error problems.
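To make the observation scheme \(Z=UX\) and model (2.5) concrete, the toy simulation below generates contaminated data for \(d=2\); the error law (a small rotation about a uniformly random axis), the regression function \(m\) and the noise level are purely illustrative choices and not distributions used in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_small_rotation(scale=0.1):
    # rotation about a uniformly random axis by a small Gaussian angle, via Rodrigues' formula
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = scale * rng.normal()
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)   # element of SO(3)

n = 5
X = rng.normal(size=(n, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)            # points on S^2 (here: uniform)
m = lambda x: np.cos(3.0 * x[..., 2])                    # a toy regression function on S^2
Y = m(X) + 0.1 * rng.normal(size=n)                      # responses from model (2.5)
Z = np.stack([random_small_rotation() @ x for x in X])   # observed contaminated predictors Z = U X
print(np.linalg.norm(Z, axis=1))                         # all ones: rotations keep Z on S^2
```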
Below, we introduce a convolution property which is essential for our methodology. We define the convolution \(g*f\in L^{2}((\mathbb{S}^{d},\nu),\mathbb{C})\) of any two functions \(g\in L^{2}((SO(d+1),\mu),\mathbb{C})\) and \(f\in L^{2}((\mathbb{S}^{d},\nu),\mathbb{C})\) by
\[(g*f)(x)=\int_{SO(d+1)}g(u)f(u^{-1}x)\,d\mu(u),\]
where \(u^{-1}\) is the inverse matrix of \(u\), \(u^{-1}x\) is the matrix multiplication between the matrix \(u^{-1}\) and vector \(x\), and \(\mu\) is the normalized Haar measure on \(SO(d+1)\). We recall that \(\mu\) is the unique Borel probability measure on \(SO(d+1)\) satisfying the left-translation-invariant property \(\mu(A\mathcal{S})=\mu(\mathcal{S})\) for every \(A\in SO(d+1)\) and Borel subset \(\mathcal{S}\subset SO(d+1)\), where \(A\mathcal{S}=\{AS:S\in\mathcal{S}\}\). We define the hyperspherical Fourier transform of \(f\) at degree
and order \(1\leq q\leq N(d,l)\) by
\[\phi_{q}^{l}(f)=\int_{\mathbb{S}^{d}}f(x)\overline{B_{q}^{l}(x)}d\nu(x).\]
The hyperspherical Fourier transform is an analogue of the Euclidean Fourier transform with Euclidean domain, Lebesgue measure and Fourier basis function being replaced by \(\mathbb{S}^{d}\), \(\nu\) and \(B_{q}^{l}\), respectively. We also define a function \(\mathscr{B}_{q}^{l}(\cdot;u):\mathbb{S}^{d}\to\mathbb{C}\) by \(\mathscr{B}_{q}^{l}(x;u)=B_{q}^{l}(ux)\) for \(u\in SO(d+1)\). Since \(\mathscr{B}_{q}^{l}(\cdot;u)\) belongs to \(\mathfrak{B}^{l}(\mathbb{S}^{d})\) (Proposition 4.7 in [17]) and \(\{B_{q}^{l}:1\leq q\leq N(d,l)\}\) is an orthonormal basis of \(\mathfrak{B}^{l}(\mathbb{S}^{d})\), it holds that
\[B_{q}^{l}(ux)=\sum_{r=1}^{N(d,l)}\left\langle\mathscr{B}_{q}^{l}(\cdot;u),B_{r }^{l}\right\rangle_{2}B_{r}^{l}(x). \tag{2.6}\]
Finally, we define
\[\tilde{\phi}_{qr}^{l}(g)=\int_{SO(d+1)}g(u)D_{qr}^{l}(u)\,d\mu(u),\]
where
\[D_{qr}^{l}(u)=\overline{\left\langle\mathscr{B}_{q}^{l}(\cdot;u),B_{r}^{l} \right\rangle_{2}}=\int_{\mathbb{S}^{d}}\overline{B_{q}^{l}(ux)}B_{r}^{l}(x) \,d\nu(x). \tag{2.7}\]
We call \(\tilde{\phi}_{qr}^{l}(g)\) the \((q,r)\)th element of the rotational Fourier transform of \(g\) at degree \(l\). Some practical examples of \(\tilde{\phi}_{qr}^{l}(f_{U})\) and \(D_{qr}^{l}(u)\) are given in the Supplementary Material S.1. Then, the following convolution property holds.
**Proposition 1**.: _Let \(f\in L^{2}((\mathbb{S}^{d},\nu),\mathbb{C})\) and \(g\in L^{2}((SO(d+1),\mu),\mathbb{C})\). Then, \(\phi_{q}^{l}(g*f)=\sum_{r=1}^{N(d,l)}\tilde{\phi}_{qr}^{l}(g)\phi_{r}^{l}(f)\) for all \(l\in\mathbb{N}_{0}\) and \(1\leq q\leq N(d,l)\)._
Proposition 1 is a generalization of Lemma 2.1 in [28] that considered the case where \(d=2\). Now, we apply Proposition 1 to our setting. We let \(f_{U}\) be the density of \(U\) with respect to \(\mu\). We assume that \(f_{U}\) is square integrable. Then, one can show that the density \(f_{Z}\) of \(Z=UX\in\mathbb{S}^{d}\) with respect to \(\nu\) exists and is given by \(f_{Z}=f_{U}*f_{X}\). Hence, the following convolution property follows from Proposition 1:
\[\phi_{q}^{l}(f_{Z})=\sum_{r=1}^{N(d,l)}\tilde{\phi}_{qr}^{l}(f_{U})\phi_{r}^{l }(f_{X}). \tag{2.8}\]
Defining \(\phi^{l}(f)\) by the \(N(d,l)\)-vector whose \(q\)th element equals \(\phi_{q}^{l}(f)\), and \(\tilde{\phi}^{l}(g)\) by the \(N(d,l)\times N(d,l)\) matrix whose \((q,r)\)th element equals \(\tilde{\phi}_{qr}^{l}(g)\), (2.8) can be written as
\(\phi^{l}(f_{Z})=\tilde{\phi}^{l}(f_{U})\phi^{l}(f_{X})\). Throughout this paper, we assume that the matrix \(\tilde{\phi}^{l}(f_{U})\) is invertible. Then, it can be rewritten as
\[\phi^{l}(f_{X})=(\tilde{\phi}^{l}(f_{U}))^{-1}\phi^{l}(f_{Z}). \tag{2.9}\]
The invertibility of \(\tilde{\phi}^{l}(f_{U})\) is assumed in the literature of deconvolution density estimation on \(\mathbb{S}^{1}\) or \(\mathbb{S}^{2}\) (e.g. [18], [28], [40], [41]). In fact, many popular distributions of \(U\) such as the Laplace, Gaussian and von Mises-Fisher distributions on \(SO(d+1)\) satisfy the invertibility. We give more concrete examples in the next section. In the literature of deconvolution density estimation on \(\mathbb{S}^{2}\), it is always assumed that \(f_{U}\) is known so that \(\tilde{\phi}^{l}(f_{U})\) is known. A Euclidean version of the latter assumption is also frequently assumed in the literature of Euclidean measurement error problems (e.g. [57], [19], [20], [21], [14], [3]). Throughout this paper, we also focus on the case where \(f_{U}\) is known, to build up a theoretical foundation in this new problem. This case is already challenging, and starting from the case of a known measurement error distribution has been the usual first step for new measurement error problems in the past. In case \(f_{U}\) is unknown, we may estimate it from additional data as in [18], [16], [34], [35] and [12], or by assuming a parametric distribution for \(f_{U}\) and estimating its parameters without additional data as in [4].
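As a concrete illustration of (2.9), anticipating the Laplace example of Section 3.1 where \(\tilde{\phi}^{l}(f_{U})=(1+\lambda^{2}\cdot l(l+d-1))^{-1}I_{N(d,l)}\), the matrix inversion reduces to a scalar rescaling of each coefficient vector:

\[\phi^{l}(f_{X})=(1+\lambda^{2}\cdot l(l+d-1))\,\phi^{l}(f_{Z}),\qquad l\in\mathbb{N}_{0},\]

so recovering \(f_{X}\) from \(f_{Z}\) amounts to amplifying the degree-\(l\) Fourier coefficients of \(f_{Z}\) by a factor that grows in \(l\); this amplification of high-degree coefficients is what makes the truncation introduced below necessary.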
Now, we introduce our deconvolution estimator of \(f_{X}\). Since \(\{B_{q}^{l}:l\in\mathbb{N}_{0},1\leq q\leq N(d,l)\}\) forms an orthonormal basis of \(L^{2}((\mathbb{S}^{d},\nu),\mathbb{C})\), it holds that
\[f_{X}=\sum_{l=0}^{\infty}\sum_{q=1}^{N(d,l)}\phi_{q}^{l}(f_{X})B_{q}^{l} \tag{2.10}\]
in the \(L^{2}\) sense. The series at (2.10) is called the Fourier-Laplace series of \(f_{X}\). Under certain smoothness conditions on \(f_{X}\), the series converges in the pointwise sense. We introduce such smoothness conditions in the next section. From (2.9) and (2.10), it holds that
\[f_{X}=\sum_{l=0}^{\infty}\sum_{q=1}^{N(d,l)}\left(\sum_{r=1}^{N(d,l)}(\tilde{ \phi}^{l}(f_{U}))_{qr}^{-1}\phi_{r}^{l}(f_{Z})\right)B_{q}^{l}, \tag{2.11}\]
where \((\tilde{\phi}^{l}(f_{U}))_{qr}^{-1}\) is the \((q,r)\)th element of \((\tilde{\phi}^{l}(f_{U}))^{-1}\). Since \(\phi_{r}^{l}(f_{Z})=\mathrm{E}(\overline{B_{r}^{l}(Z)})\) by definition, plugging the sample mean \(n^{-1}\sum_{i=1}^{n}\overline{B_{r}^{l}(Z_{i})}\) in the place of \(\phi_{r}^{l}(f_{Z})\) at (2.11) gives an estimator of \(f_{X}\). However, the estimator having the infinite sum \(\sum_{l=0}^{\infty}\) is subject to a large variability since \((\tilde{\phi}^{l}(f_{U}))_{qr}^{-1}\) tends to infinity as \(l\to\infty\). This tendency is analogous to the phenomenon
in the Euclidean measurement error problems where the reciprocal of the Euclidean Fourier transform tends to infinity in the tails. To overcome this issue, we truncate the infinite sum \(\sum_{l=0}^{\infty}\). Specifically, we let \(0<T_{n}<\infty\) be a truncation level diverging to infinity as \(n\to\infty\). Based on this truncation, we define
\[\hat{f}_{X}(x)=n^{-1}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i})), \tag{2.12}\]
where
\[K_{T_{n}}(x,z)=\sum_{l=0}^{[T_{n}]}\sum_{q=1}^{N(d,l)}\left(\sum_{r=1}^{N(d,l) }(\tilde{\phi}^{l}(f_{U}))_{qr}^{-1}\overline{B_{r}^{l}(z)}\right)B_{q}^{l}(x)\]
for \(z\in\mathbb{S}^{d}\) and \(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))\) is the real part of \(K_{T_{n}}(x,Z_{i})\). We note that \(\hat{f}_{X}\) is the first density estimator in this general setting. [18] and [28] introduced a similar density estimator defined by \(n^{-1}\sum_{i=1}^{n}K_{T_{n}}(x,Z_{i})\) for \(d=1\) and \(d=2\), respectively. However, \(K_{T_{n}}(x,Z_{i})\) is not necessarily real-valued, while \(f_{X}\) is real-valued. Hence, it is natural to take \(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))\) instead of \(K_{T_{n}}(x,Z_{i})\) as in our density estimator \(\hat{f}_{X}\). Taking the real part is also necessary to derive the asymptotic distribution of the estimator in Section 3 and the asymptotic confidence intervals for \(f_{X}\) in Section 4. The proposition below shows that \(\hat{f}_{X}\) is a reasonable density estimator in the sense that it integrates to one. This kind of property has not been noted for \(d=2\) and \(d\geq 4\) in the literature.
**Proposition 2**.: \(\int_{\mathbb{S}^{d}}K_{T_{n}}(x,z)d\nu(x)=1\) _for all \(z\in\mathbb{S}^{d}\), so that \(\int_{\mathbb{S}^{d}}\mathrm{Re}(K_{T_{n}}(x,z))d\nu(x)=1\) for all \(z\in\mathbb{S}^{d}\) and \(\int_{\mathbb{S}^{d}}\hat{f}_{X}(x)d\nu(x)=1\)._
We now introduce our deconvolution estimator of the regression function \(m\). The proposed estimator of \(m(x)=\mathrm{E}(Y|X=x)\) is given by
\[\hat{m}(x)=\hat{f}_{X}(x)^{-1}n^{-1}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{ i}))Y_{i}. \tag{2.13}\]
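The following Python sketch illustrates (2.12) and (2.13) in the simplest case \(d=1\) (the circle), taking \(\nu\) to be the uniform probability measure, \(B_{l}(\theta)=e^{il\theta}\) for \(l\in\mathbb{Z}\) as the orthonormal basis, and the Laplace measurement error of Section 3.1, for which the inverse rotational Fourier transform is the scalar \(1+\lambda^{2}l^{2}\). The sample size, the regression function and the truncation level below are illustrative choices of ours and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, T = 500, 0.5, 6            # sample size, Laplace error scale, truncation level T_n

# Data on the circle: X is a von Mises angle, the error is a wrapped Laplace angle
# (the Laplace distribution on SO(2)), and Z = U X is the rotated observation.
X = rng.vonmises(0.0, 2.0, size=n) % (2 * np.pi)
Z = (X + rng.laplace(0.0, lam, size=n)) % (2 * np.pi)
m_true = lambda t: np.sin(t) + 0.5 * np.cos(2 * t)
Y = m_true(X) + rng.normal(0.0, 0.3, size=n)

def kernel(x, z):
    """K_{T_n}(x, z) = sum_{|l| <= T} (1 + lam^2 l^2) * conj(B_l(z)) * B_l(x) on S^1."""
    ls = np.arange(-T, T + 1)
    mult = 1.0 + lam ** 2 * ls ** 2                     # (tilde_phi^l(f_U))^{-1} for the Laplace error
    return (mult[:, None] * np.exp(1j * np.outer(ls, x - z))).sum(axis=0)

def f_hat(x):                                           # density estimator (2.12)
    return np.mean(kernel(x, Z).real)

def m_hat(x):                                           # regression estimator (2.13)
    return np.mean(kernel(x, Z).real * Y) / f_hat(x)

grid = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
fx = np.array([f_hat(t) for t in grid])
mx = np.array([m_hat(t) for t in grid])
print("integral of f_hat over S^1 (should be 1, cf. Proposition 2):", fx.mean())
print("max abs error of m_hat on the grid:", np.abs(mx - m_true(grid)).max())
```

In practice the truncation level \(T_{n}\) would be chosen by cross-validation, as in Section 5.1.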
We note that \(\hat{m}\) is the first regression estimator for \(d=2\) and \(d\geq 4\). Unlike the analysis of \(\hat{f}_{X}(x)\), the analysis of \(\hat{m}(x)\) requires an additional property on \(\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))|X_{i})\) due to the additional term \(Y_{i}\). To describe this, we define
\[K_{T_{n}}^{*}(x,x^{*})=\sum_{l=0}^{[T_{n}]}\sum_{q=1}^{N(d,l)}\overline{B_{q}^ {l}(x^{*})}B_{q}^{l}(x) \tag{2.14}\]
for \(x^{*}\in\mathbb{S}^{d}\). In view of (2.10) and the fact \(\mathrm{E}(\overline{B^{l}_{q}(X)})=\phi^{l}_{q}(f_{X})\), one may use \(K^{*}_{T_{n}}\) to estimate \(f_{X}\) in case the true values \(\{X_{i}:1\leq i\leq n\}\) are observed. For instance, one may estimate \(f_{X}(x)\) by
\[\hat{f}^{*}_{X}(x)=n^{-1}\sum_{i=1}^{n}\mathrm{Re}(K^{*}_{T_{n}}(x,X_{i})),\]
or simply by \(n^{-1}\sum_{i=1}^{n}K^{*}_{T_{n}}(x,X_{i})\), where the latter is the estimator studied by [29] for the error-free case. One may also estimate \(m(x)\) by
\[\hat{m}^{*}(x)=\hat{f}^{*}_{X}(x)^{-1}n^{-1}\sum_{i=1}^{n}\mathrm{Re}(K^{*}_{T _{n}}(x,X_{i}))Y_{i}. \tag{2.15}\]
The proposition below shows that \(\mathrm{E}(K_{T_{n}}(x,Z)|X)\) equals \(K^{*}_{T_{n}}(x,X)\). This kind of property has not been investigated for \(d=2\) and \(d\geq 4\) in the literature.
**Proposition 3**.: \(\mathrm{E}(K_{T_{n}}(x,Z)|X)=K^{*}_{T_{n}}(x,X)\) _for all \(x\in\mathbb{S}^{d}\), so that \(\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))|X)=\mathrm{Re}(K^{*}_{T_{n}}(x,X))\) for all \(x\in\mathbb{S}^{d}\)._
The above property, which we term the 'hyperspherical unbiased scoring' property, gives that
\[\mathrm{E}\big{(}\hat{m}(x)\hat{f}_{X}(x)\big{|}X_{1},\ldots,X_{n}\big{)}= \mathrm{E}\big{(}\hat{m}^{*}(x)\hat{f}^{*}_{X}(x)\big{|}X_{1}\ldots,X_{n}\big{)}.\]
The above identity shows that \(K_{T_{n}}\) removes the effect of measurement errors in the bias of the numerator of \(\hat{m}(x)\). We note that a Euclidean version of the above property was introduced in [57] and used in [21] for the Euclidean measurement error problems. Such unbiased scoring properties are very important in regression analysis with measurement errors.
## 3 Asymptotic properties
### Smoothness of measurement error distribution
The asymptotic properties of our estimators depend on the smoothness of the measurement error density \(f_{U}\). In the literature of Euclidean measurement error problems, two smoothness scenarios have been considered. They are ordinary-smooth and super-smooth scenarios; see [19], for example. A typical example of ordinary-smooth distributions is the Laplace distribution, and a typical example of super-smooth distributions is the Gaussian distribution. In the literature of deconvolution density estimation on \(\mathbb{S}^{2}\), three smoothness scenarios
have been considered, namely ordinary-smooth, super-smooth and log-super-smooth scenarios. We extend the three scenarios to general \(\mathbb{S}^{d}\). For this, we let \(\|\cdot\|_{\rm op}\) denote the operator norm for complex matrices. For a \(N(d,l)\times N(d,l)\) complex matrix \(A\), it is defined by \(\|A\|_{\rm op}=\sup\{\|Av\|_{\mathbb{C}^{N(d,l)}}:v\in\mathbb{C}^{N(d,l)},\|v\|_ {\mathbb{C}^{N(d,l)}}=1\}\), where \(\|\cdot\|_{\mathbb{C}^{N(d,l)}}\) is the standard complex norm on \(\mathbb{C}^{N(d,l)}\).
* (S1) (Ordinary-smooth scenario of order \(\beta\geq 0\)) There exist constants \(c_{1},c_{2}>0\) such that, for all \(l\in\mathbb{N}\), (i) \(\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{\rm op}\leq c_{1}\cdot l^{\beta}\) and (ii) \(\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{\rm op}\geq c_{2}\cdot l^{\beta}\).
* (S2) (Super-smooth scenario of order \(\beta>0\)) There exist constants \(c_{1},c_{2},\gamma>0\) and \(\alpha\in\mathbb{R}\) such that, for all \(l\in\mathbb{N}\), (i) \(\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{\rm op}\leq c_{1}\cdot l^{\alpha}\cdot\exp(\gamma\cdot l^{\beta})\) and (ii) \(\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{\rm op}\geq c_{2}\cdot l^{\alpha}\cdot\exp(\gamma\cdot l^{\beta})\).
* (S3) (Log-super-smooth scenario of order \(\beta>0\)) There exist constants \(c_{1},c_{2},\gamma>0\) and \(\alpha,\xi_{1},\xi_{2}\in\mathbb{R}\) such that, for all \(l\in\mathbb{N}\), (i) \(\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{\rm op}\leq c_{1}\cdot l^{\alpha}\cdot\exp(\gamma\cdot l^{\beta}(\log l-\xi_{1}))\) and (ii) \(\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{\rm op}\geq c_{2}\cdot l^{\alpha}\cdot\exp(\gamma\cdot l^{\beta}(\log l-\xi_{2}))\).
We note that the conditions (i) in the scenarios (Sj) for \(j\in\{1,2,3\}\) are for deriving the rates of convergence, while the conditions (ii) are used to verify a high-level condition for the asymptotic distributions. Distributions on \(SO(d+1)\) are broadly studied in the literature (e.g. [43], [56], [50], [47], [7]). We provide some examples of distributions satisfying the above scenarios. The ordinary-smooth scenario includes the case where there is no measurement error. In this case, \(P(U=I_{d+1})=1\), which gives \(\tilde{\phi}^{l}(f_{U})=I_{N(d,l)}\), where \(I_{N(d,l)}\) is the \(N(d,l)\times N(d,l)\) identity matrix. Hence, the case belongs to the ordinary-smooth scenario of order \(\beta=0\). The Laplace distribution on \(SO(d+1)\) with parameter \(\lambda>0\) whose \(\tilde{\phi}^{l}(f_{U})\) is defined by \((1+\lambda^{2}\cdot l(l+d-1))^{-1}I_{N(d,l)}\) is another ordinary-smooth distribution. It satisfies (S1) with \(\beta=2\). When \(d=1\), its density is given by \(f_{U}(u)=\pi(\exp(-\varphi_{u}/\lambda)/(1-\exp(-2\pi/\lambda))+\exp(\varphi_{u}/\lambda)/(\exp(2\pi/\lambda)-1))/\lambda\), where \(\varphi_{u}\in[0,2\pi)\) is the angle corresponding to \(u\) as given in (2.4). When \(d=2\), its density is given by \(f_{U}(u)=\lambda^{-2}\pi\cos(a_{\lambda}(\pi-r_{u}))/(\cos(a_{\lambda}\pi)\sin(r_{u}/2))\cdot I(r_{u}>0)\), where \(a_{\lambda}=\sqrt{1/4-\lambda^{-2}}\in\mathbb{C}\) and \(r_{u}=\arccos((\text{Trace}(u)-1)/2)\in[0,\pi]\) (Theorem 3.5 in [28]). Also, the Rosenthal distribution on \(SO(3)\) with parameters \(\theta\in(0,\pi]\) and \(p>0\) whose density is given by \(f_{U}(u)=\sum_{l=0}^{\infty}(2l+1)(\sin((2l+1)\theta/2)/((2l+1)\sin(\theta/2)))^{p}\sum_{q=-l}^{l}D_{qq}^{l}(u)\) has \(\tilde{\phi}^{l}(f_{U})=(\sin((2l+1)\theta/2)/((2l+1)\sin(\theta/2)))^{p}I_{2l+1}\) ([40]), where \(D_{qq}^{l}(u)\) is defined in (2.7). Hence, it satisfies (S1) with \(\beta=p\).
An example of super-smooth distributions is the Gaussian distribution on \(SO(d+1)\) with parameter \(\lambda>0\) whose \(\tilde{\phi}^{l}(f_{U})\) is defined by \(\exp(-\lambda^{2}\cdot l(l+d-1)/2)I_{N(d,l)}\). It satisfies (S2) with \(\beta=2,\alpha=0\) and \(\gamma=\lambda^{2}/2\). When \(d=1\), its density is given by \(f_{U}(u)=\sqrt{2\pi}/\lambda\cdot\sum_{s\in\mathbb{Z}}\exp(-(\varphi_{u}+2\pi s )^{2}/(2\lambda^{2}))\). When \(d=2\), its density is given by \(f_{U}(u)=\sum_{l=0}^{\infty}(2l+1)\exp(-\lambda^{2}\cdot l(l+1)/2)\sum_{q=-l} ^{l}D_{qq}^{l}(u)\).
Now, we consider the log-super-smooth scenario. Using Theorem 3 in [39] or the result of Section 5.3 in [42], one may prove that the von Mises-Fisher distribution on \(SO(d+1)\) with concentration parameter \(\lambda>0\) and mean direction \(A\in SO(d+1)\) is log-super-smooth. Its density is given by \(f_{U}(u)=c(\lambda,A)^{-1}\exp(\lambda\cdot\text{Trace}(A^{-1}u))\), where \(c(\lambda,A)\) is the normalizing constant. When \(d=1\), it satisfies (S3) with \(\beta=1,\alpha=0,\gamma=1,\xi_{1}=1+\log\lambda\) and \(\xi_{2}=1+\log(2\lambda)\). When \(d=2\), it satisfies (S3) with \(\beta=1,\alpha=4,\gamma=1,\xi_{1}=1+\log\lambda\) and \(\xi_{2}=1+\log(3\lambda)\).
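Since the Laplace, Rosenthal and Gaussian distributions above all have \(\tilde{\phi}^{l}(f_{U})=s_{l}\cdot I_{N(d,l)}\), the operator norm \(\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{\rm op}\) is simply \(1/|s_{l}|\), and the scenarios can be visualized by how fast this quantity grows in \(l\). A small illustrative Python sketch (parameter values are ours):

```python
import numpy as np

d, lam = 2, 0.5
l = np.arange(1, 21)
eig = lam ** 2 * l * (l + d - 1)
laplace = 1.0 + eig            # Laplace on SO(d+1): polynomial growth, ordinary smooth with beta = 2
gauss = np.exp(eig / 2.0)      # Gaussian on SO(d+1): exponential growth, super smooth with beta = 2
for ll in (1, 5, 10, 20):
    print(f"l = {ll:2d}:  Laplace {laplace[ll - 1]:10.2f}   Gaussian {gauss[ll - 1]:.3e}")
```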
### Rates of convergence
In this section, we discuss the uniform consistency and \(L^{2}\) error rates of the density estimator \(\hat{f}_{X}\) defined at (2.12), and the \(L^{2}\) error rates of the regression estimator \(\hat{m}\) defined at (2.13). To state the required conditions, we denote the space of \(s\)-times continuously differentiable real-valued functions on \(\mathbb{S}^{d}\) by \(C^{s}(\mathbb{S}^{d})\).
* (A1) For some \(k\in\mathbb{N}\) with \(k>d/4\), (i) \(f_{X}\in C^{2k}(\mathbb{S}^{d})\) and (ii) \(m\in C^{2k}(\mathbb{S}^{d})\).
* (A2) (i) \(f_{X}\) is bounded away from zero on \(\mathbb{S}^{d}\) and (ii) \(\text{E}(Y^{2}|X=\cdot)\) is bounded on \(\mathbb{S}^{d}\).
The condition (A1) is a smoothness condition on \(f_{X}\) and \(m\). Under (A1)-(i), the series at (2.10) converges uniformly absolutely to \(f_{X}\) by Theorem 2 in [37]. The uniform absolute convergence means that the absolute convergence holds uniformly. The condition (A2) is a standard regularity condition in nonparametric estimation. We also consider the following diverging speeds for the smoothing parameter \(T_{n}\):
* (T1) (In the case of (S1)) \(n^{-1/2}T_{n}^{\beta+d}=o(1)\).
* (T2) (In the case of (S2)) \(n^{-1/2}T_{n}^{\alpha+d}\exp(\gamma\cdot T_{n}^{\beta})=o(1)\).
* (T3) (In the case of (S3)) \(n^{-1/2}T_{n}^{\alpha+d}\exp(\gamma\cdot T_{n}^{\beta}(\log T_{n}-\xi_{1}))=o(1)\).
Now, we are ready to state the asymptotic properties. We first introduce the uniform consistency of the density estimator \(\hat{f}_{X}\). This is necessary to obtain the \(L^{2}\) error rates of \(\hat{m}\) and is also important in its own right.
**Proposition 4**.: _Assume that the Fourier-Laplace series at (2.10) converges uniformly to \(f_{X}\). Then, under either of the conditions (S1)-(i)+(T1), (S2)-(i)+(T2) and (S3)-(i)+(T3), it holds that_
\[\sup_{x\in\mathbb{S}^{d}}|\hat{f}_{X}(x)-f_{X}(x)|=o_{p}(1).\]
We note that the series at (2.10) converges uniformly to \(f_{X}\) under (A1)-(i) since the uniform absolute convergence implies the uniform convergence. Another, weaker, sufficient condition is that \(f_{X}\in C^{s,\kappa}(\mathbb{S}^{d})\) for some \(s\geq 0\) and \(\kappa\in(0,1]\) with \(s+\kappa>(d+1)/2-1\), where \(C^{s,\kappa}(\mathbb{S}^{d})\) is the space of real-valued functions whose \(s\)th order partial derivatives are Holder continuous with exponent \(\kappa\) (Theorem 2.36 in [1]). Now, we provide the \(L^{2}\) rates of convergence for \(\hat{f}_{X}\) and \(\hat{m}\).
**Theorem 1**.: _Assume that the condition (A1)-(i) holds. Then,_
1. _Under (S1)-(i) and (T1), it holds that_ \[\int_{\mathbb{S}^{d}}|\hat{f}_{X}(x)-f_{X}(x)|^{2}d\nu(x)=O_{p}(T_{n}^{-4k}+n^ {-1}T_{n}^{2\beta+d}).\] _The same rate holds for_ \(\int_{\mathbb{S}^{d}}|\hat{m}(x)-m(x)|^{2}d\nu(x)\) _under the additional conditions (A1)-(ii) and (A2)._
2. _Under (S2)-(i) and (T2), it holds that_ \[\int_{\mathbb{S}^{d}}|\hat{f}_{X}(x)-f_{X}(x)|^{2}d\nu(x)=O_{p}(T_{n}^{-4k}+n ^{-1}T_{n}^{2\alpha+d}\exp(2\gamma\cdot T_{n}^{\beta})).\] _The same rate holds for_ \(\int_{\mathbb{S}^{d}}|\hat{m}(x)-m(x)|^{2}d\nu(x)\) _under the additional conditions (A1)-(ii) and (A2)._
3. _Under (S3)-(i) and (T3), it holds that_ \[\int_{\mathbb{S}^{d}}|\hat{f}_{X}(x)-f_{X}(x)|^{2}d\nu(x)=O_{p}(T_{n}^{-4k}+n ^{-1}T_{n}^{2\alpha+d}\exp(2\gamma\cdot T_{n}^{\beta}(\log T_{n}-\xi_{1}))).\] _The same rate holds for_ \(\int_{\mathbb{S}^{d}}|\hat{m}(x)-m(x)|^{2}d\nu(x)\) _under the additional conditions (A1)-(ii) and (A2)._
We note that the above \(L^{2}\) error rates converge to zero as \(n\to\infty\). The term \(T_{n}^{-4k}\) in the rates comes from the bias parts of the estimators, and the remaining term in each rate originates from a stochastic part contributing to the variance. We may optimize each \(L^{2}\) error rate by taking a suitable speed of \(T_{n}\to\infty\). Specifically, we consider the following speeds:
* (T1\({}^{\prime}\)) (In the case of (S1)) \(T_{n}\asymp n^{1/(4k+2\beta+d)}\).
* (T2\({}^{\prime}\)) (In the case of (S2)) \(T_{n}=K\cdot(\log n)^{1/\beta}\) for \(0<K<(2\gamma)^{-1/\beta}\).
* (T3\({}^{\prime}\)) (In the case of (S3)) \(T_{n}=K\cdot(\log n/\log\log n)^{1/\beta}\) for \(0<K<(2\gamma/\beta)^{-1/\beta}\).
The speed (T1\({}^{\prime}\)) is optimal in the sense that it balances the asymptotic bias \(T_{n}^{-4k}\) and asymptotic variance \(n^{-1}T_{n}^{2\beta+d}\). In the cases of super-smoothness and log-super-smoothness, however, there exists no such speed that makes the corresponding asymptotic bias and variance be of the same magnitude. This is because \(T_{n}\) also appears in the exponents \(\exp(2\gamma\cdot T_{n}^{\beta})\) and \(\exp(2\gamma\cdot T_{n}^{\beta}(\log T_{n}-\xi_{1}))\), respectively. The choices of \(T_{n}\) given in (T2\({}^{\prime}\)) and (T3\({}^{\prime}\)) have specific constant factors \(K\) with constraints. The upper bounds of \(K\) are actually the thresholds, beyond which \(n^{-1}T_{n}^{2\alpha+d}\exp(2\gamma\cdot T_{n}^{\beta})\) and \(n^{-1}T_{n}^{2\alpha+d}\exp(2\gamma\cdot T_{n}^{\beta}(\log T_{n}-\xi_{1}))\), respectively, diverge to infinity, while they are dominated by \(T_{n}^{-4k}\) for \(K\) smaller than the thresholds. We note that similar constraints have been put on bandwidths in the Euclidean super-smooth scenario; see Theorem 1 in [19], and Theorem 1 and Remark 1 in [21], for example.
**Corollary 1**.: _Assume that the condition (A1)-(i) holds. Then,_
* (a) _Under (S1)-(i) and (T1\({}^{\prime}\)), it holds that_ \[\int_{\mathbb{S}^{d}}|\hat{f}_{X}(x)-f_{X}(x)|^{2}d\nu(x)=O_{p}(n^{-4k/(4k+2\beta+d)}).\] _The same rate holds for_ \(\int_{\mathbb{S}^{d}}|\hat{m}(x)-m(x)|^{2}d\nu(x)\) _under the additional conditions (A1)-(ii) and (A2)._
* (b) _Under (S2)-(i) and (T2\({}^{\prime}\)), it holds that_ \[\int_{\mathbb{S}^{d}}|\hat{f}_{X}(x)-f_{X}(x)|^{2}d\nu(x)=O_{p}((\log n)^{-4k/\beta}).\] _The same rate holds for_ \(\int_{\mathbb{S}^{d}}|\hat{m}(x)-m(x)|^{2}d\nu(x)\) _under the additional conditions (A1)-(ii) and (A2)._
* (c) _Under (S3)-(i) and (T3\({}^{\prime}\)), it holds that_ \[\int_{\mathbb{S}^{d}}|\hat{f}_{X}(x)-f_{X}(x)|^{2}d\nu(x)=O_{p}((\log n/\log\log n)^{-4k/\beta}).\] _The same rate holds for_ \(\int_{\mathbb{S}^{d}}|\hat{m}(x)-m(x)|^{2}d\nu(x)\) _under the additional conditions (A1)-(ii) and (A2)._
It is natural that the rate in (a) of Corollary 1 gets slower as the dimension \(d\) increases, due to the well-known phenomenon called the curse of dimensionality. However, the rates in (b) and (c) of Corollary 1 are independent of \(d\) since the rates are dominated by the asymptotic bias \(T_{n}^{-4k}\), which is independent of \(d\). We note that similar log-type error rates were obtained by [19] and [21] for the Euclidean measurement error problems under the Euclidean super-smooth scenario.
### Asymptotic distributions
In this section, we discuss the asymptotic distributions of our density and regression estimators. Recently, [32] derived some asymptotic distributions for their deconvolution estimators on compact and connected Lie groups. We note that \(\mathbb{S}^{1}\) and \(\mathbb{S}^{3}\) are such Lie groups. To the best of our knowledge, however, no asymptotic distribution has been derived for \(\mathbb{S}^{d}\) with \(d=2\) and \(d\geq 4\) in the literature of measurement error problems.
We first derive the asymptotic distribution of \(\hat{f}_{X}\). Before we state the result, we introduce a high-level condition. In the following high level condition, \(a_{n}\gtrsim b_{n}\) for two positive sequences \(a_{n}\) and \(b_{n}\) means that there exists a constant \(c>0\) such that \(a_{n}\geq c\cdot b_{n}\) for all \(n\). We also define \(a_{n}\lesssim b_{n}\) in the obvious way.
* (B1) (In the case of (S1)) There exists a constant \(0\leq q\leq d\) such that, for each \(x\in\mathbb{S}^{d}\), \(\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})\gtrsim T_{n}^{2\beta+q}\).
* (B2) (In the case of (S2)) There exists a constant \(0\leq q\leq d\) such that, for each \(x\in\mathbb{S}^{d}\) and \(0<\eta<1\), \(\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})\gtrsim T_{n}^{2\alpha+q}\exp(2\gamma(\eta\cdot T_{n})^{\beta})\).
* (B3) (In the case of (S3)) There exist constants \(0\leq q\leq d\) and \(\zeta\in\mathbb{R}\) such that, for each \(x\in\mathbb{S}^{d}\) and \(0<\eta<1\), \(\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})\gtrsim T_{n}^{2\alpha+q}\exp(2\gamma(\eta\cdot T_{n})^{\beta}(\log T_{n}-\zeta))\).
The lower bounds to \(\mathrm{E}\big{(}\mathrm{Re}(K_{T_{n}}(x,Z))^{2}\big{)}\) in the conditions (B1)-(B3) are motivated by the upper bounds to \(\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})\), which are of the magnitude
\[T_{n}^{2\beta+d},\quad T_{n}^{2\alpha+d}\exp(2\gamma\cdot T_{n}^{\beta})\quad \text{or}\quad T_{n}^{2\alpha+d}\exp(2\gamma\cdot T_{n}^{\beta}(\log T_{n}- \zeta))\]
depending on the smoothness scenarios. The verification of (B1)-(B3) with \(q=d\) is particularly important in constructing asymptotic confidence intervals for \(f_{X}\) and \(m\). We verify them for \(d\in\{1,2\}\) and various \(f_{U}\) in the next section. We put the range \(0\leq q\leq d\) in (B1)-(B3) to give flexibility. We also give more flexibility in choosing \(T_{n}\) instead of choosing the specific ones in (T1\({}^{\prime}\))-(T3\({}^{\prime}\)). The following flexible speeds cover the speeds in (T1\({}^{\prime}\))-(T3\({}^{\prime}\)).
* (T1\({}^{\prime\prime}\)) \(T_{n}\asymp n^{p}\) for some \(0<p<1/(2d-q)\), where \(q\) is the constant in (B1).
* (T2\({}^{\prime\prime}\)) \(T_{n}\lesssim(\log n)^{1/\beta}\) for \(\beta\) in (S2).
* (T3\({}^{\prime\prime}\)) \(T_{n}\lesssim(\log n/\log\log n)^{1/\beta}\) for \(\beta\) in (S3).
**Theorem 2**.: _Assume that the Fourier-Laplace series at (2.10) converges pointwise to \(f_{X}\). Then, under either of the conditions (S1)-(i)+(T1\({}^{\prime\prime}\))+(B1), (S2)-(i)+(T2\({}^{\prime\prime}\))+(B2) and (S3)-(i)+(T3\({}^{\prime\prime}\))+(B3), it holds that, for all \(x\in\mathbb{S}^{d}\),_
\[\sqrt{n}\cdot\frac{\hat{f}_{X}(x)-f_{X}(x)-\big{(}\mathrm{E}(\mathrm{Re}(K_{T_ {n}}(x,Z)))-f_{X}(x)\big{)}}{\sqrt{\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z)))} }\overset{d}{\longrightarrow}N(0,1).\]
We note that the condition on the Fourier-Laplace series in Theorem 2 is weaker than the corresponding condition in Proposition 4. Now, we investigate the asymptotic distribution of \(\hat{m}\) in the ordinary-smooth scenario. Deriving it for the super-smooth and log-super-smooth scenarios has a technical issue. The issue is that
\[\sqrt{n}\cdot\frac{\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))\cdot(\hat{f }_{X}(x)-f_{X}(x))}{\sqrt{\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})}}=o_{ p}(1) \tag{3.1}\]
does not hold in the super-smooth and log-super-smooth scenarios. (3.1) is an important part in the proof; see Remark 1 immediately after the proof of Theorem 3 for more details. For the asymptotic distribution of \(\hat{m}\) in the ordinary-smooth scenario, we make an additional condition.
* (B4) \(\mathrm{E}(|Y|^{2+\delta}|X=\cdot)\) is bounded on \(\mathbb{S}^{d}\) for some \(\delta>0\) and \(\mathrm{E}(\epsilon^{2}|X=\cdot)\) is bounded away from zero on \(\mathbb{S}^{d}\).
The condition (B4) is a standard regularity condition in nonparametric regression. We also consider a new flexible range on \(T_{n}\) for the ordinary smooth scenario. The following range based on the constants \(\beta\) in (S1) and \(k\) in (A1) covers the speed in (T1\({}^{\prime}\)).
* (T1\({}^{\prime\prime\prime}\)) \(T_{n}\asymp n^{p}\) for some \(1/(2\beta+d+8k)\leq p<1/(2\beta+2d)\).
We note that the range of \(p\) in (T1\({}^{\prime\prime\prime}\)) is valid since \(k\) in (A1) satisfies \(k>d/4\). The upper bound \(1/(2\beta+2d)\) in the range is required to make \(T_{n}\) satisfy (T1). To state the next theorem, we denote by \(\Delta_{\mathbb{S}^{d}}\) the Laplace-Beltrami operator associated with \(\mathbb{S}^{d}\) (a second-order differential operator acting on twice continuously differentiable functions on \(\mathbb{S}^{d}\)), and by \(\Delta_{\mathbb{S}^{d}}^{s}\) the composition of \(\Delta_{\mathbb{S}^{d}}\) with itself \(s\) times. We note that, if a function \(f:\mathbb{S}^{d}\to\mathbb{R}\) is \(2s\)-times continuously differentiable on \(\mathbb{S}^{d}\), then the function \(\Delta_{\mathbb{S}^{d}}^{s}(f):\mathbb{S}^{d}\to\mathbb{R}\) is well defined and is continuous on \(\mathbb{S}^{d}\).
**Theorem 3**.: _Assume that the conditions (S1)-(i), (A1), (A2)-(i), (B1) with \(q=d\), (B4) and (T1\({}^{\prime\prime\prime}\)) hold, and that the Fourier-Laplace series of \(\Delta_{\mathbb{S}^{d}}^{k}(f_{X})\) and of \(\Delta_{\mathbb{S}^{d}}^{k}(m\cdot f_{X})\) converge absolutely on \(\mathbb{S}^{d}\). Then, it holds that, for all \(x\in\mathbb{S}^{d}\),_
\[\sqrt{n}\cdot\frac{\hat{m}(x)-m(x)-\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m (x)))/f_{X}(x)}{\sqrt{\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))}/f_{X} (x)}\stackrel{{ d}}{{\longrightarrow}}N(0,1).\]
Theorem 3 also holds for general \(q\) in (B1) with more complex versions of (A1) and (T1\({}^{\prime\prime\prime}\)) although we only state it with \(q=d\) for simplicity. Regarding the condition on the absolute convergence in Theorem 3, we recall that, if a function \(f:\mathbb{S}^{d}\to\mathbb{R}\) is \(2s\)-times continuously differentiable for \(s>d/4\), then its Fourier-Laplace series is absolutely convergent. However, for certain \(d\), much weaker sufficient conditions exist. For example, the Fourier-Laplace series of a function on \(\mathbb{S}^{1}\) is absolutely convergent if the function is Holder continuous with exponent greater than \(1/2\) or if the function is of bounded variation and Holder continuous with positive exponent; see [36].
## 4 Asymptotic confidence intervals
In this section, we verify the high-level conditions (B1)-(B3) with \(q=d\) for certain cases and provide two types of asymptotic confidence intervals for both \(f_{X}\) and \(m\). One type is based on the asymptotic normality given in Theorems 2 and 3, and the other is based on empirical
likelihoods. For the first type, we estimate the biases \(\mathrm{E}\big{(}\mathrm{Re}(K_{T_{n}}(x,Z))\big{)}-f_{X}(x)\) in the numerator in Theorem 2 and \(\mathrm{E}\big{(}\mathrm{Re}\big{(}K_{T_{n}}(x,Z)\big{)}\big{(}Y-m(x)\big{)}\big{)}/f_{X}(x)\) in the numerator in Theorem 3. We estimate them simply by zero. These are natural choices since plugging \(\hat{f}_{X}(x)=n^{-1}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i}))\) into both \(\mathrm{E}\big{(}\mathrm{Re}(K_{T_{n}}(x,Z))\big{)}\) and \(f_{X}(x)\) gives a zero estimate for \(\mathrm{E}\big{(}\mathrm{Re}(K_{T_{n}}(x,Z))\big{)}-f_{X}(x)\), and plugging \(0=n^{-1}\sum_{i=1}^{n}\mathrm{Re}\big{(}K_{T_{n}}(x,Z_{i})\big{)}\big{(}Y_{i}-\hat{m}(x)\big{)}\) into \(\mathrm{E}\big{(}\mathrm{Re}\big{(}K_{T_{n}}(x,Z)\big{)}\big{(}Y-m(x)\big{)}\big{)}\) gives a zero estimate for \(\mathrm{E}\big{(}\mathrm{Re}\big{(}K_{T_{n}}(x,Z)\big{)}\big{(}Y-m(x)\big{)}\big{)}/f_{X}(x)\).
To justify the zero estimates of the biases in the construction of the first type asymptotic confidence intervals, it is essential to verify that the variances dominate the squared biases. In the case of \(\hat{f}_{X}\), this amounts to showing
\[\sqrt{n}\cdot\frac{\mathrm{E}\big{(}\mathrm{Re}(K_{T_{n}}(x,Z))\big{)}-f_{X}( x)}{\sqrt{\mathrm{E}\big{(}\big{(}\mathrm{Re}(K_{T_{n}}(x,Z))\big{)}^{2}\big{)}}}=o(1) \tag{4.1}\]
since \(\mathrm{E}\big{(}\big{(}\mathrm{Re}(K_{T_{n}}(x,Z))\big{)}^{2}\big{)}\) determines the magnitude of \(\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z)))\); see the proof of Theorem 4. For the verification of (4.1), we need sharp lower bounds to the denominator. It can be accomplished by verifying (B1)-(B3) with \(q=d\).
### Verification of (B1)-(B3)
In this section, we verify that \(\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})\) achieves the lower bounds given in (B1)-(B3) with \(q=d\). In Euclidean nonparametric statistics, a common approach to obtaining a lower bound for such a quantity is to find an asymptotic leading term by applying a Taylor expansion. However, since there exists no suitable Taylor expansion for our problem, it is not trivial to verify (B1)-(B3), not only with the maximal \(q=d\) but also with the minimal \(q=0\).
We first show that \(\mathrm{E}(|K_{T_{n}}(x,Z)|^{2})\) achieves the lower bounds in (B1)-(B3) with \(q=d\), and then consider the lower bounds to \(\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z))^{2}))\). For this, we need an additional condition. We let \(\sigma_{\min}((\tilde{\phi}^{l}(f_{U}))^{-1})\) denote the minimum singular value of \((\tilde{\phi}^{l}(f_{U}))^{-1}\).
* (C) There exists a positive constant \(c\) such that, for all \(l\in\mathbb{N}_{0}\), \(\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{\mathrm{op}}\leq c\cdot\sigma_{\min}((\tilde{\phi}^{l}(f_{U}))^{-1})\).
One can easily check that the condition (C) holds with \(c=1\) for any \(f_{U}\) on \(\mathbb{S}^{1}\). It also holds with \(c=1\) for any \(f_{U}\) satisfying \(\tilde{\phi}^{l}(f_{U})=s_{l}\cdot I_{N(d,l)}\) for some \(0\neq s_{l}\in\mathbb{R}\). Examples
of such distributions for \(d\geq 2\) include the Laplace, Rosenthal, Gaussian and error-free distributions that we introduced immediately after (S1)-(S3).
**Lemma 1**.: _Assume that the conditions (A2)-(i) and (C) hold. Then, for each \(j\in\{1,2,3\}\), under (Sj)-(ii), \(\mathrm{E}(|K_{T_{n}}(x,Z)|^{2})\) attains the lower bound given in (Bj) with \(q=d\)._
Lemma 1 gives lower bounds to \(\mathrm{E}(|K_{T_{n}}(x,Z)|^{2})\). However, since
\[\mathrm{E}(|K_{T_{n}}(x,Z)|^{2})=\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2}) +\mathrm{E}((\mathrm{Im}(K_{T_{n}}(x,Z)))^{2})\geq\mathrm{E}((\mathrm{Re}(K_{ T_{n}}(x,Z))^{2})),\]
where \(\mathrm{Im}(K_{T_{n}}(x,Z))\) is the imaginary part of \(K_{T_{n}}(x,Z)\), Lemma 1 does not provide the lower bounds to \(\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z))^{2}))\). This causes another difficulty. Our original attempt for this issue was to show that \(\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})\geq\mathrm{E}((\mathrm{Im}(K_{ T_{n}}(x,Z)))^{2})\) always holds, but it was not successful due to computational difficulties. Instead, we managed to prove that
\[\int_{\mathbb{S}^{d}}\mathrm{Re}(K_{T_{n}}(x,z))^{2}d\nu(z)\geq\int_{\mathbb{ S}^{d}}\mathrm{Im}(K_{T_{n}}(x,z))^{2}d\nu(z) \tag{4.2}\]
for certain cases. Since \(\int_{\mathbb{S}^{d}}|K_{T_{n}}(x,z)|^{2}d\nu(z)\) also achieves the same lower bounds as those to \(\mathrm{E}(|K_{T_{n}}(x,Z)|^{2})\) as demonstrated in the proof of Lemma 1, proving (4.2) shows that \(\int_{\mathbb{S}^{d}}\mathrm{Re}(K_{T_{n}}(x,z))^{2}d\nu(z)\) also achieves the same lower bounds. This, combined with the assumption \(\inf_{z\in\mathbb{S}^{d}}f_{Z}(z)>0\), gives the desired lower bounds to \(\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})\). We note that the latter assumption on the density \(f_{Z}\) of \(Z\) is implied by (A2)-(i). The cases for which (4.2) holds are the following:
* (G1) \(d=1\).
* (G2) \(d=2\) and \(\tilde{\phi}^{l}(f_{U})=s_{l}\cdot I_{N(d,l)}\) for some \(0\neq s_{l}\in\mathbb{R}\) and for all \(l\in\mathbb{N}_{0}\).
Verification of (4.2) for the cases (G1)-(G2) requires complex computation based on the theory of spherical harmonics. We note that (G1)-(G2) are practically the most important cases. They cover circular data and spherical data. The distributions of \(U\) satisfying (G2) include, but are not limited to, the Laplace, Rosenthal, Gaussian and error-free distributions on \(SO(3)\). We also note that (G1)-(G2) satisfy the condition (C). Hence, (B1)-(B3) with \(q=d\) follow for the cases (G1)-(G2) under the condition (A2)-(i).
**Lemma 2**.: _Assume that the conditions (A2)-(i) and either (G1) or (G2) hold. Then, for each \(j\in\{1,2,3\}\), under (Sj)-(ii), (Bj) holds with \(q=d\)._
Although we verify (B1)-(B3) with \(q=d\) only for the above cases due to computational difficulties, we strongly believe that they hold for general \(\mathbb{S}^{d}\) and \(f_{U}\). For the verification, one could apply a special computation technique, or an argument avoiding direct computation of which we are currently not aware. We leave this as an open problem.
### Confidence intervals based on asymptotic normality
In this section, we construct asymptotic confidence intervals based on Theorems 2 and 3 for the ordinary-smooth scenario under the condition (B1) with \(q=d\). We only treat the ordinary-smooth scenario since (4.1) does not hold with the speeds of \(T_{n}\) in (T2\({}^{\prime\prime}\)) and (T3\({}^{\prime\prime}\)) in the super-smooth and log-super-smooth scenarios; see Remark 1 in the Supplementary Material for a related discussion. Even in the Euclidean measurement error problems, studying the Euclidean ordinary-smooth scenario is more common than studying the Euclidean super-smooth scenario due to many technical difficulties in the Euclidean super-smooth scenario.
For the construction of the asymptotic confidence intervals, we need to estimate the unknown biases and variances in Theorems 2-3. The biases are \(\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z)))-f_{X}(x)\) and \(\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))/f_{X}(x)\), and the variances are \(s_{1}^{2}(x)\) and \(s_{2}^{2}(x)\), where
\[s_{1}(x) =\big{(}\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z)))\big{)}^{1/2},\] \[s_{2}(x) =\big{(}\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))\big{)}^ {1/2}/f_{X}(x).\]
We estimate the biases by zero as we demonstrated in the beginning of Section 4. For the
variances, we use the following natural estimators:
\[\hat{s}_{1}^{2}(x)=n^{-1}\sum_{i=1}^{n}(\operatorname{Re}(K_{T_{n}}(x,Z_{i})))^{2}-(\hat{f}_{X}(x))^{2},\]
\[\begin{split}\hat{s}_{2}^{2}(x)&=\hat{f}_{X}^{-2}(x)\cdot\left(n^{-1}\sum_{i=1}^{n}(\operatorname{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}-\hat{m}(x)))^{2}-\left(n^{-1}\sum_{i=1}^{n}\operatorname{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}-\hat{m}(x))\right)^{2}\right)\\ &=\hat{f}_{X}^{-2}(x)\cdot n^{-1}\sum_{i=1}^{n}(\operatorname{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}-\hat{m}(x)))^{2}.\end{split}\]
**Theorem 4**.: _Assume that the conditions (S1)-(i), (A1)-(i), (T1\({}^{\prime}\)) and (B1) with \(q=d\) hold, and that the Fourier-Laplace series of \(\Delta^{k}_{\mathbb{S}^{d}}(f_{X})\) converges absolutely on \(\mathbb{S}^{d}\). Then, it holds that, for all \(x\in\mathbb{S}^{d}\),_
\[\sqrt{n}\cdot\frac{\hat{f}_{X}(x)-f_{X}(x)}{\hat{s}_{1}(x)} \overset{d}{\longrightarrow}N(0,1).\]
_Hence, a \((1-\alpha)\times 100\%\) asymptotic confidence interval for \(f_{X}(x)\) is given by_
\[\left(\hat{f}_{X}(x)-z_{\alpha/2}\frac{\hat{s}_{1}(x)}{\sqrt{n}}, \hat{f}_{X}(x)+z_{\alpha/2}\frac{\hat{s}_{1}(x)}{\sqrt{n}}\right).\]
**Theorem 5**.: _Assume that the conditions (S1)-(i), (A1), (A2)-(i), (T1\({}^{\prime}\)), (B4) and (B1) with \(q=d\) hold, and that the Fourier-Laplace series of \(\Delta^{k}_{\mathbb{S}^{d}}(f_{X})\) and of \(\Delta^{k}_{\mathbb{S}^{d}}(m\cdot f_{X})\) converge absolutely on \(\mathbb{S}^{d}\). Then, it holds that, for all \(x\in\mathbb{S}^{d}\),_
\[\sqrt{n}\cdot\frac{\hat{m}(x)-m(x)}{\hat{s}_{2}(x)}\overset{d}{ \longrightarrow}N(0,1).\]
_Hence, a \((1-\alpha)\times 100\%\) asymptotic confidence interval for \(m(x)\) is given by_
\[\left(\hat{m}(x)-z_{\alpha/2}\frac{\hat{s}_{2}(x)}{\sqrt{n}}, \hat{m}(x)+z_{\alpha/2}\frac{\hat{s}_{2}(x)}{\sqrt{n}}\right).\]
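For illustration, the following Python sketch computes \(\hat{s}_{1}(x)\), \(\hat{s}_{2}(x)\) and the two intervals of Theorems 4 and 5 at a single point, reusing the circle setup of the earlier sketch (\(d=1\), wrapped-Laplace measurement error, illustrative regression function); all numerical choices are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, T = 500, 0.5, 6
z_alpha = 1.959963984540054                 # standard normal quantile z_{alpha/2} for alpha = 0.05

X = rng.vonmises(0.0, 2.0, size=n) % (2 * np.pi)
Z = (X + rng.laplace(0.0, lam, size=n)) % (2 * np.pi)
Y = np.sin(X) + 0.5 * np.cos(2 * X) + rng.normal(0.0, 0.3, size=n)

def kernel(x, z):
    ls = np.arange(-T, T + 1)
    mult = 1.0 + lam ** 2 * ls ** 2         # (tilde_phi^l(f_U))^{-1} for the Laplace error, d = 1
    return (mult[:, None] * np.exp(1j * np.outer(ls, x - z))).sum(axis=0)

x0 = np.pi / 3                              # point at which the intervals are computed
K = kernel(x0, Z).real                      # Re(K_{T_n}(x0, Z_i)), i = 1, ..., n
f_hat = K.mean()                            # hat f_X(x0)
m_hat = (K * Y).mean() / f_hat              # hat m(x0)

s1 = np.sqrt(np.mean(K ** 2) - f_hat ** 2)              # hat s_1(x0)
s2 = np.sqrt(np.mean((K * (Y - m_hat)) ** 2)) / f_hat   # hat s_2(x0)

half1, half2 = z_alpha * s1 / np.sqrt(n), z_alpha * s2 / np.sqrt(n)
print("95% CI for f_X(x0):", (f_hat - half1, f_hat + half1))
print("95% CI for m(x0):  ", (m_hat - half2, m_hat + half2))
```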
### Confidence intervals based on empirical likelihoods
Asymptotic confidence regions based on empirical likelihoods, called empirical likelihood confidence regions, are useful alternatives to those based on asymptotic normality. Empirical likelihood confidence regions have the advantage that their shape is determined by the data, they are invariant under transformations, and they often do not require the estimation of
the variance. For an introduction to empirical likelihood methods, we refer to [48]. A broad review and a general theory for empirical likelihood methods can be found in [9] and [30], respectively. To construct the empirical likelihood confidence regions for \(f_{X}\) and \(m\), we define \(F_{f_{X}}(Z_{i},\theta;x)=\operatorname{Re}(K_{T_{n}}(x,Z_{i}))-\theta\) and \(F_{m}(Z_{i},Y_{i},\theta;x)=\operatorname{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}-\theta)\) for \(\theta\in\mathbb{R}\) and \(x\in\mathbb{S}^{d}\). We also define the corresponding empirical likelihood ratio functions at \(x\) by
\[\operatorname{EL}_{f_{X}}(\theta;x) =\max\left\{\prod_{i=1}^{n}(nw_{i}):w_{i}>0,\sum_{i=1}^{n}w_{i}=1, \sum_{i=1}^{n}w_{i}F_{f_{X}}(Z_{i},\theta;x)=0\right\},\] \[\operatorname{EL}_{m}(\theta;x) =\max\left\{\prod_{i=1}^{n}(nw_{i}):w_{i}>0,\sum_{i=1}^{n}w_{i}=1,\sum_{i=1}^{n}w_{i}F_{m}(Z_{i},Y_{i},\theta;x)=0\right\}.\]
Here, we define the maximum of the empty set to be zero. Then, we define the respective empirical likelihood confidence regions by \(\{\theta\in\mathbb{R}:\operatorname{EL}_{f_{X}}(\theta;x)\geq c_{f_{X}}\}\) and \(\{\theta\in\mathbb{R}:\operatorname{EL}_{m}(\theta;x)\geq c_{m}\}\) for some positive constants \(c_{f_{X}}\) and \(c_{m}\). To determine the constants, we provide the asymptotic distributions of the empirical likelihood ratio functions. For this, we take the following conditions.
* (E1) \(P(\operatorname{EL}_{f_{X}}(f_{X}(x);x)>0)\to 1\) for each \(x\in\mathbb{S}^{d}\).
* (E2) \(P(\operatorname{EL}_{m}(m(x);x)>0)\to 1\) for each \(x\in\mathbb{S}^{d}\).
The conditions (E1)-(E2) are basic in the empirical likelihood technique. We note that \(\operatorname{EL}_{f_{X}}(f_{X}(x);x)>0\) is satisfied as long as there are at least two data points \(Z_{i}\) and \(Z_{j}\) such that \(F_{f_{X}}(Z_{i},f_{X}(x);x)>0\) and \(F_{f_{X}}(Z_{j},f_{X}(x);x)<0\), and \(\operatorname{EL}_{m}(m(x);x)>0\) is satisfied as long as there are at least two data points \((Z_{i},Y_{i})\) and \((Z_{j},Y_{j})\) such that \(F_{m}(Z_{i},Y_{i},m(x);x)>0\) and \(F_{m}(Z_{j},Y_{j},m(x);x)<0\). We also treat the ordinary-smooth scenario only, since technical difficulties similar to those described in the previous section arise in the other scenarios. Below, \(\chi^{2}_{\alpha}(1)\) denotes the \((1-\alpha)\) quantile of the chi-square distribution \(\chi^{2}(1)\) with one degree of freedom.
**Theorem 6**.: _Assume that the conditions (S1)-(i), (A1)-(i), (T1\({}^{\prime}\)), (E1) and (B1) with \(q=d\) hold, and that the Fourier-Laplace series of \(\Delta^{k}_{\mathbb{S}^{d}}(f_{X})\) converges absolutely on \(\mathbb{S}^{d}\). Then, it holds that, for all \(x\in\mathbb{S}^{d}\),_
\[-2\log\operatorname{EL}_{f_{X}}(f_{X}(x);x)\overset{d}{\longrightarrow}\chi ^{2}(1).\]
_Hence, a \((1-\alpha)\times 100\%\) asymptotic confidence region for \(f_{X}(x)\) is given by \(\{\theta\in\mathbb{R}:-2\log\operatorname{EL}_{f_{X}}(\theta;x)\leq\chi^{2}_ {\alpha}(1)\}=\{\theta\in\mathbb{R}:\operatorname{EL}_{f_{X}}(\theta;x)\geq \exp(-\chi^{2}_{\alpha}(1)/2)\}\)._
**Theorem 7**.: _Assume that the conditions (S1)-(i), (A1), (A2)-(i), (T1\({}^{\prime}\)), (B4), (E2) and (B1) with \(q=d\) hold, and that the Fourier-Laplace series of \(\Delta^{k}_{\mathbb{S}^{d}}(f_{X})\) and of \(\Delta^{k}_{\mathbb{S}^{d}}(m\cdot f_{X})\) converge absolutely on \(\mathbb{S}^{d}\). Then, it holds that, for all \(x\in\mathbb{S}^{d}\),_
\[-2\log\operatorname{EL}_{m}(m(x);x)\stackrel{{ d}}{{ \longrightarrow}}\chi^{2}(1).\]
_Hence, a \((1-\alpha)\times 100\%\) asymptotic confidence region for \(m(x)\) is given by \(\{\theta\in\mathbb{R}:-2\log\operatorname{EL}_{m}(\theta;x)\leq\chi_{\alpha}^{ 2}(1)\}=\{\theta\in\mathbb{R}:\operatorname{EL}_{m}(\theta;x)\geq\exp(-\chi_ {\alpha}^{2}(1)/2)\}\)._
We note that the asymptotic confidence regions in Theorems 6 and 7 are in fact intervals. This is because \(t\theta_{1}+(1-t)\theta_{2}\) for \(0<t<1\) belongs to the asymptotic confidence regions whenever \(\theta_{1}\) and \(\theta_{2}\) belong to those regions. However, they are not necessarily symmetric about \(\hat{f}_{X}(x)\) or \(\hat{m}(x)\). To implement the asymptotic confidence regions, we need to compute \(\operatorname{EL}_{f_{X}}(\theta;x)\) and \(\operatorname{EL}_{m}(\theta;x)\). The Lagrange multiplier technique gives that the unique maximizing weights \(w_{i}\) are \(1/(n(1+\lambda_{f_{X}}F_{f_{X}}(Z_{i},\theta;x)))\) and \(1/(n(1+\lambda_{m}F_{m}(Z_{i},Y_{i},\theta;x)))\) for \(\operatorname{EL}_{f_{X}}(\theta;x)\) and \(\operatorname{EL}_{m}(\theta;x)\), respectively, where \(\lambda_{f_{X}}\in\mathbb{R}\) and \(\lambda_{m}\in\mathbb{R}\) are the solutions of
\[\sum_{i=1}^{n}\frac{F_{f_{X}}(Z_{i},\theta;x)}{1+\lambda_{f_{X}}F_{f_{X}}(Z_{ i},\theta;x)}=0\quad\text{and}\quad\sum_{i=1}^{n}\frac{F_{m}(Z_{i},Y_{i},\theta;x)} {1+\lambda_{m}F_{m}(Z_{i},Y_{i},\theta;x)}=0.\]
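Computationally, the profiled Lagrange multiplier is the root of a strictly decreasing function on the interval that keeps all weights positive, so it can be found by standard one-dimensional root-finding. The following Python sketch computes \(-2\log\operatorname{EL}\) for a generic estimating function and checks whether a candidate value lies in the \(95\%\) region; the array \(K\) below is only a placeholder standing in for the kernel evaluations \(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))\) (for \(m\) one would use \(F_{i}=\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}-\theta)\) instead).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_EL(F):
    """-2 log EL for the estimating equation sum_i w_i F_i = 0, given F_i = F(Z_i, theta; x)."""
    if F.min() >= 0.0 or F.max() <= 0.0:
        return np.inf                        # EL(theta; x) = 0 when zero is outside the range of the F_i
    eps = 1e-10
    lo, hi = -1.0 / F.max() + eps, -1.0 / F.min() - eps   # keep all 1 + lambda * F_i > 0
    g = lambda lam: np.sum(F / (1.0 + lam * F))
    lam = brentq(g, lo, hi)                  # g is strictly decreasing, so the root is unique
    return 2.0 * np.sum(np.log1p(lam * F))   # -2 log prod(n w_i) with w_i = 1 / (n (1 + lambda F_i))

# Example: is theta inside the 95% EL confidence region for f_X(x)?
K = np.random.default_rng(2).normal(1.0, 0.8, size=300)   # placeholder for Re(K_{T_n}(x, Z_i))
theta = 1.05
print(neg2_log_EL(K - theta) <= chi2.ppf(0.95, df=1))
```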
## 5 Finite sample performance
### Simulation study
In this section, we show the results of two simulation studies. We conducted regression analysis on \(\mathbb{S}^{2}\) with measurement errors since this problem is practically important and it is our main interest. In the first simulation study, we checked the estimation performance of our regression estimator \(\hat{m}\). Since there exists no other method designed for this problem, we compared \(\hat{m}\) with a regression estimator designed for the error-free case. In particular, we took the naive regression estimator \(\hat{m}^{\text{naive}}\) defined as (2.15) with \(X_{i}\) in (2.15) being replaced by \(Z_{i}\), to see the effect of using \(K_{T_{n}}\) instead of \(K_{T_{n}}^{*}\). We recall that \(K_{T_{n}}^{*}\) is introduced by [29] for the error-free case. In the second simulation study, we compared the two types of asymptotic confidence intervals for \(m\) that we constructed in Theorems 5 and 7, namely the confidence interval based on the asymptotic normality (AN) and the confidence interval
based on the empirical likelihood (EL), respectively. We recall that they are all currently available confidence intervals for this problem.
For both simulation studies, we generated \(X\) from the von Mises-Fisher distribution on \(\mathbb{S}^{2}\) with concentration parameter \(0.1\) and mean direction \((1,1,1)^{\top}/\sqrt{3}\). As for the distribution of \(U\) on \(SO(3)\), we took the Laplace distribution with \(\lambda=0.5\) for the ordinary-smooth scenario (S1), the Gaussian distribution with \(\lambda=0.5\) for the super-smooth scenario (S2) and the von Mises-Fisher distribution with \(\lambda=2\) and \(A=I_{3}\) for the log-super-smooth scenario (S3). The definitions of the Laplace, Gaussian and von Mises-Fisher distributions on \(SO(3)\) are given in Section 3.1. We generated \(Y\) from the model
\[Y=(\cos\varphi_{X}+\sin\varphi_{X})\sin\theta_{X}+\cos\theta_{X}+\epsilon,\]
where \(\varphi_{X}\in[0,2\pi)\) and \(\theta_{X}\in[0,\pi)\) are the angles satisfying
\[X=(\cos\varphi_{X}\sin\theta_{X},\sin\varphi_{X}\sin\theta_{X},\cos\theta_{X}) ^{\top},\]
and \(\epsilon\) is a normal random variable with mean zero and standard deviation \(0.5\). We chose \(T_{n}\) based on a 5-fold cross-validation and repeatedly generated \(\{(Y_{i},U_{i}X_{i}):1\leq i\leq n\}\) with \(n=250\) and \(500\), repeating the procedure \(R=200\) times.
In the first simulation study, we compared the integrated squared bias (ISB), integrated variance (IV) and integrated mean squared error (IMSE) defined by
\[\begin{split}\text{ISB}&=\int_{\mathbb{S}^{2}} \left(R^{-1}\sum_{r=1}^{R}\tilde{m}^{(r)}(x)-m(x)\right)^{2}d\nu(x),\\ \text{IV}&=R^{-1}\sum_{r=1}^{R}\int_{\mathbb{S}^{2} }\left(R^{-1}\sum_{s=1}^{R}\tilde{m}^{(s)}(x)-\tilde{m}^{(r)}(x)\right)^{2}d \nu(x),\\ \text{IMSE}&=\text{ISB+IV}=R^{-1}\sum_{r=1}^{R} \int_{\mathbb{S}^{2}}(\tilde{m}^{(r)}(x)-m(x))^{2}d\nu(x),\end{split} \tag{5.1}\]
where \(\tilde{m}^{(r)}(x)\) is either \(\hat{m}(x)\) or \(\hat{m}^{\text{naive}}(x)\) obtained from the sample in the \(r\)th repeat for \(1\leq r\leq R\). In the second simulation study, we computed the coverage rate \(C_{1-\alpha}(x)\) and average length \(L_{1-\alpha}(x)\) of \(R\) confidence intervals of level \((1-\alpha)\times 100\%\) for each \(x\in\mathcal{G}\) and \(\alpha\in\{0.05,0.1\}\), where \(\mathcal{G}\) is a dense grid of \(\mathbb{S}^{2}\). We then compared \(|\mathcal{G}|^{-1}\sum_{x\in\mathcal{G}}C_{1-\alpha}(x)\) and \(|\mathcal{G}|^{-1}\sum_{x\in\mathcal{G}}L_{1-\alpha}(x)\), where \(|\mathcal{G}|\) denotes the cardinality of \(\mathcal{G}\).
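The criteria in (5.1) are straightforward to compute once the repeated estimates are available on a grid. The Python sketch below evaluates ISB, IV and IMSE by quadrature over a latitude-longitude grid of \(\mathbb{S}^{2}\) (taking \(\nu\) to be the uniform probability measure); the array of estimates is a synthetic placeholder standing in for the \(R\) estimates \(\tilde{m}^{(r)}\), while the true regression function is the one of the simulation model above.

```python
import numpy as np

# Latitude-longitude grid on S^2 and quadrature weights for the uniform probability measure:
# integral of g d(nu) = (1 / (4 pi)) * int_0^pi int_0^{2 pi} g(theta, phi) sin(theta) d(phi) d(theta).
n_th, n_ph = 60, 120
theta = (np.arange(n_th) + 0.5) * np.pi / n_th
phi = (np.arange(n_ph) + 0.5) * 2.0 * np.pi / n_ph
TH, PH = np.meshgrid(theta, phi, indexing="ij")
w = (np.sin(TH) * (np.pi / n_th) * (2.0 * np.pi / n_ph) / (4.0 * np.pi)).ravel()  # weights sum to ~1

m_true = ((np.cos(PH) + np.sin(PH)) * np.sin(TH) + np.cos(TH)).ravel()  # regression function of the model above

R = 200
rng = np.random.default_rng(3)
estimates = m_true[None, :] + rng.normal(0.0, 0.2, size=(R, m_true.size))  # placeholder for m^(r) on the grid

m_bar = estimates.mean(axis=0)
ISB = np.sum(w * (m_bar - m_true) ** 2)
IV = np.mean(np.sum(w * (estimates - m_bar) ** 2, axis=1))
IMSE = np.mean(np.sum(w * (estimates - m_true) ** 2, axis=1))
print(ISB, IV, IMSE, ISB + IV)   # ISB + IV equals IMSE up to floating-point error
```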
Table 1 shows the result of the first simulation study. The IMSE values demonstrate that \(\hat{m}\) behaves better than \(\hat{m}^{\rm naive}\). In particular, the ISB values of \(\hat{m}\) are always much smaller than those of \(\hat{m}^{\rm naive}\), which is explained by the unbiased scoring property of \(\hat{m}\) as demonstrated in Proposition 3. While the errors of both estimators decrease as the
Table 1: Integrated squared bias (ISB), integrated variance (IV) and integrated mean squared error (IMSE) of \(\hat{m}\) for scenarios (S1)-(S3) based on \(R=200\) Monte-Carlo samples.

Table 2: Coverage rates (Cov) and average lengths (Len) of the confidence intervals for \(m(x)\) based on asymptotic normality (AN) and empirical likelihood (EL), averaged over the grid \(\mathcal{G}\), based on \(R=200\) Monte-Carlo samples.

| \((1-\alpha)\) | \(n\) | Cov (AN) | Cov (EL) | Len (AN) | Len (EL) |
| --- | --- | --- | --- | --- | --- |
| 0.9 | 250 | 0.87 | 0.86 | 0.77 | 0.84 |
| 0.9 | 500 | 0.90 | 0.89 | 0.55 | 0.57 |
| 0.95 | 250 | 0.91 | 0.91 | 0.92 | 1.01 |
| 0.95 | 500 | 0.95 | 0.94 | 0.66 | 0.69 |
sample size increases, the decreasing speed for \(\hat{m}\) is much faster than that for \(\hat{m}^{\text{naive}}\). This suggests that our regression estimator is a reasonable estimator. Table 2 shows the result of the second simulation study. It demonstrates that both methods generally produce higher coverage rates and narrower confidence intervals as the sample size increases. This suggests that both are reasonable methods. The table also reveals that the AN-based intervals have higher coverage rates and shorter lengths than the EL-based intervals. This indicates that the AN-based method can be a better option than the EL-based method. However, the latter is also a good alternative.
### Real data analysis
We analyzed the dataset 'sunspots_births' in the R package 'rotasym' ([25]). The dataset was analyzed in [24] to test the rotational symmetry of sunspots. Sunspots are temporary phenomena on the sun that appear as spots darker than the surrounding areas. Sunspot regions are cooler than the surrounding areas since the convection is blocked by the solar magnetic field flux. Sunspots are important sources in the study of solar activity, and their number and positions affect the earth's long-term climate, telecommunications networks, aircraft navigation systems and spacecraft, among others. Hence, it is important to study the distribution of sunspots.
Sunspots usually appear as a group. The dataset 'sunspots_births' contains \(n=51,303\) groups of newly born sunspots measured in the years from 1872 to 2018. Each group observation contains the mean longitude and latitude of sunspots in that group. However, sunspots usually last only from a few hours to a few days, and they move across the surface of the sun. Also, the sizes of sunspots, known to have diameters ranging from 16km to 160,000km, keep changing during their lifespans. Due to these reasons combined with technical limitations of measuring devices, it is not easy to measure the exact birth locations of sunspots. Indeed, it is well known that sunspot area observations may contain measurement errors (e.g. [2]). Hence, we may assume that the observed mean longitudes and latitudes contain measurement errors. However, since the levels of the measurement errors are presumably not too high, we took the Laplace distribution on \(SO(3)\) for the measurement error distribution and estimated the density of the birth locations of sunspots based on the deconvolution density estimator defined at (2.12). We took the four values 0, \(\sqrt{0.05}\), \(\sqrt{0.1}\) and \(\sqrt{0.15}\) for the distribution
parameter \(\lambda\), to see how the choice of the parameter affects the resulting density estimates. We note that the estimator with \(\lambda=0\) corresponds to the naive density estimator that does not take into account measurement errors. We took the smoothing parameter \(T_{n}\) minimizing the classical least squares cross-validation criterion ([53], [6]). This kind of comparison scheme was adopted in [18] for contaminated \(\mathbb{S}^{1}\)-valued data. In this data analysis, we also included the interval estimation studied in Theorems 4 and 6. We note that asymptotic distributions and asymptotic confidence intervals for densities on \(\mathbb{S}^{2}\) have not been studied in the literature of deconvolution density estimation on \(\mathbb{S}^{2}\).
The contour plots of the estimated densities are depicted in Figure 1. The figure illustrates that, as \(\lambda\) increases, the mass of the estimated density moves to the equator of the sun. It is well known that the rotating speed of the sun is the fastest at the equator and it decreases as the latitude goes up or down. Since sunspots are considered a consequence of the twisted solar magnetic field caused by the fast rotation speed, the true density is likely to have a higher mass as the latitude approaches zero. It is also natural that the distribution
Figure 1: The contour plots of the estimated densities based on the proposed method with \(\lambda=0,\sqrt{0.05},\sqrt{0.1}\) and \(\sqrt{0.15}\). The color scale is the same for all plots.
Figure 2: The contour plots of the 95% pointwise confidence intervals for the true density based on the asymptotic normality with \(\lambda=0,\sqrt{0.05},\sqrt{0.1}\) and \(\sqrt{0.15}\). The color scale is the same for all plots.
of sunspots is symmetric about the equator and the density levels are horizontal due to the same reason. These justify the validity of the estimated densities for \(\lambda>0\).
Figure 2 depicts the contour plots of the 95% pointwise confidence intervals for the true density based on the asymptotic normality. The corresponding contour plots based on the empirical likelihood technique are omitted since they showed almost the same plots due to the large sample size. The upper confidence bounds on the right side of Figure 2 generally show wider peaks than the estimated densities in Figure 1, while the lower confidence bounds on the left side show opposite trends. Also, each confidence interval is very short, which is informative. We believe that these provide useful information in the analysis of sunspots.
## Acknowledgements
Research of Jeong Min Jeon was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2020R1A6A3A03037314) and the European Research Council (2016-2021, Horizon 2020/ERC grant agreement No. 694409). Research of Ingrid Van Keilegom was supported by the European Research Council (2016-2021, Horizon 2020/ERC grant agreement No. 694409).
## References
* [1] Atkinson, K. and Han, W. (2012). _Spherical Harmonics and Approximations on the Unit Sphere: An Introduction_. Springer-Verlag Berlin Heidelberg.
* [2] Baranyi, T., Gyori, L., Ludmany, A. and Coffey, H. E. (2001). Comparison of sunspot area data bases. _Monthly Notices of the Royal Astronomical Society_, **323**, 223-230.
* [3] Belomestny, D. and Goldenshluger, A. (2021). Density deconvolution under general assumptions on the distribution of measurement errors. _Annals of Statistics_, **49**, 615-649.
* [4] Bertrand, A., Van Keilegom, I. and Legrand, C. (2019). Flexible parametric approach to classical measurement error variance estimation without auxiliary data. _Biometrics_, **75**, 297-307.
* [5] Boente, G., Gonzalez-Manteiga, W. and Rodriguez, D. (2009). Goodness-of-fit test for directional data. _Scandinavian Journal of Statistics_, **41**, 259-275.
* [6] Bowman, A. W. (1984). An alternative method of cross-validation for the smoothing of density estimates. _Biometrika_, **72**, 353-360.
* [7] Chakraborty, R. and Vemuri, B. C. (2019). Statistics on the Stiefel manifold: theory and applications. _Annals of Statistics_, **47**, 415-438.
* [8] Chang, T. (1989). Spherical regression with errors in variables. _Annals of Statistics_, **17**, 293-306.
* [9] Chen, S. X. and Van Keilegom, I. (2009). A review on empirical likelihood methods for regression. _Test_, **18**, 415-447.
* [10] Chirikjian, G. S. (2012). _Stochastic Models, Information Theory, and Lie Groups, Volume 2_. Birkhauser Basel.
* [11] Cuesta-Albertos, J. A., Cuevas, A. and Fraiman, R. (2009). On projection-based tests for directional and compositional data. _Statistic and Computing_, **19**, 367-380.
* [12] Dattner, I., Reiss, M. and Trabs, M. (2016). Adaptive quantile estimation in deconvolution with unknown error distribution. _Bernoulli_, **22**, 143-192.
* [13] Delaigle, A. (2014). Nonparametric kernel methods with errors-in-variables: constructing estimators, computing them, and avoiding common mistakes. _Australian and New Zealand Journal of Statistics_, **56**, 105-124.
* [14] Delaigle, A., Fan, J. and Carroll, R. J. (2009). A design-adaptive local polynomial estimator for the errors-in-variables problem. _Journal of the American Statistical Association_, **104**, 348-359.
* [15] Delaigle, A., Hall, P. and Jamshidi, F. (2015). Confidence bands in non-parametric errors-in-variables regression. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_, **21**, 169-184.
* [16] Delaigle, A., Hall, P. and Meister, A. (2008). On deconvolution with repeated measurements. _Annals of Statistics_, **36**, 665-685.
* [17] Efthimiou, C. and Frye, C. (2014). _Spherical harmonics in p dimensions_. World Scientific Publishing Co. Pte. Ltd.
* [18] Efromovich, S. (1997). Density estimation for the case of supersmooth measurement error. _Journal of the American Statistical Association_, **92**, 526-535.
* [19] Fan, J. (1991a). On the optimal rates of convergence for nonparametric deconvolution problems. _Annals of Statistics_, **19**, 1257-1272.
* [20] Fan, J. (1991b). Asymptotic normality for deconvolution kernel density estimators. _Sankhya_, **53**, 97-110.
* [21] Fan, J. and Truong, Y. K. (1993). Nonparametric regression with errors in variables. _Annals of Statistics_, **21**, 1900-1925.
* [22] Gao, F., Huang, X.-Y., Jacobs, N. A. and Wang, H. (2015). Assimilation of wind speed and direction observations: results from real observation experiments. _Tellus A: Dynamic Meteorology and Oceanography_, **67**, 27132.
* [23] Garcia-Portugues, E., Crujeiras, R. M. and Gonzalez-Manteiga, W. (2013). Kernel density estimation for directional-linear data. _Journal of Multivariate Analysis_, **121**, 152-175.
* [24] Garcia-Portugues, E., Paindaveine, D., and Verdebout, T. (2020). On optimal tests for rotational symmetry against new classes of hyperspherical distributions. _Journal of the American Statistical Association_, **115**, 1873-1887.
* [25] Garcia-Portugues, E., Paindaveine, D., and Verdebout, T. (2021). rotasym: Tests for Rotational Symmetry on the Hypersphere. R package version 1.1.0.
* [26] Garcia-Portugues, E., Van Keilegom, I., Crujeiras, R. M. and Gonzalez-Manteiga, W. (2016). Testing parametric models in linear-directional regression. _Scandinavian Journal of Statistics_, **43**, 1178-1191.
* [27] Hall, P., Watson, G. S. and Cabrera, J. (1987). Kernel density estimation with spherical data. _Biometrika_, **74**, 751-762.
* [28] Healy, D. M., Hendriks, H. and Kim, P. T. (1998). Spherical deconvolution. _Journal of Multivariate Analysis_, **67**, 1-22.
* [29] Hendriks, H. (1990). Nonparametric estimation of a probability density on a Riemannian manifold using Fourier expansions. _Annals of Statistics_, **18**, 832-849.
* [30] Hjort, N. L., McKeague, I. W. and Van Keilegom, I. (2009). Extending the scope of empirical likelihood. _Annals of Statistics_, **37**, 1079-1111.
* [31] Huckemann, S., Kim, P. T., Koo, J.-Y. and Munk, A. (2010). Mobius deconvolution on the hyperbolic plane with application to impedance density estimation. _Annals of Statistics_, **38**, 2465-2498.
* [32] Jeon, J. M., Park, B. U. and Van Keilegom, I. (2021). Additive regression for non-Euclidean responses and predictors. _Annals of Statistics_, **49**, 2611-2641.
* [33] Jeon, J. M., Park, B. U. and Van Keilegom, I. (2022). Nonparametric regression on Lie groups with measurement errors. _Annals of Statistics (under revision)_.
* [34] Johannes, J. (2009). Deconvolution with unknown measurement error distribution. _Annals of Statistics_, **37**, 2301-2323.
* [35] Johannes, J. and Schwarz, M. (2013). Adaptive circular deconvolution by model selection under unknown error distribution. _Bernoulli_, **19**, 1576-1611.
* [36] Katznelson, Y. (2004). _An introduction to harmonic analysis_. Cambridge University Press.
* [37] Kalf, H. (1995). On the expansion of a function in terms of spherical harmonics in arbitrary dimensions. _Bulletin of the Belgian Mathematical Society_, **2**, 361-380.
* [38] Kim, P. T. (1998). Deconvolution density estimation on SO(N). _Annals of Statistics_, **26**, 1083-1102.
* [39] Kim, P. T. (2000). On the Characteristic Function of the Matrix von Mises-Fisher Distribution with Application to SO(N)-Deconvolution. _In: Gine, E., Mason, D. M., Wellner, J. A. (eds) High Dimensional Probability II. Progress in Probability, Volume 47_, Birkhauser, Boston, MA.
* [40] Kim, P. T. and Koo, J.-Y. (2002). Optimal spherical deconvolution. _Journal of Multivariate Analysis_, **80**, 21-42.
* [41] Kim, P. T., Koo, J.-Y. and Park, H. J. (2004). Sharp minimaxity and spherical deconvolution for super-smooth error distributions. _Journal of Multivariate Analysis_, **90**, 384-392.
* [42] Kim, P. T. and Richards, D. St. P. (2001). Deconvolution density estimation on compact Lie groups. _Contemporary Mathematics_, **287**, 155-171.
* [43] Leon, C. A., Masse, J.-C. and Rivest, L.-P. (2006). A statistical model for random rotations. _Journal of Multivariate Analysis_, **97**, 412-430.
* [44] Luo, Z. M., Kim, P. T., Kim, T. Y. and Koo, J.-Y. (2011). Deconvolution on the Euclidean motion group SE(3). _Inverse Problems_, **27**, 035014.
* [45] Marron, J. S. and Alonso, A. M. (2014). Overview of object oriented data analysis. _Biometrical Journal_, **5**, 732-753.
* [46] Meister, A. (2009). _Deconvolution Problems in Nonparametric Statistics_. Springer-Verlag Berlin Heidelberg.
* [47] Nadarajah, S. J. and Zhang, Y. (2017). Wrapped: An R package for circular data. _PLoS ONE_, **12**, e0188512.
* [48] Owen, A. (2001). Empirical Likelihood. Chapman and Hall/CRC, London.
* [49] Pewsey, A. and Garcia-Portugues, E. (2021). Recent advances in directional statistics. _Test_, In print.
* [50] Qiu, Y., Nordman, D. J. and Vardeman, S. B. (2014). A wrapped trivariate normal distribution and Bayes inference for 3-D rotations. _Statistica Sinica_, **24**, 897-917.
* [51] Rivest, L. P. (1989). Spherical regression for concentrated Fisher-von Mises distributions. _Annals of Statistics_, **17**, 307-317.
* [52] Rosenthal, M., Wu, W. U., Klassen, E. and Srivastava, A. (2014). Spherical regression models using projective linear transformations. _Journal of the American Statistical Association_, **109**, 1615-1624.
* [53] Rudemo, M. (1982). Empirical choice of histograms and kernel density estimators. _Scandinavian Journal of Statistics_, **9**, 65-78.
* [54] Sakurai, J. J. and Napolitano, J. (2017). _Modern Quantum Mechanics_. Cambridge University Press.
* [55] Scealy, J. L. and Welsh, A. H. (2011). Regression for compositional data by using distributions defined on the hypersphere. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_, **73**, 351-375.
* [56] Sei, T., Shibata, H., Takemura, A., Ohara, K. and Takayama, N. (2013). Properties and applications of Fisher distribution on the rotation group. _Journal of Multivariate Analysis_, **116**, 440-445.
* [57] Stefanski, L. A. and Carroll, R. J. (1990). Deconvolving kernel density estimators. _Statistics_, **21**, 169-184.
* [58] Terras, A. (2013). _Harmonic Analysis on Symmetric Spaces - Euclidean Space, the Sphere, and the Poincare Upper Half-Plane_. Springer-Verlag New York.
**Supplementary Material to**
**'Density estimation and regression analysis on \(\mathbb{S}^{d}\)**
**in the presence of measurement error'**
**by Jeong Min Jeon and Ingrid Van Keilegom**
In this Supplementary Material, we provide some examples of the implementation of \(D^{l}_{qr}(u)\) and \(\tilde{\phi}^{l}_{qr}(f_{U})\) for arbitrary \(f_{U}\), together with all technical proofs. Throughout, we denote by \(B^{l}(x)\) the \(N(d,l)\)-vector whose \(q\)th element equals \(B^{l}_{q}(x)\), we let \(\|\cdot\|_{2}\) denote the \(L^{2}\)-norm of \(L^{2}((\mathbb{S}^{d},\nu),\mathbb{C})\), and we let (const.) denote a generic positive constant.
### Implementation of \(D^{l}_{qr}(u)\) and \(\tilde{\phi}^{l}_{qr}(f_{U})\) for arbitrary \(f_{U}\)
1. \((d=1)\) It is well known that each \(u\in SO(2)\) can be written as \[\left(\begin{smallmatrix}\cos\varphi_{u}&-\sin\varphi_{u}\\ \sin\varphi_{u}&\cos\varphi_{u}\end{smallmatrix}\right)\] for some \(\varphi_{u}\in[0,2\pi)\) and that \(\int_{SO(2)}g(u)d\mu(u)=(2\pi)^{-1}\int_{0}^{2\pi}g(u)d\varphi_{u}\) for \(g:SO(2)\to\mathbb{R}\). Using these and the definition of \(B^{l}_{q}\) given in Example 1-1, we may show that \[D^{l}_{11}(u)=\cos(l\varphi_{u}), D^{l}_{12}(u)=-\sin(l\varphi_{u}),\] \[D^{l}_{21}(u)=\sin(l\varphi_{u}), D^{l}_{22}(u)=\cos(l\varphi_{u}).\] Using this, we have \(\tilde{\phi}^{l}_{qr}(f_{U})=(2\pi)^{-1}\int_{0}^{2\pi}f_{U}(u)D^{l}_{qr}(u)\, d\varphi_{u}\).
2. \((d=2)\) We note that each \(u\in SO(3)\) can be written as \(R(\varphi_{u})S(\theta_{u})R(\psi_{u})\) for some Euler angles \(\varphi_{u},\psi_{u}\in[0,2\pi)\) and \(\theta_{u}\in[0,\pi)\), where \[R(\vartheta)=\left(\begin{smallmatrix}\cos\vartheta&-\sin\vartheta&0\\ \sin\vartheta&\cos\vartheta&0\\ 0&0&1\end{smallmatrix}\right), S(\vartheta)=\left(\begin{smallmatrix}\cos\vartheta&0&\sin \vartheta\\ 0&1&0\\ -\sin\vartheta&0&\cos\vartheta\end{smallmatrix}\right)\] for \(\vartheta\in[0,2\pi)\) (Chapter 12.9 in Chirikjian (2012)). For \(B^{l}_{q}\) defined in Example 1-2 and for \(1\leq q,r\leq 2l+1\), it holds that \[D^{l}_{qr}(u)=e^{-\sqrt{-1}\cdot(q-l-1)\varphi_{u}}\cdot d^{l}_{qr}(\theta_{ u})\cdot e^{-\sqrt{-1}\cdot(r-l-1)\psi_{u}},\]
where the definition of \(d^{l}_{qr}(\theta_{u})\) is given at (2.2) (Chapter 12.9 in Chirikjian (2012)). Using this, we have
\[\tilde{\phi}^{l}_{qr}(f_{U})=(8\pi^{2})^{-1}\int_{0}^{2\pi}\int_{0}^{\pi}\int_{0 }^{2\pi}f_{U}(u)D^{l}_{qr}(u)\sin\theta_{u}\,d\varphi_{u}\,d\theta_{u}\,d\psi_{u};\]
see Chapter 12.1 in Chirikjian (2012) for the representation of integration on \(SO(3)\) with respect to the normalized Haar measure.
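The coefficients above are ordinary low-dimensional integrals, so for a given error density they can be approximated by simple quadrature. Below is a minimal Python sketch for the case \(d=1\) in item 1 above (not part of the paper; the von Mises-type error density and the grid size are illustrative choices), which approximates the \(2\times 2\) matrix \(\tilde{\phi}^{l}(f_{U})\) by a Riemann sum over the angle \(\varphi_{u}\):

```python
import numpy as np
from scipy.special import i0, i1  # modified Bessel functions, used only for the check below

def rotational_coeffs_so2(f_U, l, n_grid=4096):
    """Approximate the 2 x 2 matrix tilde_phi^l(f_U) for d = 1 (errors on SO(2)).

    f_U is the error density as a function of the rotation angle phi in [0, 2*pi),
    taken with respect to the normalized Haar measure, so the grid average of
    f_U(phi) should be close to 1.
    """
    phi = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    fvals = f_U(phi)
    # Entries D^l_{qr}(u) from the display above: a rotation by the angle l * phi.
    D = np.array([[np.cos(l * phi), -np.sin(l * phi)],
                  [np.sin(l * phi),  np.cos(l * phi)]])
    # (2*pi)^{-1} * integral over [0, 2*pi), approximated by a Riemann sum.
    return np.einsum('qrn,n->qr', D, fvals) / n_grid

# Illustrative von Mises-type angular error concentrated at the identity rotation.
kappa = 5.0
f_vm = lambda phi: np.exp(kappa * np.cos(phi)) / i0(kappa)

print(rotational_coeffs_so2(f_vm, l=1))
# For this rotationally symmetric error the matrix is (I_1(kappa)/I_0(kappa)) times
# the 2 x 2 identity, so the printed matrix should match the scalar below.
print(i1(kappa) / i0(kappa))
```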
### Proof of Proposition 1
The proposition follows from
\[\phi^{l}_{q}(g*f) =\int_{SO(d+1)}g(u)\int_{\mathbb{S}^{d}}f(u^{-1}x)\overline{B^{l} _{q}(x)}\,d\nu(x)\,d\mu(u)\] \[=\int_{SO(d+1)}g(u)\int_{\mathbb{S}^{d}}f(x)\overline{B^{l}_{q}( ux)}\,d\nu(x)\,d\mu(u)\] \[=\int_{SO(d+1)}g(u)\int_{\mathbb{S}^{d}}f(x)\,\sum_{r=1}^{N(d,l )}D^{l}_{qr}(u)\overline{B^{l}_{r}(x)}\,d\nu(x)\,d\mu(u)\] \[=\sum_{r=1}^{N(d,l)}\int_{SO(d+1)}g(u)D^{l}_{qr}(u)\,d\mu(u)\int _{\mathbb{S}^{d}}f(x)\overline{B^{l}_{r}(x)}\,d\nu(x)\] \[=\sum_{r=1}^{N(d,l)}\tilde{\phi}^{l}_{qr}(g)\phi^{l}_{r}(f),\]
where we have used the rotation-invariant property of \(\nu\) and that \(\det(u)=1\) for the second equality, and (2.6) for the third equality.
### Proof of Proposition 2
We first show that \(\int_{\mathbb{S}^{d}}B^{l}_{q}(x)d\nu(x)=0\) for \(l>0\). Since \(\{B^{l}_{q}:l\in\mathbb{N}_{0},1\leq q\leq N(d,l)\}\) forms an orthonormal basis of \(L^{2}((\mathbb{S}^{d},\nu),\mathbb{C})\) and \(B^{0}_{1}\equiv(\nu(\mathbb{S}^{d}))^{-1/2}\), we have
\[\int_{\mathbb{S}^{d}}B^{l}_{q}(x)d\nu(x)=(\nu(\mathbb{S}^{d}))^{1/2}\int_{ \mathbb{S}^{d}}B^{l}_{q}(x)B^{0}_{1}(x)d\nu(x)=0\]
for \(l>0\). Hence,
\[\int_{\mathbb{S}^{d}}K_{T_{n}}(x,z)d\nu(x)= \sum_{l=0}^{[T_{n}]}\sum_{q=1}^{N(d,l)}\sum_{r=1}^{N(d,l)}(\tilde{ \phi}^{l}(f_{U}))_{qr}^{-1}\,\overline{B_{r}^{l}(z)}\int_{\mathbb{S}^{d}}B_{q}^ {l}(x)d\nu(x)\] \[= (\tilde{\phi}^{0}(f_{U}))_{11}^{-1}\,\overline{B_{1}^{0}(z)}\int _{\mathbb{S}^{d}}B_{1}^{0}(x)d\nu(x)\] \[= 1,\]
where the last equality follows from the fact \(\tilde{\phi}^{0}(f_{U})=1\). This completes the proof.
### Proof of Proposition 3
It suffices to show that
\[\mathrm{E}\left(\sum_{r=1}^{N(d,l)}(\tilde{\phi}^{l}(f_{U}))_{qr}^{-1} \overline{B_{r}^{l}(Z)}\bigg{|}X\right)=\overline{B_{q}^{l}(X)}.\]
We note that
\[\overline{B_{r}^{l}(Z)}=\overline{B_{r}^{l}(UX)}=\sum_{s=1}^{N(d,l)}D_{rs}^{l }(U)\overline{B_{s}^{l}(X)},\]
where the last equality follows from (2.9). We also note that
\[\mathrm{E}\left(\sum_{s=1}^{N(d,l)}D_{rs}^{l}(U)\overline{B_{s}^ {l}(X)}\bigg{|}X\right)=\sum_{s=1}^{N(d,l)}\mathrm{E}(D_{rs}^{l}(U))\overline {B_{s}^{l}(X)}=\sum_{s=1}^{N(d,l)}\tilde{\phi}_{rs}^{l}(f_{U})\overline{B_{s}^ {l}(X)},\]
where the first equality follows from the assumption \(U\perp X\). Hence, we have
\[\mathrm{E}\left(\sum_{r=1}^{N(d,l)}(\tilde{\phi}^{l}(f_{U}))_{qr} ^{-1}\overline{B_{r}^{l}(Z)}\bigg{|}X\right) =\sum_{s=1}^{N(d,l)}\overline{B_{s}^{l}(X)}\sum_{r=1}^{N(d,l)}( \tilde{\phi}^{l}(f_{U}))_{qr}^{-1}\tilde{\phi}_{rs}^{l}(f_{U})\] \[=\sum_{s=1}^{N(d,l)}\overline{B_{s}^{l}(X)}(I_{N(d,l)})_{qs}\] \[=\overline{B_{q}^{l}(X)},\]
which is the desired result.
### Proof of Proposition 4
We define \(\tilde{f}_{X}(x)=n^{-1}\sum_{i=1}^{n}K_{T_{n}}(x,Z_{i})\). We note that \(\hat{f}_{X}(x)=\mathrm{Re}(\tilde{f}_{X}(x))\) and
\[\mathrm{E}(\tilde{f}_{X}(x)) =\sum_{l=0}^{[T_{n}]}\sum_{q=1}^{N(d,l)}B_{q}^{l}(x)\sum_{r=1}^{N( d,l)}(\tilde{\phi}^{l}(f_{U}))_{qr}^{-1}\phi_{r}^{l}(f_{Z})\] \[=\sum_{l=0}^{[T_{n}]}\sum_{q=1}^{N(d,l)}\phi_{q}^{l}(f_{X})B_{q}^ {l}(x),\]
where the second equality follows from (2.6). Hence,
\[\sup_{x\in\mathbb{S}^{d}}|f_{X}(x)-\mathrm{E}(\tilde{f}_{X}(x))|=\sup_{x\in \mathbb{S}^{d}}\left|\sum_{l>T_{n}}\sum_{q=1}^{N(d,l)}\phi_{q}^{l}(f_{X})B_{q} ^{l}(x)\right|=o(1),\]
where the last equality follows from the assumption that the Fourier-Laplace series of \(f_{X}\) converges uniformly. Also,
\[\sup_{x\in\mathbb{S}^{d}}|\tilde{f}_{X}(x)-\mathrm{E}(\tilde{f}_ {X}(x))|\] \[\leq \sup_{x\in\mathbb{S}^{d}}\sum_{l=0}^{[T_{n}]}\left|\sum_{q=1}^{N( d,l)}B_{q}^{l}(x)\sum_{r=1}^{N(d,l)}(\tilde{\phi}^{l}(f_{U}))_{qr}^{-1}\left(n^{-1} \sum_{i=1}^{n}\overline{B_{r}^{l}(Z_{i})}-\phi_{r}^{l}(f_{Z})\right)\right|\] \[= \sup_{x\in\mathbb{S}^{d}}\sum_{l=0}^{[T_{n}]}\left|B^{l}(x)^{ \top}(\tilde{\phi}^{l}(f_{U}))^{-1}\left(n^{-1}\sum_{i=1}^{n}\overline{B^{l}( Z_{i})}-\phi^{l}(f_{Z})\right)\right|\] \[\leq \sup_{x\in\mathbb{S}^{d}}\sum_{l=0}^{[T_{n}]}\left\|B^{l}(x) \right\|\cdot\left\|(\tilde{\phi}^{l}(f_{U}))^{-1}\left(n^{-1}\sum_{i=1}^{n} \overline{B^{l}(Z_{i})}-\phi^{l}(f_{Z})\right)\right\|\] \[\leq \sup_{x\in\mathbb{S}^{d}}\sum_{l=0}^{[T_{n}]}\left\|B^{l}(x) \right\|\cdot\left\|(\tilde{\phi}^{l}(f_{U}))^{-1}\right\|_{\mathrm{op}} \left\|n^{-1}\sum_{i=1}^{n}\overline{B^{l}(Z_{i})}-\phi^{l}(f_{Z})\right\|\] \[= \sum_{l=0}^{[T_{n}]}\sqrt{\frac{N(d,l)}{\nu(\mathbb{S}^{d})}} \cdot\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{\mathrm{op}}\left\|n^{-1}\sum_{i=1}^{ n}\overline{B^{l}(Z_{i})}-\phi^{l}(f_{Z})\right\|,\]
where we have used the fact that \(\|B^{l}(x)\|^{2}\equiv N(d,l)/\nu(\mathbb{S}^{d})\) for the last equality. Hence,
\[\mathrm{E}\left(\sup_{x\in\mathbb{S}^{d}}|\tilde{f}_{X}(x)-\mathrm{ E}(\tilde{f}_{X}(x))|\right)\] \[\leq \sum_{l=0}^{[T_{n}]}\sqrt{\frac{N(d,l)}{\nu(\mathbb{S}^{d})}} \cdot\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{\mathrm{op}}\left(\sum_{r=1}^{N(d,l) }\mathrm{E}\left(\left|n^{-1}\sum_{i=1}^{n}\overline{B_{r}^{l}(Z_{i})}-\phi_{r }^{l}(f_{Z})\right|^{2}\right)\right)^{1/2}\] \[\leq n^{-1/2}\sum_{l=0}^{[T_{n}]}\sqrt{\frac{N(d,l)}{\nu(\mathbb{S}^ {d})}}\cdot\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{\mathrm{op}}\left(\sum_{r=1}^{ N(d,l)}\mathrm{E}(|B_{r}^{l}(Z)|^{2})\right)^{1/2}\] \[= (\nu(\mathbb{S}^{d}))^{-1}n^{-1/2}\sum_{l=0}^{[T_{n}]}N(d,l)\|( \tilde{\phi}^{l}(f_{U}))^{-1}\|_{\mathrm{op}}.\]
Now, we assume the case (S1)-(i). Then,
\[\sum_{l=0}^{[T_{n}]}N(d,l)\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{ \mathrm{op}}\leq 1+c_{1}\sum_{l=1}^{[T_{n}]}N(d,l)l^{\beta}\leq(\mathrm{const.})T _{n}^{\beta+d}\]
since \(N(d,l)=O(l^{d-1})\) as \(l\to\infty\). By the choice (T1), it holds that
\[\sup_{x\in\mathbb{S}^{d}}|\tilde{f}_{X}(x)-f_{X}(x)|=\sup_{x\in \mathbb{S}^{d}}|\hat{f}_{X}(x)+\sqrt{-1}\cdot\mathrm{Im}(\tilde{f}_{X}(x))-f _{X}(x)|=o_{p}(1).\]
Since \(\sup_{x\in\mathbb{S}^{d}}|\hat{f}_{X}(x)-f_{X}(x)|\leq\sup_{x\in\mathbb{S}^{d }}|\tilde{f}_{X}(x)-f_{X}(x)|\), the desired result follows. Now, we assume the case (S2)-(i). Then,
\[\sum_{l=0}^{[T_{n}]}N(d,l)\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{ \mathrm{op}} \leq 1+c_{1}\sum_{l=1}^{[T_{n}]}N(d,l)l^{\alpha}\exp(\gamma\cdot l ^{\beta})\] \[\leq(\mathrm{const.})T_{n}^{\alpha+d}\exp(\gamma\cdot T_{n}^{ \beta}).\]
By the choice (T2), the result for the case (S2)-(i) similarly follows. Finally, we assume the case (S3)-(i). Then,
\[\sum_{l=0}^{[T_{n}]}N(d,l)\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{ \mathrm{op}} \leq 1+c_{1}\sum_{l=1}^{[T_{n}]}N(d,l)l^{\alpha}\exp(\gamma l^{ \beta}(\log l-\xi_{1}))\] \[\leq(\mathrm{const.})T_{n}^{\alpha+d}\exp(\gamma T_{n}^{\beta}( \log T_{n}-\xi_{1})).\]
Again by the choice (T3), the result for the case (S3)-(i) similarly follows. This completes the proof.
### Proof of Theorem 1
We first prove the case of density estimation. Recall the definition of \(\tilde{f}_{X}\) given in the proof of Proposition 4. We note that
\[\mathrm{E}\left(\|\tilde{f}_{X}-f_{X}\|_{2}^{2}\right)=\mathrm{E}\left(\| \tilde{f}_{X}-\mathrm{E}(\tilde{f}_{X})\|_{2}^{2}\right)+\|f_{X}-\mathrm{E}( \tilde{f}_{X})\|_{2}^{2}.\]
We first find the rate of \(\|f_{X}-\mathrm{E}(\tilde{f}_{X})\|_{2}^{2}\). We note that
\[\|f_{X}-\mathrm{E}(\hat{f}_{X})\|_{2}^{2}\] \[=\sum_{l>T_{n}}\sum_{q=1}^{N(d,l)}|\phi_{q}^{l}(f_{X})|^{2}\] (S.1) \[=\sum_{l>T_{n}}\sum_{q=1}^{N(d,l)}\lambda_{l}^{-2k}|(-1)^{k} \lambda_{l}^{k}\phi_{q}^{l}(f_{X})|^{2}\] \[=(T_{n}(T_{n}+d-1))^{-2k}\sum_{l>T_{n}}\sum_{q=1}^{N(d,l)}(T_{n}( T_{n}+d-1))^{2k}\lambda_{l}^{-2k}|\phi_{q}^{l}(\Delta_{\mathbb{S}^{d}}^{k}(f_{X}))| ^{2}\] \[\leq(T_{n}(T_{n}+d-1))^{-2k}\|\Delta_{\mathbb{S}^{d}}^{k}(f_{X}) \|_{2}^{2},\]
where \(-\lambda_{l}=-l(l+d-1)\) are the eigenvalues of the Laplace-Beltrami operator \(\Delta_{\mathbb{S}^{d}}\) on \(C^{2}(\mathbb{S}^{d})\) and \(\Delta_{\mathbb{S}^{d}}^{k}\) is the composition of \(\Delta_{\mathbb{S}^{d}}\) for \(k\)-times. Since \(\Delta_{\mathbb{S}^{d}}^{k}(f_{X})\) is a continuous function on \(\mathbb{S}^{d}\), we have \(\|\Delta_{\mathbb{S}^{d}}^{k}(f_{X})\|_{2}^{2}<\infty\). This gives
\[\|f_{X}-\mathrm{E}(\tilde{f}_{X})\|_{2}^{2}=O(T_{n}^{-4k}).\]
Now, we find the rate of \(\mathrm{E}\left(\|\tilde{f}_{X}-\mathrm{E}(\tilde{f}_{X})\|_{2}^{2}\right)= \int_{\mathbb{S}^{d}}\mathrm{Var}\left(\tilde{f}_{X}(x)\right)f_{X}(x)d\nu(x)\). We note that
\[\mathrm{Var}\left(\tilde{f}_{X}(x)\right)=n^{-1}\mathrm{Var}\left(K_{T_{n}}( x,Z)\right)\leq n^{-1}\mathrm{E}(|K_{T_{n}}(x,Z)|^{2}).\]
Since \(f_{X}\) is bounded, it suffices to find the rate of \(\int_{\mathbb{S}^{d}}\mathrm{E}(|K_{T_{n}}(x,Z)|^{2})d\nu(x)\). It equals
\[\mathrm{E}\left(\int_{\mathbb{S}^{d}}|K_{T_{n}}(x,Z)|^{2}d\nu(x)\right)= \sum_{l=0}^{[T_{n}]}\mathrm{E}\left(\sum_{q=1}^{N(d,l)}\left|\sum_ {r=1}^{N(d,l)}\left(\tilde{\phi}^{l}(f_{U})\right)_{qr}^{-1}\overline{B_{r}^{ l}(Z)}\right|^{2}\right)\] \[\leq \sum_{l=0}^{[T_{n}]}\left\|\left(\tilde{\phi}^{l}(f_{U})\right)^{ -1}\right\|_{\mathrm{op}}^{2}\mathrm{E}\left(\|B^{l}(Z)\|^{2}\right)\] \[= (\nu(\mathbb{S}^{d}))^{-1}\sum_{l=0}^{[T_{n}]}N(d,l)\left\|( \tilde{\phi}^{l}(f_{U}))^{-1}\right\|_{\mathrm{op}}^{2},\]
where the first equality follows from the orthonormality of \(\{B_{q}^{l}:1\leq q\leq N(d,l)\}\).
In the case of (a),
\[\sum_{l=0}^{[T_{n}]}N(d,l)\left\|(\tilde{\phi}^{l}(f_{U}))^{-1}\right\|_{\text{op }}^{2}\leq 1+c_{1}^{2}\sum_{l=1}^{[T_{n}]}N(d,l)l^{2\beta}\leq(\text{const.})T_{n}^{2 \beta+d}.\]
Therefore, we obtain
\[\text{E}\left(\|\tilde{f}_{X}-f_{X}\|_{2}^{2}\right)=O(T_{n}^{-4k}+n^{-1}T_{n }^{2\beta+d}).\]
This implies that
\[\|\hat{f}_{X}-f_{X}\|_{2}^{2}\leq\|\tilde{f}_{X}-f_{X}\|_{2}^{2}=O_{p}(T_{n}^{ -4k}+n^{-1}T_{n}^{2\beta+d}).\]
In the case of (b), we note that
\[\sum_{l=0}^{[T_{n}]}N(d,l)\left\|(\tilde{\phi}^{l}(f_{U}))^{-1} \right\|_{\text{op}}^{2}\leq 1+c_{1}^{2}\sum_{l=1}^{[T_{n}]}N(d,l)l^{2\alpha}\exp(2\gamma \cdot T_{n}^{\beta})\] \[\leq (\text{const.})T_{n}^{2\alpha+d}\exp(2\gamma\cdot T_{n}^{\beta}).\]
Therefore, we obtain
\[\text{E}\left(\|\tilde{f}_{X}-f_{X}\|_{2}^{2}\right)=O(T_{n}^{-4k}+n^{-1}T_{n }^{2\alpha+d}\exp(2\gamma\cdot T_{n}^{\beta})).\]
This implies that
\[\|\hat{f}_{X}-f_{X}\|_{2}^{2}\leq\|\tilde{f}_{X}-f_{X}\|_{2}^{2}=O_{p}(T_{n}^{ -4k}+n^{-1}T_{n}^{2\alpha+d}\exp(2\gamma\cdot T_{n}^{\beta})).\]
In the case of (c), similar arguments show that
\[\|\hat{f}_{X}-f_{X}\|_{2}^{2}=O_{p}(T_{n}^{-4k}+n^{-1}T_{n}^{2\alpha+d}\exp(2 \gamma\cdot T_{n}^{\beta}(\log T_{n}-\xi_{1}))).\]
This completes the proof for the case of density estimation.
We now turn to the case of regression estimation. We write
\[\widetilde{m\cdot f_{X}}(x) =n^{-1}\sum_{i=1}^{n}\left(\sum_{l=0}^{[T_{n}]}\sum_{q=1}^{N(d,l )}B_{q}^{l}(x)\sum_{r=1}^{N(d,l)}(\tilde{\phi}^{l}(f_{U}))_{qr}^{-1}\overline {B_{r}^{l}(Z_{i})}\right)Y_{i},\] \[F(x) =m(x)f_{X}(x)-\text{E}(\widetilde{m\cdot f_{X}}(x)).\]
We note that \(\mathrm{E}\left(\|\widetilde{m\cdot f_{X}}-m\cdot f_{X}\|_{2}^{2}\right)=\mathrm{E} \left(\|\widetilde{m\cdot f_{X}}-\mathrm{E}(\widetilde{m\cdot f_{X}})\|_{2}^{2} \right)+\|F\|_{2}^{2}\). We first approximate \(\|F\|_{2}^{2}\). We note that
(S.2) \[\begin{split}\mathrm{E}\left(\sum_{r=1}^{N(d,l)}(\tilde{\phi}^{l }(f_{U}))_{qr}^{-1}\overline{B_{r}^{l}(Z)}Y\right)&=\mathrm{E} \left(\mathrm{E}\left(\sum_{r=1}^{N(d,l)}(\tilde{\phi}^{l}(f_{U}))_{qr}^{-1} \overline{B_{r}^{l}(Z)}Y\bigg{|}X\right)\right)\\ &=\mathrm{E}\left(\mathrm{E}\left(\sum_{r=1}^{N(d,l)}(\tilde{ \phi}^{l}(f_{U}))_{qr}^{-1}\overline{B_{r}^{l}(Z)}\bigg{|}X\right)m(X)\right) \\ &=\mathrm{E}(m(X)\overline{B_{q}^{l}(X)})\\ &=\phi_{q}^{l}(m\cdot f_{X}),\end{split}\]
where the second equality follows from the underlying assumption \(U\perp(X,\epsilon)\), and the third equality follows from the proof of Proposition 3. From (S.2), we have
\[\mathrm{E}\left(\widetilde{m\cdot f_{X}}(x)\right)=\sum_{l=0}^{[T_{n}]}\sum_{ q=1}^{N(d,l)}\phi_{q}^{l}(m\cdot f_{X})B_{q}^{l}(x).\]
Since this is a partial sum of the Fourier-Laplace series of \(m\cdot f_{X}\) at \(x\), and \(m\cdot f_{X}\) is \(2k\)-times continuously differentiable, by arguing as (S.1), we get
(S.3) \[\|F\|_{2}^{2}=O(T_{n}^{-4k}).\]
We now approximate
\[\mathrm{E}\left(\|\widetilde{m\cdot f_{X}}-\mathrm{E}(\widetilde{m\cdot f_{X }})\|_{2}^{2}\right)=\int_{\mathbb{S}^{d}}\mathrm{Var}\left(\widetilde{m \cdot f_{X}}(x)\right)f_{X}(x)d\nu(x).\]
We note that
\[\mathrm{Var}\left(\widetilde{m\cdot f_{X}}(x)\right)= n^{-1}\mathrm{Var}\left(K_{T_{n}}(x,Z)Y\right)\] \[\leq n^{-1}\mathrm{E}(|K_{T_{n}}(x,Z)Y|^{2})\] \[\leq n^{-1}\mathrm{E}(\mathrm{E}(|K_{T_{n}}(x,Z)|^{2}|X)\mathrm{E}(Y ^{2}|X))\] \[\leq (\mathrm{const.})n^{-1}\mathrm{E}(|K_{T_{n}}(x,Z)|^{2}),\]
where the second inequality follows from the underlying assumption \(U\perp(X,\epsilon)\), and the last inequality follows from the boundedness of \(\mathrm{E}(Y^{2}|X=\cdot)\). Since \(f_{X}\) is bounded, it suffices to find the rate of \(\int_{\mathbb{S}^{d}}\mathrm{E}(|K_{T_{n}}(x,Z)|^{2})d\nu(x)\). The rate is obtained for each smoothness scenario in the proof of the first part of the theorem.
Hence, in the case of (a), we have
\[\mathrm{E}\left(\|\widetilde{m\cdot f_{X}}-m\cdot f_{X}\|_{2}^{2}\right)=O(T_{n} ^{-4k}+n^{-1}T_{n}^{2\beta+d}).\]
This implies that
\[\|\widehat{m\cdot f_{X}}-m\cdot f_{X}\|_{2}^{2}\leq\|\widetilde{m\cdot f_{X}}-m\cdot f_{X}\|_{2}^{2}=O_{p}(T_{n}^{-4k}+n^{-1}T_{n}^{2\beta+d}),\]
where \(\widehat{m\cdot f_{X}}:=\mathrm{Re}(\widetilde{m\cdot f_{X}})\). We note that
\[\inf_{x\in\mathbb{S}^{d}}|\hat{f}_{X}(x)| \geq\inf_{x\in\mathbb{S}^{d}}(f_{X}(x)-|\hat{f}_{X}(x)-f_{X}(x)|)\] \[\geq\inf_{x\in\mathbb{S}^{d}}f_{X}(x)-\sup_{x\in\mathbb{S}^{d}}| \hat{f}_{X}(x)-f_{X}(x)|.\]
This with Proposition 4 and the assumption \(\inf_{x\in\mathbb{S}^{d}}f_{X}(x)>0\) entails that there exists a constant \(c>0\) such that \(\inf_{x\in\mathbb{S}^{d}}|\hat{f}_{X}(x)|\geq c\) with probability tending to one. Since
\[\int_{\mathbb{S}^{d}}|\hat{m}(x)-m(x)|^{2}d\nu(x)\] \[=\int_{\mathbb{S}^{d}}\left|\widehat{\frac{m\cdot f_{X}}{f_{X}}(x )}-\frac{m(x)f_{X}(x)}{f_{X}(x)}\right|^{2}d\nu(x)\] \[\leq 2\int_{\mathbb{S}^{d}}\frac{|\widehat{m\cdot f_{X}}(x)-m(x)f _{X}(x)|^{2}}{|\hat{f}_{X}(x)|^{2}}+\frac{(m(x))^{2}|\hat{f}_{X}(x)-f_{X}(x)| ^{2}}{|\hat{f}_{X}(x)|^{2}}d\nu(x)\] \[\leq(\mathrm{const.})(\|\widehat{m\cdot f_{X}}-m\cdot f_{X}\|_{2} ^{2}+\|\hat{f}_{X}-f_{X}\|_{2}^{2})\]
with probability tending to one, the result for the case (a) follows. The cases of (b) and (c) similarly follow as in the case of (a). This completes the proof.
### Proof of Theorem 2
We note that
\[\hat{f}_{X}(x)-f_{X}(x)=n^{-1}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))-f _{X}(x)).\]
We write \(W_{ni}(x)=n^{-1}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))-f_{X}(x))\) and show that
(S.4) \[\frac{\sum_{i=1}^{n}W_{ni}(x)-\mathrm{E}(\sum_{i=1}^{n}W_{ni}(x))}{\sqrt{ \mathrm{Var}(\sum_{i=1}^{n}W_{ni}(x))}}\stackrel{{ d}}{{\longrightarrow}}N(0,1).\]
For this, we check that the Lyapunov condition
(S.5) \[\frac{\mathrm{E}(|W_{n1}(x)-\mathrm{E}(W_{n1}(x))|^{2+\varsigma})}{n^{\varsigma/2} (\mathrm{Var}(W_{n1}(x)))^{1+\varsigma/2}}\to 0\]
holds for some constant \(\varsigma>0\). In particular, we choose any \(\varsigma>0\) for the cases of (S2)-(i) and (S3)-(i). For the case of (S1)-(i), we choose \(\varsigma>0\) satisfying \(p<\varsigma/((2d-q)\varsigma+2(d-q))\). Such \(\varsigma\) exists since \(\varsigma/((2d-q)\varsigma+2(d-q))\to 1/(2d-q)\) as \(\varsigma\to\infty\). (S.5) is equivalent to
(S.6) \[\frac{\mathrm{E}(|V_{n}(x)-\mathrm{E}(V_{n}(x))|^{2+\varsigma})}{n^{\varsigma/ 2}(\mathrm{Var}(V_{n}(x)))^{1+\varsigma/2}}\to 0,\]
where \(V_{n}(x)=\mathrm{Re}(K_{T_{n}}(x,Z))-f_{X}(x)\). Since the numerator in (S.6) is bounded by
\[(\mathrm{const.})\mathrm{E}(|\mathrm{Re}(K_{T_{n}}(x,Z))|^{2+\varsigma}),\]
it suffices to show that
(S.7) \[\frac{\mathrm{E}(|\mathrm{Re}(K_{T_{n}}(x,Z))|^{2+\varsigma})}{n^{\varsigma/2 }(\mathrm{Var}(V_{n}(x)))^{1+\varsigma/2}}\to 0.\]
Also, since
(S.8) \[\begin{split}|\mathrm{E}(V_{n}(x))|&\leq\left|\sum_{ l>T_{n}}\sum_{q=1}^{N(d,l)}\phi_{q}^{l}(f_{X})B_{q}^{l}(x)\right|=o(1),\\ \mathrm{E}((V_{n}(x))^{2})&=\mathrm{E}((\mathrm{Re }(K_{T_{n}}(x,Z)))^{2})-f_{X}^{2}(x)+o(1),\\ \mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})&\to \infty,\end{split}\]
it suffices to show that
(S.9) \[\frac{\mathrm{E}(|\mathrm{Re}(K_{T_{n}}(x,Z))|^{2+\varsigma})}{n^{\varsigma/2 }(\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2}))^{1+\varsigma/2}}\to 0.\]
We note that
(S.10) \[\begin{split}\mathrm{E}(|K_{T_{n}}(x,Z)|^{2+\varsigma})& =\int_{\mathbb{S}^{d}}|K_{T_{n}}(x,z)|^{2+\varsigma}f_{Z}(z)d\nu(z)\\ &\leq\sup_{x\in\mathbb{S}^{d}}f_{X}(x)\int_{\mathbb{S}^{d}}|K_{T _{n}}(x,z)|^{2+\varsigma}d\nu(z),\end{split}\]
where the inequality follows from the fact \(\sup_{z\in\mathbb{S}^{d}}f_{Z}(z)\leq\sup_{x\in\mathbb{S}^{d}}f_{X}(x)\). We note that
(S.11) \[\begin{split}|K_{T_{n}}(x,z)|^{\varsigma}&\leq\left( \sum_{l=0}^{[T_{n}]}\|B^{l}(x)\|\|(\tilde{\phi}^{l}(f_{U}))^{-1}\|_{\mathrm{ op}}\|B^{l}(z)\|\right)^{\varsigma}\\ &=\left((\nu(\mathbb{S}^{d}))^{-1}\sum_{l=0}^{[T_{n}]}N(d,l)\|( \tilde{\phi}^{l}(f_{U}))^{-1}\|_{\mathrm{op}}\right)^{\varsigma}.\end{split}\]
We also note that
(S.12) \[\begin{split}\int_{\mathbb{S}^{d}}|K_{T_{n}}(x,z)|^{2}d\nu(z)& =\sum_{l=0}^{[T_{n}]}\sum_{r=1}^{N(d,l)}\left|\sum_{q=1}^{N(d,l)}B_{ q}^{l}(x)(\tilde{\phi}^{l}(f_{U}))_{qr}^{-1}\right|^{2}\\ &\leq\sum_{l=0}^{[T_{n}]}\|B^{l}(x)\|^{2}\|((\tilde{\phi}^{l}(f_{ U}))^{-1})^{\top}\|_{\rm op}^{2}\\ &=(\nu(\mathbb{S}^{d}))^{-1}\sum_{l=0}^{[T_{n}]}N(d,l)\|(\tilde{ \phi}^{l}(f_{U}))^{-1}\|_{\rm op}^{2},\end{split}\]
where the first equality follows from the orthonormality of \(\{B_{q}^{l}:1\leq q\leq N(d,l)\}\). Combining (S.10), (S.11) and (S.12), we have
(S.13) \[\begin{split}&\text{E}(|K_{T_{n}}(x,Z)|^{2+\varsigma})\\ &\leq\begin{cases}(\text{const.})T_{n}^{(2+\varsigma)\beta+(1+ \varsigma)d},&\text{if (S1)-(i) holds}\\ (\text{const.})T_{n}^{(2+\varsigma)\alpha+(1+\varsigma)d}\exp((2+\varsigma) \gamma\cdot T_{n}^{\beta}),&\text{if (S2)-(i) holds}\\ (\text{const.})T_{n}^{(2+\varsigma)\alpha+(1+\varsigma)d}\exp((2+\varsigma) \gamma\cdot T_{n}^{\beta}(\log T_{n}-\xi_{1})),&\text{if (S3)-(i) holds.}\end{cases}\end{split}\]
Since \(\text{E}(|\text{Re}(K_{T_{n}}(x,Z))|^{2+\varsigma})\leq\text{E}(|K_{T_{n}}(x,Z )|^{2+\varsigma})\), \(\text{E}(|\text{Re}(K_{T_{n}}(x,Z))|^{2+\varsigma})\) attains the same upper bounds given in (S.13). Using (B1)-(B3) with \(\eta\) sufficiently close to \(1\) in the cases of (B2) and (B3), we obtain (S.9). Therefore, we have (S.4). This completes the proof.
### Proof of Theorem 3 and some remark
We note that
\[\hat{m}(x)-m(x) =\frac{1}{\hat{f}_{X}(x)}\frac{1}{n}\sum_{i=1}^{n}\text{Re}(K_{T _{n}}(x,Z_{i}))(Y_{i}-m(x))\] \[=\frac{1}{f_{X}(x)}\frac{1}{n}\sum_{i=1}^{n}\text{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}-m(x))\left(1+\frac{f_{X}(x)-\hat{f}_{X}(x)}{\hat{f}_{X}(x)}\right)\] \[=\sum_{i=1}^{n}W_{ni}(x)+\sum_{i=1}^{n}W_{ni}(x)\cdot\frac{f_{X}( x)-\hat{f}_{X}(x)}{\hat{f}_{X}(x)},\]
where \(W_{ni}(x)=(f_{X}(x))^{-1}n^{-1}\text{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}-m(x))\). Hence,
\[\frac{\hat{m}(x)-m(x)-\text{E}(\sum_{i=1}^{n}W_{ni}(x))}{\sqrt{ \text{Var}(\sum_{i=1}^{n}W_{ni}(x))}}\] \[=\frac{\sum_{i=1}^{n}W_{ni}(x)-\text{E}(\sum_{i=1}^{n}W_{ni}(x))} {\sqrt{\text{Var}(\sum_{i=1}^{n}W_{ni}(x))}}+\frac{\sum_{i=1}^{n}W_{ni}(x)}{ \sqrt{\text{Var}(\sum_{i=1}^{n}W_{ni}(x))}}\cdot\frac{f_{X}(x)-\hat{f}_{X}(x) }{\hat{f}_{X}(x)}.\]
Thus, it suffices to prove that
(S.14) \[\frac{\sum_{i=1}^{n}W_{ni}(x)-\mathrm{E}(\sum_{i=1}^{n}W_{ni}(x))}{\sqrt{\mathrm{ Var}(\sum_{i=1}^{n}W_{ni}(x))}}\xrightarrow{d}N(0,1)\]
and
(S.15) \[\frac{\sum_{i=1}^{n}W_{ni}(x)}{\sqrt{\mathrm{Var}(\sum_{i=1}^{n}W_{ni}(x))}} \cdot\frac{f_{X}(x)-\hat{f}_{X}(x)}{\hat{f}_{X}(x)}=o_{p}(1).\]
For (S.14), we check that the Lyapunov condition
\[\frac{\mathrm{E}(|W_{n1}(x)-\mathrm{E}(W_{n1}(x))|^{2+\delta})}{n^{\delta/2}( \mathrm{Var}(W_{n1}(x)))^{1+\delta/2}}\to 0\]
holds for \(\delta\) in (B4). This is equivalent to verifying that
(S.16) \[\frac{\mathrm{E}(|V_{n}(x)-\mathrm{E}(V_{n}(x))|^{2+\delta})}{n^{\delta/2}( \mathrm{Var}(V_{n}(x)))^{1+\delta/2}}\to 0,\]
where \(V_{n}(x)=\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x))\). Since the numerator in (S.16) is bounded by (const.)\(\mathrm{E}(|V_{n}(x)|^{2+\delta})\), it suffices to show that
\[\frac{\mathrm{E}(|V_{n}(x)|^{2+\delta})}{n^{\delta/2}(\mathrm{Var}(V_{n}(x))) ^{1+\delta/2}}\to 0.\]
Also, since
\[\mathrm{E}(|V_{n}(x)|^{2+\delta}) \leq(\mathrm{const.})\mathrm{E}(|\mathrm{Re}(K_{T_{n}}(x,Z))|^{2+ \delta}),\] \[\mathrm{E}((V_{n}(x))^{2}) \geq(\mathrm{const.})\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2 })\rightarrow\infty,\] \[\mathrm{E}(V_{n}(x)) =\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))(m(X)-m(x)))=o(1),\]
it suffices to show that
\[\frac{\mathrm{E}(|\mathrm{Re}(K_{T_{n}}(x,Z))|^{2+\delta})}{n^{\delta/2}( \mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2}))^{1+\delta/2}}\to 0.\]
But, this follows as in the proof of (S.9).
For (S.15), we note that
(S.17) \[\begin{split}&|f_{X}(x)-\mathrm{E}(\hat{f}_{X}(x))|\\ &\leq\sum_{l>T_{n}}\sum_{q=1}^{N(d,l)}|\phi_{q}^{l}(f_{X})B_{q}^{l }(x)|\\ &=\sum_{l>T_{n}}\sum_{q=1}^{N(d,l)}\lambda_{l}^{-k}|(-1)^{k} \lambda_{l}^{k}\phi^{l}(f_{X})B_{q}^{l}(x)|\\ &\leq(T_{n}(T_{n}-d+1))^{-k}\sum_{l>T_{n}}\sum_{q=1}^{N(d,l)} \lambda_{l}^{-k}(T_{n}(T_{n}-d+1))^{k}|\phi^{l}(\Delta_{\mathbb{S}^{d}}^{k}(f_ {X}))B_{q}^{l}(x)|\\ &\leq(T_{n}(T_{n}-d+1))^{-k}\sum_{l>T_{n}}\sum_{q=1}^{N(d,l)}| \phi^{l}(\Delta_{\mathbb{S}^{d}}^{k}(f_{X}))B_{q}^{l}(x)|\\ &=o(T_{n}^{-2k}),\end{split}\]
where the last equality follows from the absolute convergence of the Fourier-Laplace series of \(\Delta_{\mathbb{S}^{d}}^{k}(f_{X})\). Also, it holds that
\[\mathrm{E}(\hat{f}_{X}(x))-\hat{f}_{X}(x)=O_{p}(n^{-1/2}\cdot T_{n}^{\beta+d})\]
as in the proof of Proposition 4. This with (S.17) implies that
(S.18) \[f_{X}(x)-\hat{f}_{X}(x)=o(T_{n}^{-2k})+O_{p}(n^{-1/2}\cdot T_{n}^{\beta+d})=o _{p}(1).\]
Hence, it suffices to show that
(S.19) \[\frac{\mathrm{E}(\sum_{i=1}^{n}W_{ni}(x))}{\sqrt{\mathrm{Var}(\sum_{i=1}^{n}W_ {ni}(x))}}\cdot(f_{X}(x)-\hat{f}_{X}(x))=o_{p}(1)\]
by (S.14) and the fact that \((\hat{f}_{X}(x))^{-1}=O_{p}(1)\). We note that
(S.20) \[\begin{split}& f_{X}(x)\cdot\left|\mathrm{E}\left(\sum_{i=1}^{n}W_{ni }(x)\right)\right|\\ =&|\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))| \\ \leq&|m(x)|\cdot\sum_{l>T_{n}}\sum_{q=1}^{N(d,l)}| \phi_{q}^{l}(f_{X})B_{q}^{l}(x)|+\sum_{l>T_{n}}\sum_{q=1}^{N(d,l)}|\phi_{q}^{l }(m\cdot f_{X})B_{q}^{l}(x)|\\ \leq&(T_{n}(T_{n}-d+1))^{-k}\bigg{(}|m(x)|\cdot\sum_ {l>T_{n}}\sum_{q=1}^{N(d,l)}|\phi^{l}(\Delta_{\mathbb{S}^{d}}^{k}(f_{X}))B_{q} ^{l}(x)|\\ &+\sum_{l>T_{n}}\sum_{q=1}^{N(d,l)}|\phi^{l}(\Delta_{\mathbb{S}^ {d}}^{k}(m\cdot f_{X}))B_{q}^{l}(x)|\bigg{)}\\ =& o(T_{n}^{-2k}),\end{split}\]
where the last equality follows from the absolute convergence of the Fourier-Laplace series of \(\Delta_{\mathbb{S}^{d}}^{k}(f_{X})\) and of \(\Delta_{\mathbb{S}^{d}}^{k}(m\cdot f_{X})\). We also note that
(S.21) \[\mathrm{Var}\left(\sum_{i=1}^{n}W_{ni}(x)\right)\geq(\mathrm{const.})n^{-1} \mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})\geq(\mathrm{const.})n^{-1}T_{n} ^{2\beta+q}.\]
Thus, (S.19) follows if
(S.22) \[n^{1/2}T_{n}^{-(\beta+q/2)}T_{n}^{-4k}=O(1)\quad\text{and}\quad T_{n}^{d-q/2}T _{n}^{-2k}=O(1).\]
The first one at (S.22) follows by (T1\({}^{\prime\prime\prime}\)). The second one at (S.22) follows by (A1)-(i) and (A1)-(ii) with \(k>(2d-q)/4\). This completes the proof.
**Remark 1**.: _We give details on why we do not cover the super-smooth and log-super-smooth scenarios. Under (S2)-(i)+(B2), we obtain the rate \(o(T_{n}^{-2k})+O_{p}(n^{-1/2}T_{n}^{\alpha+d}\exp(\gamma\cdot T_{n}^{\beta}))\) in the place of the right hand side of the first equality at (S.18) and the lower bound \((\mathrm{const.})n^{-1}T_{n}^{2\alpha+q}\exp(2\gamma(\eta\cdot T_{n})^{\beta})\) in the place of the right hand side of the last inequality at (S.21). Hence, we need_
(S.23) \[n^{-1/2}T_{n}^{\alpha+d}\exp(\gamma\cdot T_{n}^{\beta})=o(1)\]
_for the last equality at (S.18) and need_
(S.24) \[n^{1/2}T_{n}^{-(\alpha+q/2)}\exp(-\gamma(\eta\cdot T_{n})^{\beta})\cdot T_{n} ^{-2k}\cdot(T_{n}^{-2k}+n^{-1/2}T_{n}^{\alpha+d}\exp(\gamma\cdot T_{n}^{\beta }))=O(1)\]
_for (S.19). For (S.23), we need a log-type speed for \(T_{n}\), while (S.24) does not hold with any log-type speed for \(T_{n}\). The log-super-smooth scenario has a similar problem._
### Proof of Lemma 1
Using the condition (C) and the fact \(\inf_{z\in\mathbb{S}^{d}}f_{Z}(z)\geq\inf_{x\in\mathbb{S}^{d}}f_{X}(x)\), we obtain
\[\mathrm{E}(|K_{T_{n}}(x,Z)|^{2}) \geq\int_{\mathbb{S}^{d}}|K_{T_{n}}(x,z)|^{2}d\nu(z)\cdot\inf_{x\in \mathbb{S}^{d}}f_{X}(x)\] \[=\sum_{l=0}^{[T_{n}]}\|((\tilde{\phi}^{l}(f_{U}))^{-1})^{\top}B^{ l}(x)\|^{2}\cdot\inf_{x\in\mathbb{S}^{d}}f_{X}(x)\] \[\geq(\text{const.})\sum_{l=0}^{[T_{n}]}\|B^{l}(x)\|^{2}(\sigma_{ \min}((\tilde{\phi}^{l}(f_{U}))^{-1}))^{2}\] \[\geq(\text{const.})\sum_{l=0}^{[T_{n}]}\|B^{l}(x)\|^{2}\|(\tilde{ \phi}^{l}(f_{U}))^{-1}\|_{\text{op}}^{2}\] \[\geq(\text{const.})\sum_{l=0}^{[T_{n}]}N(d,l)\|(\tilde{\phi}^{l} (f_{U}))^{-1}\|_{\text{op}}^{2}.\]
Hence, for each \(0<\eta<1\), we have
\[\mathrm{E}(|K_{T_{n}}(x,Z)|^{2})\] \[\geq\begin{cases}(\text{const.})\sum_{l=0}^{[T_{n}]}N(d,l)\cdot l^{2\beta},&\text{if (S1)-(ii) holds}\\ (\text{const.})\sum_{l=0}^{[T_{n}]}N(d,l)\cdot l^{2\alpha}\cdot\exp(2\gamma\cdot l^{\beta}),&\text{if (S2)-(ii) holds}\\ (\text{const.})\sum_{l=0}^{[T_{n}]}N(d,l)\cdot l^{2\alpha}\cdot\exp(2\gamma\cdot l^{\beta}(\log l-\xi_{2})),&\text{if (S3)-(ii) holds}\end{cases}\] \[\geq\begin{cases}(\text{const.})(\eta\cdot T_{n})^{2\beta}(\sum_{l=0}^{[T_{n}]}N(d,l)-\sum_{l=0}^{[\eta\cdot T_{n}]+1}N(d,l)),&\text{if (S1)-(ii) holds}\\ (\text{const.})(\eta\cdot T_{n})^{2\alpha}(\sum_{l=0}^{[T_{n}]}N(d,l)-\sum_{l=0}^{[\eta\cdot T_{n}]+1}N(d,l))\cdot\exp(2\gamma\cdot(\eta\cdot T_{n})^{\beta}),&\text{if (S2)-(ii) holds}\\ (\text{const.})(\eta\cdot T_{n})^{2\alpha}(\sum_{l=0}^{[T_{n}]}N(d,l)-\sum_{l=0}^{[\eta\cdot T_{n}]+1}N(d,l))\cdot\exp(2\gamma\cdot(\eta\cdot T_{n})^{\beta}(\log(\eta\cdot T_{n})-\xi_{2})),&\text{if (S3)-(ii) holds.}\end{cases}\]
We now prove that
(S.25) \[T_{n}^{-d}\left(\sum_{l=0}^{[T_{n}]}N(d,l)-\sum_{l=0}^{[\eta\cdot T_{n}]+1}N( d,l)\right)\to(\text{const.}).\]
We note that
\[N(d,l)=\left(2+\frac{d-1}{l}\right)\frac{1}{(d-1)!}\frac{\Gamma(l+d-1)}{ \Gamma(l)}.\]
From Tricomi and Erdelyi (1951), it is known that
\[\frac{\Gamma(l+d-1)}{\Gamma(l)}=l^{d-1}\left(1+\frac{(d-1)(d-2)}{2l}+O(l^{-2}) \right).\]
Hence, by Faulhaber's formula, we have
\[\sum_{l=0}^{[T_{n}]}\frac{\Gamma(l+d-1)}{\Gamma(l)}=\frac{1}{d}[T_{n}]^{d}+o([ T_{n}]^{d}).\]
This with simple algebra gives \(T_{n}^{-d}\sum_{l=0}^{[T_{n}]}N(d,l)\to 2/(d!)\) and hence (S.25) follows. Thus, we get
\[\begin{split}&\text{E}(|K_{T_{n}}(x,Z)|^{2})\\ &\geq\begin{cases}(\text{const.})T_{n}^{2\beta+d},&\text{if (S1)-(ii) holds}\\ (\text{const.})T_{n}^{2\alpha+d}\exp(2\gamma\cdot(\eta\cdot T_{n})^{\beta}),& \text{if (S2)-(ii) holds}\\ (\text{const.})T_{n}^{2\alpha+d}\exp(2\gamma\cdot(\eta\cdot T_{n})^{\beta}( \log T_{n}+\log\eta-\xi_{2})),&\text{if (S3)-(ii) holds.}\end{cases}\end{split}\] (S.26)
This completes the proof.
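Although it is not needed for the argument, the limit \(T_{n}^{-d}\sum_{l=0}^{[T_{n}]}N(d,l)\to 2/(d!)\) behind (S.25) is easy to confirm numerically. The following small Python check is purely illustrative (the grid of values for \(d\) and \(T\) is arbitrary) and uses the expression for \(N(d,l)\) displayed above:

```python
from math import factorial

def N(d, l):
    """N(d, l) as in the display above, with N(d, 0) = 1."""
    if l == 0:
        return 1
    ratio = 1.0
    for j in range(d - 1):  # Gamma(l + d - 1) / Gamma(l) = l (l + 1) ... (l + d - 2)
        ratio *= l + j
    return (2.0 + (d - 1.0) / l) * ratio / factorial(d - 1)

for d in (1, 2, 3):
    for T in (50, 200, 800):
        partial_sum = sum(N(d, l) for l in range(T + 1))
        print(d, T, partial_sum / T**d, 2.0 / factorial(d))
# The normalized partial sum approaches 2 / d! as T grows, in line with (S.25).
```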
### Proof of Lemma 2
We first consider the case (G1). In this case, \(K_{T_{n}}(x,z)\) is real-valued for all \(x,z\in\mathbb{S}^{1}\) since \(B_{q}^{l}\) and \(\tilde{\phi}_{qr}^{l}\) are real-valued. Hence, the result follows.
Now, we consider the case (G2). We note that
\[\begin{split} K_{T_{n}}(x,z)&=\sum_{l=0}^{[T_{n}]}s_ {l}^{-1}\sum_{q=1}^{2l+1}B_{q}^{l}(x)\overline{B_{q}^{l}(z)}\\ &=\sum_{l=0}^{[T_{n}]}s_{l}^{-1}\frac{2l+1}{4\pi}\sum_{q=1}^{2l+1 }(\cos((q-l-1)\varphi_{x})+\sqrt{-1}\sin((q-l-1)\varphi_{x}))\\ &\qquad\qquad\cdot(\cos((q-l-1)\varphi_{z})-\sqrt{-1}\sin((q-l-1) \varphi_{z}))d_{q(l+1)}^{l}(\theta_{x})d_{q(l+1)}^{l}(\theta_{z}).\end{split}\]
Hence,
\[\begin{split}&\text{Re}(K_{T_{n}}(x,z))=\sum_{l=0}^{[T_{n}]}s_{l}^ {-1}\frac{2l+1}{4\pi}\sum_{q=1}^{2l+1}\cos((q-l-1)(\varphi_{x}-\varphi_{z}))d _{q(l+1)}^{l}(\theta_{x})d_{q(l+1)}^{l}(\theta_{z}),\\ &\text{Im}(K_{T_{n}}(x,z))=\sum_{l=0}^{[T_{n}]}s_{l}^{-1}\frac{2l +1}{4\pi}\sum_{q=1}^{2l+1}\sin((q-l-1)(\varphi_{x}-\varphi_{z}))d_{q(l+1)}^{l}( \theta_{x})d_{q(l+1)}^{l}(\theta_{z}).\end{split}\]
Thus,
\[\int_{\mathbb{S}^{2}}(\mathrm{Re}(K_{T_{n}}(x,z)))^{2}d\nu(z)\] \[=\sum_{l=0}^{[T_{n}]}\sum_{l^{\prime}=0}^{[T_{n}]}s_{l}^{-1}s_{l^{ \prime}}^{-1}\frac{(2l+1)(2l^{\prime}+1)}{16\pi^{2}}\sum_{q=1}^{2l+1}\sum_{q^{ \prime}=1}^{2l^{\prime}+1}d_{q(l+1)}^{l}(\theta_{x})d_{q^{\prime}(l^{\prime}+ 1)}^{l^{\prime}}(\theta_{x})\] \[\qquad\qquad\qquad\cdot\int_{0}^{2\pi}\cos((q-l-1)(\varphi_{x}- \varphi_{z}))\cos((q^{\prime}-l^{\prime}-1)(\varphi_{x}-\varphi_{z}))d\varphi_ {z}\] \[\qquad\qquad\qquad\cdot\int_{0}^{\pi}d_{q(l+1)}^{l}(\theta_{z})d_ {q^{\prime}(l^{\prime}+1)}^{l^{\prime}}(\theta_{z})\sin(\theta_{z})d\theta_{ z},\] \[\int_{\mathbb{S}^{2}}(\mathrm{Im}(K_{T_{n}}(x,z)))^{2}d\nu(z)\] \[=\sum_{l=0}^{[T_{n}]}\sum_{l^{\prime}=0}^{[T_{n}]}s_{l}^{-1}s_{l^ {\prime}}^{-1}\frac{(2l+1)(2l^{\prime}+1)}{16\pi^{2}}\sum_{q=1}^{2l+1}\sum_{q ^{\prime}=1}^{2l^{\prime}+1}d_{q(l+1)}^{l}(\theta_{x})d_{q^{\prime}(l^{\prime }+1)}^{l^{\prime}}(\theta_{x})\] \[\qquad\qquad\qquad\cdot\int_{0}^{2\pi}\sin((q-l-1)(\varphi_{x}- \varphi_{z}))\sin((q^{\prime}-l^{\prime}-1)(\varphi_{x}-\varphi_{z}))d\varphi _{z}\] \[\qquad\qquad\cdot\int_{0}^{\pi}d_{q(l+1)}^{l}(\theta_{z})d_{q^{ \prime}(l^{\prime}+1)}^{l^{\prime}}(\theta_{z})\sin(\theta_{z})d\theta_{z}.\]
Since
\[\int_{0}^{2\pi}\cos((q-l-1)(\varphi_{x}-\varphi_{z}))\cos((q^{ \prime}-l^{\prime}-1)(\varphi_{x}-\varphi_{z}))d\varphi_{z}\] \[=\begin{cases}2\pi,&\text{if $q-l-1=q^{\prime}-l^{\prime}-1=0$}, \\ \pi,&\text{if $q-l-1=q^{\prime}-l^{\prime}-1\neq 0$},\\ 0,&\text{else},\end{cases}\] \[\int_{0}^{2\pi}\sin((q-l-1)(\varphi_{x}-\varphi_{z}))\sin((q^{ \prime}-l^{\prime}-1)(\varphi_{x}-\varphi_{z}))d\varphi_{z}\] \[=\begin{cases}\pi,&\text{if $q-l-1=q^{\prime}-l^{\prime}-1\neq 0$}, \\ 0,&\text{else},\end{cases}\]
we have
\[\int_{\mathbb{S}^{2}}(\mathrm{Re}(K_{T_{n}}(x,z)))^{2}d\nu(z)-\int_{\mathbb{S}^{2}}(\mathrm{Im}(K_{T_{n}}(x,z)))^{2}d\nu(z)\] \[=\sum_{l=0}^{[T_{n}]}\sum_{l^{\prime}=0}^{[T_{n}]}s_{l}^{-1}s_{l^{\prime}}^{-1}\frac{(2l+1)(2l^{\prime}+1)}{8\pi}d_{(l+1)(l+1)}^{l}(\theta_{x})d_{(l^{\prime}+1)(l^{\prime}+1)}^{l^{\prime}}(\theta_{x})\] \[\qquad\qquad\qquad\cdot\int_{0}^{\pi}d_{(l+1)(l+1)}^{l}(\theta_{z})d_{(l^{\prime}+1)(l^{\prime}+1)}^{l^{\prime}}(\theta_{z})\sin(\theta_{z})d\theta_{z}.\]
By equation (12) in Pagaran et al. (2006), it holds that
\[\int_{0}^{\pi}d^{l}_{(l+1)(l+1)}(\theta_{z})d^{l^{\prime}}_{(l^{\prime}+1)(l^{ \prime}+1)}(\theta_{z})\sin(\theta_{z})d\theta_{z}=\frac{2}{2l+1}I(l=l^{\prime}).\]
Thus, we have
\[\int_{\mathbb{S}^{2}}(\mathrm{Re}(K_{T_{n}}(x,z)))^{2}d\nu(z)-\int _{\mathbb{S}^{2}}(\mathrm{Im}(K_{T_{n}}(x,z)))^{2}d\nu(z)\] \[=\sum_{l=0}^{[T_{n}]}s_{l}^{-2}(d^{l}_{(l+1)(l+1)}(\theta_{x}))^{2 }\frac{2l+1}{4\pi}\geq 0.\]
Combining this with the proof of Lemma 1 entails that \(\int_{\mathbb{S}^{2}}(\mathrm{Re}(K_{T_{n}}(x,z)))^{2}d\nu(z)\) achieves the lower bounds given in (S.26). Then, by the fact \(\inf_{z\in\mathbb{S}^{d}}f_{Z}(z)\geq\inf_{x\in\mathbb{S}^{d}}f_{X}(x)\) and the condition (A2)-(i), we get the desired result.
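As an aside, the orthogonality relation from Pagaran et al. (2006) invoked above is easy to verify numerically: in the indexing used here, \(d^{l}_{(l+1)(l+1)}(\theta)\) is the central Wigner element \(d^{l}_{00}(\theta)\), which equals the Legendre polynomial \(P_{l}(\cos\theta)\), so after substituting \(x=\cos\theta\) the identity reduces to the classical Legendre orthogonality. A small illustrative Python check (not part of the proof):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

nodes, weights = leggauss(64)  # Gauss-Legendre rule on [-1, 1], exact for these degrees
for l in range(5):
    for lp in range(5):
        # Substituting x = cos(theta) turns the integral over [0, pi] with the sin(theta)
        # weight into the integral of P_l(x) * P_{l'}(x) over [-1, 1].
        val = np.sum(weights * Legendre.basis(l)(nodes) * Legendre.basis(lp)(nodes))
        target = 2.0 / (2 * l + 1) if l == lp else 0.0
        assert abs(val - target) < 1e-10
print("orthogonality relation verified for l, l' < 5")
```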
### Proof of Theorem 4
We prove the two assertions
(S.27) \[\sqrt{n}\cdot\frac{\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z)))-f_{X}(x)}{\sqrt{ \mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z)))}}=o(1)\]
and
(S.28) \[\frac{\hat{s}_{1}(x)}{\sqrt{\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z)))}}\overset {p}{\rightarrow}1.\]
Then, we get the desired result by combining (S.27), (S.28) and Theorem 2.
For the assertion (S.27), we note that
\[\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))) =\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})-(\mathrm{E}( \mathrm{Re}(K_{T_{n}}(x,Z))))^{2}\] \[=\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})-f_{X}^{2}(x)+o(1),\] \[\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2}) \rightarrow\infty.\]
Hence, it suffices to show that
(S.29) \[\sqrt{n}\cdot\frac{\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z)))-f_{X}(x)}{\sqrt{ \mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})}}=o(1).\]
We note that
\[\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z)))-f_{X}(x)=o(T_{n}^{-2k}),\]
by (S.17). Hence, (S.29) follows from (S.17) and Lemma 2 if \(\sqrt{n}\cdot T_{n}^{-(2k+\beta+d/2)}=O(1)\). But, the latter holds with the choice (T1\({}^{\prime}\)). Thus, the assertion (S.27) follows.
For the assertion (S.28), we show that
(S.30) \[\begin{split}&\frac{1}{n}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i})) \stackrel{{ p}}{{\to}}\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))),\\ &\frac{1}{n}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i})))^{2} \stackrel{{ p}}{{\to}}\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{ 2}).\end{split}\]
For the first one at (S.30), it suffices to show that
\[\mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i})) \right)=o(1).\]
This follows since
\[\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})=O(T_{n}^{2\beta+d})=O(n^{(2 \beta+d)/(4k+2\beta+d)})=o(n).\]
For the second one at (S.30), we apply Corollary 2 in Chapter 10 of Chow and Teicher (1997). Then, it suffices to show that
(S.31) \[\mathrm{E}\left(\frac{(\mathrm{Re}(K_{T_{n}}(x,Z)))^{2}}{\mathrm{E}((\mathrm{ Re}(K_{T_{n}}(x,Z)))^{2})}\cdot I\left(\frac{(\mathrm{Re}(K_{T_{n}}(x,Z)))^{2}}{ \mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})}\geq n\varepsilon\right)\right)\to 0\]
holds for any \(\varepsilon>0\). One can prove that (S.31) holds using (S.9) and Lemma 2. Thus, the assertion (S.28) follows. This completes the proof.
### Proof of Theorem 5
We check the two claims
(S.32) \[\sqrt{n}\cdot\frac{\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))}{\sqrt{ \mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))}}=o(1)\]
and
(S.33) \[\frac{\hat{s}_{2}(x)}{\sqrt{\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))} }\stackrel{{ p}}{{\to}}1.\]
Then, we get the desired result by combining (S.32), (S.33) and Theorem 3.
The claim (S.32) follows if we prove that
(S.34) \[\sqrt{n}\cdot\frac{\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))}{\sqrt{ \mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})}}=o(1),\]
since
\[\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))\] \[=\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))^{2})-(\mathrm{E}( \mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x))))^{2}\] \[\geq(\mathrm{const.})\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2}) +o(1).\]
We note that (S.34) follows from (S.20) and Lemma 2 provided that \(\sqrt{n}\cdot T_{n}^{-(2k+\beta+d/2)}=O(1)\). But, the latter holds with the choice (T1\({}^{\prime}\)). Thus, the claim (S.32) follows.
The claim (S.33) follows if we show that
(S.35) \[\frac{1}{n}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}- \hat{m}(x))\overset{p}{\to}\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x))),\] \[\frac{1}{n}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}- \hat{m}(x)))^{2}\overset{p}{\to}\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m( x)))^{2}).\]
For the first one at (S.35), we note that
\[\frac{1}{n}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}- \hat{m}(x))\] \[= \frac{1}{n}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}- m(x))+(m(x)-\hat{m}(x))\hat{f}_{X}(x)\] \[= \frac{1}{n}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}-m( x))+\left(m(x)-\frac{m(x)f_{X}(x)+o_{p}(1)}{f_{X}(x)+o_{p}(1)}\right)\hat{f}_{X}(x)\] \[= \frac{1}{n}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}-m( x))+o_{p}(1),\]
where the second equality follows similarly as in the proof of Proposition 4. Now, since
\[n^{-1}\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))^{2})\leq(\mathrm{const. })n^{-1}\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})=o(1),\]
the first one at (S.35) follows. For the second one at (S.35), we note that
\[\frac{1}{n}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}- \hat{m}(x)))^{2}\] \[=\frac{1}{n}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}- m(x)))^{2}+(m(x)-\hat{m}(x))^{2}\frac{1}{n}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}} (x,Z_{i})))^{2}\] \[\quad+(m(x)-\hat{m}(x))\frac{2}{n}\sum_{i=1}^{n}(\mathrm{Re}(K_{ T_{n}}(x,Z_{i})))^{2}(Y_{i}-m(x))\] \[=\frac{1}{n}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}- m(x)))^{2}+o_{p}(1).\]
Hence, it suffices to show that
\[\frac{1}{n}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}-m(x)))^{2} \stackrel{{ p}}{{\rightarrow}}\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))^{2}).\]
For this, we apply Corollary 2 in Chapter 10 of Chow and Teicher (1997). Then, it suffices to show that
(S.36) \[\mathrm{E}\bigg{(}\frac{(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))^{2}} {\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))^{2})}\cdot I\bigg{(}\frac{( \mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))^{2}}{\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z ))(Y-m(x)))^{2})}\geq n\varepsilon\bigg{)}\bigg{)}\to 0\]
holds for any \(\varepsilon>0\). One can prove that (S.36) holds using (B4), (S.9) and Lemma 2. Thus, the claim (S.33) follows. This completes the proof.
### Proof of Theorem 6
For the proof, we apply Theorem 2.1 in Hjort et al. (2009). For this, we verify the conditions (A0)-(A3) in Hjort et al. (2009). Note that
\[\mathrm{EL}_{f_{X}}(\theta;x)=\max\left\{\prod_{i=1}^{n}(nw_{i}):w_{i}>0,\sum_ {i=1}^{n}w_{i}=1,\sum_{i=1}^{n}w_{i}F_{f_{X}}^{*}(Z_{i},\theta;x)=0\right\},\]
where
\[F_{f_{X}}^{*}(Z_{i},\theta;x)=\frac{n^{-1/2}F_{f_{X}}(Z_{i}, \theta;x)}{\sqrt{\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z)))}}.\]
We note that (A0) in Hjort et al. (2009) immediately follows from the condition (E1). For (A1) in Hjort et al. (2009), it suffices to show that
(S.37) \[\frac{n^{-1/2}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))-f_{X }(x))}{\sqrt{\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z)))}}\stackrel{{ d}}{{ \longrightarrow}}N(0,1).\]
From (S.4), we have
\[\frac{n^{-1/2}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))-f_{X }(x))-n^{1/2}\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))-f_{X}(x))}{\sqrt{\mathrm{ Var}(\mathrm{Re}(K_{T_{n}}(x,Z)))}}\stackrel{{ d}}{{ \longrightarrow}}N(0,1).\]
We also have
\[\sqrt{n}\cdot\frac{\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z)))-f_{X}( x)}{\sqrt{\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z)))}}=o(1)\]
by (S.27). Combining the two results gives (S.37). For (A2) in Hjort et al. (2009), it suffices to show that
(S.38) \[\frac{n^{-1}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))-f_{X}(x))^{2}}{\mathrm{ Var}(\mathrm{Re}(K_{T_{n}}(x,Z)))}\stackrel{{ p}}{{\to}}1.\]
Since
\[n^{-1}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))-f_{X}(x))^{2}\] \[=n^{-1}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i})))^{2}-2n^{-1 }f_{X}(x)\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i}))+f_{X}^{2}(x),\] \[\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z)))\] \[=\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2})-(\mathrm{E}( \mathrm{Re}(K_{T_{n}}(x,Z))))^{2},\]
it suffices to show that
(S.39) \[\begin{split}\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z)))\to f_{X}(x),\\ n^{-1}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i}))\stackrel{{ p}}{{\to}}\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))),\\ n^{-1}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i})))^{2}\stackrel{{ p}}{{\to}}\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z)))^{2}).\end{split}\]
The first assertion at (S.39) follows from (S.8), and the second and last assertions at (S.39) follow from (S.30). Hence, (S.38) holds. For (A3) in Hjort et al. (2009), it suffices to show that
(S.40) \[\max_{1\leq i\leq n}|F_{f_{X}}^{*}(Z_{i},f_{X}(x);x)|\stackrel{{ p}}{{\to}}0.\]
We note that, for any \(\varepsilon>0\) and \(\varsigma>0\),
\[P\left(n^{-1/2}\max_{1\leq i\leq n}|\mathrm{Re}(K_{T_{n}}(x,Z_{ i}))-f_{X}(x)|>\varepsilon\sqrt{\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z)))}\right)\] \[\leq(\mathrm{const.})\frac{\mathrm{E}(|\mathrm{Re}(K_{T_{n}}(x,Z ))-f_{X}(x)|^{2+\varsigma})}{n^{\varsigma/2}(\mathrm{Var}(\mathrm{Re}(K_{T_{n} }(x,Z))))^{1+\varsigma/2}}\] \[\leq(\mathrm{const.})\frac{\mathrm{E}(|\mathrm{Re}(K_{T_{n}}(x,Z ))|^{2+\varsigma})}{n^{\varsigma/2}(\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))))^ {1+\varsigma/2}}\] \[\to 0,\]
where the limit follows similarly as in the proof of (S.7). Thus, (S.40) holds. Now, Theorem 2.1 in Hjort et al. (2009) gives the desired result.
### Proof of Theorem 7
We apply Theorem 2.1 in Hjort et al. (2009) to prove the theorem. We note that
\[\mathrm{EL}_{m}(\theta;x)=\max\left\{\prod_{i=1}^{n}(nw_{i}):w_{i}>0,\sum_{i=1}^{ n}w_{i}=1,\sum_{i=1}^{n}w_{i}F_{m}^{*}(Z_{i},Y_{i},\theta;x)=0\right\},\]
where
\[F_{m}^{*}(Z_{i},Y_{i},\theta;x)=\frac{n^{-1/2}F_{m}(Z_{i},Y_{i},\theta;x)}{ \sqrt{\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))}}.\]
Since the condition (A0) in Hjort et al. (2009) immediately follows from the condition (E2), it suffices to show that
\[\frac{n^{-1/2}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i} -m(x))}{\sqrt{\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))}}\overset{d}{ \longrightarrow}N(0,1),\] (S.41) \[\frac{n^{-1}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i} -m(x)))^{2}}{\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))}\overset{p}{ \longrightarrow}1,\] \[\max_{1\leq i\leq n}|F_{m}^{*}(Z_{i},Y_{i},m(x);x)|\overset{p}{ \rightarrow}0\]
to verify the conditions (A1)-(A3) of Theorem 2.1 in Hjort et al. (2009).
For the first assertion at (S.41), we note that
\[\frac{n^{-1/2}\sum_{i=1}^{n}\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}-m(x))-n^{1/2 }\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))}{\sqrt{\mathrm{Var}(\mathrm{ Re}(K_{T_{n}}(x,Z))(Y-m(x)))}}\overset{d}{\longrightarrow}N(0,1).\]
This follows from the proof of Theorem 3. Also, it holds that
\[\sqrt{n}\cdot\frac{\mathrm{E}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))}{\sqrt{ \mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))}}=o(1)\]
by (S.32). Combining the two results gives the first assertion at (S.41). The second assertion at (S.41) follows from the facts
\[\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))=\mathrm{E}(( \mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))^{2})+o(1),\] \[\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x)))^{2})\to\infty,\] \[n^{-1}\sum_{i=1}^{n}(\mathrm{Re}(K_{T_{n}}(x,Z_{i}))(Y_{i}-m(x)) )^{2}\overset{p}{\rightarrow}\mathrm{E}((\mathrm{Re}(K_{T_{n}}(x,Z))(Y-m(x) ))^{2}).\]
For the third assertion at (S.41), we note that, for any \(\varepsilon>0\) and \(\delta>0\) in (B4),
\[P\bigg{(}n^{-1/2}\max_{1\leq i\leq n}|\mathrm{Re}(K_{T_{n}}(x,Z_ {i}))(Y_{i}-m(x))|>\varepsilon\sqrt{\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z))( Y-m(x)))}\bigg{)}\] \[\leq(\mathrm{const.})\frac{\mathrm{E}(|\mathrm{Re}(K_{T_{n}}(x,Z ))(Y-m(x))|^{2+\delta})}{n^{\delta/2}(\mathrm{Var}(\mathrm{Re}(K_{T_{n}}(x,Z)) (Y-m(x))))^{1+\delta/2}}\] \[\to 0,\]
where the limit follows similarly as in the proof of Theorem 3. Now, Theorem 2.1 in Hjort et al. (2009) gives the desired result.
|
2305.18869 | Dissecting Chain-of-Thought: Compositionality through In-Context
Filtering and Learning | Chain-of-thought (CoT) is a method that enables language models to handle
complex reasoning tasks by decomposing them into simpler steps. Despite its
success, the underlying mechanics of CoT are not yet fully understood. In an
attempt to shed light on this, our study investigates the impact of CoT on the
ability of transformers to in-context learn a simple to study, yet general
family of compositional functions: multi-layer perceptrons (MLPs). In this
setting, we find that the success of CoT can be attributed to breaking down
in-context learning of a compositional function into two distinct phases:
focusing on and filtering data related to each step of the composition and
in-context learning the single-step composition function. Through both
experimental and theoretical evidence, we demonstrate how CoT significantly
reduces the sample complexity of in-context learning (ICL) and facilitates the
learning of complex functions that non-CoT methods struggle with. Furthermore,
we illustrate how transformers can transition from vanilla in-context learning
to mastering a compositional function with CoT by simply incorporating
additional layers that perform the necessary data-filtering for CoT via the
attention mechanism. In addition to these test-time benefits, we show CoT helps
accelerate pretraining by learning shortcuts to represent complex functions and
filtering plays an important role in this process. These findings collectively
provide insights into the mechanics of CoT, inviting further investigation of
its role in complex reasoning tasks. | Yingcong Li, Kartik Sreenivasan, Angeliki Giannou, Dimitris Papailiopoulos, Samet Oymak | 2023-05-30T09:02:00Z | http://arxiv.org/abs/2305.18869v2 | # Dissecting Chain-of-Thought: A Study on Compositional In-Context Learning of MLPs
###### Abstract
Chain-of-thought (CoT) is a method that enables language models to handle complex reasoning tasks by decomposing them into simpler steps. Despite its success, the underlying mechanics of CoT are not yet fully understood. In an attempt to shed light on this, our study investigates the impact of CoT on the ability of transformers to in-context learn a simple to study, yet general family of compositional functions: multi-layer perceptrons (MLPs). In this setting, we reveal that the success of CoT can be attributed to breaking down in-context learning of a compositional function into two distinct phases: focusing on data related to each step of the composition and in-context learning the single-step composition function. Through both experimental and theoretical evidence, we demonstrate how CoT significantly reduces the sample complexity of in-context learning (ICL) and facilitates the learning of complex functions that non-CoT methods struggle with. Furthermore, we illustrate how transformers can transition from vanilla in-context learning to mastering a compositional function with CoT by simply incorporating an additional layer that performs the necessary filtering for CoT via the attention mechanism. In addition to these test-time benefits, we highlight how CoT accelerates pretraining by learning shortcuts to represent complex functions and how filtering plays an important role in pretraining. These findings collectively provide insights into the mechanics of CoT, inviting further investigation of its role in complex reasoning tasks.
## 1 Introduction
The advent of transformers (Vaswani et al., 2017) has revolutionized natural language processing, paving the way for remarkable performance in a wide array of tasks. LLMs, such as GPTs (Brown et al., 2020), have demonstrated an unparalleled ability to capture and leverage vast amounts of data, thereby facilitating near human-level performance across a variety of language generation tasks. Despite this success, a deep understanding of their underlying mechanisms remains elusive.
Chain-of-thought prompting (Wei et al., 2022c) is an emergent ability of transformers where the model solves a complex problem (Wei et al., 2022b) by decomposing it into intermediate steps. Intuitively, this ability underlies the capacity of general-purpose language models to accomplish previously-unseen complex tasks by leveraging more basic skills acquired during the pretraining phase.
Compositional learning and CoT have enjoyed significant recent success in practical language modeling tasks spanning question answering, code generation, and mathematical reasoning (Perez et al., 2021; Imani et al., 2023; Yuan et al., 2023). In this work, we attempt to demystify some of the mechanics underlying this success and the benefits of CoT in terms of sample complexity and approximation ability. To do this, we explore the role of CoT in in-context learning of multi-layer perceptrons (MLPs), which we believe can lead to a first set of insightful observations. Throughout, we ask:
_Does CoT improve in-context learning of MLPs, and what are the underlying mechanics?_
**Contributions:** As our central contribution, we establish a rigorous and experimentally supported abstraction that decouples CoT prompting into a _filtering phase_ and an _in-context learning (ICL) phase_. In the _filtering phase_, the model attends to the relevant tokens within the prompt based on an instruction and suppresses the irrelevant ones. In the _ICL phase_, the model runs inference on the filtered prompt to output a _step_. The model then moves to the next _step_ in the chain. How a transformer architecture can actually realize this process is formalized in Theorem 1 for MLPs.
Building on this, we identify and thoroughly compare three schemes, as illustrated in Figure 1: (a) ICL: in-context learning from input-output pairs provided in the prompt; (b) CoT-I: examples in the prompt are augmented with intermediate steps; (c) CoT-I/O: the model also outputs intermediate steps during prediction. Our main contributions are:
* **Sample complexity and approximation benefits of CoT:** We show that, by decomposing the prediction into steps, CoT-I/O can in-context learn a 2-layer MLP with \(d\)-dimensional input and \(k\) hidden neurons from \(\mathcal{O}(\max(d,k))\) in-context samples, in contrast to the \(\Omega(kd)\) lower bound without step-augmented prompt. In line with theory, our experiments (e.g. Figs. 2&3) identify a striking universality phenomenon (as \(k\) varies) and also demonstrate clear approximation benefits of CoT compared to vanilla ICL.
* **Accelerated pretraining via learning shortcuts:** We construct deep linear MLPs where each layer is chosen from a discrete set of matrices. This is in contrast to the above, where MLP weights can be arbitrary. We show that CoT can dramatically accelerate pretraining by memorizing these discrete matrices and can infer all layers correctly from a _single_ demonstration. Notably, the pretraining loss goes to zero step-by-step where each step _"learns to filter a layer"_. Together, these showcase how CoT identifies composable shortcuts to avoid the need for solving linear regression. In contrast, we show that ICL (without CoT) collapses to the linear regression performance as it fails to memorize exponentially many candidates (due to lack of composition).
The paper is organized as follows. In Section 2, we introduce the problem setup and preliminaries. Section 3 provides an empirical investigation of CoT with 2-layer MLPs and states our main theoretical results. Section 4 presents holistic experiments on the sample complexity and approximation benefits of CoT. Finally, we elucidate the benefits of CoT during pretraining via experiments on deep linear MLPs in Section 4.3. Related work and discussion are provided in Sections 5 and 6.
Figure 1: An illustration of ICL, CoT-I and CoT-I/O methods, using a 3-layer MLP as an example (top left, where \(\mathbf{x}\), \(\mathbf{y}\), \(\mathbf{s}^{1}\), \(\mathbf{s}^{2}\) denote input, output and hidden features respectively). The ICL method utilizes in-context examples in the form of \((\mathbf{x},\mathbf{y})\) and makes predictions directly based on the provided \(\mathbf{x}_{\text{test}}\). Both CoT-I and CoT-I/O methods admit prompts with samples formed by \((\mathbf{x},\mathbf{s}^{1},\mathbf{s}^{2},\mathbf{y})\). However, CoT-I/O uniquely makes recurrent predictions by re-inputting the intermediate output (as shown on the right). The performance of these methods is shown on the bottom left, with a more detailed discussion available in Sections 2 and 4.2.
## 2 Preliminaries and Setup
We denote the set \(\{1,2,\ldots,n\}\) as \([n]\). Vectors and matrices are represented in bold text (e.g., \(\mathbf{x}\), \(\mathbf{A}\)), while scalars are denoted in plain text (e.g., \(y\)). The input and output domains are symbolized as \(\mathcal{X}\) and \(\mathcal{Y}\) respectively (unless specified otherwise), and \(\mathbf{x}\in\mathcal{X}\), \(\mathbf{y}\in\mathcal{Y}\) denote the input and output.
### In-context Learning
Following the study by Garg et al. (2022), the fundamental problem of vanilla in-context learning (ICL) involves constructing a prompt with input-output pairs in the following manner:
\[\mathbf{p}_{n}(f)=(\mathbf{x}_{i},\mathbf{y}_{i})_{i=1}^{n}\quad\text{where}\quad\mathbf{y}_{ i}=f(\mathbf{x}_{i}),\] (P-ICL)
where the transition function \(f\in\mathcal{F}:\mathcal{X}\rightarrow\mathcal{Y}\) remains constant within a single prompt but can vary across prompts, and the subscript \(n\) signifies the number of in-context samples contained in the prompt. Considering language translation as an example, \(f\) is identified as the target language, and the prompt can be defined as \(\mathbf{p}(\texttt{Spanish})\) = ((_apple, manzana_), (_ball, pelota_), \(\ldots\)) or \(\mathbf{p}(\texttt{French})\) = ((_cat, chat_), (_flower, fleur_), \(\ldots\)). Let TF denote any auto-regressive model (e.g., Decoder-only Transformer). The aim of in-context learning is to learn a model that can accurately predict the output, given a prompt \(\mathbf{p}\) and the test input \(\mathbf{x}_{\text{test}}\), as shown in the following equation:
\[\text{TF}\big{(}\mathbf{p}_{n}(\tilde{f}),\mathbf{x}_{\text{test}}\big{)}\approx\tilde {f}(\mathbf{x}_{\text{test}}) \tag{2.1}\]
where \(\tilde{f}\in\mathcal{F}\) is the test function which may differ from the functions used during training. Previous work (Zhou et al., 2022; Li et al., 2023) has demonstrated that longer prompts (containing more examples \(n\)) typically enhance the performance of the model.
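To make the prompt format concrete, the short sketch below assembles a (P-ICL)-style sequence for a toy regression task, interleaving inputs and outputs as rows of one token array. The zero-padding of scalar outputs to a common token width and the helper name `build_icl_prompt` are our own illustrative choices, not the exact tokenization used in the experiments of Garg et al. (2022).

```python
import numpy as np

rng = np.random.default_rng(0)

def build_icl_prompt(f, xs, token_dim):
    """Stack (x_1, y_1, ..., x_n, y_n) as rows of a single token sequence.

    Inputs and outputs are zero-padded to `token_dim` so they can share one
    sequence, mimicking how such prompts are fed to a decoder-only model.
    """
    tokens = []
    for x in xs:
        y = np.atleast_1d(f(x))
        tokens.append(np.pad(x, (0, token_dim - x.size)))
        tokens.append(np.pad(y, (0, token_dim - y.size)))
    return np.stack(tokens)                     # shape: (2 n, token_dim)

# toy target function f(x) = w^T x, drawn freshly for this prompt
d, n = 5, 8
w = rng.normal(size=d)
f = lambda x: w @ x

xs = rng.normal(size=(n, d))
prompt = build_icl_prompt(f, xs, token_dim=d)
x_test = rng.normal(size=d)
print(prompt.shape)   # (16, 5); the model would predict f(x_test) after appending x_test
```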
### Chain-of-thought Prompt and Prediction
As defined in (P-ICL), the prompt in vanilla ICL only contains input-output pairs of the target function. This demands that the model learns the function \(f\in\mathcal{F}\) in one go, which becomes more challenging as \(\mathcal{F}\) grows more complex, since larger models and increased prompt length (\(n\)) are needed to make correct predictions (as depicted by the green curves in Figures 4 and 5). Existing studies on chain-of-thought methods (Wei et al., 2022c) observed that prompts containing step-by-step instructions assist the model in decomposing the function and making better predictions. Specifically, consider a function composed of \(L\) subfunctions, represented as \(\tilde{f}:=f_{L}\circ f_{L-1}\circ\ldots f_{1}\). Each intermediate output can be viewed as a step, enabling us to define a length-\(n\) CoT prompt related to
Figure 3: We decouple the composed risk of predicting 2-layer MLPs into risks of individual layers.
Figure 2: Solving 2-layer MLPs with varying input dimension \(d\) and hidden neuron size \(k\).
\(f\) with \(L\) steps (expressed with \(\mathbf{s}^{\ell},\ell\in[L]\)) as follows:
\[\mathbf{p}_{n}(f)=(\mathbf{x}_{i},\mathbf{s}_{i}^{1},\cdots\mathbf{s}_{i}^{L-1},\mathbf{s}_{i}^{L})_{ i=1}^{n}\quad\text{where}\quad\mathbf{s}_{i}^{\ell}=f_{\ell}(\mathbf{s}_{i}^{\ell-1}),\; \ell\in[L].\] (P-CoT)
Here \(\mathbf{x}_{i}=\mathbf{s}_{i}^{0}\), \(\mathbf{y}_{i}=\mathbf{s}_{i}^{L}\) and \(f_{\ell}\in\mathcal{F}_{\ell}\), which implies that \(f\in\mathcal{F}_{L}\times\cdots\mathcal{F}_{1}:=\mathcal{F}\).
Next we introduce two methodologies for making predictions within the CoT framework:
**CoT over input only (CoT-I).** In contrast to ICL, CoT-I provides step-by-step instructions as inputs; nonetheless, the prediction of the final output is still performed in one shot. Our experiments indicate that this approach lowers the sample complexity for TF to comprehend the function \(\tilde{f}\) being learned (refer to the orange curves in Figures 4&5). The CoT-I prediction aligns with Eq. (2.1), but the prompt is determined by (P-CoT).
One-shot prediction: \(\texttt{TF}(\mathbf{p}_{n}(\tilde{f}),\mathbf{x}_{\text{test}})\approx\tilde{f}(\bm {x}_{\text{test}})\).
**CoT over both input and output (CoT-I/O).** Despite the fact that CoT-I improves the sample complexity of learning \(\tilde{f}\), the TF must still possess the capacity to approximate functions from the function class \(\mathcal{F}\), given that the prediction is made in one shot. To mitigate this challenge, we consider a scenario where, in addition to implementing a CoT prompt, we also carry out CoT predictions. Specifically, for a composed problem with inputs formed via (P-CoT), the model recurrently makes \(L\)-step predictions as outlined below:
\[\text{Step 1: }\texttt{TF}(\mathbf{p}_{n}(\tilde{f}),\mathbf{x}_{\text{test}}):=\hat{\mathbf{s}}^{1}\] \[\text{Step 2: }\texttt{TF}(\mathbf{p}_{n}(\tilde{f}),\mathbf{x}_{\text{test}},\hat{\mathbf{s}}^{1}):=\hat{\mathbf{s}}^{2}\] \[\vdots\] \[\text{Step }L\text{: }\texttt{TF}(\mathbf{p}_{n}(\tilde{f}),\mathbf{x}_{\text{test}},\hat{\mathbf{s}}^{1},\cdots,\hat{\mathbf{s}}^{L-1})\approx\tilde{f}(\mathbf{x}_{\text{test}}), \tag{2.3}\]
where at each step, the model outputs an intermediate step (\(\hat{\mathbf{s}}^{\ell}\)) which is then fed back to the input sequence to facilitate the next-step prediction (\(\hat{\mathbf{s}}^{\ell+1}\)). Following this strategy, the model only needs to learn the union of the sub-function sets, \(\bigcup_{\ell=1}^{L}\mathcal{F}_{\ell}\), whose complexity scales linearly with the number of steps \(L\). Empirical evidence of the benefits of CoT-I/O over ICL and CoT-I in enhancing sample efficiency and model expressivity is reflected in the blue curves shown in Figures 4 and 5.
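The recurrence in Eq. (2.3) amounts to a simple loop around any next-token predictor: each predicted step is appended to the sequence before the next call. The sketch below only illustrates this control flow; `model` is a placeholder callable standing in for a trained decoder-only transformer.

```python
import numpy as np

def cot_io_predict(model, prompt, x_test, num_steps):
    """CoT-I/O inference: predict the L intermediate steps recurrently.

    `model(seq)` is assumed to map a token sequence to the next token; in
    practice this would be a trained decoder-only transformer.
    """
    seq = np.vstack([prompt, x_test[None, :]])
    outputs = []
    for _ in range(num_steps):
        s_hat = model(seq)                       # next-step prediction \hat{s}^{ell}
        outputs.append(s_hat)
        seq = np.vstack([seq, s_hat[None, :]])   # feed the step back into the prompt
    return outputs                               # outputs[-1] approximates f(x_test)

# placeholder model: returns a zero token of the right width (illustration only)
token_dim = 5
dummy_model = lambda seq: np.zeros(token_dim)
prompt = np.zeros((8, token_dim))
x_test = np.ones(token_dim)
steps = cot_io_predict(dummy_model, prompt, x_test, num_steps=3)
print(len(steps), steps[-1].shape)   # 3 (5,)
```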
## 3 Empirical and Theoretical Perspectives on CoT
In this section, we begin by examining the performance of CoT-I/O when learning 2-layer MLPs with input dimension \(d\) and hidden dimension \(k\). Our experimentation indicates that CoT-I/O necessitates only \(O(\max(d,k))\) in-context samples. Subsequently, in Section 3.2, we present our theoretical findings that demonstrate how CoT-I/O can execute filtering over the CoT prompt, thereby learning a 2-layer MLP akin to solving \(k\) \(d\)-dimensional ReLU problems and one \(k\)-dimensional linear regression problem.
### Empirical Exploration of 2-layer MLPs
To investigate how MLP architecture impacts CoT-I/O performance, we train 2-layer MLPs with varying input dimensions (\(d\)) and hidden layer sizes (\(k\)). The results are presented in Figures 2 and 3. Here, \(\mathbf{x}\), \(\mathbf{s},y\) represent input, hidden state, and output respectively. Detailed information on the implementation is deferred to Section 4.2. All experiments utilize a small GPT-2 model for training1.
Footnote 1: Our code is available at [https://github.com/yingcong-li/Dissecting-CoT](https://github.com/yingcong-li/Dissecting-CoT).
**CoT-I/O performance is agnostic to \(k\) when \(k\leq d\) (Figure 2).** In Fig. 2(a), we train MLPs with \(d=10,20\) and \(k=4,8,16\). Solid and dashed curves represent the CoT-I/O test risk of \(d=10\) and \(20\) respectively for varying in-context samples. The results indicate that an increase in \(d\) amplifies the number of samples needed for in-context learning, while the performance remains unaffected by changes in \(k\in\{4,8,16\}\). To further scrutinize the impact of \(d\) on CoT-I/O accuracy, in Fig. 2(b), we adjust the horizontal axis by dividing it by the input dimension \(d\), and superimpose both \(d=10,k=16\) (blue solid) and \(d=20,k=16\) (orange dashed) results. This alignment of the two curves implies that the in-context sample complexity of CoT-I/O is linearly dependent on \(d\).
**Large \(k\) dominates CoT-I/O performance (Figure 3).** We further investigate the circumstances under which \(k\) begins to govern the CoT-I/O performance. In Figure 3(a), we replicate the same experiments with \(d=10\), but train with wider MLPs (\(k=64\)). Blue, orange and green curves represent results for \(k=4,16,64\) respectively. Since the hidden dimension \(k=64\) is larger, learning the second layer requires more hidden features (\(\mathbf{s}\)), thus \(N=100\) in-context samples (providing \(100\)\(\mathbf{s}\)s) are insufficient to fully restore the second layer, leading to performance gaps between \(k=4,16\) and \(k=64\). To quantify the existing gaps, we conduct single-step evaluations for both the first and the second layers, with the results shown in Figures 3(b) and 3(c). Specifically, let \(\mathbf{p}_{n}(\tilde{f})\) be a test prompt containing \(n\) in-context samples where \(\tilde{f}\) represents any arbitrary 2-layer MLP. Given a test sample \((\mathbf{x}_{\text{test}},\mathbf{s}_{\text{test}},y_{\text{test}})\), the layer predictions are performed as follows.
1st layer prediction: \[\texttt{TF}(\mathbf{p}_{n}(\tilde{f}),\mathbf{x}_{\text{test}}):=\hat{\mathbf{s}},\]
2nd layer prediction: \[\texttt{TF}(\mathbf{p}_{n}(\tilde{f}),\mathbf{x}_{\text{test}},\mathbf{s}_{\text{test}}) :=\hat{y}.\]
The test risks are calculated by \(\|\hat{\mathbf{s}}-\mathbf{s}_{\text{test}}\|^{2}\) and \((\hat{y}-y_{\text{test}})^{2}\). The risks illustrated in the figures are normalized for comparability (refer to Section 4.2 and appendix for more details). Evidence from Fig. 3(b) and 3(c) shows that while increasing \(k\) does not affect the first layer's prediction, it does augment the number of samples required to learn the second layer. Moreover, by plotting the first layer risks of \(k=4,16\) (blue/orange dotted) and second layer risk of \(k=64\) (green dashed) in Fig. 3(a), we can see that they align with the CoT-I/O composed risks. This substantiates the hypothesis that CoT-I/O learns 2-layer MLP through compositional learning of separate layers.
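For reference, the normalization used in such comparisons can be written as a small helper, assuming (consistently with footnote 2) that each risk is divided by the mean squared norm of its targets so that layers of different dimension share a common scale; the function name and toy data below are ours.

```python
import numpy as np

def normalized_risk(preds, targets):
    """Mean squared error divided by the mean squared norm of the targets,
    so that a trivial zero predictor scores 1.0 regardless of dimension."""
    preds, targets = np.atleast_2d(preds), np.atleast_2d(targets)
    mse = np.mean(np.sum((preds - targets) ** 2, axis=1))
    scale = np.mean(np.sum(targets ** 2, axis=1))
    return mse / scale

# e.g. first-layer risk from hidden-state predictions, second-layer from scalar outputs
rng = np.random.default_rng(1)
s_true, s_hat = rng.normal(size=(100, 16)), rng.normal(size=(100, 16))
y_true, y_hat = rng.normal(size=(100, 1)), rng.normal(size=(100, 1))
print(normalized_risk(s_hat, s_true), normalized_risk(y_hat, y_true))
```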
### Provable Approximation of MLPs via Chain-of-Thought
The observations we made in Section 3.1 are indicative of the model processing each one of the two layers sequentially. Now in this section, we state our main contribution of establishing a result that decouples CoT-based in-context learning (CoT-I/O) into two phases: (1) _Filtering Phase:_ Given a prompt that contains features of multiple MLP layers, retrieve only the features related to a target layer to create a _homogeneous_ prompt. (2) _ICL Phase:_ Given the filtered prompt, learn the target layer weights through gradient descent. Combining these two phases, and looping over all layers, we will show that there exists a transformer architecture such that CoT-I/O can provably approximate a multilayer MLP up to a given resolution. To state our result, we assume access to an oracle that performs linear regression and consider the condition number of the data matrix.
**Definition 1** (MLP and condition number): _Consider a multilayer MLP defined by the recursion \(\mathbf{s}_{i}^{\ell}=\phi(\mathbf{W}_{\ell}\mathbf{s}_{i}^{\ell-1})\) for \(\ell\in[L]\), \(i\in[n]\) and \(\mathbf{s}_{i}^{0}=\mathbf{x}_{i}\). Here \(\phi(x)=\max(\alpha x,x)\) is a Leaky-ReLU activation with \(1\geq\alpha>0\). Define the feature matrix \(\mathbf{T}_{\ell}=[\mathbf{s}_{1}^{\ell}\ \dots\ \mathbf{s}_{n}^{\ell}]^{\top}\) and define its condition number \(\kappa_{\ell}=\sigma_{\max}(\mathbf{T}_{\ell})/\sigma_{\min}(\mathbf{T}_{\ell})\) (with \(\sigma_{\min}:=0\) for fat matrices) and \(\kappa_{\max}=\max_{0\leq\ell<L}\kappa_{\ell}\)._
**Assumption 1** (Oracle Model): _We assume access to a transformer \(\texttt{TF}_{\texttt{LR}}\) which can run \(T\) steps of gradient descent on the quadratic loss \(\mathcal{L}(\mathbf{w})=\sum_{i=1}^{n}(y_{i}-\mathbf{w}^{\top}\mathbf{x}_{i})^{2}\) given a prompt of the form \((\mathbf{x}_{1},y_{1},\dots,\mathbf{x}_{n},y_{n})\)._
We remark that this assumption is realistic and has been formally established by earlier work (Giannou et al., 2023; Akyurek et al., 2022). Our CoT abstraction builds on these to demonstrate that CoT-I/O can call a blackbox TF model to implement a compositional function when combined with filtering.
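As a point of reference for Assumption 1, the computation the oracle \(\texttt{TF}_{\texttt{LR}}\) is assumed to emulate is plain gradient descent on the in-context least-squares objective. A minimal numpy sketch follows; the conservative step-size choice is ours and is not part of the construction itself.

```python
import numpy as np

def gd_on_quadratic(X, y, T, lr=None):
    """Run T steps of gradient descent on L(w) = sum_i (y_i - w^T x_i)^2."""
    n, d = X.shape
    if lr is None:
        # conservative step size: 1 / (2 * largest eigenvalue of X^T X)
        lr = 1.0 / (2 * np.linalg.norm(X, ord=2) ** 2)
    w = np.zeros(d)
    for _ in range(T):
        grad = -2 * X.T @ (y - X @ w)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_star = rng.normal(size=10)
y = X @ w_star
w_hat = gd_on_quadratic(X, y, T=500)
print(np.linalg.norm(w_hat - w_star))   # shrinks toward 0 as T grows
```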
The following result summarizes our main theoretical contribution. The precise statement is deferred to the supplementary material.
**Theorem 1** (Decoupling CoT): _Consider a prompt \(\mathbf{p}_{n}(f)\) generated from an \(L\)-layer MLP \(f(\cdot)\) as described in Definition 1, and assume a given test example \((\mathbf{x}_{\text{test}},\mathbf{s}_{\text{test}}^{1},\dots\mathbf{s}_{\text{test}}^{L})\). For any resolution \(\epsilon>0\), there exists \(\delta=\delta(\epsilon)\), iteration choice \(T=\mathcal{O}(\kappa_{\max}^{2}\log(1/\epsilon))\), and a backend transformer construction \(\texttt{TF}_{\texttt{BE}}\) such that the concatenated transformer \(\texttt{TF}=\texttt{TF}_{\texttt{LR}}\circ\texttt{TF}_{\texttt{BE}}\) implements the following: Let \((\hat{\mathbf{s}}^{i})^{\ell-1}_{i=1}\) denote the first \(\ell-1\) CoT-I/O outputs of TF and set \(\mathbf{p}[\ell]=(\mathbf{p}_{n}(f),\mathbf{x}_{\text{test}},\hat{\mathbf{s}}^{1}\dots\hat{\mathbf{s}}^{\ell-1})\). At step \(\ell\), TF implements_
1. _Filtering._ _Define the filtered prompt with input/output features of layer_ \(\ell\)_,_ \[\mathbf{p}_{n}^{\text{filter}}=\begin{pmatrix}\dots\mathbf{0},\ \mathbf{s}_{1}^{\ell-1},\ \mathbf{0}&\dots&\mathbf{0},\ \mathbf{s}_{n}^{\ell-1},\ \mathbf{0}&\dots&\mathbf{0},\ \hat{\mathbf{s}}^{\ell-1}\\ \dots\mathbf{0},\ \mathbf{0},\ \mathbf{s}_{1}^{\ell}&\dots&\mathbf{0},\ \mathbf{0},\ \mathbf{s}_{n}^{\ell}&\dots&\mathbf{0},\ \mathbf{0}\end{pmatrix}.\]
_There exists a fixed projection matrix_ \(\mathbf{\Pi}\) _that applies individually on tokens such that the backend output obeys_ \(\|\mathbf{\Pi}(\texttt{TF}_{\texttt{BE}}(\mathbf{p}[\ell]))-\mathbf{p}_{n}^{ \texttt{filter}}\|\leq\delta\)_._
2. _Gradient descent._ _The combined model obeys_ \(\|\texttt{TF}(\mathbf{p}[\ell])-\mathbf{s}_{\text{test}}^{\ell}\|\leq\ell\cdot\epsilon/L\)_._
\(\texttt{TF}_{\texttt{BE}}\) _has a constant number of layers independent of_ \(T\) _and_ \(n\)_. Consequently, after_ \(L\) _rounds of CoT-I/O, TF outputs_ \(f(\mathbf{x}_{\text{test}})\) _up to_ \(\epsilon\) _accuracy._
**Remark 1**: _Note that this result effectively shows that, with a sufficiently good blackbox transformer \(\texttt{TF}_{\texttt{LR}}\) (per Assumption 1), CoT-I/O can learn an \(L\)-layer MLP using in-context sample size \(n>\max_{\ell\in[L]}d_{\ell}\), where \(d_{\ell}\) is the input dimension of the \(\ell\)th layer. This assumes the condition number \(\kappa_{\max}\) of the problem is finite, which holds as soon as all layers become over-determined. Consequently, CoT-I/O needs \(\max(k,d)\) sample complexity to learn a two-layer MLP. This provides a formal justification for the observation that empirical CoT-I/O performance is agnostic to \(k\) as long as \(k\leq d\)._
We provide the filtering statements in the Appendix, and the key components of our construction are the following: (i) Inputs are projected through the embedding layer in which a set of encodings, an enumeration of the tokens (\(1,2,\ldots,N\)), an enumeration of the layers (\(1,2,\ldots,L\)) and an identifier for each layer already predicted are all attached. Notice that this "modification" to the input only depends on the sequence length and is agnostic to the token to-be-predicted. This allows for an automated looping over \(L\) predictions. (ii) We use this information to extract the sequence length \(N\) and the current layer \(\ell\) to-be-predicted. (iii) With these at hand we construct an 'if - then' type of function using the ReLU layers to filter out the samples that are not needed for the prediction.
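Point (iii) can be illustrated with a tiny numpy example: for integer layer encodings, an "equals-\(\ell\)" gate can be written purely with ReLUs and linear maps, and multiplying each token by this gate zeroes out everything that does not belong to the target layer. This is a simplified stand-in for the construction in the appendix, not the construction itself; names and shapes below are illustrative.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def keep_if_layer(tokens, layer_ids, target_layer):
    """Zero out every token whose layer encoding differs from `target_layer`.

    For integer encodings, relu(1 - |i - l|) equals 1 iff i == l and 0 otherwise,
    and |i - l| itself is relu(i - l) + relu(l - i), so the whole gate uses
    nothing beyond ReLUs and linear maps.
    """
    dist = relu(layer_ids - target_layer) + relu(target_layer - layer_ids)
    gate = relu(1.0 - dist)                 # 1 for the target layer, 0 elsewhere
    return tokens * gate[:, None]

tokens = np.arange(12, dtype=float).reshape(6, 2)     # 6 tokens of width 2
layer_ids = np.array([0, 1, 2, 0, 1, 2], dtype=float)
print(keep_if_layer(tokens, layer_ids, target_layer=1))
```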
## 4 Experimental Results
### Model Training
In Figure 1 and Section 2, we have discussed vanilla ICL, CoT-I and CoT-I/O methods. Intuitively, ICL can be viewed as a special case of CoT-I (or CoT-I/O) if we assume only one step is performed. Consequently, we will focus on implementing CoT-I and CoT-I/O for model training in the following.
Consider the CoT prompt as in (P-CoT), and assume that \(\mathbf{x}\sim\mathcal{D}_{\mathcal{X}}\), and \(f_{\ell}\sim\mathcal{D}_{\ell},\ell\in[L]\), where \(L\) denotes the number of compositions/steps, such that the final prediction should approximate \(f(\mathbf{x})=f_{L}(f_{L-1}\ldots f_{1}(\mathbf{x})):=\mathbf{y}\in\mathcal{Y}\). We define \(\ell(\hat{\mathbf{y}},\mathbf{y}):\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}\) as a loss function. For simplicity, we assume \(f_{\ell}(\ldots f_{1}(\mathbf{x}))\in\mathcal{Y}\), \(\ell\in[L]\). Let \(N\) represent the in-context window of TF, which implies that TF can only admit a prompt containing up to \(N\) in-context samples. Generally, our goal is to ensure high prediction performance given any length-\(n\) prompt, where \(n\in[N]\). To this end, we train the model using prompts with lengths from \(1\) to \(N\) sampled equally, and aim to minimize the averaged risk over different prompt sizes. Assuming the model TF is parameterized by \(\mathbf{\theta}\) and considering a meta-learning problem, the objective functions for CoT-I and CoT-I/O are defined as follows.
\[\hat{\mathbf{\theta}}^{\text{CoT-I}}=\arg\min_{\mathbf{\theta}}\mathbb{E}_{(\mathbf{x}_{n })_{n=1}^{N},(f_{\ell})_{\ell=1}^{L}}\left[\frac{1}{N}\sum_{n=1}^{N}\ell(\hat{ \mathbf{y}}_{n},f(\mathbf{x}_{n}))\right]\ \ \text{where}\ \ \hat{\mathbf{y}}_{n}=\texttt{TF}(\mathbf{p}_{n}(f),\mathbf{x}_{n})\]
and
\[\hat{\mathbf{\theta}}^{\text{CoT-I/O}}=\arg\min_{\mathbf{\theta}}\mathbb{E}_{(\mathbf{x}_{ n})_{n=1}^{N},(f_{\ell})_{\ell=1}^{L}}\left[\frac{1}{NL}\sum_{n=1}^{N}\sum_{\ell=1}^{L} \ell(\hat{\mathbf{s}}_{n}^{\ell},\mathbf{s}_{n}^{\ell})\right]\ \ \text{where}\ \ \hat{\mathbf{s}}_{n}^{\ell}=\texttt{TF}(\mathbf{p}_{n}(f),\mathbf{x}_{n}\cdots\mathbf{s}_{n}^{ \ell-1}).\]
Here \(\mathbf{p}_{n}(f)\) is given by (P-CoT), and as mentioned previously, \(\mathbf{s}^{0}=\mathbf{x}\) and \(\mathbf{s}^{L}=\mathbf{y}\). All \(\mathbf{x}\) and \(f_{\ell}\) are independent, and we take the expectation of the risk over their respective distributions.
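In implementation terms, the CoT-I/O objective reduces to a double average of per-step squared errors over prompt lengths \(n\) and chain positions \(\ell\). The sketch below assumes the model's teacher-forced predictions for every prefix have already been collected into one array, which is how decoder-only training is usually organized; the array shapes are illustrative.

```python
import numpy as np

def cot_io_loss(pred_steps, true_steps):
    """Average squared error over prompt lengths n = 1..N and steps l = 1..L.

    pred_steps, true_steps: arrays of shape (N, L, dim) holding the model's
    teacher-forced prediction and the ground truth for step l of sample n.
    """
    per_step = np.sum((pred_steps - true_steps) ** 2, axis=-1)   # shape (N, L)
    return per_step.mean()                                       # 1/(N L) sum_n sum_l

# toy check with random tensors standing in for model outputs
rng = np.random.default_rng(0)
N, L, dim = 20, 2, 8
print(cot_io_loss(rng.normal(size=(N, L, dim)), rng.normal(size=(N, L, dim))))
```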
### 2-layer Random MLPs
For a clear exposition, we first focus on the case of two layer MLPs which are 2-step tasks, and compare three different methods: ICL, CoT-I and CoT-I/O. Comparison results are shown in Figures 4 and 5.
**Dataset.** Consider 2-layer MLPs with input \(\mathbf{x}\in\mathbb{R}^{d}\), hidden feature (step-1 output) \(\mathbf{s}\in\mathbb{R}^{k}\), and output \(y\in\mathbb{R}\). Here, \(\mathbf{s}=f_{1}(\mathbf{x}):=(\mathbf{W}\mathbf{x})_{+}\) and \(y=f_{2}(\mathbf{s}):=\mathbf{v}^{\top}\mathbf{s}\), with \(\mathbf{W}\in\mathbb{R}^{k\times d}\) and
\(\mathbf{v}\in\mathbb{R}^{k}\) being the parameters of the first and second layer/sub-function, respectively, and \((x)_{+}=\max(x,0)\) being the ReLU activation. The function is composed as \(y=\mathbf{v}^{\top}(\mathbf{W}\mathbf{x})_{+}\). We define the function distributions as follows: each entry of \(\mathbf{W}\) is sampled via \(\mathbf{W}_{ij}\sim\mathcal{N}(0,\frac{2}{k})\), and \(\mathbf{v}\sim\mathcal{N}(0,\mathbf{I}_{k})\), with inputs being randomly sampled through \(\mathbf{x}\sim\mathcal{N}(0,\mathbf{I}_{d})\)2. We apply the quadratic loss in our experiments. To avoid the implicit bias due to distribution shift, both training and test datasets are generated following the same strategy.
Footnote 2: Following this strategy for data generation, the expected norms of \(\mathbf{x}\), \(\mathbf{s}\) and \(y\) are equivalent, and the risk curves displayed in the figures are normalized for comparison.
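The data-generating process above translates directly into a few lines of numpy; the \(\sqrt{2/k}\) scale follows the stated variance of \(\mathbf{W}_{ij}\), and, as noted in footnote 2, it keeps the expected squared norms of \(\mathbf{x}\), \(\mathbf{s}\) and \(y\) equal so that normalized risks are comparable. Function and variable names are ours.

```python
import numpy as np

def sample_two_layer_task(d, k, n, rng):
    """Draw one random 2-layer MLP and n (x, s, y) triplets from it.

    W_ij ~ N(0, 2/k), v ~ N(0, I_k), x ~ N(0, I_d);  s = relu(W x),  y = v^T s.
    With this scaling E||x||^2 = E||s||^2 = E[y^2] = d, so the three risks
    are directly comparable after normalization.
    """
    W = rng.normal(scale=np.sqrt(2.0 / k), size=(k, d))
    v = rng.normal(size=k)
    X = rng.normal(size=(n, d))
    S = np.maximum(X @ W.T, 0.0)      # hidden features, shape (n, k)
    y = S @ v                         # outputs, shape (n,)
    return X, S, y

rng = np.random.default_rng(0)
X, S, y = sample_two_layer_task(d=10, k=8, n=100, rng=rng)
print(X.shape, S.shape, y.shape)      # (100, 10) (100, 8) (100,)
```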
**Varying model sizes (Figure 4).** We initially assess the benefits of CoT-I/O over ICL and CoT-I across different TF models. With \(d=10\) and \(k=8\) fixed, we train three different GPT-2 models: standard, small and tiny GPT-2. The small GPT-2 has \(6\) layers, \(4\) attention heads per layer and \(128\) dimensional embeddings. The standard GPT-2 consists of twice the number of layers, attention heads and embedding dimensionality compared to the small GPT-2, and tiny GPT-2, on the other hand, possesses only half of these hyperparameters compared to the small GPT-2. We evaluate the performance using prompts containing \(n\) in-context samples, where \(n\) ranges from \(1\) to \(N\) (\(N=100\)). The associated test risks are displayed in Figs. 4(b), 4(c) and 4(d). The blue, orange and green curves correspond to CoT-I/O, CoT-I and ICL, respectively. In Fig. 4(a), we present the averaged risks. The results show that using CoT-I/O, the small GPT-2 can solve 2-layer MLPs with approximately \(60\) samples, while CoT-I requires the standard GPT-2. Conversely, ICL is unable to achieve zero test risk even with the standard GPT-2 model and up to \(100\) samples. This indicates that to learn 2-layer MLPs in a single shot, ICL requires at least \(\mathcal{O}(dk+d)\) samples to restore all function parameters. Conversely, CoT-I and CoT-I/O can leverage implicit samples contained in the CoT prompt. Let \(f_{1}\in\mathcal{F}_{1}\) (first layer) and \(f_{2}\in\mathcal{F}_{2}\) (second layer). By comparing the performance of CoT-I and CoT-I/O, it becomes evident that the standard GPT-2 is capable of learning the composed function \(f=f_{2}\circ f_{1}\in\mathcal{F}\), which the small GPT-2 cannot express.
**Varying MLP widths (Figure 5).**: Next, we explore how different MLP widths impact the performance (by varying the hidden neuron size \(k\in\{4,8,16\}\)). The corresponding results are depicted in Figure 5. The blue, orange and green curves in Fig. 5(b), 5(c) and 5(d) correspond to hidden layer sizes of \(k=4\), \(8\), and \(16\), respectively. Fig. 5(a) displays the averaged risks. We keep \(d=10,\ N=100\) fixed and train with the small GPT-2 model. As discussed in Section 3, CoT-I/O can learn a 2-layer MLP using around \(60\) samples for all \(k=4,8,16\) due to its capability to deconstruct composed functions. However, CoT-I can only learn the narrow MLPs with \(k=4\), and ICL is unable to learn any of them. Moreover, we observe a substantial difference in the performance of ICL and CoT-I with varying \(k\) (e.g., see averaged risks in Fig. 5(a)). This can be explained by the
Figure 4: Comparison of the three methods for solving \(2\)-layer MLPs using different GPT-2 models.
Figure 5: Comparison of the three methods for solving \(2\)-layer MLPs with different hidden sizes.
fact that enlarging \(k\) results in more complex \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\), thus making the learning of \(\mathcal{F}=\mathcal{F}_{2}\times\mathcal{F}_{1}\) more challenging for ICL and CoT-I.
### Deep Linear MLPs
In Sections 3.1 and 4.2, we have discussed the approximation benefits of CoT-I/O and how it in-context learns 2-layer random MLPs by learning, in parallel, \(k\) \(d\)-dimensional ReLU problems and one \(k\)-dimensional linear regression problem. In this section, we investigate the capability of CoT-I/O in learning longer compositions. For brevity, we will use CoT to refer to CoT-I/O in the rest of the discussion.
**Dataset.** Consider \(L\)-layer linear MLPs with input \(\mathbf{x}\in\mathbb{R}^{d}\sim\mathcal{N}(0,\mathbf{I}_{d})\), and output generated by \(\mathbf{y}=\mathbf{W}_{L}\mathbf{W}_{L-1}\cdots\mathbf{W}_{1}\mathbf{x}\), where the \(\ell\)th layer is parameterized by \(\mathbf{W}_{\ell}\in\mathbb{R}^{d\times d}\), \(\ell\in[L]\). In this work, to better understand the emergent ability of CoT, we assume that each layer draws from the same discrete sub-function set \(\bar{\mathcal{F}}=\{\bar{\mathbf{W}}_{k}:\bar{\mathbf{W}}_{k}^{\top}\bar{\mathbf{W}}_{k}=\mathbf{I},k\in[K]\}\)3. Therefore, to learn the \(L\)-layer neural net, CoT only needs to learn \(\bar{\mathcal{F}}\) with \(|\bar{\mathcal{F}}|=K\), whereas ICL needs to learn the function set \(\bar{\mathcal{F}}^{L}\), which contains \(K^{L}\) random matrices.
Footnote 3: This assumption ensures that the norm of the feature remains constant across layers ( \(\|\mathbf{x}\|=\|\mathbf{y}\|=\|\mathbf{s}^{\ell}\|\)), enabling fair evaluation across different layers.
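A minimal sketch of this construction is given below: \(K\) random orthogonal matrices are obtained from QR factorizations of Gaussian matrices (one convenient way to satisfy the constraint in footnote 3), \(L\) of them are drawn with replacement, and all intermediate states of the chain are recorded so that CoT-\(X\) prompts can later be subsampled from them. The sampling details are our own illustrative choices.

```python
import numpy as np

def random_orthogonal_set(K, d, rng):
    """K random d x d orthogonal matrices, e.g. via QR of Gaussian matrices."""
    mats = []
    for _ in range(K):
        Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
        mats.append(Q)
    return mats

def sample_chain(mats, L, x, rng):
    """Pick L layers (with replacement) from the discrete set and record the
    full chain x -> s^1 -> ... -> s^L; norms stay constant by orthogonality."""
    states = [x]
    for idx in rng.integers(len(mats), size=L):
        states.append(mats[idx] @ states[-1])
    return np.stack(states)            # shape (L + 1, d), states[-1] is y

rng = np.random.default_rng(0)
mats = random_orthogonal_set(K=4, d=5, rng=rng)
chain = sample_chain(mats, L=6, x=rng.normal(size=5), rng=rng)
print(np.round(np.linalg.norm(chain, axis=1), 3))   # all entries equal ||x||
```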
**Composition ability of CoT (Figure 6).** Set \(d=5\), \(L=6\) and \(K=4\). At each round, we randomly select \(L\) matrices \(\mathbf{W}_{\ell}\), \(\ell\in[L]\) from \(\bar{\mathcal{F}}\) so that for any input \(\mathbf{x}\), we can form a chain
\[\mathbf{x}\rightarrow\mathbf{s}^{1}\rightarrow\mathbf{s}^{2}\cdots\rightarrow\mathbf{s}^{6}: =\mathbf{y},\]
where \(\mathbf{s}^{\ell}=\mathbf{W}_{\ell}\mathbf{s}^{\ell-1}\), \(\ell\in[L]\) and \(\mathbf{s}^{0}:=\mathbf{x}\). Let CoT-\(X\) denote the \(X\)-step CoT-I/O method. For example, an in-context sample of CoT-6 has the form \((\mathbf{x},\mathbf{s}^{1},\mathbf{s}^{2},\ldots\mathbf{s}^{5},\mathbf{y})\), which contains all the intermediate outputs from each layer, while CoT-3 and CoT-2 have prompt samples formed as \((\mathbf{x},\mathbf{s}^{2},\mathbf{s}^{4},\mathbf{y})\) and \((\mathbf{x},\mathbf{s}^{3},\mathbf{y})\), respectively. In this setting, ICL is also termed CoT-1, as its prompt contains only \((\mathbf{x},\mathbf{y})\) pairs. To solve the length-6 chain, CoT-\(X\) needs to learn a model that can remember \(4^{6/X}\) matrices. Therefore, ICL faces a significantly harder challenge since it needs to remember \(4^{6}=4,096\) matrices (all combinations of the 4 matrices used for training and testing), compared to just \(4\) for CoT-6.
We train small GPT-2 models using the CoT-2/-3/-6 and ICL methods, and present the results in Fig. 6(a). As evident from the figure, the performance curves of CoT-2 (orange), CoT-3 (green) and CoT-6 (red) overlap, and they can all make precise predictions in one shot (given an in-context example \(n=1\)). It seems that TF has effectively learned to remember up to \(64\) matrices (for CoT-2) and compose up to \(6\) layers (for CoT-6). However, ICL (blue) struggles to learn the 6-layer MLPs in one shot. The black dashed curve shows the solution for linear regression \(y=\mathbf{\beta}^{\top}\mathbf{x}\) computed directly via least squares given \(n\) random training samples, where \(\mathbf{x}\) is the input and \(y\) is from the output of the 6-layer MLPs (e.g., \(\mathbf{y}[0]\)). The test risks for \(n=1,\ldots 10\) are plotted (in Fig. 6(a)), which show that the ICL curve aligns with the least squares performance. This implies that, instead of remembering all \(4,096\) matrices, ICL solves the problem from the linear regression phase.
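The least-squares reference curve is easy to reproduce: fit \(y\approx\mathbf{\beta}^{\top}\mathbf{x}\) on the \(n\) in-context pairs and evaluate the normalized squared error on held-out inputs. The sketch below uses a fixed product of random orthogonal maps as a stand-in for one 6-layer linear MLP and, as in the text, regresses onto the first output coordinate; the specific data sizes are illustrative.

```python
import numpy as np

def least_squares_risk(X_train, y_train, X_test, y_test):
    """Fit y ~ beta^T x by least squares and return the normalized test risk."""
    beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    resid = X_test @ beta - y_test
    return np.mean(resid ** 2) / np.mean(y_test ** 2)

# stand-in data: x -> y given by a fixed 6-fold product of random orthogonal maps,
# with the first output coordinate used as the scalar regression target
rng = np.random.default_rng(0)
d = 5
A = np.eye(d)
for _ in range(6):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    A = Q @ A
X_test = rng.normal(size=(1000, d))
y_test = (X_test @ A.T)[:, 0]
for n in (1, 3, 5, 10):
    X_tr = rng.normal(size=(n, d))
    y_tr = (X_tr @ A.T)[:, 0]
    print(n, round(least_squares_risk(X_tr, y_tr, X_test, y_test), 3))
```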
Figure 6: Evaluations over deep linear MLPs using CoT-I/O and ICL where CoT-\(X\) represents the \(X\)-step CoT-I/O. Fig. 6(a) illustrates point-to-point meta results where the model is trained with substantial number of samples. In contrast, Fig. 6(b) displays the one-shot performance (with only one in-context sample provided) when making predictions during training. See Section 4.3 for further implementation details.
In addition to the meta-learning results which highlight the approximation benefits of multi-step CoT, we also investigate the convergence rate of CoT-2/-3/-6 and ICL, with results displayed in Fig. 6(b). We test the one-shot performance during training and find that CoT-6 converges fastest. This is because it has the smallest sub-function set; for the same task (e.g., a deep neural net), shortening the chain leads to slower convergence. This provides evidence that taking more steps facilitates faster and more effective learning of complex problems.
**Evidence of filtering (Figure 7).** As per Theorem 1 and the appendix, transformers can perform filtering over CoT prompts, and the results from 2-layer MLPs align with our theoretical findings. However, can we explicitly observe filtering behaviors? In Fig. 7(a), we display the results of the first 50k iterations from Fig. 6(b), and observe risk drops in CoT-6 (red) at the 15k and 25k iterations (shown as grey dotted and dashed lines). Subsequently, in Fig. 7(b), we plot the test risk of each layer prediction (by feeding the model the correct intermediate features rather than the predicted ones), where CoT-6 (red) predicts the outputs from all 6 layers (\(\mathbf{s}^{1},\cdots,\mathbf{s}^{L}\)). From these figures, we can identify risk drops when predicting different layers, which appear at either the 15k (for layers 2, 3, 4, 5, 6) or the 25k (for layer 1) iteration. This implies that the model learns to predict each step/layer function independently. Further, we test the filtering evidence of the \(\ell\)th layer by filling irrelevant positions with random features. Specifically, an in-context example is formed by
\[(\mathbf{z}^{0},\cdots,\mathbf{s}^{\ell-1},\mathbf{s}^{\ell},\mathbf{z}^{\ell+1},\dots\mathbf{z}^ {L}),\ \ \text{where}\ \ \mathbf{s}^{\ell}=\mathbf{W}_{\ell}(\mathbf{s}^{\ell-1})\ \ \text{and}\ \ \mathbf{z}\sim\mathcal{N}(0,\mathbf{I}_{d}).\]
The test risks are represented by black dotted curves in Fig. 7(b), which align precisely with the CoT-6 curves (red). This signifies that each layer's prediction concentrates solely on the corresponding intermediate steps in the prefix, while disregarding irrelevant features. This observation provides evidence that the process of filtering is indeed performed.
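The corrupted in-context examples used for this check can be generated as below: only the positions holding \(\mathbf{s}^{\ell-1}\) and \(\mathbf{s}^{\ell}\) keep their true features, while every other slot is replaced by an independent Gaussian vector. The chain array and its indexing convention are assumptions of this sketch.

```python
import numpy as np

def corrupt_except_step(chain, ell, rng):
    """Keep s^{ell-1} and s^{ell}; replace all other positions by random features.

    `chain` has shape (L + 1, d) with chain[0] = x and chain[L] = y, matching
    the in-context sample (x, s^1, ..., s^L).
    """
    noisy = rng.normal(size=chain.shape)
    noisy[ell - 1] = chain[ell - 1]      # true input of layer ell
    noisy[ell] = chain[ell]              # true output of layer ell
    return noisy

rng = np.random.default_rng(0)
chain = rng.normal(size=(7, 5))          # stand-in for (x, s^1, ..., s^6)
corrupted = corrupt_except_step(chain, ell=3, rng=rng)
print(np.allclose(corrupted[2:4], chain[2:4]))   # True: only layer-3 features survive
```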
## 5 Related Work
With the success of LLMs and prompt structure (Lester et al., 2021), there is growing interest in in-context learning (ICL) from both theoretical and experimental lenses (Garg et al., 2022; Brown et al., 2020; von Oswald et al., 2022; Dai et al., 2022; Min et al., 2022; Lyu et al., 2022; Li et al., 2023; Xie et al., 2021; Min et al., 2021; Wei et al., 2023). As an extension, chain-of-thought (CoT) prompting has led to impressive improvements in complex reasoning by decomposing it into step-by-step intermediate solutions (Wei et al., 2022; Narang et al., 2020; Lampinen et al., 2022; Wei et al., 2022; Zhou et al., 2022; Nye et al., 2021; Velickovic and Blundell, 2021; Lanchantin et al., 2023), which, in general, demonstrates the ability of transformers to solve compositional functions; the idea of learning how to compose skills has also been well studied in other literature (Sahni et al., 2017; Liska et al., 2018). More specifically, for the problem of learning shallow networks, there are several well-known hardness results (Goel et al., 2017, 2020; Zhang et al., 2019). In particular, Hahn and Goyal (2023) show a formal learnability bound which implies that compositional structure can benefit ICL. However, most of this work focuses on investigating the empirical benefits and algorithmic design of CoT, and there has been little effort to study the underlying mechanisms of CoT.
Considering the expressivity of the transformer architecture itself, Yun et al. (2019) showed that TFs are universal sequence-to-sequence approximators. More recently, Giannou et al. (2023) used an explicit construction to show that shallow TFs can be used to run general-purpose programs as long as we loop them. Other works have also shown the Turing-completeness of the TF architecture,
Figure 7: Fig. 7(a) is generated by magnifying the initial 50k iterations of Fig. 6(b), and we decouple the composed risks from predicting \(6\)-layer linear MLPs into predictions for each layer, and the results are depicted in Fig. 7(b). Additional implementation details can be found in Section 4.3.
but these typically require infinite/high precision and recursion around attention layers (Wei et al., 2022; Perez et al., 2019, 2021; Liu et al., 2022). Closer to our work, Akyurek et al. (2022) prove that a transformer with a constant number of layers can implement gradient descent for linear regression, and Giannou et al. (2023) introduce similar results by looping outputs back into inputs. In this work, we prove that CoT can be treated as a two-stage process: first, filtering the CoT prompt via a special construction, and then in-context learning over the filtered prompt.
## 6 Conclusion and Discussion
In this work, we investigate chain-of-thought prompting and shed light on how it enables compositional learning of multilayer perceptrons step by step. Specifically, we have explored and contrasted three methods, ICL, CoT-I and CoT-I/O, and found that CoT-I/O facilitates better approximation and faster convergence through looping and sample efficiency. Additionally, we empirically and theoretically demonstrated that, to learn a 2-layer MLP with \(d\)-dimensional input and \(k\) neurons, CoT-I/O requires \(\mathcal{O}(\max(d,k))\) in-context samples, whereas ICL runs into approximation error bottlenecks.
There are several interesting avenues for future research building on our findings. To what extent does our decoupling of CoT (filtering followed by ICL) align with the empirical evidence in practical problems such as code generation and mathematical reasoning? We have shown that CoT-I/O can rely on a linear regression oracle to learn an MLP. To what extent can transformers approximate MLPs without CoT-I/O (e.g., with CoT-I), and what are the corresponding lower/upper bounds?
|
2308.00764 | Mode coupling coefficients between the convective core and radiative
envelope of $γ\,$Doradus and slowly pulsating B stars | Signatures of coupling between an inertial mode in the convective core and a
gravito-inertial mode in the envelope have been found in four-year Kepler light
curves of 16 rapidly rotating $\gamma\,$Doradus ($\gamma\,$Dor) stars. This
makes it possible to obtain a measurement of the rotation frequency in their
convective core. Despite their similar internal structure and available data,
inertial modes have not yet been reported for slowly pulsating B (SPB) stars.
We aim to provide a numerical counterpart of the recently published theoretical
expressions for the mode-coupling coefficients, $\varepsilon$ and
$\tilde{\varepsilon}$. These coefficients represent the two cases of a
continuous and a discontinuous Brunt-V\"ais\"al\"a frequency profile at the
core-envelope interface, respectively. We used asteroseismic forward models of
two samples consisting of 26 SPB stars and 37 $\gamma\,$Dor stars to infer
their numerical values of $\varepsilon$. The asteroseismically inferred values
of $\varepsilon$ for the two samples are between 0.0 and 0.34. While
$\varepsilon$ is most strongly correlated with the near-core rotation frequency
for $\gamma\,$Dor stars, the fractional radius of the convective core instead
provides the tightest correlation for SPB stars. We find $\varepsilon$ to
decrease mildly as the stars evolve. Our asteroseismic results for the mode
coupling support the theoretical interpretation and reveal that young,
fast-rotating $\gamma\,$Dor stars are most suitable for undergoing couplings
between inertial modes in the rotating convective core and gravito-inertial
modes in the radiative envelope. The phenomenon has been found in 2.4\% of such
pulsators with detected period spacing patterns, whereas it has not been seen
in any of the SPB stars so far. (shortened abstract to meet the arXiv limits) | Conny Aerts, Stéphane Mathis | 2023-08-01T18:03:56Z | http://arxiv.org/abs/2308.00764v1 | Mode coupling coefficients between the convective core and radiative envelope of \(\gamma\) Doradus and slowly pulsating B stars
###### Abstract
Context:Signatures of coupling between an inertial mode in the convective core and a gravito-inertial mode in the envelope have been found in four-year _Kepler_ light curves of 16 rapidly rotating \(\gamma\) Doradus (\(\gamma\) Dor) stars. This makes it possible to obtain a measurement of the rotation frequency in their convective core. Despite their similar internal structure and available data, inertial modes have not yet been reported for slowly pulsating B (SPB) stars.
Aims:We aim to provide a numerical counterpart of the recently published theoretical expressions for the mode-coupling coefficients, \(\varepsilon\) and \(\bar{\varepsilon}\). These coefficients represent the two cases of a continuous and a discontinuous Brunt-Vaisala frequency profile at the core-envelope interface, respectively. We consider \(\gamma\) Dor and SPB stars to shed light on the difference between these two classes of intermediate-mass gravito-inertial mode pulsators in terms of core and envelope mode coupling.
Methods:We used asteroseismic forward models of two samples consisting of 26 SPB stars and 37 \(\gamma\) Dor stars to infer their numerical values of \(\varepsilon\) and \(\bar{\varepsilon}\). For both samples, we also computed: the linear correlation coefficients between \(\varepsilon\) or \(\bar{\varepsilon}\) and the near-core rotation frequency, the chemical gradient, the evolutionary stage, the convective core masses and radii, and the Schonberg-Chandrasekhar limiting mass representing the maximum mass of an inert helium core at central hydrogen exhaustion that can still withstand the pressure of the overlaying envelope.
Results:The asteroseismically inferred values of \(\varepsilon\) and \(\bar{\varepsilon}\) for the two samples are between 0.0 and 0.34. While \(\varepsilon\) is most strongly correlated with the near-core rotation frequency for \(\gamma\) Dor stars, the fractional radius of the convective core instead provides the tightest correlation for SPB stars. We find \(\varepsilon\) to decrease mildly as the stars evolve. For the SPB stars, \(\varepsilon\) and \(\bar{\varepsilon}\) have similar moderate correlations with respect to the core properties. For the \(\gamma\) Dor stars, \(\bar{\varepsilon}\) reveals systematically lower and often no correlation to the core properties; their \(\varepsilon\) is mainly determined by the near-core rotation frequency. The Schonberg-Chandrasekhar limit is already surpassed by the more massive SPB stars, while none of the \(\gamma\) Dor stars have reached it yet.
Conclusions:Our asteroseismic results for the mode coupling support the theoretical interpretation and reveal that young, fast-rotating \(\gamma\) Dor stars are most suitable for undergoing couplings between inertial modes in the rotating convective core and gravito-inertial modes in the radiative envelope. The phenomenon has been found in 2.4% of such pulsators with detected period spacing patterns, whereas it has not been seen in any of the SPB stars so far.
## 1 Introduction
Rotation is an important ingredient of stellar evolution models (Maeder, 2009). However, our understanding of the physical processes inside stars induced by their internal rotation is marked by some lingering deficiencies. Thanks to asteroseismology, we have access to a tool for measuring the internal profile from so-called rotational splitting of a star's non-radial mode frequencies (Ledoux, 1951). This was first turned into practice for the pressure modes of the Sun (Deubner et al., 1979) and subsequently for gravity modes in a white dwarf (Winget et al., 1991), as well as for low-order pressure and gravity modes in a young massive B-type dwarf (Aerts et al., 2003; Dupret et al., 2004).
The study of internal stellar rotation became an established field of research once the photometric light curves assembled with the NASA _Kepler_ telescope reached a sufficiently long duration to resolve rotationally split frequencies. This led to estimates for the core rotation frequency from split dipole mixed modes in red giants (e.g. Beck et al., 2012; Mosser et al., 2012; Deheuvels et al., 2012; Beck et al., 2014; Gehan et al., 2018) and in subgiants (e.g. Deheuvels et al., 2014, 2020). Rotationally split multiplets also made it possible to deduce the near-core rotation frequency of main sequence stars (e.g. Kurtz et al., 2014; Saio et al., 2015; Moravveji et al., 2016; Schmid and Aerts, 2016; Van Reeth et al., 2016; Li et al., 2019, 2020). A few more detections of internal rotation for multiple intermediate- and high-mass stars have been done with the BRITE constellation as well (e.g. Sowicka et al., 2017; Kallinger et al., 2017). The refurbished version of the _Kepler_ project (K2) subsequently delivered internal rotation measurements of white dwarfs (Hermes et al., 2017), while those of subdwarfs are summarised in Charpinet et al. (2018). Applications of rotation inversions delivered the entire rotation profile inside various types of slowly rotating stars, instead of just the (near-) core and envelope values (Deheuvels et al., 2014, 2015; Triana et al., 2015; Di Mauro et al., 2016; Triana et al., 2017; Bazot et al., 2019). Aerts et al. (2019) provided an overarching summary of all internal rotation measurements and discussed them in terms of stellar evolution and angular momentum transport mechanisms.
A particularly challenging case to measure internal stellar rotation from the splitting of nonradial mode frequencies occurs when the modes have frequency values comparable to the frequency of rotation. In such a case, the Coriolis acceleration cannot be treated as a small perturbative effect with respect to the other acting forces in the computations of the eigenmodes of the star. A proper treatment of the nonradial oscillation modes in such a situation can be done adopting the traditional approximation of rotation (TAR, Lee & Saio, 1987, 1997; Townsend, 2003; Mathis, 2009; Bouabid et al., 2013). The TAR is an excellent approximation for modes with spin parameters \(s=2\Omega/\omega\geq 1\) with \(\Omega\) the rotation frequency and \(\omega\) the frequency of the mode. It is valid in the regime where \(2\Omega<N\) and \(\omega\ll N\) with \(N\) as the Brunt-Vaisala (BV) frequency and in stars not flattened too much by the centrifugal acceleration (Ouazzani et al., 2017; Dhouib et al., 2021, 2021), provided that the horizontal displacement caused by the mode is dominant over the vertical one. This latter condition is fulfilled for gravity modes of stars in the core hydrogen burning phase of their evolution, as proven by long-term high-resolution time-series spectroscopy of slowly pulsating B (SPB) stars (Aerts et al., 1999; Mathias et al., 2001; De Cat & Aerts, 2002; Briquet et al., 2003; De Cat et al., 2005) and \(\gamma\) Doradus (\(\gamma\) Dor) stars (Mathias et al., 2004; Aerts et al., 2004; De Cat et al., 2006). Space asteroseismology meanwhile showed that these conditions are also well met for most of the gravito-inertial and Rossby modes active in the rapidly rotating radiative envelope of intermediate- and high-mass pulsating dwarfs, revealing spin parameters roughly in the range \(s\in[10,30]\)(Neiner et al., 2012, 2012, 2017; Saio et al., 2018).
Methods for measuring the internal rotation profile, \(\Omega(r)\), of gravito-inertial mode pulsators have been developed since the four-year light curves assembled with the NASA _Kepler_ space telescope allowed for the identification of series of such modes with consecutive radial order (Van Reeth et al., 2015, 2016; Ouazzani et al., 2017; Christophe et al., 2018; Van Reeth et al., 2018; Li et al., 2019, 2020; Takata et al., 2020). Applications of these methods have led to rigorous and consistent estimates of the internal rotation frequency for hundreds of intermediate-mass stars (see Aerts, 2021, for a summary). Most of the asteroseismic measurements deliver the internal rotation frequency at the bottom of the radiative envelope, where a boundary layer occurs between the convective core and the radiative envelope. The gravito-inertial mode kernels have their strongest probing power in that boundary layer (Van Reeth et al., 2016; Ouazzani et al., 2017; Pedersen et al., 2018; Michielsen et al., 2019, 2021; Mombarg et al., 2021; Vanlaer et al., 2023). For a fraction of the pulsators with gravito-inertial modes, an envelope or surface estimate of the rotation is also available from pressure modes and rotational modulation, respectively (Van Reeth et al., 2018; Li et al., 2020; Sekaran et al., 2021).
Gravity and super-inertial gravito-inertial modes with \(\omega>2\Omega\) do not propagate in the convective core of dwarfs (e.g. Prat et al., 2016, 2018) and can hence not deliver their core rotation (e.g. Kurtz et al., 2014; Triana et al., 2015, for early attempts from rotation inversion for a \(\gamma\) Dor and SPB star, respectively). A major breakthrough on this front was achieved by Ouazzani et al. (2020), who came up with a way to determine the rotation in the convective core from inertial modes restored purely by the Coriolis acceleration. Their theoretical study showed convincingly that coupling between a pure inertial mode trapped in the convective core and a sub-inertial gravito-inertial mode (with \(\omega<2\Omega\)) propagating in the surrounding radiative envelope may occur for models of rapidly rotating \(\gamma\) Dor stars. Ouazzani et al. (2020) found such mode coupling to result in a clear dip signature at particular frequencies in period spacing patterns of envelope gravito-inertial modes. Their theoretical study was inspired by the predictions of Saio et al. (2018) to explain observed period spacing patterns in the young rapidly rotating \(\gamma\) Dor pulsator KIC 5608334. To interpret the asteroseismic data of this star, Saio et al. (2018) compared predictions based on the TAR with computations taking full account of the Coriolis acceleration from the method developed by Lee & Baraffe (1995) and found the latter to reveal a dip structure in the period spacing diagram, while the TAR predictions do not.
The findings by Ouazzani et al. (2020) then led Saio et al. (2021) to revisit the _Kepler_ data of \(\gamma\) Dor stars and study those with a single clean dip signal in their period spacing pattern from the perspective of coupling between inertial core modes and gravito-inertial envelope modes. They found 16 \(\gamma\) Dor stars with such mode coupling and deduced the rotation frequency in their convective core. This gives slightly faster rotation in the core than in the envelope at a level of 10% differentiality or less. They also found an anticorrelation between the level of differentiality and the evolutionary stage of the pulsators expressed as the hydrogen mass fraction of the core, \(X_{c}\), which ranges from 0.7 to 0.2 for the 16 pulsators with mode coupling.
Following onto Ouazzani et al. (2020) and Saio et al. (2021), a deeper theoretical understanding of the dip structure in period spacing patterns of \(\gamma\) Dor stars and of their 10% level of differentiality between the core and envelope rotation was offered by Tokuno & Takata (2022). Their study also provides theoretical expressions for a dimensionless parameter, \(\varepsilon\), which they assume to attain values between 0 and 1 and expressing a level of optimal circumstances for inertial modes in the core to couple to sub-inertial gravito-inertial modes in the envelope.
While a dip structure in mode period spacings due to coupling between inertial core modes and gravito-inertial envelope modes has been well established in several \(\gamma\) Dor stars, no such signature has been reported yet for SPB stars. However, the inner structure of \(\gamma\) Dor and SPB stars is similar in the sense that they have a well-developed convective core surrounded by an often rapidly rotating boundary layer that connects the core to a radiative envelope. The members of both classes of gravito-inertial pulsators cover the entire range from slow rotation to almost critical rotation. It is not yet understood why the members of these two classes of pulsators reveal period spacing patterns with somewhat different morphological properties. Indeed, while many of the \(\gamma\) Dor stars have long period spacing patterns involving tens of modes of consecutive radial order, often with just one dip or none at all (Van Reeth et al., 2015; Keen et al., 2015; Bedding et al., 2015; Li et al., 2019, 2020; Sekaran et al., 2021), most of the SPB stars have shorter patterns with oscillatory behaviour and/or multiple dips (Degroda et al., 2010; Papics et al., 2012, 2014; Szewczuk & Daszynska-Daszkiewicz, 2015; Papics et al., 2017; Szewczuk & Daszynska-Daszkiewicz, 2018; Szewczuk et al., 2021; Pedersen et al., 2021). The detected periodicity in the dip pattern of slow to moderate rotators among the \(\gamma\) Dor and SPB stars is relatively well understood in terms of a strong \(\mu-\)gradient (\(\nabla_{\mu}\)) due to a receding convective core as the star evolves, causing strong mode trapping (Kurtz et al., 2014; Saio et al., 2015; Schmid & Aerts, 2016; Murphy et al., 2016; Pedersen et al., 2018; Michielsen et al., 2019; Li et al., 2019; Wu et al., 2020; Michielsen et al., 2021). This phenomenon caused by \(\nabla_{\mu}\), or by buoyancy glitches in general, gives a specific morphol
ogy to period spacing patterns that also occurs in the absence of fast rotation, as studied analytically by Miglio et al. (2008) and Cunha et al. (2019). However, some \(\gamma\) Dor and SPB stars reveal periodic and rather shallow dips intertwined with one or a few sharp dips, which may point to the simultaneous occurrence of mode trapping due to a strong \(\mu-\)gradient and coupling between inertial and gravito-inertial modes.
Moreover, the presence of a core magnetic field may also give rise to a dip structure, even when the TAR is used (Prat et al., 2019, 2020; Van Beeck et al., 2020; Dhoub et al., 2022). Being able to unravel the physical causes of all the observed dips thus offers the future potential to detect and measure internal magnetic field profiles in addition to the rotation profiles and check if they are consistent with the predictions for the internal magnetic field strengths by Aerts et al. (2021). This requires a way to unravel the signature of mode coupling from the one due to \(\nabla_{\mu}\), in the presence of rapid rotation and possible magnetic fields when modeling observed dips. With this goal in mind, we aim to provide a numerical value for the coupling coefficient by relying on the best asteroseismic models of a sample of gravito-inertial pulsators consisting of both \(\gamma\) Dor and SPB stars. We wish to investigate if numerical values of the parameters \(\varepsilon\) and \(\tilde{\varepsilon}\) introduced by Tokuno & Takata (2022) from asteroseismic models provide a useful prediction about the occurrence or absence of core-to-envelope mode coupling. In particular, our goal is to investigate whether \(\gamma\) Dor stars and SPB stars have different or similar values for \(\varepsilon\) and \(\tilde{\varepsilon}\). We will also test if the values of \(\varepsilon\) and \(\tilde{\varepsilon}\) indeed occur in the interval \([0,1]\) as assumed by Tokuno & Takata (2022) and whether they are correlated with typical properties of the convective core of \(\gamma\) Dor and SPB stars, which have quite different size and mass. Finally, we will also search for relationships between the coupling coefficients and the hydrogen mass fraction (as a good proxy for the evolutionary stage) or the properties of \(\mu\) and \(\nabla_{\mu}\) in the boundary layer between the convective core and the radiative envelope for both types of pulsators.
## 2 Asteroseismic inference of the coupling coefficient, \(\varepsilon\)
The theoretical work by Tokuno & Takata (2022) defined a new parameter, \(\varepsilon\), capturing the level of opportunity for an interaction between a pure inertial mode in the rotating convective core and a sub-inertial gravito-inertial mode propagating in the rotating radiative envelope. We therefore call \(\varepsilon\) the "coupling coefficient" (Tokuno & Takata (2022) did not provide any terminology for this parameter). In their analytical expressions derived for \(\varepsilon\), Tokuno & Takata (2022) relied on some assumptions, one being weak interaction between the oscillation modes in the core and envelope, such that \(0<\varepsilon\ll 1\). They found \(\varepsilon\) to decrease from 0.343 at stellar birth to 0.018 near the terminal age main sequence of a \(\gamma\) Dor star model of \(1.5\,\mathrm{M}_{\odot}\) with solar metallicity and rotating at \(2.2\,\mathrm{d}^{-1}\) (\(25.44\mu\mathrm{Hz}\)) computed by Saio et al. (2021).
The theoretical work by Tokuno & Takata (2022) provided a physical understanding of observed mode couplings in \(\gamma\) Dor stars. It triggered our current study with the aim to derive values of \(\varepsilon\) for concrete gravito-inertial pulsators. Following up on Aerts et al. (2021), we compute such numerical values of \(\varepsilon\) for the 63 gravito-inertial pulsators in that study, relying on their best forward asteroseismic models. These were computed by Mombarg et al. (2021) for the 37 \(\gamma\) Dor stars and by Pedersen et al. (2021) for the 26 SPB stars using statistical methodology inspired by Aerts et al. (2018). While some additional g-mode pulsators have been modeled in the literature (see Aerts 2021, for a review), the adopted methods are too diverse to add them to the sample. Moreover, their numerical seismically calibrated models are not available to us. Thus, we restricted this study to the homogeneously treated sample of 63 pulsators already considered in Aerts et al. (2021). Two \(\gamma\) Dor stars in this sample reveal the envisioned coupling between an inertial core and a gravito-inertial envelope mode.
### Internal profiles at the convective core boundary
We first recall the ingredients and definition of \(\varepsilon\) for the two extreme cases considered by Tokuno & Takata (2022), namely a continuous profile of the hydrogen mass fraction at the convective core boundary versus a discontinuity at that boundary resulting in a sharp spike of the BV frequency \(N(r)\). Figure 1 shows the profiles of the hydrogen and helium mass fraction, mean molecular weight (\(\mu\)) and its gradient (\(\nabla_{\mu}\)) for the 63 pulsators. Aerts et al. (2021) already included the profiles for \(N(r)\) in their Fig. 2. Here, we show \(\nabla_{\mu}\), which provides by far the largest contribution to \(N(r)\) in the boundary layer between the convective core and the radiative envelope. Indeed, many of the \(\gamma\) Dor and all of the SPB stars experience a receding convective core throughout their main sequence evolution, leaving behind a gradient in \(\mu\). This gradient is completely dominant over the contribution of the entropy gradient shown as dotted lines in the bottom panels of Fig. 1. This figure reveals that both smooth and abrupt profiles occur. Hence, we used both formulations for the coupling coefficient deduced by Tokuno & Takata (2022) and applied them to all the stars. The upper two panels of Fig. 1 show that the 26 SPB stars cover the entire main sequence, while the sample of 37 \(\gamma\) Dor stars does not contain stars so close to hydrogen exhaustion in the core.
Continuous hydrogen and helium mass fraction profiles at the core boundary correspond to continuous density profiles. Assuming a constant density in the convective core, which is a good approximation for the case of continuous profiles, Tokuno & Takata (2022) derived the following expression for the coupling coefficient \(\varepsilon\) for a mode with frequency, \(\omega\) :
\[\varepsilon\equiv\left(\frac{4\,\Omega^{2}}{r_{a}\cdot\left.\frac{\mathrm{d}N^{2}}{\mathrm{d}r}\right|_{r=r_{a}}}\right)^{1/3}\,, \tag{1}\]
where \(r_{a}\) is defined as the inner boundary of the envelope mode propagation cavity where \(\omega\) equals the BV frequency \(N\). This equation shows that the opportunity for mode coupling increases with increasing rotation frequency and decreases with the steepness of the wall caused by the stratification. In practice \(r_{a}\) is close to the radius of the convective core (denoted here as \(R_{\mathrm{cc}}\)), which is the position where \(N^{2}\) becomes negative towards the stellar centre. In the entire inner region where \(N^{2}<0\), all the material is homogeneously mixed, such that \(\nabla_{\mu}=0\) (cf. bottom panels of Fig. 1). However, \(r_{a}\) is determined from the outer side towards the core, hence it depends on the properties of the boundary layer between the convective core and the envelope. The structure of this layer is still uncertain and asteroseismology has been used to try and unravel its chemical (Pedersen et al., 2018) and temperature (Michielsen et al., 2019) properties. Forward asteroseismic modeling of 26 SPB stars by Pedersen et al. (2021); Pedersen (2022) pointed out that convective penetration with an adiabatic temperature gradient and full chemical mixing in the boundary layer is preferred over a radiative temperature
gradient accompanied by a smooth diffusive mixing profile for somewhat over half of the stars. Hence, this sample study of 26 pulsators showed that there is no unified best way to describe the temperature profile for all SPB pulsators. For this reason, Michielsen et al. (2021) dissected the chemical mixing profile and the temperature gradient in the boundary layer of one of these 26 SPB pulsators in more detail. They compared models with a radiative temperature gradient with those having a gradual transition between an adiabatic and radiative gradient, where the transition is based on the Peclet number. The latter type of gradient is inspired by numerical simulations (Viallet et al., 2013) and compares the ratio of advective versus diffusive transport
Figure 1: Profiles of hydrogen (full lines) and helium (dotted lines) mass fraction (top), of the mean molecular weight (middle), and of a zoom-in on its gradient in the area of the convective core (bottom) for the 37 \(\gamma\) Dor (left) and 26 SPB (right) stars. These profiles were retrieved from the best forward asteroseismic models computed by Mombarg et al. (2021) and by Pedersen et al. (2021), respectively.
in the boundary layer between a convective and a radiative zone. Michielsen et al. (2021) selected the SPB with the longest period spacing pattern to test their more detailed physical description of chemical mixing in the boundary layer, stitching a penetrative and diffusive mixing profile and assessing the nature of the temperature gradient. This turned out to be a challenging task, given the dominance of \(\nabla_{\mu}\) over the temperature gradient in that layer (cf. Fig. 1). They found KIC 7760680, which rotates at 25% of its critical rate, to reveal a fully radiative rather than Peclet-based temperature gradient in their models with convective boundary mixing. Unraveling the temperature and mixing profiles for \(\gamma\) Dor pulsators is even harder than in the case of SPB stars (Mombarg et al. 2019) because the effects of microscopic atomic diffusion cannot be ignored for these pulsators. Moreover, the role of radiative levitation for element mixing is unclear when treated without incorporating rotational mixing in the modeling (Mombarg et al. 2020). Although they are computationally intense, predictions of oscillation mode properties that take into account the joint effects of radiative levitation and rotation offer a promising route to bring the asteroseismic probing power of the boundary layer of \(\gamma\) Dor pulsators to the next level (Mombarg et al. 2022).
For the case of discontinuous profiles of the hydrogen mass fraction and of \(N^{2}\) at the convective core boundary, Tokuno & Takata (2022) introduced an alternative expression for the coupling coefficient, while also omitting the assumption of a constant density, namely:
\[\tilde{\varepsilon}\equiv\frac{\Omega}{N_{0}}\text{ with }N_{0}\equiv\lim_{r\to R_{\mathrm{cc}}}N(r), \tag{2}\]
where the limit has to be taken from the radiative envelope towards the convective core, that is, going from the envelope outside of the core towards the centre of the star.
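As a purely illustrative order-of-magnitude example (the numbers below are not taken from any particular model in the sample, and both frequencies must be expressed in the same units), a near-core rotation frequency of \(20\,\mu\mathrm{Hz}\) combined with a BV frequency of \(1000\,\mu\mathrm{Hz}\) just outside the core boundary yields
\[\tilde{\varepsilon}=\frac{\Omega}{N_{0}}=\frac{20\,\mu\mathrm{Hz}}{1000\,\mu\mathrm{Hz}}=0.02\,,\]
illustrating why a high spike in \(N\) at the core boundary strongly suppresses the opportunity for coupling.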
In the mathematical limit of \(\mathrm{d}N^{2}/\mathrm{d}r\rightarrow\infty\), one gets \(\varepsilon\to 0\) preventing mode coupling even for the case of continuous profiles. On the other hand, for strictly discontinuous profiles resulting in a steep BV frequency spike, one would get \(N_{0}\rightarrow\infty\) and hence \(\tilde{\varepsilon}\to 0\), again excluding mode coupling. We evaluate the difference between the results from both expressions in Eqs. 1 and 2 in the next section.
### Inferred values of \(\varepsilon\) and \(\tilde{\varepsilon}\)
The numerical values for \(\varepsilon\) and \(\tilde{\varepsilon}\) are plotted as circles and squares, respectively, in Fig. 2 as a function of the near-core rotation frequency \(\Omega\) taken from Aerts et al. (2021). First, we obtained values for \(\varepsilon\) and \(\tilde{\varepsilon}\) between \(0\) and \(1\), as assumed by Tokuno & Takata (2022). The numerical stellar evolution models computed by Mombarg et al. (2021) and Pedersen et al. (2021) have a dense mesh for their core boundary layers. This is necessary for proper computation of the eigenmodes, as they have high radial order and thus many nodes need to be resolved for a proper numerical approximation of the displacement vector. These dense meshes allow us to evaluate the derivative in Eq. (1) reliably from its linearised algebraic-differential version. Since we here consider the general case of inertial modes in the core with a range of frequencies \(\omega\), we compute the gradient from the values of \(N^{2}\) and \(r\) in the three cells closest to the core boundary defined by \(N^{2}(r)<0\) and retain the largest value of \(\varepsilon\) among the three, as a good representation of maximal mode coupling in the case of a continuous BV profile.
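To make this procedure explicit (our notation: \(N^{2}_{i}\) and \(r_{i}\) denote the squared BV frequency and radius in the \(i\)-th mesh cell just outside the convective core boundary), the derivative in Eq. (1) is approximated by a finite difference over neighbouring cells and the largest resulting coupling coefficient is retained:
\[\left.\frac{\mathrm{d}N^{2}}{\mathrm{d}r}\right|_{i}\approx\frac{N^{2}_{i+1}-N^{2}_{i}}{r_{i+1}-r_{i}}\,,\qquad\varepsilon=\max_{i\in\{1,2,3\}}\left(\frac{4\,\Omega^{2}}{r_{i}\left.\frac{\mathrm{d}N^{2}}{\mathrm{d}r}\right|_{i}}\right)^{1/3}\,.\]
This is one way of writing the three-cell prescription described above.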
The dotted lines in Fig. 2 indicate the range covered by \(\varepsilon\) and \(\tilde{\varepsilon}\) per star, connecting the values obtained for the two approximations expressed by Eqs. 1 and 2, which can be considered as representing the two extreme cases of reality. For some stars the two estimates are very similar but for others the difference between \(\varepsilon\) and \(\tilde{\varepsilon}\) is large. This difference depends entirely on the shape of the \(\nabla_{\mu}\) profile near the convective core boundary.
The strongest potential coupling between the modes in the core and envelope systematically occurs for \(\varepsilon\) computed from Eq. (1), as shown by comparing the circles and squares per star in Fig. 2. The values we get for \(\varepsilon\) are in line with the theoretical expectations in Tokuno & Takata (2022). We find that \(\tilde{\varepsilon}\) is close to zero for many stars. Low values for both approximations of the coupling coefficient occur for a considerable fraction of the SPB pulsators, whose \(\nabla_{\mu}\) value is often some ten times larger than those for \(\gamma\) Dor stars, with a sharp drop towards the convective core. A major conclusion thus is that mode coupling is harder to establish for SPB than for \(\gamma\) Dor stars. As discussed in the previous section, this is in agreement with _Kepler_ space photometry, where mode coupling between core and envelope is not yet reported for any of the SPB stars, while it was found in several of the \(\gamma\) Dor stars and interpreted as such by Saio et al. (2021). We refer to the latter paper for an extensive discussion and illustrations of the delicate balance between the local rotation frequency, the pulsation frequencies of the two involved modes, and their eigenmode properties that is required for the mode coupling to become effective.
Saio et al. (2021) found two of the \(\gamma\) Dor stars in our sample to have a dip in the period spacing pattern that is characteristic of the coupling between a core inertial mode and an envelope gravito-inertial mode, namely, KIC 11907454 and KIC 12066947. These have a relatively high \(\varepsilon\) value and are among the most rapid rotators, as indicated in Fig. 2.
The lack of any mode coupling detection for SPB stars may simply be due to an observational bias, as there are a factor of 20 fewer such stars with period spacing patterns from space photometry compared to \(\gamma\) Dor stars. The SPB star KIC 8255796 may be the best candidate, as it has a typical Lorentz-shape dip in its period spacing pattern (Pedersen et al. 2021, see Fig. 15 in the supplementary material). However, its pattern is among the shortest ones, consisting of only nine identified dipole modes. This SPB has a mass of \(5.7\,\mathrm{M}_{\odot}\) and a relatively slowly rotating near-core boundary layer (some 19% of the star's critical rate). Moreover, it has a low \(\varepsilon=0.0118\) and is the most evolved of all the 26 SPB stars in the sample while, as we will show below, we expect the coupling capacity to be weakest in that stage of the main sequence. Therefore, we consider it unlikely for its observed dip structure to be due to mode coupling.
Since we wish to understand how mode coupling between core and envelope modes comes about, we now investigate the relationships between \(\varepsilon\) or \(\tilde{\varepsilon}\) and other parameters characterizing the convective core. Following their definitions in Eqs. 1 and 2, a positive correlation with \(\Omega\) is expected; namely, higher rotation rates lead to higher coupling coefficients. This is indeed found for both \(\varepsilon\) and \(\tilde{\varepsilon}\) (shown in Fig. 2), where the linear correlation coefficients for the continuous and discontinuous cases are denoted as \(r_{\mathrm{c}}\) and \(r_{\mathrm{d}}\), respectively. They are almost equal (\(r_{\mathrm{c}}=0.64\) and \(r_{\mathrm{d}}=0.66\)) for the SPB pulsators, while the relation is more pronounced for the continuous case of the \(\gamma\) Dor star models (\(r_{\mathrm{c}}=0.89\) versus \(r_{\mathrm{d}}=0.51\)).
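Here and in the following figures, \(r_{\mathrm{c}}\) and \(r_{\mathrm{d}}\) are taken to be the standard (Pearson) linear correlation coefficient,
\[r=\frac{\sum_{k}(x_{k}-\bar{x})(y_{k}-\bar{y})}{\sqrt{\sum_{k}(x_{k}-\bar{x})^{2}}\,\sqrt{\sum_{k}(y_{k}-\bar{y})^{2}}}\,,\]
computed over the stars of each sample, with \((x_{k},y_{k})\) the quantity on the abscissa (here \(\Omega\)) and \(\varepsilon\) or \(\tilde{\varepsilon}\) for star \(k\).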
### Relation with stellar core parameters
Figure 3 shows a fairly strong positive correlation between \(\varepsilon\) or \(\tilde{\varepsilon}\) and the relative size of the convective core for SPB stars. The trend also occurs for \(\varepsilon\) of \(\gamma\) Dor stars but with a lower \(r_{\mathrm{c}}\) value - yet it is absent for \(\tilde{\varepsilon}\). This difference can be understood from the
properties of the fractional radius among the two samples. Indeed, the \(\gamma\,\)Dor stars cover a narrower range in \(R_{\rm cc}/R\) (with \(R\) the radius of the star) than the SPB stars, implying a weaker correlation between \(\varepsilon\) and convective core size than for the SPB stars. Moreover, \(\gamma\,\)Dor stars cover a range in stellar mass (\(M\)) such that some stars in the sample exhibit a growing convective core with a steep BV profile during the core hydrogen burning, while others have a receding core and a more smoothly varying BV frequency. This mixed behaviour in terms of convective core size evolution makes it intrinsically less obvious to have a tight correlation between \(\varepsilon\) and \(R_{\rm cc}\) and explains the absence of any relation with \(\tilde{\varepsilon}\). The two \(\gamma\,\)Dor stars with detected mode coupling do not stand out in terms of their \(R_{\rm cc}/R\) value.
For both samples, the convective core mass (expressed as relative fraction of the total stellar mass) has a lower linear correlation with respect to \(\varepsilon\) than the fractional radius of the convective core and the correlation essentially disappears for the \(\gamma\,\)Dor stars (see Fig. 4). Again, this is to be expected as the \(\gamma\,\)Dor sample covers stars with a growing and decreasing convective core mass. For both samples \(r_{\rm c}\) is lower for \(M_{\rm cc}\) than for \(R_{\rm cc}\) because the amount of hydrogen that gets injected into the core at the expense of CNO products is more dependent on the cause and character of the overshoot and envelope mixing than the size of the core. We also checked for correlations between \(\varepsilon\) and the values of \(R_{\rm cc}\) or \(M_{\rm cc}\) themselves instead of their fractional dimensionless values, but this leads to even lower \(r_{\rm c}\) and \(r_{\rm d}\) values than those listed in Figs. 3 and 4.
Both \(R_{\rm cc}/R\) and \(M_{\rm cc}/M\) evolve as the core hydrogen burning progresses. The manner in which they do so depends greatly on the level and character of the internal mixing in the core boundary layer and in the envelope (Moravveji et al. 2015, 2016; Mombarg et al. 2019, 2021; Pedersen et al. 2021; Pedersen 2022). Thus, a comparison between \(\varepsilon\) or \(\tilde{\varepsilon}\) and the evolutionary stage is also of interest. Figure 5 reveals the relationships between either \(\varepsilon\) or \(\tilde{\varepsilon}\) and the central core hydrogen mass fraction, \(X_{c}\), with respect to the initial value at birth. This fraction represents the stage of the main sequence evolution. We find a relatively mild trend that mode coupling in the continuous BV case is more likely in the early stages of stellar evolution. This trend is similar to the one found for \(R_{\rm cc}/R\), and is somewhat tighter for the SPB stars than for the \(\gamma\,\)Dor stars. This is in line with the fact that the convective core of SPB stars decreases monotonically from birth to central hydrogen exhaustion for the entire sample, while this is not the case for the \(\gamma\,\)Dor sample. No connection was found between \(\tilde{\varepsilon}\) and the evolutionary stage for \(\gamma\,\)Dor stars, while \(\varepsilon\) and \(\tilde{\varepsilon}\) behave similarly with respect to the evolutionary stage for the SPB class.
The chemical evolution of the star is roughly captured by the value of the mean molecular weight in the envelope versus that in the core, \(\mu_{\rm e}/\mu_{\rm c}\). This ratio is 1 at the star's birth and decreases as the chemical evolution of the star continues. However, in contrast to the evolutionary stage shown in Fig. 5, \(\mu_{\rm e}/\mu_{\rm c}\)
Figure 4: Same as Fig. 2, but plotted as a function of the mass in the convective core, expressed as a fraction of the total stellar mass.
Figure 3: Same as Fig. 2, but plotted as a function of the extent of the convective core, expressed as a fraction of the total radius.
Figure 2: Values for \(\varepsilon\) in the case of a continuous (circles) and for \(\tilde{\varepsilon}\) in the limit of a discontinuous (squares) BV profile plotted as a function of the near-core rotation frequency. The \(\gamma\,\)Dor and SPB stars are indicated in red and blue, respectively. The two \(\gamma\,\)Dor stars for which Saio et al. (2021) detected a signature of coupling between a core and envelope mode are overplotted with black symbols, namely, KIC 11907454 (cross) and KIC 12066947 (plus). Linear correlation coefficients, \(r_{\rm c}\) and \(r_{\rm d}\), between \(\Omega\) and \(\varepsilon\), respectively \(\tilde{\varepsilon}\), are listed for the two samples.
depends directly on the effect of internal mixing in the radiative envelope. We show the relationship between \(\varepsilon\) or \(\tilde{\varepsilon}\) and \(\mu_{\rm e}/\mu_{\rm c}\) in Fig. 6. This gives the same picture as the one seen for the main sequence stage.
### On the Schonberg-Chandrasekhar limiting mass
A similar yet slightly different way of looking at the evolutionary aspect of the mode coupling coefficients is via the so-called Schonberg-Chandrasekhar limiting mass, \(M_{\rm SC}\)(Schonberg & Chandrasekhar 1942). This quantity is formally defined as the maximum mass a helium core can have after hydrogen exhaustion in the core in order to remain inert, that is to withstand the pressure of the encompassing stellar envelope without contracting. When the value of \(M_{\rm SC}\) is surpassed, the helium core will start to shrink and the star will evolve on a fast contraction time scale rather than on a slow nuclear time scale.
An analytical expression for \(M_{\rm SC}\) has been deduced from the virial theorem in the case of non-rotating polytropic stellar models with an isothermal helium core (Stein 1966; Cox & Giuli 1968). Despite the fact that real stars do not adhere to a polytropic equation of state, the approximation for the Schonberg-Chandrasekhar limit deduced from numerical stellar models was found to be essentially the same as for the polytropic approximation (Kippenhahn et al. 2013):
\[M_{\rm SC}/M\simeq 0.37\cdot(\mu_{\rm e}/\mu_{\rm c})^{2}\,. \tag{3}\]
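As a simple numerical illustration of Eq. (3) (the value of \(\mu_{\rm e}/\mu_{\rm c}\) below is chosen for illustration only and does not correspond to a particular sample star), a star with \(\mu_{\rm e}/\mu_{\rm c}=0.7\) has
\[M_{\rm SC}/M\simeq 0.37\cdot(0.7)^{2}\simeq 0.18\,,\]
so that only a helium core containing less than roughly 18% of the total stellar mass can remain inert after hydrogen exhaustion.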
Maeder (1971) investigated the effect of uniform rotation on \(M_{\rm SC}\), concluding that its value does not change appreciably. It decreases due to rotation by at most 3%, even for fast rotation close to the critical value. It is thus meaningful to compute the current value of \(M_{\rm SC}/M\) from Eq. 3 for our sample of rapidly rotating gravito-inertial mode pulsators, some of which are close to hydrogen exhaustion in their core (Fig. 5). Figure 7 shows the relation between \(\varepsilon\) or \(\tilde{\varepsilon}\) and the quadratic dependence on \(\mu_{\rm e}/\mu_{\rm c}\) via the computed value of \(M_{\rm SC}\) from the asteroseismically calibrated \(\mu\) profiles shown in Fig. 1.
The correlation between \(M_{\rm SC}/M\) and \(\varepsilon\) plotted in Fig. 7 is obviously consistent with the one displayed for the linear dependence, \(\mu_{\rm e}/\mu_{\rm c}\), in Fig. 6. The obtained numerical values for \(M_{\rm SC}/M\) in the current evolutionary stage of the stars computed from Eq. 3 resemble those of the fractional convective core mass used in Fig. 4. The expression in Eq. 3 was deduced for standard stellar models at core hydrogen exhaustion, irrespective of the kind and level of mixing that the star underwent during the main sequence. While Maeder (1971) found that rotation hardly affects it, the chemical evolution of the star due to near-core boundary mixing (Michielsen et al. 2021) and envelope mixing (Pedersen et al. 2021) does play a role in the behaviour of \(\mu\). We therefore compare the asteroseismically inferred values for \(M_{\rm cc}/M\) at the current evolutionary stage of the stars with the current value of \(M_{\rm SC}/M\) computed via Eq. 3.
Our sample contains stars with masses between 1.38 M\({}_{\odot}\) and 9.52 M\({}_{\odot}\). This lower limit is close to the value where stars transition from being below to above their \(M_{\rm SC}\) limit at hydrogen
Figure 5: Same as Fig. 2, but plotted as a function of the main sequence stage, defined as the current core hydrogen mass fraction divided by the initial hydrogen mass fraction at birth.
Figure 6: Same as Fig. 2, but plotted as a function of \(\mu_{\rm e}/\mu_{\rm c}\).
Figure 7: Same as Fig. 2, but plotted as a function of the Schönberg-Chandrasekhar core mass limit.
exhaustion. As long as the helium core mass remains below \(M_{\rm SC}\), the core can stay inert at hydrogen exhaustion and the subsequent hydrogen shell burning happens on a nuclear timescale. If, in contrast, the helium core mass exceeds \(M_{\rm SC}\), it will start to contract while hydrogen burns in a shell and so this stage of evolution happens on a much shorter contraction time scale. For this reason, the value of \(M_{\rm SC}\) is an important quantity for the star's evolution. We compare the current values of \(M_{\rm cc}/M\) and \(M_{\rm SC}/M\) according to Eq. 3 for our 63 sample stars in Fig. 8, where the symbols are linearly scaled with the total stellar mass. During their evolution, stars evolve from the right to the left and those with a shrinking convective core also from the top to the bottom in such a diagram. It can be seen that all of the \(\gamma\) Dor stars have a convective core mass unrelated to and below their Schonberg-Chandrasekhar limit. They will steadily evolve almost horizontally to the left in the figure as they approach hydrogen depletion, because they hardly experience envelope mixing and thus keep their \(\mu_{\rm e}\) unchanged. Since none of them are close to hydrogen exhaustion and given their mass range, it is not expected that they would have already surpassed their \(M_{\rm SC}\) value.
The more massive SPB stars, on the other hand, have convective cores tightly correlated with the evolution of their \(M_{\rm SC}/M\) limit, already surpassing that limit on the main sequence for the more massive sample stars as expected. For completeness, we also used the new fourth-degree polynomial fit for \(M_{\rm SC}\) proposed by Chowdhury & Sarkar (2023, Eq. (15) in their manuscript) instead of the second-degree formula in Eq. (3). This gives the same linear correlation coefficients \(r_{\rm c}\) and \(r_{\rm d}\) as those in Fig. 8, while the changes in the plotted positions remain within the symbol sizes.
Pedersen (2022) predicted the values of the helium core masses at core hydrogen exhaustion of the 26 SPB stars based on their current asteroseismic modes and \(M_{\rm cc}\) values. She concluded that due to their levels of envelope mixing, most of these SPB stars will have higher helium core masses than those resulting from standard stellar evolution without extra mixing. Her results are in line with convective core mass estimations from massive eclipsing binaries (Tkachenko et al., 2020; Johnston, 2021). These binary and asteroseismic results for higher-than-standard convective core masses have not yet been taken into account in chemical yield computations guiding the overall chemical evolution models of galaxies (Karakas & Lattanzio, 2014; Kobayashi et al., 2020). However, the higher levels of core masses that have been measured will have a major impact on yield prediction models, given that the uncertainties of such models for intermediate-mass stars mainly come from unknown internal mixing.
## 3 Discussion and future prospects
In this paper, we provide numerical values for the coupling coefficients, \(\varepsilon\) and \(\tilde{\varepsilon}\), which are dimensionless measures below unity that indicate the opportunity for inertial modes in rapidly rotating convective cores to couple to gravito-inertial envelope modes. We deduced numerical values for \(\varepsilon\) and \(\tilde{\varepsilon}\) from forward asteroseismic models of 63 gravity-mode pulsators, covering a mass range from 1.38 M\({}_{\odot}\) to 9.52 M\({}_{\odot}\). Our findings are in agreement with the theoretical expectations and interpretations recently proposed by Tokuno & Takata (2022). From the perspective of the \(\varepsilon\) or \(\tilde{\varepsilon}\) values with respect to the near-core rotation frequency, the opportunity for mode coupling is similar for SPB and \(\gamma\) Dor star models. In practice, however, the sample of the few tens of SPB stars with identified modes from period spacing patterns available in the literature contains slower rotators than many of the \(\gamma\) Dor stars in the sample of published pulsators with proper diagnostic gravity-mode patterns, which is 20 times larger (see Aerts 2021, for an overview).
We find that the inferred values of \(\varepsilon\) or \(\tilde{\varepsilon}\) offer a useful diagnostic to hunt for core-to-envelope mode coupling. Yet the values of \(\varepsilon\) and \(\tilde{\varepsilon}\) alone do not allow us to distinguish the few gravito-inertial pulsators with a core mode coupled to a gravito-inertial mode from the majority of them that do not reveal this coupling phenomenon. The values of \(\varepsilon\) and \(\tilde{\varepsilon}\) for the two \(\gamma\) Dor stars included in our sample of 37 do not stand out from the other rapid rotators in terms of their core properties deduced from forward modeling of their identified gravito-inertial modes. A visual inspection of characteristic dips in the period spacing patterns as done by Saio et al. (2021) thus remains the best (and currently only) way to find pulsators with coupled inertial core modes. These selected targets then allow for the derivation of the rotation frequency in their convective core via the matching of the frequencies of the inertial and gravito-inertial modes, given the near-core rotation frequency measured from the tilt of their period spacing pattern(s). This type of core rotation measurement was first proposed by Ouazzani et al. (2020) and further elaborated upon by Saio et al. (2021). This is currently the best way to constrain the internal rotation profile from gravity-mode pulsators, given the challenges encountered for frequency inversions for this type of pulsators caused by nonlinear effects, notably the dense occurrence of avoided crossings (Vanlaer et al., 2023).
The observational challenge remains to distinguish dips in the period spacing patterns stemming from inertial mode coupling rather than from periodic deviations induced by mode trapping at the bottom of the envelope due to a strong \(\mu\)-gradient in that position, because both phenomena occur simultaneously. This also explains the absence of any mode coupling detection in SPB stars so far, given their much larger \(\nabla_{\mu}\) value and more extended \(\mu\)-gradient zone in the near-core boundary layer compared to the one of \(\gamma\) Dor stars (cf. Fig. 1). On the other hand, this observational challenge is in agreement with the
Figure 8: Convective core mass versus Schönberg-Chandrasekhar mass limit expressed as a fraction of the total mass. The \(\gamma\) Dor and SPB stars are indicated in red and blue symbols, respectively. The symbol size scales linearly with the total stellar mass, within the two extreme masses occurring in the samples, as indicated in the legend.
population statistics. Indeed, we currently have about 28 SPB stars with period spacing patterns and none of them has a convincing dip structure that would be expected for inertial mode coupling. Saio et al. (2021) found 16 of the current sample of 670 \(\gamma\) Dor stars to have the signature (2.4%). While it concerns a low number of stars, these are crucial targets to map the internal rotation (and possibly the magnetic field, see e.g. Van Beeck et al., 2020; Dhouib et al., 2022) of intermediate- and high-mass stars. In this respect, the potential of the TESS mission has yet to be explored. Indeed, the 61 TESS \(\gamma\) Dor and 2 SPB stars with detected period spacing patterns found by Garcia et al. (2022a,b) are promising in this respect, but their period-spacing patterns from 352 d light curves are too short to offer proper dip structures caused by mode coupling or mode trapping. However, progress can be ensured by analyzing the thousands of light curves for the SPB and \(\gamma\) Dor candidates from TESS data assembled throughout the nominal and extended mission, covering more than five years. The catalogues from Pedersen et al. (2019), Antoci et al. (2019), and Skarka et al. (2022) are only the tip of the iceberg in discovery space for TESS gravito-inertial asteroseismology.
###### Acknowledgements.
The research leading to these results has received funding from the KU Leuven Research Council (grant C16/18/005: PARADISE). CA acknowledges financial support from the Research Foundation Flanders (FWO) under grant K802922N (Sabbatical leave); she is grateful for the kind hospitality offered by CEA/Saclay during her sabbatical work visits in the spring of 2023. The authors are grateful to May Gade Pedersen and Joey Mombarg for providing their forward asteroseismic models in electronic format, to Alex Kemp for valuable comments on the manuscript prior to its submission, and to the referee for constructive comments and encouragements to expand the manuscript with more details.
|
2310.15645 | Light up that Droid! On the Effectiveness of Static Analysis Features
against App Obfuscation for Android Malware Detection | Malware authors have seen obfuscation as the mean to bypass malware detectors
based on static analysis features. For Android, several studies have confirmed
that many anti-malware products are easily evaded with simple program
transformations. As opposed to these works, ML detection proposals for Android
leveraging static analysis features have also been proposed as
obfuscation-resilient. Therefore, it needs to be determined to what extent the
use of a specific obfuscation strategy or tool poses a risk for the validity of
ML malware detectors for Android based on static analysis features. To shed
some light in this regard, in this article we assess the impact of specific
obfuscation techniques on common features extracted using static analysis and
determine whether the changes are significant enough to undermine the
effectiveness of ML malware detectors that rely on these features. The
experimental results suggest that obfuscation techniques affect all static
analysis features to varying degrees across different tools. However, certain
features retain their validity for ML malware detection even in the presence of
obfuscation. Based on these findings, we propose a ML malware detector for
Android that is robust against obfuscation and outperforms current
state-of-the-art detectors. | Borja Molina-Coronado, Antonio Ruggia, Usue Mori, Alessio Merlo, Alexander Mendiburu, Jose Miguel-Alonso | 2023-10-24T09:07:23Z | http://arxiv.org/abs/2310.15645v1 | # Light up that Droid! On the Effectiveness of Static Analysis Features against App Obfuscation for Android Malware Detection
###### Abstract
Malware authors have seen obfuscation as the mean to bypass malware detectors based on static analysis features. For Android, several studies have confirmed that many anti-malware products are easily evaded with simple program transformations. As opposed to these works, ML detection proposals for Android leveraging static analysis features have also been proposed as obfuscation-resilient. Therefore, it needs to be determined to what extent the use of a specific obfuscation strategy or tool poses a risk for the validity of ML malware detectors for Android based on static analysis features. To shed some light in this regard, in this article we assess the impact of specific obfuscation techniques on common features extracted using static analysis and determine whether the changes are significant enough to undermine the effectiveness of ML malware detectors that rely on these features. The experimental results suggest that obfuscation techniques affect all static analysis features to varying degrees across different tools. However, certain features retain their validity for ML malware detection even in the presence of obfuscation. Based on these findings, we propose a ML malware detector for Android that is robust against obfuscation and outperforms current state-of-the-art detectors.
machine learning, static analysis, malware detection, obfuscation, reliability, evasion
## 1 Introduction
With the spread of Android devices, the amount of malware crafted for this OS has also experienced an extraordinary growth [1, 2]. This has led researchers to devise cutting-edge anti-malware solutions based on machine learning (ML) algorithms. When fed with app data, these algorithms are able to find patterns that are characteristic and informative enough to classify apps as either goodware or malware. In this sense, the performance of ML highly depends on the quality and soundness of the data that is used to build the classifier [3, 4]. In the case of Android malware detection, the extraction of this data, in the form of a vector of features that represents the behavior of apps, is performed using either dynamic or static analysis [5, 6].
Dynamic analysis is performed in a controlled environment (sandbox) where the app is executed. During execution, traces that describe the behavior of the app, e.g., network activity, system calls, etc., are logged [5]. On the contrary, static analysis is based on the inspection of the content of the package file (APK) of an app. This includes the compiled code and other resources such as image and database files [7]. Both techniques are valid to extract valuable data from apps. However, dynamic analysis involves a costly process whose success is dependent on the emulation method used and the absence of sandbox evasion artifacts in the code of apps. Instead, static analysis is computationally cheaper, but it can be counteracted by applying app code transformations. Such transformations are commonly known as obfuscation [8].
Obfuscation is a security-through-obscurity technique that aims to prevent automatic or manual code analysis. It involves the transformation of the code of apps, making it more difficult to understand but without altering its functionality [9]. This characteristic has made obfuscation a double-edged sword, used by both goodware and malware authors. Developers of legitimate software leverage obfuscation to protect their code from being statically analyzed by third parties, e.g., trying to avoid app repackaging or intellectual property abuses [10]. Malware authors have seen obfuscation as a means to conceal the purpose of their code [11], preventing static analyses from obtaining meaningful information about the behavior of apps.
It may seem common sense that the application of any, or the combination of several, obfuscation techniques will make malware analysis relying on features extracted using static analysis fruitless. However, it is unclear to what extent this aspect is true. Some studies on Windows and Android executables have demonstrated that obfuscation harms detectors that rely on static analysis features. For example,
packing1 prevents obtaining informative features [12, 13], which are essential to train classifiers. Similar conclusions have been drawn for other forms of transformation [14, 15], showing a major weakness in Android malware detectors. However, other studies contradict what has been stated in the aforementioned works, proposing feature extraction techniques via static analysis that enable a successful identification of malware even when apps are obfuscated [16, 17, 18].
Footnote 1: Packing is a particular form of obfuscation which hides the real code through one or more layers of compression/encryption. At runtime, the unpacking routine restores the original code in memory to be then executed.
All of these works appear promising in demonstrating either the flaws or the strengths of static analysis features for malware detection. However, these discrepancies complicate the extraction of sound conclusions regarding the validity of static analysis features for Android malware detection. In addition, many of these works focus solely on the labels predicted by the detectors, without analyzing the effect of the obfuscation on the apps and/or features used to train them [14, 15, 17, 19, 20]. Such a feature-centered analysis would explain why detectors work or fail when obfuscation is present, and it is crucial for building more robust detectors. Finally, another evident flaw of some of these studies is the lack of details concerning their datasets and the configuration of their experimental setups [16, 18, 21, 22]. Apart from the lack of reproducibility, biases in the datasets may lean the results towards non-generalizable conclusions. Therefore, the conclusions drawn from all these works may have limited applicability beyond the evaluated scenarios, and this can be the cause of the contradictions found in the literature.
To the best of our knowledge, this work presents the first comprehensive study about the impact of common obfuscation techniques on the information that is obtained through static analysis to perform malware detection with ML algorithms. The contributions of this paper can be summarized in the following highlights:
* We provide an agnostic 2 evaluation of the strength, validity and detection potential of a complete set of features obtained by means of static analysis of APKs when obfuscation is used.
* We analyze the impact of a variety of obfuscation strategies and tools on static analysis features, providing insights about the use of these features for malware detection in obfuscated scenarios.
* We propose a high-performing ML-based Android malware detector leveraging a set of robust static analysis features. We demonstrate the ability of this detector to identify goodware and malware despite obfuscation, outperforming the state-of-the-art.
* We present a novel dataset with more than 95K obfuscated Android apps, allowing researchers to test the robustness of their malware detection proposals.
* In the spirit of open science and to allow reproducibility, we make the code publicly available at gitlab-borja.
Footnote 2: In this context, we use the term agnostic to refer to an analysis carried out without focusing on a specific malware detection proposal.
The rest of this paper is organized as follows. Section 2 introduces the literature that has previously tackled obfuscation as a problem in malware analysis. Section 3 provides basic information about topics that are required to understand the content of this paper. Section 4 describes the construction of the app dataset and presents the features that are considered in our experiments. Section 5 evaluates the impact of different obfuscation strategies and tools on static analysis features, as well as their validity for malware detection. Section 6 is devoted to assessing the robustness of our ML malware detection proposal. Section 7 includes a discussion of the main findings made throughout this paper. Finally, we conclude this paper in Section 8.
## 2 Related Work
The related work can be divided into two groups: (1) studies that analyze the vulnerabilities of malware detectors when obfuscation is present, and (2) works that propose novel malware detectors which are presumably robust to obfuscation.
### _Study of the Vulnerabilities of Malware Detectors_
The works that evaluate the negative effects of obfuscation on Android malware detectors have mainly been carried out for black-box malware detectors, i.e., the system or model is analyzed and evaluated based solely on its input-output behavior, without direct access to or knowledge of its internal workings. The first work of this type [19] studied how obfuscation impacts the detection ability of 10 popular anti-virus programs available in the VirusTotal platform. The work demonstrated that these detectors are vulnerable and lose their reliability in the identification of obfuscated malware. Similarly, in [20], 13 Android anti-virus programs from VirusTotal are assessed using different obfuscation strategies to modify malware. The results showed a modest improvement in detection accuracy with respect to the findings of previous works [19] and proved that companies responsible for developing these tools are trying to counteract obfuscation. A more comprehensive analysis for 60 anti-virus tools in VirusTotal has been presented in [14]. Again, the work demonstrated the vulnerabilities of most detectors when facing obfuscated malware. However, this analysis shows that the success in bypassing detection highly depends on the obfuscation tools and strategies considered.
In the mentioned studies, the detectors are commercial products with unknown characteristics. Some other works have focused on assessing the impact of obfuscation on published ML-based detectors. In [17], an analysis of the effect of obfuscation on two detectors, one relying on static and the other on dynamic analysis features, is presented. It is shown that the performance of the detector using dynamic analysis features is not altered by obfuscation, contrary to the detector that uses static analysis features. However, the authors indicated that this effect can be easily mitigated by including obfuscated samples during the training phase of ML models. In [15], eight state-of-the-art Android malware detectors leveraging static analysis features and ML algorithms are assessed using obfuscated malware samples. The authors demonstrated that obfuscation is a major weakness of these popular solutions, since all of them suffered a drop in their performance. One of the most recent and comprehensive studies is carried out in [12]. This work analyzes
the effect of packing on ML malware detectors relying on static analysis for Windows executables. The conclusions drawn from the extensive set of experiments indicate that ML malware detectors for Windows fail to identify the class of transformed samples due to the insufficient informative capacity of static analysis features.
All these works prove the added difficulty that obfuscation entails for malware detection. However, most of them fail to provide explanations behind accurate or erroneous detections. In this sense, they treat the detectors as black-box tools and do not analyze the effect of different obfuscation strategies and tools on the apps and, specifically, on the features that will be used for training the detectors. This makes it difficult to extract meaningful insights and provides no useful information to build more robust classifiers.
### _Obfuscation-Resilient Detectors_
A second group of proposals focuses on the development of obfuscation-resilient detectors, specifically designed to operate effectively in the presence of obfuscated apps. Two of the most relevant works in this regard are DroidSieve [16] and RevealDroid [18]. The former categorizes static analysis features as obfuscation-sensitive and obfuscation-insensitive based on theoretical aspects. Feature frequency is studied for different datasets with obfuscated and un-obfuscated malware samples to support the idea that the most changing features provide better information. In consequence, they proposed a detector that relies on the features of both groups and offers good performance in terms of malware detection and family identification. The latter work argues against static analysis features such as Permissions, Intents or Strings for robust malware detection. Contrary to the authors of DroidSieve, they suggest that obfuscation-sensitive features do not provide useful information to detect malware. Instead, the authors propose a new set of static analysis features based on a backward analysis of the calls to dynamic code loading and reflection APIs. In this way, the functions invoked at runtime are identified, nullifying the effect of obfuscation and making the proposed detector obfuscation-resilient.
Two allegedly obfuscation-resilient detectors leveraging deep learning algorithms are presented in [21] and [22]. The authors of these works suggest that the capacity of deep learning to embed and extract useful information from the features is enough to tackle obfuscation. The first work relies on strings extracted from the app code. Strings are then transformed into sequences of characters to obtain an embedded representation of the app that is then used for classification. Despite the excellent results reported for malware detection, the ability of the detector to identify obfuscated apps is based on (unproven) statements that are not specifically tested. The latter proposal incorporates obfuscation-sensitive and insensitive features, including permissions, opcodes and meta-data from ApkID 3, a signature-based fingerprinting tool. Similarly to the previous proposal, the obfuscation-resiliency of this work cannot be confirmed based on the results, since the effect of the use of obfuscation in the detector is based on theoretical aspects not specifically covered by the experiments.
Footnote 3: [https://github.com/rednaga/APKiD](https://github.com/rednaga/APKiD)
The experiments carried out in all these works present some flaws that, in our opinion, call their conclusions into question. For example, most of them do not describe, or only vaguely analyze, the composition of their datasets in terms of the number of obfuscated malware or goodware samples, as well as the tools and strategies considered to obfuscate the samples. Some articles focus their analyses exclusively on obfuscated malware, either for the training or evaluation of the detectors, but what about obfuscated goodware? How do detectors behave in the presence of such apps? The use of different obfuscation tools or strategies for malware and for goodware introduces biases in ML algorithms, since the generated models may associate obfuscation, or the use of a particular obfuscation tool, with a specific class in the data [12]. Additionally, experiments performed with malware and goodware captured from different periods can cause biases in the detectors [23]. Also, most of these studies focused on a small set of features, arguing against other types of features without providing any proof. All these aspects may justify the good published results and explain the contradictions with other analyses carried out for ML-based detectors [15, 17]. Finally, we also found that most of them do not provide enough details to reproduce their systems and thus lack reproducibility.
## 3 Background
This section briefly introduces some basic concepts needed to understand the rest of this paper. This includes the structure and content of an Android Application Package (APK), from which static analysis features are extracted, and the types of obfuscation techniques and their effect on the apps.
### _Android Apps_
Android apps are usually developed in Java or Kotlin4. When an app has to meet very strict performance constraints, or interact directly with hardware components, Android allows developers to introduce native components written in C and C++ (i.e., _native code_). An Android app is distributed and installed via an APK, a compressed (ZIP) file containing all the resources needed (e.g., code, images) to execute the app. Figure 1 shows the internal structure of an APK file.
Footnote 4: From now on, we will refer to Java code, although the techniques we describe are also valid for apps written in Kotlin.
Every APK must be signed with the private key of the developer. To validate this signature, the APK contains the public certificate of the developer inside the META-INF folder. This mechanism guarantees the integrity of the APK5. In a nutshell, before installing an app, Android verifies if the files in the APK match a pre-computed signature and continues with the installation only if the integrity check succeeds.
Footnote 5: Note that Android does not verify the validity of the developer's certificate but instead uses this mechanism to validate the integrity of the content within the APK. Therefore, the developers' certificates can be self-signed.
The AndroidManifest.xml defines the structure of an Android app and its meta-data, such as the package name of the app, the required permissions, and the main
components (i.e., Activity, Service, Broadcast Receiver, and Content Provider). An Android app can contain one or multiple DEX file(s) (i.e., classes*.dex), which include the compiled Java code. Each .dex file can reference up to 64k methods [24], such as the Android framework methods, other library methods, and the app-specific methods. For the native components, Android provides an Android Native Development Kit (NDK) [25] that generates native libraries in the form of Linux shared objects. Such objects are stored in the lib folder.
Finally, the res folder contains the compiled resources (e.g., images, and strings), and the assets directory includes the raw resources, providing a way to add arbitrary files such as text, HTML, font, and video content into the app.
### _Obfuscation_
Obfuscation is the process of modifying an executable without altering its functionality [26]. It aims to counteract automatic or manual code analysis. In the Android context, many strategies can be applied to modify the code or resources within the APK file: from simple operations that change some metadata to bypass basic checks (e.g., signature-based anti-malware), to techniques that explicitly modify the DEX code or resources of the app [27]. It is worth emphasizing that in Android obfuscation is more common than in other binary code (e.g., x86 executables), because analyzing and repackaging an Android app is straightforward [13]. In the rest of this Section, we present the type of modifications considered in this work.
#### 3.2.0.1 Renaming
A DEX file stores the original string-valued identifiers (names) of fields, methods and classes [28]. Often, these identifiers leak information about code functionalities, lifecycle components and how they interact with each other. For instance, a common practice by programmers is to add "Activity" to each Java class that implements an activity component. The renaming technique replaces these identifiers with meaningless strings, aiming to remove information about the functionality of the app. Consequently, renaming involves modifying the .dex files and the Manifest file (AndroidManifest.xml). Note that this technique cannot be applied to methods of the Android lifecycle (e.g., onCreate, onPause) or Android framework components because that would break the execution logic.
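As a minimal illustration of this transformation (the class, field, and method names below are hypothetical and chosen only for this sketch), renaming preserves the structure and behaviour of the code while stripping the semantics carried by its identifiers:

```java
// Before renaming: identifiers reveal what the component does.
class CredentialUploadTask {
    private String userPassword = "";

    void sendCredentials() {
        // ... uses userPassword, e.g., to contact a remote server
    }
}

// After renaming: identical structure and behaviour, meaningless identifiers.
class a {
    private String b = "";

    void c() {
        // ... the same logic, but nothing in the names hints at its purpose
    }
}
```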
#### 3.2.0.2 Code manipulation
These techniques manipulate the DEX code to remove useless operations, hide specific API invocations, and modify the execution flow. The main techniques in this category are:
* _Junk code insertion (JCI)_ This technique introduces sequences of useless instructions, such as nop (i.e., _no-operation_ instructions that do nothing). Other JCI strategies transform the control-flow graph (CFG) of apps by inserting goto instructions or arithmetic branches. For example, a goto may be introduced in the code pointing to a useless code sequence ending in another goto instruction, which points to the instruction after the first goto. The arithmetic branch technique inserts a set of arithmetic computations followed by a branch instruction that depends on the result of these computations, crafted in such a way that the branch is never taken [29].
* _Call indirection (CI)_ This technique aims to modify the call graph and, therefore, the CFG of the app. It introduces a new intermediate chain of method invocations in the code, adding one or several nodes between a pair of nodes in the original graph. For example, given a method invocation from \(m_{\sigma 1}\) to \(m_{\sigma 2}\) in the code, \(m_{\sigma 1}\) is modified to call the start of a sequence of \(n\) intermediate methods (\(m_{i}\), \(1\leq i\leq n\)) that end in a call to \(m_{\sigma 2}\). In this way, the analysis cannot reveal that \(m_{\sigma 2}\) is actually invoked by \(m_{\sigma 1}\) [30].
* _Reflection_ This technique uses the reflection capability of the Java language to replace direct method invocations with Java reflection methods that use class and method identifiers as parameters to perform the call. This makes actual method invocations difficult to inspect [30]. Listings 1 and 2 show an example of this transformation. In Listing 1, the method m1 (of the class MyObject) is accessed through the operator "." from the object instance, whereas Listing 2 shows the same invocation using the Java reflection API. In this example, a java.lang.reflect.Method object is obtained (lines 2-3) and invoked through invoke() (line 4) on a specific object instance (i.e., obj), with the class and method names passed as parameters of these reflection calls.
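The original Listings 1 and 2 are not reproduced in this version of the text; the following is a minimal Java reconstruction consistent with the description above (the class MyObject, the method m1, and the instance obj are the names mentioned in the text; exception handling is reduced to a throws clause for brevity):

```java
import java.lang.reflect.Method;

class ReflectionExample {
    // Listing 1 (reconstruction): direct invocation through the "." operator.
    static void directCall() {
        MyObject obj = new MyObject();
        obj.m1();
    }

    // Listing 2 (reconstruction): the same invocation through the reflection API;
    // the class and method names are passed as plain strings.
    static void reflectiveCall() throws Exception {
        MyObject obj = new MyObject();
        Class<?> cls = Class.forName("MyObject");
        Method m = cls.getMethod("m1");
        m.invoke(obj);
    }
}

class MyObject {
    public void m1() { /* original functionality */ }
}
```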
#### 3.2.0.3 Encryption
This technique prevents accessing to parts or the entire code or resources (e.g., strings and asset files) of the app by using symmetric encryption algorithms. It involves storing the original code or resources in an encrypted form so that a decryption routine, inserted in the code, is invoked whenever an encrypted part needs to be accessed. The decryption key is stored somewhere in the APK or calculated at runtime. This technique introduces extra latency during app execution and severely complicates the analysis of the functionality of the encrypted part [27].
It is worth emphasizing that different obfuscation techniques can be combined to improve their effectiveness. For example, encrypting the strings of reflective calls can hide the method and class names invoked at runtime. This makes
Fig. 1: Structure of an APK file
it difficult to recover these values by static analysis of the apps. Listing 3 shows an example of the application of both obfuscation techniques to the code in Listing 1. In particular, the class and method names are decrypted at runtime (lines 2-3), hiding which methods are actually invoked. Note how these values are exposed only in an encrypted form, and could change if a different encryption key or algorithm was employed.
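Again, Listing 3 itself is not reproduced here; the sketch below is a plausible reconstruction of the combined transformation (it reuses the MyObject class from the previous sketch, and the string-reversal "cipher" merely stands in for the symmetric encryption routine a real obfuscator would insert):

```java
import java.lang.reflect.Method;

class EncryptedReflectionExample {
    // Listing 3 (reconstruction): reflection combined with string encryption.
    // Only the scrambled literals below would appear in the DEX file; the plain
    // identifiers ("MyObject", "m1") are recovered at runtime and the method is
    // then invoked reflectively on the given object instance.
    static void obfuscatedCall(Object obj) throws Exception {
        String className = decrypt("tcejbOyM");   // -> "MyObject"
        String methodName = decrypt("1m");        // -> "m1"
        Method m = Class.forName(className).getMethod(methodName);
        m.invoke(obj);
    }

    // Toy "decryption" routine: a real obfuscator would use a symmetric cipher
    // with a key stored in the APK or derived at runtime.
    static String decrypt(String ciphertext) {
        return new StringBuilder(ciphertext).reverse().toString();
    }
}
```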
## 4 Dataset
For our experiments, we first constructed a dataset with obfuscated and non-obfuscated apps. From this collection of apps and by means of static analysis, we obtained a set of feature vectors that constitute the object of this study. This section describes how the app dataset is built and the types of features derived from the apps.
### _App Dataset_
We build our app dataset using a subset of APKs from the AndroZoo repository [31], which contains more than 20 million APKs with associated meta-data. This meta-data includes the source of the APK, the date, and the number of positive detections (VTD) in VirusTotal. Our objective was to obtain a dataset with the same number of malware and goodware samples, all of them free of obfuscation. We downloaded thousands of samples and filtered out those marked by APKiD6 as "suspicious" of including obfuscation. To label samples we relied on the VTD values [32]: an app with VTD\(\geq\)7 was considered malware, while an app with VTD=0 was considered goodware (apps with intermediate VTD values were filtered out).
Footnote 6: [https://github.com/rednaga/APKiD](https://github.com/rednaga/APKiD)
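For clarity, the labelling rule described above can be summarised with the following sketch (the thresholds are those stated in the text; the type and function names are ours):

```java
// Labelling rule used to build the filtered dataset, based on the number of
// positive VirusTotal detections (VTD) attached to each APK in AndroZoo.
enum Label { MALWARE, GOODWARE, DISCARDED }

class VtdLabeller {
    static Label label(int vtd) {
        if (vtd >= 7) return Label.MALWARE;   // at least 7 engines flag the app
        if (vtd == 0) return Label.GOODWARE;  // no engine flags the app
        return Label.DISCARDED;               // intermediate values are filtered out
    }
}
```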
In a second step, we generated obfuscated versions of the apps in the filtered dataset. To perform this process, we used the DroidChameleon [30], AAMO [33], and ObfuscAPK [29] tools. We chose these tools because (1) they are open source, (2) they provide a wide range of obfuscation techniques, and (3) they have previously been shown to effectively evade Android malware detectors. Specifically, for each obfuscation tool, we tried to obfuscate every app in the filtered dataset using five obfuscation techniques: Renaming, Junk Code Insertion, Reflection, Call Indirection, and Encryption. The configuration of the tools was left as default for all techniques. The results of this process are summarized in Table 1.
Note that some tool combinations failed due to errors during the APK decompilation process. It is worth noticing that there were more failures in the case of malware apps than in goodware apps. ObfuscAPK was the tool with the best success rate, correctly obfuscating an average of 85% of the apps. On the contrary, we were unable to obtain obfuscated samples when trying to apply Encryption with AAMO, due to bugs introduced in the code by this tool that prevent the APK from being rebuilt. The attempts to use Renaming with DroidChameleon were also unsuccessful due to an error in the implementation of the tool. For other techniques, DroidChameleon and AAMO had average success rates of 55% and 28%, respectively. During this process, we realized that for some apps all the tool-technique combinations failed, and thus these apps were removed from the filtered dataset. As a result of this process, we obtained a "Clean" dataset which consists of 4 749 goodware and 4 067 malware (presumably) non obfuscated samples.
Table 2 summarizes the different datasets that will be used in the experiments. The criteria for the composition of these datasets will be explained in Section 5.
* NonObf: It includes the non obfuscated versions of the apps for which we could not obtain an obfuscated version with all the tools for at least one technique, i.e., apps that can be obfuscated using a specific tool and technique but not with the remaining tools using
the same technique.
* CleanSuccObf: includes the subset of non obfuscated apps present in Clean, but not in NonObf. That is, all the apps for which all the tools have worked for at least one technique.
* The remainder datasets (Renaming, JCI, CallIndirection, Reflection, and Encryption) contain the obfuscated versions of the apps in CleanSuccObf for that particular technique using all the tools.
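In set notation (our shorthand, not used in the original definitions above), these collections are related as
\[\mathrm{CleanSuccObf}=\mathrm{Clean}\setminus\mathrm{NonObf}\,,\qquad\mathrm{Clean}=\mathrm{NonObf}\,\cup\,\mathrm{CleanSuccObf}\,,\]
while each of the five technique-specific datasets contains, for every app in CleanSuccObf, the obfuscated versions generated with that technique by all the tools.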
### _Feature Dataset_
An app dataset has to be transformed into a dataset of feature vectors prior to performing malware detection using ML. Following a detailed literature analysis, we identified seven families of static analysis features that have proven to be useful for ML-based malware detection [6]. We used two well-known and widely used static analysis frameworks for Android to extract these features: Androguard [34] and FlowDroid [35]. Sources of these features include: the _classes.dex_ and _AndroidManifest.xml_ files, as well as the contents of the _res_ and _assets_ directories of APKs.
#### 4.2.1 Permissions
Permissions have commonly been used as a source of information for malware detection in Android [36, 37, 38, 39]. In this category, we consider as features the full set of permissions provided by Google in the Android documentation 7, as well as the set of custom permissions 8 that developers may declare to enforce some functionality in their apps. Following this procedure, we extracted a set of binary features, each corresponding to the presence or absence of a given permission.
Footnote 7: [https://developer.android.com/reference/android/Manifest](https://developer.android.com/reference/android/Manifest). permission
Footnote 8: [https://developer.android.com/guide/topics/permissions/](https://developer.android.com/guide/topics/permissions/)
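The construction of such binary vectors can be sketched as follows; this is a simplified illustration, the permission vocabulary shown is only a tiny sample, and the same scheme applies to the component, API-call and string features described later.

```python
# Minimal sketch: per-app permission sets turned into binary feature vectors.
# The vocabulary below is a tiny illustrative sample, not the full set used
# in the paper.

VOCABULARY = ["android.permission.INTERNET",
              "android.permission.SEND_SMS",
              "android.permission.READ_CONTACTS"]

def to_binary_vector(app_permissions, vocab=VOCABULARY):
    declared = set(app_permissions)
    return [1 if perm in declared else 0 for perm in vocab]

print(to_binary_vector(["android.permission.INTERNET",
                        "android.permission.SEND_SMS"]))  # [1, 1, 0]
```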
#### 4.2.2 Components
An app consists of different software components that must be declared in the _AndroidManifest.xml_ file. These elements have been widely used as a source of information for malware detectors [36, 38, 40, 41]. We extract the list of hardware and software components that can be declared using the _uses-feature_ tag from the Android documentation 9, as well as every identifier for Activities, Services, ContentProviders, BroadcastReceivers and Intent Filters. In total, we obtained a set of 85 476 binary features, whose value is set to True or False for an app according to the presence of the feature in its _AndroidManifest.xml_ file. We additionally derive seven frequency features accounting for the number of elements of each type in the app.
Footnote 9: [https://developer.android.com/guide/topics/manifest/uses-feature-element.html](https://developer.android.com/guide/topics/manifest/uses-feature-element.html)
#### 4.2.3 API functions
API libraries allow developers to easily incorporate additional functionality and features into their apps, being the main means of communication between the programming layer and the underlying hardware. As such, analyzing the calls to methods of these libraries (API functions) constitutes a good instrument to characterize the functionality of apps, and, therefore, for malware detection. Following similar approaches to those proposed in the literature [36, 39, 40, 42], we extract a binary feature for each API method, and set its value to True if the app contains any call to that method within its code. In total, this set consists of 66 118 binary features.
#### 4.2.4 Opcodes
The compiled Android code (Dalvik) consists of a sequence of opcodes. Opcode-based features provide insights about the code habits of developers as they represent fine-grained information about the functionality of apps [43]. Subsequences of opcodes, or simply _n_-grams, have been used for Android malware detection in [44, 45, 46, 47]. Concerning the size of the subsequences, Jerome et al. [44] and Canfora et al. [45] observed that \(n=2\) offers a good trade-off between the size of the generated feature vector and the performance
| tool-technique | #Goodware samples | #Malware samples | %Success |
|---|---|---|---|
| DC_rnm | - | - | 0% |
| AA_rnm | 2244 | 1953 | 34% |
| OA_rnm | 5690 | 4317 | 81% |
| DC_jcins | 1855 | 1123 | 24% |
| AA_jcins | 2289 | 2019 | 35% |
| OA_jcins | 6003 | 4755 | 87% |
| DC_ci | 3664 | 2209 | 47% |
| AA_ci | 1337 | 1362 | 22% |
| OA_ci | 6050 | 4765 | 87% |
| DC_refl | 6200 | 3993 | 82% |
| AA_refl | 1332 | 1402 | 22% |
| OA_refl | 6080 | 4802 | 88% |
| DC_encr | 5008 | 3746 | 70% |
| AA_encr | - | - | 0% |
| OA_encr | 6074 | 4814 | 88% |

TABLE I: Success rate of different technique-tool obfuscation combinations for the apps in the Clean dataset. The first part of the name refers to the tool used to obfuscate the apps, with _DC_ for DroidChameleon, _AA_ for AAMO, and _OA_ for ObfuscAPK. The characters after the underscore refer to the strategy followed to obfuscate the apps: renaming (_rnm_), junk code insertion (_jcins_), call indirection (_ci_), reflection (_refl_) and encryption (_encr_).
| Dataset | #Goodware samples | #Malware samples |
|---|---|---|
| Clean | 4749 | 4067 |
| NonObf | 1345 | 1211 |
| CleanSuccObf | 3404 | 2856 |
| Renaming | 3238 | 2868 |
| JCI | 1515 | 1008 |
| CallIndirection | 2118 | 1737 |
| Reflection | 2667 | 2484 |
| Encryption | 4790 | 4060 |

TABLE II: Composition of datasets used in this work. The columns indicate the number of samples that comprise each set. The CleanSuccObf dataset contains the clean (original) apps for which we obtained obfuscated versions with all tools for at least one technique.
obtained by detectors. Therefore, we extract unique opcode subsequences of length 2 (or bi-grams) from the code of the apps, and create a feature to represent the number of appearances of each bigram in the code. The resulting vector contains a total of 25 354 frequency features.
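A minimal sketch of the bi-gram counting step follows; the opcode sequence is given directly here, whereas in practice it would be extracted from the Dalvik bytecode with a tool such as Androguard.

```python
# Minimal sketch of the opcode bi-gram frequency features described above.

from collections import Counter

def bigram_frequencies(opcodes):
    """Count how often each consecutive opcode pair (bi-gram) appears."""
    return Counter(zip(opcodes, opcodes[1:]))

seq = ["const/4", "invoke-virtual", "move-result",
       "invoke-virtual", "move-result", "return-void"]
print(bigram_frequencies(seq))
# e.g. ('invoke-virtual', 'move-result') appears twice in this toy sequence
```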
#### 4.2.5 Strings
The APK file strings are a valuable source of information for malware detection. In this regard, the most common strings include IP addresses, host names and URLs [36, 48]; command names [49, 50] and numbers [48]. We processed app files and found 2 425 892 unique strings. Following the procedure in [12], we observed that 98.5% of the strings were present in less than 1% of the samples. After removing these rare strings, we obtained 39 793 binary features, each representing the presence or absence of a specific string within the app files.
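The rare-string pruning can be sketched as follows; the helper name is our own, the 1% threshold is the one stated above, and the toy example uses a larger cut-off only so that the effect is visible.

```python
# Minimal sketch of the rare-string pruning described above: strings occurring
# in less than a given fraction of the samples are removed before building
# the binary string features.

def frequent_strings(per_app_strings, min_fraction=0.01):
    n_apps = len(per_app_strings)
    counts = {}
    for strings in per_app_strings:
        for s in set(strings):              # count each string once per app
            counts[s] = counts.get(s, 0) + 1
    return {s for s, c in counts.items() if c / n_apps >= min_fraction}

corpus = [["http://a.example", "su"], ["su"], ["http://b.example", "su"]]
print(frequent_strings(corpus, min_fraction=0.5))   # {'su'}
```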
#### 4.2.6 File related features
This type of features includes the size of code files and different file types inside the APK [16, 48, 49, 51]. We base our file type extractor on both the extension of the file and the identification of the first bytes of the content (i.e., magic numbers) of files. The result is a new frequency feature for every unique combination of the extension (_ext_) and magic type (_mtype_), identified as _ext_mtype_. For files without extension, we use the complete file name instead. In total, this set consists of 65 986 frequency features per app.
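A sketch of the ext_mtype feature construction is given below; the magic-number table is deliberately tiny and illustrative, whereas the real extractor recognizes many more types.

```python
# Minimal sketch of the ext_mtype frequency features: each file contributes to
# a counter keyed by its extension plus a coarse "magic" type inferred from the
# first bytes of its content.

import os
from collections import Counter

MAGIC = {b"\x89PNG": "png", b"dex\n": "dex", b"PK\x03\x04": "zip"}

def magic_type(first_bytes):
    for signature, name in MAGIC.items():
        if first_bytes.startswith(signature):
            return name
    return "unknown"

def ext_mtype_features(files):
    """files: iterable of (path_inside_apk, first_bytes_of_content)."""
    feats = Counter()
    for path, head in files:
        ext = os.path.splitext(path)[1].lstrip(".") or os.path.basename(path)
        feats[f"{ext}_{magic_type(head)}"] += 1
    return feats

print(ext_mtype_features([("res/icon.png", b"\x89PNG\r\n"),
                          ("classes.dex", b"dex\n035")]))
# Counter({'png_png': 1, 'dex_dex': 1})
```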
#### 4.2.7 Ad-hoc Features
As explained earlier, some specific detectors claim to use obfuscation-resistant features. We call the features used by these detectors that do not fall into any of the above categories ad-hoc features. They include: semantic features based on sink and source relationships in the code [50]; certificate information [16]; flags about the use of cryptographic, reflective, and command execution classes [42, 48, 51]; and resolved function names for native and reflective calls [18]. Due to the computational cost of obtaining these features, we limited the time spent computing them to 15 minutes per sample. The result is a set of 35 387 frequency features, each representing the number of occurrences of the feature within the app.
## 5 Feature Validity
As a first step in this study, we have designed a set of experiments to determine the robustness and detection ability of the seven feature families described in the previous section when obfuscation is present. The first experiment analyzes the impact that different obfuscation strategies and tools have on the features. In the second experiment we evaluate the performance and stability of ML algorithms when using these features for malware detection.
### _Feature persistence_
In this experiment, we aim to examine the impact of obfuscation on the features presented above. We analyze when and how much the features change in the presence of obfuscation. We highlight the disparities among obfuscation tools and how different implementation strategies to achieve the same obfuscation objective can affect the features.
To analyze these aspects, we calculate the feature _persistence_ for each tool-technique obfuscation combination. This is done by determining the average level of overlap between the features of an original (clean) app and its obfuscated counterparts. To compute the feature overlap, we compare each pair of feature vectors calculated for an original app and its obfuscated version, and quantify the proportion of features with exact value matches. Note that for binary-featured representations (Permissions, Components, Strings and API functions), this is equivalent to computing the Jaccard index that measures the ratio between the shared elements and the total number of elements in the union of both feature vectors. Note also that, for frequency vectors, an increment or decrease in one unit or ten units has the same effect in this metric.
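The persistence computation itself reduces to a per-position comparison of the two feature vectors; a minimal sketch (with our own function name) is:

```python
# Minimal sketch of the persistence metric: the proportion of feature positions
# whose values match exactly between an original app and its obfuscated version.

def persistence(original, obfuscated):
    assert len(original) == len(obfuscated)
    matches = sum(1 for a, b in zip(original, obfuscated) if a == b)
    return matches / len(original)

clean = [1, 0, 3, 0, 2]   # e.g. frequency features of the original APK
obf   = [1, 0, 5, 0, 2]   # the same features after obfuscation
print(persistence(clean, obf))   # 0.8
```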
The results of this experiment are shown in Table III. We find various degrees of persistence, in most cases over 0.8, with many exact matches between the feature vectors of clean and obfuscated APKs. Components and Permission features suffer the smallest changes when applying strategies such as Junk Code Insertion, Call Indirection, Reflection and Encryption (independently of the tool). Despite being affected by all techniques, File-Related features are also among the least affected on average. On the contrary, Ad-hoc, API functions and Opcode feature vectors change the most when obfuscation is applied. Nonetheless, the average persistence values for these features indicate that most fields (about 75%) are not affected by obfuscation. Therefore, in most cases, we conclude that the use of obfuscation is not reflected as a radical change in the feature vectors.
Persistence values refer to the proportion of features that remain unchanged, but do not tell us which particular features change the most when a tool-technique combination is applied. To shed some light on this regard, we selected the 15 features that change the most when obfuscation is applied. They may belong to different families. To obtain them, we measured the degree of discrepancy in the number of occurrences of each of these features, comparing the original application and the obfuscated version. To simplify the visualization, we show the results for each technique, averaging the discrepancy values for the three tools. The resulting rankings are shown in Figure 4. The name of each bar is the feature name (which includes its family). The number at the right of each bar is the degree of discrepancy, i.e., the average difference in the frequency of the feature between original and obfuscated versions of apps. Note that for easier interpretation, the scales are specific to each figure.
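The discrepancy ranking can be sketched in the same spirit; the helper and the feature names below are invented for the toy example.

```python
# Minimal sketch of the per-feature discrepancy ranking: for every feature we
# average, over app pairs, the absolute difference in its frequency between the
# original and the obfuscated version, then keep the largest values.

def top_discrepancies(pairs, feature_names, top=15):
    """pairs: list of (original_vector, obfuscated_vector) with aligned features."""
    totals = [0.0] * len(feature_names)
    for orig, obf in pairs:
        for i in range(len(feature_names)):
            totals[i] += abs(orig[i] - obf[i])
    avg = [(feature_names[i], totals[i] / len(pairs)) for i in range(len(feature_names))]
    return sorted(avg, key=lambda x: x[1], reverse=True)[:top]

pairs = [([3, 0, 7], [3, 4, 1]), ([2, 1, 6], [2, 6, 2])]
print(top_discrepancies(pairs, ["opc_move", "opc_goto", "api_reflect"], top=2))
# [('api_reflect', 5.0), ('opc_goto', 4.5)]
```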
Regarding the persistence of the different feature families, Renaming mainly affected Components and API functions features, due to changes in the names of user-defined packages, classes, methods and fields. It also alters the declaration of custom permissions present in the code, since they depend on the name of the class where they are declared. However, as can be seen in Figure 3(a), none of these features are among the 15 most affected, mainly because the names assigned to the classes are app-specific. In contrast, Opcode features are among those most significantly affected, due to changes in the order of methods when processing class files. This mainly changes the frequency of sequences that present |
2302.05647 | The Kruskal Wallis test can not be recommended | Although the Kruskal-Wallis (KW) test is widely used, it should not be
recommended: it is not robust to arbitrary alternatives, it is only a global
test without confidence intervals for the marginal hypotheses, it is inherently
defined for two-sided hypotheses, it is not very suitable for pre/post hoc test
combinations and hard to modified for factorial designs or the analysis of
covariance. As an alternative a double maximum test is proposed: a maximum over
multiple contrasts against the grand mean (approximating global power as a
linear test statistics) and a maximum over three rank scores, sensitive for
location, scale and shape effects. The joint distribution of this new test is
achieved by the multiple marginal models approach. Related R-code is provided. | Ludwig A. Hothorn | 2023-02-11T10:33:39Z | http://arxiv.org/abs/2302.05647v1 | # The Kruskal Wallis test can not be recommended
###### Abstract
Although the Kruskal-Wallis (KW) test is widely used, it should not be recommended: it is not robust to arbitrary alternatives, it is only a global test without confidence intervals for the marginal hypotheses, it is inherently defined for two-sided hypotheses, it is not very suitable for pre-test/post-hoc test combinations and it is hard to modify for factorial designs or the analysis of covariance. As an alternative, a double maximum test is proposed: a maximum over multiple contrasts against the grand mean (approximating global power as a linear test statistic) and a maximum over three rank scores, sensitive to location, scale and shape effects. The joint distribution of this new test is obtained by the multiple marginal models approach. Related R code is provided.
## 1 The Kruskal Wallis test and three alternative proposals
The Kruskal-Wallis test [8] is among the most widely used tests for analyzing randomized one-way designs with treatment groups \(T_{j}\): \([T_{1},T_{2},...,T_{k}]\), as a nonparametric alternative to the ANOVA F-test. Its test statistic is a quadratic form in the global ranks \(R_{ji}\) (in \(j\) groups with \(i\) replicates) of a continuous variable: \(\mathrm{KW}=\sum_{j=1}^{k}n_{j}\left[\frac{\sum_{i=1}^{n_{j}}R_{ji}}{n_{j}}-\frac{N+1}{2}\right]^{2}\Big/\bigl(N(N+1)/12\bigr)\), where \(N=\sum_{j}n_{j}\), which is asymptotically \(\chi^{2}\) distributed; a related permutation version is available as well [3]. For tied data a modified variance estimator is available. The null hypothesis \(H_{0}:F_{1}=F_{2}=...=F_{k}\) is tested against the alternative of any heterogeneity, i.e. at least one \(F_{j}\neq F_{j^{\prime}}\), so it is a global test only. However, in most k-sample layouts, the inference between the treatments is of interest, not just a global finding of any heterogeneity. Its use as a pre-test before post-hoc tests is a confusing concept, conditioning on quite different alternatives [2]. It does not provide simultaneous confidence intervals for the interesting marginal hypotheses, is hard to generalize to factorial layouts or to the adjustment for covariates (ANCOVA), and is inherently defined for two-sided hypotheses only.
Here, an alternative approach is proposed which is based, first, on the similar power of the ANOVA F-test and a multiple contrast test for comparisons of the treatments against the grand mean \(T..\) (MCT-GM) [5]. Therefore, a non-parametric MCT-GM [10] is considered, as a non-parametric global-rank test for the relative effect size [6, 7] and as a maxT-test over three score tests (joint double maximum test (Joint)). The maxT-test is based on the multiple marginal models approach (mmm) [11]: a first maximum over the multiple contrasts, and a second maximum over rank-transformed responses (RT, sensitive to location effects), Ansari-Bradley scores (AB, sensitive to scale effects) and Savage scores (SA, sensitive to Lehmann-type alternatives): \(T^{Joint}=\max(F^{RT}_{T_{j}-T..},F^{AB}_{T_{j}-T..},F^{SA}_{T_{j}-T..})\). The use of these three score tests was motivated by a related sum-test over these scores [9]. \(T^{Joint}\) follows a \(3k\)-variate normal distribution with the correlation matrix \(R\) between these \(3k\) test statistics, estimated via mmm.
## 2 Simulations
In a tiny simulation study, the size and power of the permutation Kruskal-Wallis test (KW), the joint test (Joint Test), the global rank nonparametric MCT (NonparMCT) and the MCT-GM for most likely transformations (MLT) [4] are compared in a balanced one-way layout with k=4, \(n_{i}=20\), for a Gaussian distribution and a skewed distribution in the Fleishman system (skewness=1.5, kurtosis=3) [1]. Three particular alternatives are considered: i) just location, ii) just scale, and iii) location-scale:
Of course, there can be no most powerful test for these very different alternatives, but the joint test shows consistently high power in these simulations (in bold).
## 3 An example
As an example, the reaction time data of mice in a design with a control and three treatment groups is used [12]:
Because the design of this example is \([C,T_{1},T_{2},T_{3}]\), two types of analyses are compared: i) the global test version, ii) the one-sided Dunnett-type version for comparing the treatment groups against the control only.
\begin{table}
\begin{tabular}{l l l l l l l} \hline Distribution & location & scale & Joint Test & NonparMCT & KW-test & MLT \\ \hline Normal & \(H_{0}\) & \(H_{0}\) & _0.065_ & 0.051 & 0.046 & 0.055 \\ Normal & \(H_{1}\) & \(H_{0}\) & **0.851** & 0.821 & 0.825 & 0.844 \\ Normal & \(H_{0}\) & \(H_{1}\) & **0.765** & 0.033 & 0.109 & 0.206 \\ Normal & \(H_{1}\) & \(H_{1}\) & **0.853** & 0.121 & 0.245 & 0.386 \\ \hline Skewed & \(H_{0}\) & \(H_{0}\) & _0.067_ & 0.057 & 0.053 & 0.048 \\ Skewed & \(H_{1}\) & \(H_{0}\) & 0.910 & **0.975** & 0.908 & 0.888 \\ Skewed & \(H_{0}\) & \(H_{1}\) & **0.799** & 0.074 & 0.163 & 0.302 \\ Skewed & \(H_{1}\) & \(H_{1}\) & **0.975** & 0.913 & 0.647 & 0.543 \\ \hline \end{tabular}
\end{table}
Table 1: Simulation results: size and power of 4 global tests
Figure 1: Shirley’s reaction time data in mice
### Global test
The p-values of the Kruskal-Wallis test (\(p=0.0007\)), the MLT (\(p=0.0015\)), the NonparMCT-GM (\(p<0.0001\)) and the Joint test (\(p<0.0001\)) are all small, but the joint test provides more detailed information:
### Dunnett-type evaluation
The adjusted p-values of the one-sided Dunnett-type versions, comparing the treatment groups against the control, are as follows for the test versions discussed above:
The experimental question is answered directly: all groups cause an increase of the reaction time, and the higher the dose, the more significant the effect, with very small p-values. The magnitude of the location effect is shown by the plot of the simultaneous lower confidence limits: at the highest dose there is an increase of at least 10.6 min:
\begin{table}
\begin{tabular}{l l|r l} \hline Effect & Treatment vs. GM & test stat & p-value \\ \hline location & 0 & -5.90 & **0.00001** \\ location & 1 & 0.07 & 1.00000 \\ location & 2 & 1.04 & 0.90901 \\ location & 3 & 2.99 & **0.04382** \\ \hline scale & 0 & -1.51 & 0.65375 \\ scale & 1 & 1.57 & 0.60633 \\ scale & 2 & 0.61 & 0.99412 \\ scale & 3 & -0.81 & 0.97239 \\ \hline shape & 0 & -4.63 & **0.00061** \\ shape & 1 & -0.40 & 0.99959 \\ shape & 2 & 0.77 & 0.97975 \\ shape & 3 & 2.17 & 0.25429 \\ \hline \end{tabular}
\end{table}
Table 2: Shirley example: adjusted p-values of the joint global test
\begin{table}
\begin{tabular}{l l|l l l} \hline Effect & Treatment vs. control & NonparamMCT & MLT & Joint \\ \hline ’location’ & 1 - 0 & 0.0022 & 0.0141 & 0.01703 \\ ’location’ & 2 - 0 & 0.0012 & 0.0048 & 0.00295 \\ ’location’ & 3 - 0 & 0.00036 & 0.0004 & 0.00007 \\ scale & 1 - 0 & - & - & 0.19622 \\ scale & 2 - 0 & - & - & 0.46656 \\ scale & 3 - 0 & - & - & 0.90015 \\ shape & 1 - 0 & - & - & 0.25311 \\ shape & 2 - 0 & - & - & 0.04242 \\ shape & 3 - 0 & - & - & 0.00306 \\ \hline \end{tabular}
\end{table}
Table 3: Shirley example: adjusted p-values 3 versions of Dunnett-type tests
## 4 Conclusions
It is clear that even for a global test for a one-way layout with arbitrary distributions no most powerful test can exist. The alternatives to the KW-test proposed here meet or exceed its power, but provide additional information, namely adjusted p-values and/or simultaneous confidence intervals for the marginal hypotheses; the joint test even provides information on the underlying location/scale/shape effects. The related R code is simple and provided within a data example.
Therefore, in summary, I suggest using one of the alternative global tests instead of the KW-test, or, even better, instead of any global test (alone or within a pre-test/post-hoc test system), using a test for the marginal hypotheses of interest, e.g. for comparisons with a control.
## 5 R-Code
library(nparcomp)
data(reaction)
library(toxbox)
boxclust(data=reaction, outcome="Time", treatment="Group", ylabel="Reaction time in mins", xlabel="Treatment", option="uni", hjitter=0.3, legpos="none", printN="FALSE", white=TRUE, psize=1.4, vlines="bg")
reaction$group<-as.factor(reaction$Group)
ni<-table(reaction$group); DF<-sum(ni)-4
npc<-mctp(Time~group, data=reaction, type="GrandMean", alternative="two.sided",
asy.method="mult.t", plot.simci=FALSE, control=NULL, info=FALSE,
correlation=FALSE)
library(coin)
kwp<-pvalue(kruskal_test(Time~group, data=reaction, distribution="approximate"))
reaction$AB<-ansari_trafo(reaction$Time, ties.method="mid-ranks")
reaction$SA<-savage_trafo(reaction$Time, ties.method="mid-ranks")
reaction$Rk<-rank(reaction$Time)
mod2<-lm(Rk~group, data=reaction)
mod3<-lm(AB~group, data=reaction)
mod4<-lm(SA~group, data=reaction)
library(multcomp)
Joint2<-glht(mmm(location=mod2, scale=mod3, shape=mod4), mlf(mcp(group="GrandMean")), df=DF)
library(tram)
TO<-glht(Colr(Time-group, data=reaction), linfct = mcp(group = "GrandMean"), df=DF)
Figure 2: Shirley example: lower confidence limits |
2301.04009 | On the Complexity of the Two-Stage Majoritarian Rule | Sequential voting rules have been extensively used in parliamentary and
legislative decision making. After observing that the prevalent successive and
the amendment rules fail several fundamental axioms, Horan and Sprumont [2021]
proposed very recently a two-stage sequential rule which satisfies a variety of
desirable properties. This paper examines this rule by investigating the
complexity of Agenda Control, Coalition Manipulation, Possible Winner,
Necessary Winner, and eight standard election control problems. Our study
offers a comprehensive understanding of the complexity landscape of these
problems. | Yongjie Yang | 2023-01-10T14:53:42Z | http://arxiv.org/abs/2301.04009v2 | # On the Complexity of the Two-Stage Majoritarian Rule+
###### Abstract
Sequential voting rules have been extensively used in parliamentary and legislative decision making. After observing that the prevalent successive and the amendment rules fail several fundamental axioms, Horan and Sprumont [2022] proposed very recently a two-stage sequential rule which satisfies a variety of desirable properties. This paper examines this rule by investigating the complexity of Agenda Control, Coalition Manipulation, Possible Winner, Necessary Winner, and eight standard election control problems. Our study offers a comprehensive understanding of the complexity landscape of these problems.
**keywords:** parameterized complexity, successive rule, amendment rule, two-stage majoritarian rule, NP-hard, W[2]-hard
## 1 Introduction
Exploring the complexity of strategic voting problems has been being a vibrant topic in computational social choice (see, e.g., [7, 17, 22, 25, 33]). The motivation is that malicious strategic voting may undermine election results, and it is widely believed that complexity could serve as a barrier against strategic actions [3, 4]. In particular, to what extent a voting rule resists strategic voting has been commonly recognized as a crucial factor to evaluate the applicability of the rule. Over the past three decades, the complexity of many different strategic voting problems under numerous voting rules has been established [5, 20]. Needless to say, as long as a new meritorious voting rule in terms of axiomatic properties has emerged, comparing it with existent rules with respect to their resistance degree to strategic voting becomes of great importance.
This paper aims to complete the complexity landscape of several strategic voting problems under a sequential voting rule proposed recently by Horan and Sprumont [26]. Taking as input preferences of voters over candidates and an agenda over candidates (a linear order specifying the priorities of candidates being considered during the decision-making process), a sequential rule outputs one candidate as the winner. Sequential rules are exceedingly useful in parliamentary and legislative decision making. So far, the successive rule and the amendment rule are among the most popular sequential rules used in many countries [34]. However, these rules fail several fundamental axioms from a theoretical point of view. This motivates Horan and Sprumont [26] to study a new rule called _two-stage majoritarian rule (TSMR)_, which has been shown to satisfy a variety of desirable axiomatic properties several of which are failed by the successive and the amendment rules.
The work of Horan and Sprumont [26] naturally raises the question of whether the newly proposed rule is comparable to the successive and the amendment rules in terms of resistance to strategic voting. This paper aims to answer this question. In addition, we also study two winner determination problems in a scenario where only partial information on voters' preferences are available. Our main contributions are as follows.
1. We study the Agenda Control problem, which models the scenario where an external agent empowered to set the agenda attempts to make a distinguished candidate the winner.
2. We study the Coalition Manipulation problem in which a set of voters, called manipulators, aim to make a distinguished candidate the winner by coordinating their votes.
3. We study eight standard election control problems, namely, CCAV, CCDV, CCAC, CCDC, DCAV, DCDV, DCAC, and DCDC. In the abbreviations, "CC"/"DC" stands for "constructive control"/"destructive control", the third letter "A"/"D" stands for "adding"/"deleting", and the last letter "V"/"C" stands for "voters"/"candidates". These problems model the scenario where a powerful external agent aims to make a distinguished candidate the winner (constructive) or not the winner (destructive) by adding or deleting a limited number of voters or candidates.
4. We study the Possible Winner and the Necessary Winner problems under TSMR. These two problems are relevant to a setting where only partial information on the preferences of voters and agenda are known. Possible Winner consists in determining which candidates have positive chances to win at least one completion of the partial input, and Necessary Winner consists in determining which candidates necessarily win regardless of the missing information.
5. For the above problems, we offer a comprehensive (parameterized) complexity landscape. Particularly, for the eight election control problems, we study both the special case where the given distinguished candidate \(p\) is the first one, and the case where \(p\) is the last one in the agenda. We refer to Table 1 for a summary of our concrete results as well as previous results for the successive rule and the amendment rule.
### Related Works
Agenda Control is arguably one of the most sought-after problems in the context of sequential votig rules and has a long history of study (see, e.g., [6, 32]). However, the complexity of Agenda Control was only first studied several years ago [8]. It should be pointed out that the complexity of some analogous problems in the setting of knockout tournaments has been studied earlier [1, 3, 4, 11, 30, 39, 40].
Coalition Manipulation is a natural generalization of the well-known Manipulation problem [3], and was first studied by Conitzer, Sandholm, and Lang [12]. We refer to [5, 13, 36, 37, 38] for detailed results on the complexity of this problem for many traditional rules (i.e., voting rules like Borda, Maximin, etc., which do not need an agenda to determine the winner).
The constructive control problems were first studied by Bartholdi, Tovey, and Trick [4], and their destructive counterparts were initiated by Hemaspaandra et al. [24]. Heretofore the complexity of these problems for many rules has been extensively investigated. We refer to the book chapters [5, 20] for important progress by 2016, and refer to [19, 33, 42, 43, 44] for some recent new results.
The complexity of Possible Winner and Necessary Winner for the successive and the amendment rules has been studied by Bredereck et al. [8]. These two problems for traditional voting rules were first studied by Konczak and Lang [28], and the complexity of the problems for many rules has been subsequently established [9, 10, 41].
### Organization
The remainder of the paper is organized as follows. In Section 2, we give the formal definitions of important notions used in the paper. Then, in Section 3, we unfold our concrete results for the strategic problems including Agenda Control, Coalition Manipulation, and the eight standard election control problems. Then, we study the Possible Winner and the Necessary Winner problems in Section 4. Finally, Section 5 recaps our results and layouts some topics for future research.
## 2 Preliminaries
We assume the reader is familiar with basic notions in graph theory, complexity theory, and parameterized complexity theory [2, 14, 15, 35].
Let \([i]\) be the set of positive integers equal to or smaller than \(i\). For a binary relation \(R\), we often use \(xRy\) to denote \((x,y)\in R\).
### Graphs
An undirected graph is a tuple \(G=(N,A)\), where \(N\) is a set of vertices and \(A\) is a set of edges. An edge between two vertices \(v\) and \(v^{\prime}\) is denoted by \(\{v,v^{\prime}\}\). We use \(\Gamma_{G}(v)\) to denote the set of neighbors of \(v\) in \(G\), i.e., \(\Gamma_{G}(v)=\{v^{\prime}\in N:\{v,v^{\prime}\}\in A\}\).
A digraph is a tuple \(G=(N,A)\) where \(N\) is a set of vertices and \(A\) is a set of arcs. Each arc from a vertex \(a\) to a vertex \(b\) is denoted by \((a,b)\). The set of inneighbors of a vertex \(a\) in \(G\) is \(\Gamma_{G}^{-}(a)=\{b\in N:(b,a)\in A\}\), and the set of outneighbors of \(a\) in \(G\) is \(\Gamma_{G}^{+}(a)=\{b\in N:(a,b)\in A\}\). When it is clear which graph \(G\) is discussed, we drop \(G\) from the notions. For \(S\subseteq N\), let \(\Gamma_{G}^{+}(S)=\bigcup_{a\in S}\Gamma_{G}^{+}(a)\setminus S\) be
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|} \hline & & CCAV & CCDV & CCAC & CCDC \\ \hline \multirow{2}{*}{**TSMR**} & first & **W[2]-h**\((k+n_{\text{rg}}\), Thm. 3) & **W[2]-h**\((k,n-k\), Thms. 5, 6) & **W[2]-h**\((k\), Thm. 9) & **P** (Thm. 10) \\ \cline{2-5} & last & **W[2]-h**\((k+n_{\text{rg}}\), Thm. 4) & **W[2]-h**\((k,n-k\), Thms. 7, 8) & immune (Cor. 1) & \\ \hline \multirow{2}{*}{successive} & first & **P** & **P** & immune & **W[1]-h**\((k,m-k)\) \\ \cline{2-5} & last & **W[1]-h**\((k+n_{\text{rg}})\) & **W[2]-h**\((k)\) & **W[2]-h**\((k)\) & **P** \\ \hline \multirow{2}{*}{amendment} & first & **W[1]-h**\((k+n_{\text{rg}})\) & **W[1]-h**\((k)\) & **P** & **P** \\ \cline{2-5} & last & **W[2]-h**\((k+n_{\text{rg}})\) & **W[2]-h**\((k)\) & **W[2]-h**\((k)\) & **P** & **P** \\ \hline \hline \multirow{2}{*}{**TSMR**} & last & **W[2]-h**\((k+n_{\text{rg}}\), Thm. 11) & **W[2]-h**\((k,n-k\), Thms. 12, 13) & **P** (Thm. 14) & **P** (Cor. 3) \\ \cline{2-5} & last & **P**[4] & **P**[4] & **P**[3] \\ \hline \multirow{2}{*}{successive} & first & **P** & **P** & **W[2]-h**\((k)\) & immune \\ \cline{2-5} & last & **P** & **P** & **P** & **W[1]-h**\((k,m-k)\) \\ \hline \multirow{2}{*}{amendment} & first & **P** & **P** & **W[1]-h**\((k,m-k)\) \\ \cline{2-5} & first & **P** & **P** & **P** & **immune** \\ \cline{2-5} & last & **W[1]-h**\((k)\) & **W[2]-h**\((k)\) & **P** & **P** \\ \hline \hline \multirow{2}{*}{**TSMR**} & **Agenda Control** & Coalition Manipulation & Possible Winner & Necessary Winner \\ \cline{2-5} & **P** (Thm. 1) & **P** (Thm. 2) & **NP-h** (Thms. 15, 16) & **P** (Thm. 17) \\ \hline \multirow{2}{*}{successive} & **P** & **P** & **NP-h** & **P** \\ \cline{2-5} & amendment & **P** & **P** & **NP-h** & **coNP-h** \\ \hline \end{tabular}
\end{table}
Table 1: A summary of the complexity of many voting problems under several sequential rules. Our main results are in bold face. In the table, “first”, “last”, and “\(\overline{\text{last}}\)” mean that the distinguished candidate is respectively the first one, the last one, and not the last one in the agenda. **P**-results spanning two rows hold for the general case, i.e., that they hold regardless of the position of the distinguished candidate in the agenda. In addition, \(m\) is the number of candidates, \(n\) is the number of votes, \(n_{\text{rg}}\) is the number of registered votes, and \(k\) is the solution size.
the set of outneighbors of vertices in \(S\) without \(S\) itself. An oriented graph is a digraph so that between every two vertices there is at most one arc.
For a graph \(G\) (be it directed or undirected) and a subset \(S\) of vertices, the subgraph of \(G\) induced by \(S\) is denoted by \(G[S]\).
### Elections and Voting Rules
An election is a tuple \((C,V)\) of a set of candidates \(C\) and a multiset of votes \(V\) where every \(\succ\in V\) is defined as a linear order over \(C\). For two candidates \(c,c^{\prime}\in C\), we say that \(c\) is ranked before \(c^{\prime}\) in a vote \(\succ\) if \(c\succ c^{\prime}\). In addition, we say that \(c\) is ranked immediately before \(c^{\prime}\) if \(c\succ c^{\prime}\) and there are no other candidates ranked between them. A vote \(\succ\) specifies the preference of a voter casting \(\succ\) where \(a\) is preferred to \(b\) if \(a\) is ranked before \(b\). For notational brevity, we sometimes write a preference in the format of a sequence of candidates from the most preferred one to the least preferred one. For instance, if we say a vote has the preference \(a\ b\ c\), we mean that \(a\) is ranked before \(b\), and \(b\) ranked before \(c\) in the vote.
An agenda \(\vartriangleright\) is a linear order over \(C\). For \(c\in C\), we call candidates before \(c\) in \(\vartriangleright\) the predecessors of \(c\), and call those after \(c\) the successors of \(c\). A sequential rule \(\tau\) maps each election \((C,V)\) and an agenda \(\vartriangleright\) to a single candidate \(\tau(C,V,\vartriangleright)\in C\), the winner.
For \(c,c^{\prime}\in C\), we use \(n_{V}(c,c^{\prime})\) to denote the number of votes in \(V\) ranking \(c\) before \(c^{\prime}\). We say \(c\) beats (resp. ties) \(c^{\prime}\) with respect to \(V\) if \(n_{V}(c,c^{\prime})>n_{V}(c^{\prime},c)\) (resp. \(n_{V}(c,c^{\prime})=n_{V}(c^{\prime},c)\)). A candidate is a weak Condorcet winner if it is not beaten by anyone else. In addition, a candidate is a Condorcet winner if it beats all the other candidates. The majority graph of an election \(\mathcal{E}=(C,V)\), denoted \(G_{E}\), is an oriented graph with the vertex set \(C\), and there is an arc from \(c\in C\) to \(c^{\prime}\in C\) if and only if \(n_{V}(c,c^{\prime})>n_{V}(c^{\prime},c)\).
* **Two-stage majoritarian rule (TSMR)**1 Let \(G_{E}^{\vartriangleright}\) be the subdigraph of \(G_{E}\) with only forward arcs with respect to \(\vartriangleright\), i.e., \(G_{E}^{\vartriangleright}\) takes \(C\) as the vertex set and there is an arc from \(c\) to \(c^{\prime}\) in \(G_{E}^{\vartriangleright}\) if and only if \(c\vartriangleright c^{\prime}\) and there is an arc from \(c\) to \(c^{\prime}\) in \(G_{E}\). Let \(C^{\prime}\subseteq C\) be the set of candidates without inneighbors in \(G_{E}^{\vartriangleright}\). Then, the procedure returns the right-most candidate in \(C^{\prime}\) as the winner, i.e., the \(c\in C^{\prime}\) such that \(c^{\prime}\vartriangleright c\) for all \(c^{\prime}\in C^{\prime}\setminus\{c\}\) (a small code sketch of this procedure follows Example 1 below).
Footnote 1: We use the notation \(\mathcal{E}\) to denote the set of candidates with respect to \(\vartriangleright\).
We also give the formal definitions of the successive and amendment rules as they are closely related to our discussions.
* **Successive** For a candidate \(c\in C\) and a subset \(C^{\prime}\subseteq C\setminus\{c\}\), we say \(c\) beats \(C^{\prime}\) if there is a strict majority of votes each of which ranks \(c\) before all candidates in \(C^{\prime}\). The successive winner is the first one who beats the set of all her successors.
* **Amendment** This procedure takes \(|C|\) rounds, where each round determines a temporary winner. Precisely, the winner of the first round is the first candidate in the agenda. The winner of round \(i\) where \(i\geq 2\) is determined as follows. Let \(c\) be the winner of round \(i-1\), and let \(c^{\prime}\) be the \(i\)-th candidate in the agenda. The winner of round \(i\) is \(c\) if \(c\) beats \(c^{\prime}\), and is \(c^{\prime}\) otherwise. The amendment winner is the winner of the last round.
We note that the successive rule and the amendment rule have been also studied under several other names (cf. [6, 21]).
**Example 1**.: _Let \(C=\{a,b,c,d\}\), and let \(V\) be a set of three votes respectively with the preferences \(b\ d\ c\ a\), \(c\ a\ b\ d\), and \(a\ d\ b\ c\). The majority graph of \((C,V)\), three different agendas, and the winners under different rules and agendas are shown in Figure 1._
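The TSMR computation can be traced with a minimal sketch; this is our own Python rendering of the definition above, with votes given as lists of candidates from most to least preferred, and the demo agenda is arbitrary rather than one of those in Figure 1.

```python
# Minimal sketch of the TSMR winner computation defined above.

def beats(votes, a, b):
    """True iff a strict majority of the votes ranks a before b."""
    before = sum(1 for v in votes if v.index(a) < v.index(b))
    return before > len(votes) - before

def tsmr_winner(candidates, votes, agenda):
    pos = {c: i for i, c in enumerate(agenda)}
    # C': candidates with no forward in-arc, i.e. beaten by none of their predecessors
    survivors = [c for c in candidates
                 if not any(beats(votes, d, c) for d in agenda[:pos[c]])]
    return max(survivors, key=lambda c: pos[c])   # right-most survivor wins

# Votes of Example 1; under the agenda (a, b, c, d) only a has no forward
# in-arc, so a wins for this particular agenda.
votes = [["b", "d", "c", "a"], ["c", "a", "b", "d"], ["a", "d", "b", "c"]]
print(tsmr_winner(["a", "b", "c", "d"], votes, ["a", "b", "c", "d"]))   # a
```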
By the definitions of the sequential rules, it is easy to verify that the first and the last candidates in an agenda are somehow related to (weak) Condorcet winner, as summarized below.
**Observation 1**.: _For an election \((C,V)\) and an agenda \(\rhd\) over \(C\), the following hold._
1. _The first candidate in_ \(\rhd\) _is the amendment winner of_ \((C,V)\) _if and only if it is the Condorcet winner of_ \((C,V)\)_._
2. _The last candidate in_ \(\rhd\) _is the TSMR winner of_ \((C,V)\) _if and only if it is a weak Condorcet winner of_ \((C,V)\)_._
3. _If the first candidate in_ \(\rhd\) _is the successive winner of_ \((C,V)\)_, then it is also the Condorcet winner of_ \((C,V)\)_._
4. _If the first candidate in_ \(\rhd\) _is the Condorcet winner of_ \((C,V)\)_, then it is also the TSMR winner of_ \((C,V)\)_._
5. _If the last candidate in_ \(\rhd\) _is a weak Condorcet winner of_ \((C,V)\)_, it is also the successive winner and the amendment winner of_ \((C,V)\)_._
6. _The converses of (3)-(5) do not necessarily hold._
### Other Useful Notions
Throughout the paper, unless stated otherwise, for a set \(S\) we use \(\overrightarrow{S}\) to denote an arbitrary but fixed linear order over \(S\). Once such an \(\overrightarrow{S}\) is used, \(\overleftarrow{S}\) denotes then the reverse of \(\overrightarrow{S}\). For \(S^{\prime}\subseteq S\), we use \(\overrightarrow{S}[S^{\prime}]\) to denote \(\overrightarrow{S}\) restricted to \(S^{\prime}\), and use \(\overrightarrow{S}\setminus S^{\prime}\) to denote \(\overrightarrow{S}[S\setminus S^{\prime}]\).
### Problem Formulations
For a sequential voting rule \(\tau\), we study the following problems defined in [8].
Agenda Control
**Given:** An election \((C,V)\) and a distinguished candidate \(p\in C\).
**Question:** Is there an agenda \(\rhd\) over \(C\) so that \(p\) is the winner of \((C,V,\rhd)\) with respect to \(\tau\), i.e., \(p=\tau(C,V,\rhd)\)?
Coalition Manipulation
**Given:** An election \((C,V)\), a distinguished candidate \(p\in C\), an agenda \(\rhd\) over \(C\), and a positive integer \(k\).
**Question:** Is there a multiset \(V^{\prime}\) of \(k\) votes over \(C\) so that \(p=\tau(C,V\cup V^{\prime},\rhd)\)?
Figure 1: An illustration of TSMR, the successive rule, and the amendment rule. For TSMR, arcs not in \(G^{\rhd}_{E}\) (backward arcs with respect to \(\rhd\)) are drawn as dashed lines.
For a partial order \(R\) over a set \(X\), a linear extension of \(R\) is a linear order over \(X\) containing \(R\), i.e., a linear order \(R^{\prime}\) so that \((x,y)\in R\) implies \((x,y)\in R^{\prime}\) for all \(x,y\in X\).
A partial election is a tuple \((C,V)\) where \(V\) is a multiset of partial orders over \(C\). An election \((C,V^{\prime})\) is a completion of a partial election \((C,V)\) if elements of \(V^{\prime}\) one-to-one correspond to elements of \(V\) so that every \(v^{\prime}\in V^{\prime}\) is a linear extension of the partial order in \(V\) corresponding to \(v^{\prime}\). A partial agenda over \(C\) is a partial order over \(C\).
Possible Winner
**Given:** A partial election \((C,V)\), a distinguished candidate \(p\in C\), and a partial agenda \(\rhd\) over \(C\).
**Question:** Is there a completion \((C,V^{\prime})\) of \((C,V)\) and a linear extension \(\rhd^{\prime}\) of \(\rhd\) so that \(p=\tau(C,V^{\prime},\rhd^{\prime})\)?
Necessary Winner
**Given:** A partial election \((C,V)\), a distinguished candidate \(p\in C\), and a partial agenda \(\rhd\) over \(C\).
**Question:** Is \(p\) the \(\tau\) winner of every completion of \((C,V,\rhd)\), i.e., \(p=\tau(C,V^{\prime},\rhd^{\prime})\) for all completions \((C,V^{\prime})\) of \((C,V)\) and all linear extensions \(\rhd^{\prime}\) of \(\rhd\)?
We also study eight standard control problems which are special cases of the following problems.
Constructive Multimode Control
**Given:** An election \((C\cup D,V\cup W)\) with a set \(C\) of registered candidates,[2] a set \(D\) of unregistered candidates, a multiset \(V\) of registered votes, a multiset \(W\) of unregistered votes, a distinguished candidate \(p\in C\), an agenda \(\rhd\) over \(C\cup D\), and four integers \(k_{\text{AV}}\), \(k_{\text{DV}}\), \(k_{\text{AC}}\), and \(k_{\text{DC}}\) such that \(k_{\text{AV}}\leq|W|\), \(k_{\text{DV}}\leq|V|\), \(k_{\text{AC}}\leq|D|\), and \(k_{\text{DC}}\leq|C|\).
**Question:** Are there \(V^{\prime}\subseteq V\), \(W^{\prime}\subseteq W\), \(C^{\prime}\subseteq C\setminus\{p\}\), and \(D^{\prime}\subseteq D\) such that \(|V^{\prime}|\leq k_{\text{DV}}\), \(|W^{\prime}|\leq k_{\text{AV}}\), \(|C^{\prime}|\leq k_{\text{DC}}\), \(|D^{\prime}|\leq k_{\text{AC}}\), and \(p\) wins \(((C\setminus C^{\prime})\cup D^{\prime},(V\setminus V^{\prime})\cup W^{ \prime},\rhd^{\prime})\) with respect to \(\tau\), where \(\rhd^{\prime}\) is \(\rhd\) restricted to \((C\setminus C^{\prime})\cup D^{\prime}\)?
In Destructive Multimode Control, we have the same input as Constructive Multimode Control, and are asked whether there are \(V^{\prime}\), \(W^{\prime}\), \(C^{\prime}\), and \(D^{\prime}\) as in the above definition such that \(p\) is not the \(\tau\) winner of \(((C\setminus C^{\prime})\cup D^{\prime},(V\setminus V^{\prime})\cup W^{ \prime},\rhd^{\prime})\).
The eight standard control problems studied in the paper are special cases of Constructive Multimode Control and Destructive Multimode Control. The specifications of the eight standard control problems are summarized in Table 2.
For simplicity, when we study a problem in Table 2, we use \(k\) to denote the integer in the input not required to be 0, and omit components in the input requested to be 0 or \(\emptyset\). For example, an instance of CCAV is written as \(((C,V\cup W),p,\rhd,k)\), where \(k\) represents \(k_{\text{AV}}\).
Our hardness results are based on reductions from the following problem, Red-Blue Dominating Set (RBDS): given a bipartite graph \(G=(R\cup B,A)\) whose vertices are partitioned into a set \(R\) of red vertices and a set \(B\) of blue vertices, and a positive integer \(\kappa\), decide whether there exists a subset \(B^{\prime}\subseteq B\) with \(|B^{\prime}|\leq\kappa\) that dominates \(R\), i.e., such that every vertex in \(R\) has at least one neighbor in \(B^{\prime}\).
\begin{table}
\begin{tabular}{l l} \hline \hline problems & restrictions \\ \hline XAV & \(k_{\text{DV}}=k_{\text{AC}}=k_{\text{DC}}=0\), \(D=\emptyset\) \\ XAC & \(k_{\text{AV}}=k_{\text{DV}}=k_{\text{DC}}=0\), \(W=\emptyset\) \\ XDV & \(k_{\text{AV}}=k_{\text{AC}}=k_{\text{DC}}=0\), \(D=W=\emptyset\) \\ XDC & \(k_{\text{AV}}=k_{\text{DV}}=k_{\text{AC}}=0\), \(D=W=\emptyset\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Special cases of Constructive/Destructive Multimode Control. Here, X is either CC standing for constructive control or DC standing for destructive control.
RBDS is NP-complete [23], and from a parameterized complexity point of view it is W[2]-complete with respect to \(\kappa\)[16].
### Remarks
Most previous studies make the assumption that there are no ties in elections (see, e.g., [8, 26]). Our results are presented without this assumption, but all of them still hold when the no-tie assumption is made. This is clear for polynomial-time solvability results. Regarding hardness results for voter control problems, some of our reductions can be slightly adapted to show the same hardness if the no-tie assumption is adopted, and others directly apply to the case with the no-tie assumption. We note that in these problems the no-tie assumption means that after the addition or the deletion of votes there are no ties. All our other reductions directly apply to the case with the no-tie assumption, because in these reductions the elections constructed do not admit ties and the feasible solutions do not remove the assumption.
All our reductions take polynomial time, and all computationally hard problems proved in the paper are clearly in NP (Necessary Winner is in coNP). Therefore, a problem shown to be W[2]-hard in the paper is also NP-complete. We won't explicitly state the NP-completeness in the corresponding theorems.
## 3 Strategic Problems
In this section, we study the complexity of many strategic voting problems for TSMR.
### Agenda Control and Manipulation
We first present a P-algorithm for Agenda Control.
**Theorem 1**.: Agenda Control _for TSMR is in P._
Proof.: Let \(I=((C,V),p)\) be an instance of Agenda Control. Let \(G\) be the majority graph of \((C,V)\). We construct an agenda \(\rhd\) as follows. Let \(A=C\setminus(\Gamma_{G}^{-}(p)\cup\{p\})\) be the set of candidates which do not beat \(p\) with respect to \(V\). We fill all candidates from \(A\cup\{p\}\) in the first \(|A\cup\{p\}|\) positions in the agenda \(\rhd\) so that \(p\) is after all candidates from \(A\) (the relative orders of candidates from \(A\) are set arbitrarily). Then, we fill candidates from \(\Gamma_{G}^{-}(p)\) into the agenda iteratively as follows. First, let \(S=A\). In each iteration we compute the set \(S^{\prime}=\Gamma_{G}^{+}(S)\), and fill candidates from \(S^{\prime}\) in the subsequent \(|S^{\prime}|\) positions in the agenda \(\rhd\). Then, we update \(S:=S\cup S^{\prime}\). The iterations terminate when \(S^{\prime}\) defined above turns out to be empty.
After the iterations terminate, if all candidates in \(C\) are in the agenda \(\rhd\), \(p\) is the TSMR winner of \((C,V)\) with respect to \(\rhd\). Thus, in this case, we conclude that \(I\) is a Yes-instance. If, however, there are still some candidates not filled in the agenda, we conclude that \(I\) is a No-instance. The reason is as follows. By the above iterations, in this case it holds that (1) none of \(C\setminus(S\cup\{p\})\) is beaten by anyone from \(S\cup\{p\}\), and (2) everyone in \(C\setminus(S\cup\{p\})\) beats \(p\). Condition (2) entails that every candidate from \(C\setminus(S\cup\{p\})\) must be after \(p\) in any agenda under which \(p\) wins. However, as long as this is the case, Condition (1) warrants the winning of someone from \(C\setminus(S\cup\{p\})\), so no such agenda exists.
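A compact sketch of this agenda-construction procedure is given below; it reuses the pairwise-majority helper `beats` from the earlier TSMR sketch, and all names are ours.

```python
# Minimal sketch of the agenda-construction algorithm in the proof of Theorem 1.
# beats(votes, a, b) is the strict-majority test from the earlier TSMR sketch.

def agenda_control(candidates, votes, p, beats):
    A = [c for c in candidates if c != p and not beats(votes, c, p)]
    agenda, S = A + [p], set(A)           # p is placed right after A
    while True:
        new = [c for c in candidates if c != p and c not in S
               and any(beats(votes, s, c) for s in S)]
        if not new:
            break
        agenda += new
        S.update(new)
    # Yes-instance iff every candidate could be placed in the agenda
    return agenda if len(agenda) == len(candidates) else None
```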
For Coalition Manipulation, we have again a \(\mathsf{P}\)-algorithm.
**Theorem 2**.: Coalition Manipulation _for_ TSMR _is in \(\mathsf{P}\)._
Proof.: Let \(I=((C,V),p,\rhd,k)\) be an instance of Coalition Manipulation. Let \(B\) be the set of predecessors of \(p\), and let \(B^{\prime}\) be the set of successors of \(p\) in the agenda \(\rhd\). Let \(V^{\prime}\) be the multiset of \(k\) votes with the same preference \(p\ \overrightarrow{B}\ \overrightarrow{B^{\prime}}\), where \(\overrightarrow{B}\) and \(\overrightarrow{B^{\prime}}\) are respectively the linear orders over \(B\) and \(B^{\prime}\) consistent with \(\rhd\), i.e., \(\overrightarrow{B}=\rhd[B]\) and \(\overrightarrow{B^{\prime}}=\rhd[B^{\prime}]\). We conclude that \(I\) is a Yes-instance if and only if \(p\) is the TSMR winner of \((C,V\cup V^{\prime},\rhd)\).
The algorithm clearly runs in polynomial time. It remains to prove its correctness. To this end, we assume that \(I\) is a Yes-instance, and to complete the proof it suffices to show that \(I\) has a feasible solution \(V^{\prime}\) so that every vote in \(V^{\prime}\) has the same preference \(p\ \overrightarrow{B}\ \overrightarrow{B^{\prime}}\). Observe first that \(I\) has a feasible solution where \(p\) is ranked in the first place in all votes. Let \(U\) be a feasible solution of \(I\) where \(p\) is in the top in all votes in \(U\). If \(U\) equals \(V^{\prime}\) defined above, we are done. Otherwise, we show below how to transform \(U\) into \(V^{\prime}\) without destroying the feasibility of the solution. If there exists at least one vote \(\succ\in U\) and two candidates \(b\in B\) and \(b^{\prime}\in B^{\prime}\) so that \(b^{\prime}\) is ranked immediately before \(b\) in \(\succ\), we do the following. Let \(\succ^{\prime}\) be the vote obtained from \(\succ\) by swapping \(b\) and \(b^{\prime}\), and let \(U^{\prime}=U\setminus\{\succ\}\cup\{\succ^{\prime}\}\). It is easy to verify that every candidate who is beaten by at least one of her predecessors with respect to \(V\cup U\) is also beaten by at least one of her predecessors with respect to \(V\cup U^{\prime}\), and every candidate which does not beat \(p\) with respect to \(V\cup U\) still does not beat \(p\) with respect to \(V\cup U^{\prime}\). Therefore, \(p\) still wins after the swapping of \(b\) and \(b^{\prime}\). After the swapping operations are exhaustively applied, we obtain a feasible solution \(\widetilde{U}\) of \(I\) so that \(p\) is ranked in the top, and all candidates in \(B\) are ranked before all candidates in \(B^{\prime}\) in every vote of \(\widetilde{U}\). If \(\widetilde{U}=V^{\prime}\), we are done. Otherwise, there exists at least one vote \(\succ\in\widetilde{U}\) such that one of the following conditions holds:
* \(\exists a,b\in B\) such that \(a\) is ranked immediately before \(b\) in \(\succ\) and \(b\rhd a\);
* \(\exists a^{\prime},b^{\prime}\in B^{\prime}\) s.t. \(a^{\prime}\) is ranked immediately before \(b^{\prime}\) in \(\succ\) and \(b^{\prime}\rhd a^{\prime}\).
Then, analogous to the above discussion, we can swap \(a\) and \(b\) (resp. \(a^{\prime}\) and \(b^{\prime}\)) in \(\succ\) without changing the winning status of \(p\). After the swapping operations are exhaustively used, we eventually obtain \(V^{\prime}\).
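The resulting polynomial-time check is a short routine on top of the earlier TSMR sketch; this is a sketch with our own names, not the authors' code.

```python
# Minimal sketch of the Theorem 2 strategy: all k manipulators cast the same
# vote, ranking p first and then p's predecessors and successors in agenda
# order; p can be made the winner iff it wins after adding these votes.
# tsmr_winner is the helper from the earlier TSMR sketch.

def coalition_manipulation(candidates, votes, p, agenda, k, tsmr_winner):
    i = agenda.index(p)
    manip_vote = [p] + agenda[:i] + agenda[i + 1:]
    extended = votes + [list(manip_vote) for _ in range(k)]
    return tsmr_winner(candidates, extended, agenda) == p
```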
### Constructive Controls
In this section, we study constructive control problems for TSMR. We first present results for control by adding/deleting votes. We show that these problems are \(\mathsf{W[2]}\)-hard with respect to several meaningful parameters, for both the special case where the distinguished candidate is the first one in the agenda and the case where the distinguished candidate is the last one in the agenda.
**Theorem 3**.: CCAV _for_ TSMR _is \(\mathsf{W[2]}\)-hard with respect to the number of added votes plus the number of registered votes. Moreover, this holds even when the distinguished candidate is the first one in the agenda._
Proof.: We prove the theorem via a reduction from RBDS. Let \((G=(R\cup B,A),\kappa)\) be an instance of RBDS. We construct an instance of CCAV for TSMR as follows. We create for each vertex in \(G\) a candidate denoted by the same symbol for simplicity. In addition, we create a candidate \(p\). Let \(C=B\cup R\cup\{p\}\). The agenda is \(\rhd=(p,\overrightarrow{B},\overrightarrow{R})\). We create the following registered votes:
* \(\kappa\) votes with the preference \(\overleftarrow{B}\ \overleftarrow{R}\ \ p\); and
* one vote with the preference \(\overleftarrow{R}\ \ p\ \overleftarrow{B}\).
Let \(V\) be the multiset of the above \(\kappa+1\) registered votes. We create \(|B|\) unregistered votes corresponding to \(B\). In particular, for each \(b\in B\), we create one vote \(\succ_{b}\) with the preference
\[p\ \left(\overleftarrow{R}\setminus\Gamma_{G}(b)\right)\ b\ \left(\overleftarrow{R}[\Gamma_{G}(b)]\right)\ \left(\overleftarrow{B}\setminus\{b\}\right).\]
Let \(W\) be the set of the above \(|B|\) unregistered votes. Finally, we set \(k=\kappa\). The instance of CCAV for TSMR is \(((C,V\cup W),p,\rhd,k)\). In the following we show the correctness of the reduction.
\((\Rightarrow)\) Suppose that there exists \(B^{\prime}\subseteq B\) such that \(|B^{\prime}|=\kappa\) and \(B^{\prime}\) dominates \(R\) in \(G\). Let \(W^{\prime}=\{\succ_{b}\): \(b\in B^{\prime}\}\) be the set of the \(\kappa\) unregistered votes corresponding to \(B^{\prime}\). We show below that \(p\) becomes the TSMR winner of the election \(\mathcal{E}=(C,V\cup W^{\prime})\). Obviously, \(|V\cup W^{\prime}|=2\kappa+1\). As one of the registered votes ranks \(p\) before \(B\), and all the \(\kappa\) votes in \(W^{\prime}\) rank \(p\) before \(B\) too, there are \(\kappa+1\) votes in \(V\cup W^{\prime}\) ranking \(p\) before \(B\). So, none of \(B\) is the winner of \(\mathcal{E}\). Let us consider a candidate \(r\in R\). Note that there are \(\kappa\) registered votes which rank \(B\) before \(R\). As \(B^{\prime}\) dominates \(R\), there is at least one \(b\in B^{\prime}\) so that \(r\in\Gamma_{G}(b)\). By the definition of \(\succ_{b}\), \(b\) is ranked before \(r\) in \(\succ_{b}\). Therefore, there are in total \(\kappa+1\) votes in \(V\cup W^{\prime}\) which rank \(b\) before \(r\), precluding the winning of \(r\). As this holds for all \(r\in R\), and all candidates from \(B\) are before all candidates from \(R\) in the agenda \(\rhd\), none of \(R\) is the TSMR winner of \(\mathcal{E}\) either. This leaves only the possibility that \(p\) is the winner.
\((\Leftarrow)\) Suppose that there exists a subset \(W^{\prime}\subseteq W\) of at most \(\kappa\) votes so that \(p\) is the TSMR winner of \((C,V\cup W^{\prime})\). Observe that \(W^{\prime}\) must contain exactly \(\kappa\) votes, since otherwise someone in \(B\) precludes \(p\) from winning. Observe that all candidates in \(R\) beat \(p\) with respect to \(V\cup W^{\prime}\) no matter which votes are contained in \(W^{\prime}\). Furthermore, everyone in \(R\) beats all her predecessors in \(R\) with respect to \(V\cup W^{\prime}\). So, if \(p\) wins \((C,V\cup W^{\prime})\) it must be that every \(r\in R\) is beaten by someone in \(B\). This implies that for every \(r\in R\), there is at least one vote in \(W^{\prime}\) which ranks some \(b\in B\) before \(r\). By the construction of the unregistered votes, this vote must be \(\succ_{b}\) such that \(b\) dominates \(r\). It follows that \(\{b\in B:\succ_{b}\in W^{\prime}\}\) dominates \(R\). This implies that the RBDS instance is a Yes-instance.
Now we consider the case where the distinguished candidate is the last one in the agenda. Recall that the last candidate in the agenda is the TSMR winner if and only if it is a weak Condorcet winner (Observation 1). The \(\mathsf{W[1]}\)-hardness of CCAV for Condorcet winner established by Liu et al. [29] can be adapted for showing the same hardness for weak Condorcet winner3. We strengthen the result by establishing a \(\mathsf{W[2]}\)-hard reduction, excluding the possibility of the problem being complete to \(\mathsf{W[1]}\).
Footnote 3: For this, we mean the problem of determining if we can add a limited number of votes to make a particular candidate a weak Condorcet winner.
**Theorem 4**.: CCAV _for TSMR is \(\mathsf{W[2]}\)-hard with respect to the number of added votes plus the number of registered votes even when the distinguished candidate is the last one in the agenda._
Proof.: We prove the theorem via a reduction from RBDS. Let \((G,\kappa)\) be an instance of RBDS, where \(G=(R\cup B,A)\) is a bipartite graph. We create an instance of CCAV as follows. The candidate set is \(C=R\cup\{p,q\}\). Let \(\rhd=(\overrightarrow{R},q,p)\). We create a multiset \(V\) of \(\kappa\) registered votes as follows:
* \(\kappa-1\) votes with the preference \(q\ p\ \overrightarrow{R}\); and
* one vote with the preference \(q\ \overrightarrow{R}\ p\).
For each \(b\in B\), we create one unregistered vote \(\succ_{b}\) with the preference
\[\left(\overrightarrow{R}\setminus\Gamma_{G}(b)\right)\ p\ \left(\overrightarrow{R}[\Gamma_{G}(b)]\right)\ q.\]
For a given \(B^{\prime}\subseteq B\), let \(W(B^{\prime})=\{\succ_{b}:b\in B^{\prime}\}\) be the multiset of unregistered votes corresponding to \(B^{\prime}\). Let \(k=\kappa\). The instance of CCAV is \(((C,V\cup W(B)),p,\rhd,k)\). It remains to show the correctness of the reduction.
\((\Rightarrow)\) Assume that there exists \(B^{\prime}\subseteq B\) such that \(|B^{\prime}|=\kappa\) and \(B^{\prime}\) dominates \(R\). Let \(\mathcal{E}=(C,V\cup W(B^{\prime}))\). We show that the CCAV instance is a Yes-instance by showing that \(p\) is the TSMR winner of \(\mathcal{E}\). First, observe that \(p\) ties \(q\) in \(\mathcal{E}\). As \(B^{\prime}\) dominates \(R\), for every \(r\in R\) there is at least one \(b\in B^{\prime}\) which dominates \(r\). This implies that in the vote \(\succ_{b}\in W(B^{\prime})\), \(p\) is ranked before \(r\), and hence \(p\) is not beaten by \(r\) in \(\mathcal{E}\). As \(p\) is the last one in the agenda, it follows that \(p\) wins \(\mathcal{E}\).
\((\Leftarrow)\) Assume that there exists \(B^{\prime}\subseteq B\) such that \(|B^{\prime}|\leq k=\kappa\) and \(p\) is the TSMR winner of \(\mathcal{E}=(C,V\cup W(B^{\prime}))\). This means that \(p\) is not beaten by anyone else in \(\mathcal{E}\). Therefore, \(|B^{\prime}|=k\), since otherwise \(q\) beats \(p\). It follows that \(|V\cup W(B^{\prime})|=2\kappa\). Let \(r\in R\). As we have exactly \(\kappa-1\) registered votes ranking \(p\) before \(r\) in \(V\), there is at least one \(b\in B^{\prime}\) so that \(p\) is ranked before \(r\) in the vote \(\succ_{b}\). By the definition of \(\succ_{b}\), this implies that \(b\) dominates \(r\). It follows that \(B^{\prime}\) dominates \(R\). Thus, the RBDS instance is a Yes-instance.
Let us move on to constructive control by deleting votes. This problem possesses two natural parameters: the solution size \(k\) and its dual parameter \(n-k\) where \(n\) is the number of votes. We show that the problem is \(\mathsf{W[2]}\)-hard with respect to both parameters, even when the distinguished candidate is the first or the last one in the agenda. These results are encapsulated in the following four theorems.
**Theorem 5**.: CCDV _for TSMR is \(\mathsf{W[2]}\)-hard with respect to the number of deleted votes even when the distinguished candidate is the first one in the agenda._
Proof.: We prove the theorem via a reduction from RBDS. Let \((G,\kappa)\) be an instance of RBDS where \(G=(B\cup R,A)\) is a bipartite graph. We assume that \(G\) does not contain any isolated vertices, \(\kappa\geq 4\), and every red vertex is of degree \(\ell\) where \(\ell\geq 1\). These assumptions do not change the \(\mathsf{W[2]}\)-hardness of the problem. 4 We construct an instance of CCDV as follows. The candidate set is \(C=R\cup\{p,q,q^{\prime}\}\), and the agenda is \(\rhd=(p,q^{\prime},\overrightarrow{R},q)\). We create the following six groups of votes:
Footnote 4: The assumptions that \(G\) does not contain any isolated vertices and that \(\kappa\geq 4\) are clearly without loss of generality. If an instance does not satisfy the degree assumption, we can obtain an equivalent instance by the following operation: letting \(\ell\) be the maximum degree of vertices in \(R\), for each red vertex \(r\in R\) of degree strictly smaller than \(\ell\), we create new degree-1 vertices adjacent only to \(r\) until \(r\) has degree exactly \(\ell\). A noteworthy observation for the equivalence of the two instances is that there is an optimal solution (a subset \(B^{\prime}\subseteq B\) dominating \(R\) with the minimum cardinality) of the new instance which does not contain any of the newly introduced degree-1 vertices.
* a multiset \(V_{1}\) of \(\ell+1\) votes with the preference \[q^{\prime}\ p\ q\overleftarrow{R}\ q^{\prime};\]
* a multiset \(V_{3}\) of \(|B|-\kappa+1\) votes with the preference \[\overleftarrow{R}\ p\ q\ q^{\prime};\]
* a singleton \(V_{4}\) of one vote with the preference \[\overleftarrow{R}\ q\ p\ q^{\prime};\]
* a multiset \(V_{5}\) of \(\kappa-2\) votes with the preference \[\overleftarrow{R}\ q^{\prime}\ p\ q;\]
* for every blue vertex \(b\in B\), we create one vote \(\succ_{b}\) with the preference \[q\ q^{\prime}\ \left(\overleftarrow{R}\left[\Gamma_{G}(b)\right]\right)\ p\ \left( \overleftarrow{R}\setminus\Gamma_{G}(b)\right).\]
Let \(V\) denote the multiset of the above \(2|B|+\kappa+2\ell-1\) votes. For a given \(B^{\prime}\subseteq B\), let \(V(B^{\prime})=\{\succ_{b}:b\in B^{\prime}\}\) be the multiset of votes created for vertices in \(B^{\prime}\). We complete the construction by setting \(k=\kappa\). The instance of CCDV is \(((C,V),p,\rhd,k)\) which can be constructed in polynomial time. It remains to show the correctness of the reduction.
\((\Rightarrow)\) Assume that there exists \(B^{\prime}\subseteq B\) such that \(|B^{\prime}|=\kappa\) and \(B^{\prime}\) dominates \(R\). Let \(\mathcal{E}=(C,V\setminus V(B^{\prime}))\). We show below that \(p\) is the TSMR winner of \(\mathcal{E}\) with respect to the agenda \(\rhd\). To this end, it suffices to show that \(p\) beats everyone else in \(\mathcal{E}\). Let \(r\in R\). As \(B^{\prime}\) dominates \(R\), there exists \(b\in B^{\prime}\) such that \(b\) dominates \(r\), and thus \(\succ_{b}\) ranks \(r\) before \(p\). As there are in total \(|B|-\ell\) votes in \(V(B)\) ranking \(p\) before \(r\), we know that there are at least \(|B|-\ell-\kappa+1\) votes in \(V(B)\setminus V(B^{\prime})\) ranking \(p\) before \(r\). As all votes in \(V_{1}\cup V_{2}\) rank \(p\) before all candidates in \(R\), there are at least \(|B|-\ell-\kappa+1+\ell+\kappa+\ell-1=|B|+\ell\) votes ranking \(p\) before \(r\) in \(\mathcal{E}\). As \(|V\setminus V(B^{\prime})|=2|B|+2\ell-1\), we know that \(p\) beats \(r\) in \(\mathcal{E}\). It is easy to verify that there are \(|B|+\ell\) votes ranking \(p\) before \(q\) and \(q^{\prime}\) in \(V\setminus V(B^{\prime})\), meaning that \(p\) beats both \(q\) and \(q^{\prime}\) in \(\mathcal{E}\) too. To sum up, \(p\) beats everyone else in the election \(\mathcal{E}\) and hence is the winner of \(\mathcal{E}\).
\((\Leftarrow)\) Assume that there exists \(V^{\prime}\subseteq V\) such that \(|V^{\prime}|\leq k=\kappa\) and \(p\) is the TSMR winner of \(\mathcal{E}=(C,V\setminus V^{\prime})\). Observe that by the construction of the votes and the assumption that \(\kappa\geq 4\), no matter which at most \(k\) votes are contained in \(V^{\prime}\), every candidate in \(C\setminus\{p\}\) beats all her predecessors in \(C\setminus\{p\}\). Then, as \(p\) is the first candidate in the agenda and \(p\) wins \(\mathcal{E}\), we know that \(p\) beats all the other candidates. It follows that \(V^{\prime}\) and \(V_{1}\cup V_{3}\cup V_{5}\) are disjoint and \(|V^{\prime}|=\kappa\), since otherwise \(p\) cannot beat \(q\) in \(\mathcal{E}\). Similarly, it holds that \(V^{\prime}\) and \(V_{2}\cup V_{4}\) are disjoint, since otherwise \(p\) cannot beat \(q^{\prime}\). As a consequence, it holds that \(V^{\prime}\subseteq V(B)\). Without loss of generality, let \(B^{\prime}\subseteq B\) be such that \(V(B^{\prime})=V^{\prime}\). We claim that \(B^{\prime}\) dominates \(R\). Assume, for the sake of contradiction, that this is not the case. Let \(r\in R\) be a red vertex not dominated by any vertex in \(B^{\prime}\). Then, by the construction of the votes, all votes in \(V(B^{\prime})\) rank \(p\) before \(r\). This implies that there are in total at most \(|B|-\ell-\kappa+|V_{1}\cup V_{2}|=|B|+\ell-1\) votes ranking \(p\) before \(r\) in \(\mathcal{E}\). In other words, \(p\) is beaten by \(r\) in \(\mathcal{E}\). However, in this case \(p\) cannot be the TSMR winner of \(\mathcal{E}\), a contradiction.
**Theorem 6**.: CCDV _for_ TSMR _is_ W[2]-hard _with respect to the number of votes not deleted even when the distinguished candidate is the first one in the agenda._
Proof.: We prove the theorem via a reduction from RBDS. Let \((G,\kappa)\) be an instance of RBDS where \(G=(B\cup R,A)\) is a bipartite graph. As in the proof of Theorem 5, we assume that every red vertex has degree exactly \(\ell\) for some positive integer \(\ell\). We construct an instance of CCDV as follows. The candidate set is \(C=R\cup\{p,q\}\), and the agenda is \(\rhd=(p,\overrightarrow{R},q)\). We create the following three groups of votes:
* a multiset \(V_{1}\) of \(\kappa\) votes with the preference \(p\ q\ \overleftarrow{R}\);
* a singleton \(V_{2}\) of one vote with the preference \(\overleftarrow{R}\ p\ q\); and
* for every blue vertex \(b\in B\), one vote \(\succ_{b}\) with the preference \[q\ \left(\overleftarrow{R}\setminus\Gamma_{G}(b)\right)\ p\ \left(\overleftarrow{R}\left[\Gamma_{G}(b)\right]\right).\]
Let \(V\) denote the multiset of the above \(|B|+\kappa+1\) votes. For a given \(B^{\prime}\subseteq B\), we use \(V(B^{\prime})=\{\succ_{b}:b\in B^{\prime}\}\) to denote the multiset of votes corresponding to \(B^{\prime}\). We complete the construction by setting \(k=|B|-\kappa\). The instance of CCDV is \(((C,V),p,\rhd,k)\), which can be constructed in polynomial time. It remains to show the correctness of the reduction.
\((\Rightarrow)\) Assume that there exists \(B^{\prime}\subseteq B\) such that \(|B^{\prime}|=\kappa\) and \(B^{\prime}\) dominates \(R\). Let \(\mathcal{E}=(C,V_{1}\cup V_{2}\cup V(B^{\prime}))\). We show below that \(p\) is the TSMR winner of \(\mathcal{E}\) with respect to the agenda \(\rhd\). To this end, it suffices to show that \(p\) beats everyone else in \(\mathcal{E}\). Let \(r\in R\). As \(B^{\prime}\) dominates \(R\), there is at least one \(b\in B^{\prime}\) such that \(b\) dominates \(r\), and hence \(\succ_{b}\) ranks \(p\) before \(r\). Therefore, in total there are at least \(\kappa+1\) votes in \(\mathcal{E}\) ranking \(p\) before \(r\). Clearly, there are \(\kappa+1\) votes in \(\mathcal{E}\) ranking \(p\) before \(q\). As \(|V_{1}\cup V_{2}\cup V(B^{\prime})|=2\kappa+1\), we know that \(p\) beats all the other candidates in \(\mathcal{E}\), and hence \(p\) is the winner of \(\mathcal{E}\).
\((\Leftarrow)\) Assume that there exists \(V^{\prime}\subseteq V\) such that \(|V^{\prime}|\leq k=|B|-\kappa\) and \(p\) is the TSMR winner of the election \(\mathcal{E}=(C,V\setminus V^{\prime})\). Observe first that \(V^{\prime}\subseteq V(B)\) and \(|V^{\prime}|=k\), since otherwise \(q\) is not beaten by any of her predecessors, leading to \(q\) winning \(\mathcal{E}\), a contradiction. So, without loss of generality, let \(B^{\prime}\subseteq B\) be such that \(|B^{\prime}|=k=|B|-\kappa\) and \(V(B^{\prime})=V^{\prime}\). Let \(\overline{B}=B\setminus B^{\prime}\). Obviously, \(|\overline{B}|=\kappa\) and \(|V\setminus V^{\prime}|=2\kappa+1\). By the construction of the votes, no matter which \(k\) votes are contained in \(V(B^{\prime})\), everyone from \(C\setminus\{p\}\) beats all her predecessors in \(C\setminus\{p\}\). As \(p\) is the first candidate in the agenda, the winning of \(p\) in \(\mathcal{E}\) implies that \(p\) beats all the other candidates. We claim that \(\overline{B}\) dominates \(R\). Assume, for the sake of contradiction, that this is not the case. Let \(r\in R\) be a red vertex not dominated by any vertex in \(\overline{B}\). Then, by the construction of the votes, all votes in \(V(\overline{B})\) rank \(r\) before \(p\). As the only vote in \(V_{2}\) also ranks \(r\) before \(p\), there are in total \(|\overline{B}|+1=\kappa+1\) votes ranking \(r\) before \(p\) in \(\mathcal{E}\), contradicting that \(p\) beats \(r\) in \(\mathcal{E}\).
**Theorem 7**.: CCDV _for TSMR is_ W[2]-hard _with respect to the number of deleted votes. This holds even if the distinguished candidate is the last one in the agenda._
Proof.: We prove the theorem by a reduction from RBDS. Let \((G,\kappa)\) be an instance of RBDS, where \(G=(R\cup B,A)\) is a bipartite graph. We assume that \(G\) does not contain any isolated vertices, \(\kappa\geq 4\), and every red vertex is of degree \(\ell\) where \(\ell\geq 1\). These assumptions do not change the W[2]-hardness of the problem.5 Let \(C=R\cup\{p,q\}\), and let \(\rhd\) be an agenda over \(C\) where \(p\) is the last one (the relative orders of other candidates are immaterial to the correctness of the reduction). We create the following \(2|B|+2\ell+\kappa\) votes in \(V\):
Footnote 5: The assumptions that \(G\) does not contain any isolated vertices and that \(\kappa\geq 4\) are clearly without loss of generality. If an instance does not satisfy the degree assumption, we can obtain an equivalent instance by the following operation: letting \(\ell\) be the maximum degree of vertices in \(R\), for each red vertex \(r\in R\) of degree strictly smaller than \(\ell\), we create new degree-1 vertices adjacent only to \(r\) until \(r\) has degree exactly \(\ell\). An important observation for the equivalence of the two instances is that there is an optimal solution (a subset \(B^{\prime}\subseteq B\) dominating \(R\) with the minimum cardinality) of the new instance which does not contain any of the newly introduced degree-1 vertices.
* \(|B|+1\) votes with the preference \(\overleftarrow{R}\)\(p\)\(q\);
* \(\ell+\kappa\) votes with the preference \(q\)\(p\)\(\overleftarrow{R}\);
* \(\ell-1\) votes with the preference \(p\)\(q\)\(\overleftarrow{R}\); and
* for each blue vertex \(b\in B\), one vote \(\succ_{b}\) with the preference \[q\ \left(\overleftarrow{R}[\Gamma_{G}(b)]\right)\ p\ \left(\overleftarrow{R}\setminus\Gamma_{G}(b)\right).\]
For a given \(B^{\prime}\subseteq B\), let \(V(B^{\prime})=\{\succ_{b}:b\in B^{\prime}\}\) be the multiset of votes corresponding to \(B^{\prime}\). Finally, we set \(k=\kappa\). The instance of CCDV is \(((C,V),p,\rhd,k)\). In the following, we prove the correctness of the reduction.
\((\Rightarrow)\) Assume that there exists \(B^{\prime}\subseteq B\) of cardinality \(\kappa\) such that \(B^{\prime}\) dominates \(R\). Let \(\mathcal{E}=(C,V\setminus V(B^{\prime}))\). Clearly, \(|V\setminus V(B^{\prime})|=2|B|+2\ell\). We show below that \(p\) is not beaten by anyone else in \(\mathcal{E}\) and hence is the TSMR winner of \(\mathcal{E}\). As all votes in \(V(B^{\prime})\) rank \(q\) before \(p\), it holds that \(n_{V\setminus V(B^{\prime})}(p,q)=(|B|+1)+(\ell-1)=|B|+\ell\), meaning that \(p\) ties \(q\) in \(\mathcal{E}\). Moreover, as \(B^{\prime}\) dominates \(R\), for every \(r\in R\)
there exists \(b\in B^{\prime}\) dominating \(r\). By the construction of the votes, \(r\) is ranked before \(p\) in the vote \(\succ_{b}\in V(B^{\prime})\). It follows that at most \(\kappa-1\) votes in \(V(B^{\prime})\) rank \(p\) before \(r\). By the construction of the votes, we know that there are at least \((\ell+\kappa)+(\ell-1)+(|B|-\ell)-(\kappa-1)=|B|+\ell\) votes ranking \(p\) before \(r\) in \(V\setminus V(B^{\prime})\), implying that \(p\) is not beaten by \(r\) in \(\mathcal{E}\).
\((\Leftarrow)\) Assume there exists \(V^{\prime}\subseteq V\) such that \(|V^{\prime}|\leq k\) and \(p\) is the TSMR winner of \(\mathcal{E}=(C,V\setminus V^{\prime})\) with respect to \(\rhd\). As \(p\) is the last one in the agenda, it holds that \(p\) beats or ties everyone else in \(\mathcal{E}\). As a consequence, all votes in \(V^{\prime}\) must rank \(q\) before \(p\) and, moreover, it must be that \(|V^{\prime}|=k=\kappa\), since otherwise \(p\) is beaten by \(q\) in \(\mathcal{E}\). There are two groups of votes ranking \(q\) before \(p\): those corresponding to the blue vertices, and those with the preference \(q\;\;p\;\overleftarrow{R}\). We may assume that all votes in \(V^{\prime}\) are from \(V(B)\). Indeed, if \(V^{\prime}\) contained some vote with the preference \(q\;\;p\;\overleftarrow{R}\), we can obtain another feasible solution \(V^{\prime\prime}\) from \(V^{\prime}\) by replacing this vote with any vote in \(V(B)\setminus V^{\prime}\). Let \(r\in R\). As \(n_{V}(r,p)=(|B|+1)+\ell\) and \(|V\setminus V^{\prime}|=2|B|+2\ell\), we know that there is at least one vote \(\succ_{b}\in V^{\prime}\) which ranks \(r\) before \(p\). By the reduction, we know that the vertex \(b\) corresponding to \(\succ_{b}\) dominates \(r\). It is clear now that \(\{b\in B:\succ_{b}\in V^{\prime}\}\) dominates \(R\), implying that the RBDS instance is a Yes-instance.
We point out that Theorem 7 strengthens the W[1]-hardness of CCDV for (weak) Condorcet winner by Liu et al. [29].
**Theorem 8**.: CCDV _for TSMR is_ W[2]-hard _with respect to the number of votes not deleted. This holds even when the distinguished candidate is the last candidate in the agenda._
Proof.: We prove the theorem by a reduction from RBDS. Let \((G,\kappa)\) be an instance of RBDS, where \(G\) is a bipartite graph with the vertex bipartition \((R,B)\). We create an instance of CCDV as follows. Let \(C=R\cup\{q,p\}\). Let \(\rhd\) be an agenda over \(C\) where \(p\) is in the last position. We create the following votes:
* a multiset \(V_{1}\) of \(\kappa-1\) votes with the preference \(p\;\;q\;\overleftarrow{R}\);
* a singleton \(V_{2}\) of one vote with the preference \(\overrightarrow{R}\;\;p\;q\); and
* for each blue vertex \(b\in B\), one vote \(\succ_{b}\) with the preference \[q\;\left(\overrightarrow{R}\setminus\Gamma_{G}(b)\right)\;\;p\;\left( \overrightarrow{R}\left[\Gamma_{G}(b)\right]\right).\]
For a given \(B^{\prime}\subseteq B\), we use \(V(B^{\prime})=\{\succ_{b}:b\in B^{\prime}\}\) to denote the set of votes created for the blue vertices in \(B^{\prime}\). Let \(V=V_{1}\cup V_{2}\cup V(B)\). Clearly, \(|V|=|B|+\kappa\). Finally, let \(k=|B|-\kappa\). The instance of CCDV is \(((C,V),p,\rhd,k)\). We prove the correctness as follows.
\((\Rightarrow)\) Assume that there exists \(B^{\prime}\subseteq B\) such that \(|B^{\prime}|=\kappa\) and \(B^{\prime}\) dominates \(R\). Let \(V^{\prime}=V_{1}\cup V_{2}\cup V(B^{\prime})\), and let \(\mathcal{E}=(C,V^{\prime})\). We claim that \(p\) is the TSMR winner of \(\mathcal{E}\). As \(p\) is the last candidate in the agenda, it suffices to show that \(p\) is not beaten by any other candidates in \(\mathcal{E}\). It is clear that \(p\) ties \(q\) in \(\mathcal{E}\). Let \(r\in R\) be a red vertex. As \(B^{\prime}\) dominates \(R\), there exists \(b\in B^{\prime}\) dominating \(r\). From the construction of the votes, \(p\) is ranked before \(r\) in the vote \(\succ_{b}\). Therefore, there are at least \(|V_{1}|+1=\kappa\) votes ranking \(p\) before \(r\) in \(V^{\prime}\), implying that \(p\) is not beaten by \(r\). As this holds for all \(r\in R\), the correctness for this direction follows.
\((\Leftarrow)\) Assume that there exists \(V^{\prime}\subseteq V\) so that \(|V^{\prime}|\geq 2\kappa\) and \(p\) is the TSMR winner of \((C,V^{\prime})\). As \(|V_{1}|+|V_{2}|=\kappa\) and all votes in \(V(B)\) rank \(q\) in the first place, it must be that \((V_{1}\cup V_{2})\subseteq V^{\prime}\) and \(V^{\prime}\) contains exactly \(\kappa\) votes from \(V(B)\), since otherwise \(q\) will be the winner of \((C,V^{\prime})\), contradicting the winning of \(p\). Let \(V(B^{\prime})=V^{\prime}\cap V(B)\), where \(B^{\prime}\subseteq B\). As just discussed, \(|V(B^{\prime})|=\kappa\). We claim that \(B^{\prime}\) dominates \(R\). Suppose for contradiction that this is not the case. Then, there exists \(r\in R\) not dominated by any vertex in \(B^{\prime}\). From the construction of the votes, \(r\) is ranked before \(p\) in all votes of \(V(B^{\prime})\). Together with the vote in \(V_{2}\), there are \(\kappa+1\) votes in \(V^{\prime}\) ranking \(r\) before \(p\), meaning that
\(r\) beats \(p\). However, in this case, \(p\) cannot be the winner of \((C,V^{\prime})\), a contradiction. As \(|B^{\prime}|=\kappa\), the RBDS instance is a Yes-instance.
Let us now explore the complexity landscape of constructive control by adding or deleting candidates. Unlike voter controls, we have only one hardness result as stated in the following theorem.
**Theorem 9**.: CCAC _for_ TSMR _is_ W[2]-hard _with respect to the number of added candidates. This holds even when the distinguished candidate is the first one in the agenda._
Proof.: We prove the theorem via a reduction from RBDS. Let \((G=(R\cup B,A),\kappa)\) be an instance of RBDS. We construct an instance of CCAC for TSMR as follows. For each vertex in \(G\) we create one candidate denoted by the same symbol for notational simplicity. In addition, we create a distinguished candidate \(p\). Let \(C=R\cup\{p\}\) and let \(D=B\). Besides, let \(k=\kappa\) and let \(\rhd=(p,\overrightarrow{B},\overrightarrow{R})\). We create a multiset \(V\) of votes in a way so that
* every candidate from \(R\) beats all her predecessors in \(R\cup\{p\}\);
* \(p\) beats every candidate from \(B\); and
* for each \(r\in R\) and each \(b\in B\), if \(b\) dominates \(r\) in \(G\), then \(b\) beats \(r\); otherwise, \(r\) beats \(b\).
By the famous McGarvey's theorem [31] such votes can be constructed in polynomial time. The instance of CCAC for TSMR is \(((C\cup D,V),p,\rhd,k)\).
The correctness of the reduction is easy to see. In particular, if there exists \(B^{\prime}\subseteq B\) of \(\kappa\) vertices dominating \(R\), then after adding the candidates corresponding to \(B^{\prime}\), every \(r\in R\) has at least one predecessor from \(B^{\prime}\) who beats her, excluding the winning of \(r\). Candidates in \(B^{\prime}\) cannot win as they are beaten by \(p\). Therefore, after adding these candidates, \(p\) becomes the winner. If, however, the RBDS instance is a No-instance, no matter which at most \(k\) candidates from \(B\) are added, there is at least one candidate in \(R\) who beats all her predecessors in the resulting election. In this case we cannot add at most \(k\) candidates to make \(p\) the winner.
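The McGarvey step referenced in the proof can be made explicit. The sketch below is the generic two-vote gadget rather than the specific profile of the reduction: for each requested pairwise victory it adds two votes whose effects cancel on every other pair, so the resulting profile realizes exactly the requested majority comparisons, and all unspecified pairs end up tied.

```python
def mcgarvey_votes(candidates, wins):
    """Return votes (rankings, most preferred first) whose pairwise majority
    relation contains every requested victory in `wins`.

    `wins` is an iterable of ordered pairs (a, b) meaning "a should beat b";
    it must not contain both (a, b) and (b, a).  Every pair not mentioned in
    `wins` is tied in the resulting profile.
    """
    votes = []
    for a, b in wins:
        others = [c for c in candidates if c not in (a, b)]
        votes.append([a, b] + others)          # a > b > x1 > ... > xm
        votes.append(others[::-1] + [a, b])    # xm > ... > x1 > a > b
    return votes


def majority_margin(votes, a, b):
    """Votes preferring a to b minus votes preferring b to a."""
    return sum(1 if v.index(a) < v.index(b) else -1 for v in votes)
```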
When the distinguished candidate is the last one in the agenda, we have the following corollary as a consequence of Observation 1 and the immunity of weak Condorcet to CCAC [4].
**Corollary 1**.: _If the distinguished candidate is the last in the agenda, TSMR is immune to_ CCAC_._
For CCDC, a greedy polynomial-time algorithm can be easily obtained.
**Theorem 10**.: CCDC _for_ TSMR _is in_ P_._
Proof.: Let \(I=((C,V),p,\rhd,k)\) be an instance of CCDC. To solve \(I\), we first remove all predecessors of \(p\) in \(\rhd\) who beat \(p\) with respect to \(V\). Then, we iteratively remove each successor \(c\) of \(p\) so that \(c\) is not beaten by any of her predecessors. After the removals, \(p\) becomes the TSMR winner. We conclude that \(I\) is a Yes-instance if and only if at most \(k\) candidates are removed in total.
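The greedy procedure translates directly into code. In the sketch below, votes are complete rankings, a candidate beats another if a strict majority of the votes prefers her, and we read "her predecessors" as the predecessors that have not themselves been removed; pairwise comparisons are unaffected by removing other candidates, so they can be computed once on the full votes.

```python
def beats(votes, a, b):
    """True iff a strict majority of the votes rank a before b."""
    pref_a = sum(1 for v in votes if v.index(a) < v.index(b))
    return pref_a > len(votes) - pref_a


def ccdc_greedy(votes, agenda, p, k):
    """Greedy test for CCDC following the proof of Theorem 10 (a sketch)."""
    removed = set()
    pos = agenda.index(p)
    for c in agenda[:pos]:                     # predecessors of p that beat p
        if beats(votes, c, p):
            removed.add(c)
    for i in range(pos + 1, len(agenda)):      # successors of p, in agenda order
        c = agenda[i]
        remaining_preds = [d for d in agenda[:i] if d not in removed]
        if not any(beats(votes, d, c) for d in remaining_preds):
            removed.add(c)
    return len(removed) <= k
```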
### Destructive Controls
Now we start the exploration on destructive control problems. One may expect more tractability results, because destructive controls are generally easy to solve compared with their constructive counterparts. Nevertheless, let us start with a hardness result.
**Theorem 11**.: DCAV _for_ TSMR _is_ W[2]-hard _with respect to the number of added votes plus the number of registered votes. This holds as long as the distinguished candidate is not the last one in the agenda._
Proof.: We prove Theorem 11 via a reduction from RBDS. Let \((G=(R\cup B,A),\kappa)\) be an instance of RBDS. We construct an instance of DCAV for TSMR as follows. Let \(C=R\cup\{p,q\}\), and let \(\rhd\) be an agenda where \(q\) is the last candidate. We create the following registered votes:
* \(\kappa-1\) votes with the preference \(p\ q\ \overrightarrow{R}\);
* two votes with the preference \(p\ \overrightarrow{R}\ q\); and
* one vote with the preference \(q\ p\ \overrightarrow{R}\).
Let \(V\) be the multiset of the above \(\kappa+2\) registered votes. The unregistered votes are created according to \(B\). In particular, for each \(b\in B\), we create one vote \(\succ_{b}\) with the preference
\[\left(\overrightarrow{R}\setminus\Gamma_{G}(b)\right)\ q\ p\ \left( \overrightarrow{R}[\Gamma_{G}(b)]\right).\]
For a given \(B^{\prime}\subseteq B\), let \(W(B^{\prime})=\{\succ_{b}:b\in B^{\prime}\}\) be the multiset of unregistered votes corresponding to \(B^{\prime}\). For simplicity, let \(W=W(B)\) be the set of the above \(|B|\) unregistered votes. Let \(k=\kappa\). The instance of DCAV is \(((C,V\cup W),p,\rhd,k)\). We prove the correctness of the reduction as follows.
\((\Rightarrow)\) Suppose that there is a \(B^{\prime}\subseteq B\) of \(\kappa\) vertices which dominate \(R\) in \(G\). Then, one can check that \(q\) beats or ties every other candidate with respect to \(V\cup W(B^{\prime})\), implying that \(q\) is the winner of \((C,V\cup W(B^{\prime}))\). Thus, in this case the instance of DCAV is a Yes-instance.
\((\Leftarrow)\) Suppose that there exists a subset \(W^{\prime}\subseteq W\) of at most \(k\) votes so that \(p\) is not the TSMR winner of \(\mathcal{E}=(C,V\cup W^{\prime})\). Observe that no matter which at most \(k\) votes are contained in \(W^{\prime}\), \(p\) beats all candidates in \(R\), implying that the only candidate which is able to preclude \(p\) from winning is \(q\). As \(q\) is the last candidate in the agenda \(\rhd\), \(q\) is the winner if and only if \(q\) beats or ties everyone else. This implies that \(W^{\prime}\) contains exactly \(\kappa\) votes, since otherwise \(p\) beats \(q\) in \(\mathcal{E}\). Moreover, for each \(r\in R\), at least one vote in \(W^{\prime}\) ranks \(q\) before \(r\). By the construction of the unregistered votes, an unregistered vote \(\succ_{b}\) ranks \(q\) before \(r\) if and only if \(b\) dominates \(r\) in \(G\). This implies that the set of vertices corresponding to \(W^{\prime}\) dominates \(R\), and hence the instance of RBDS is a Yes-instance.
It is known that DCAV and DCDV for weak Condorcet winner are polynomial-time solvable [24]. By Observation 1, we have the following corollary.
**Corollary 2** ([24]).: DCAV _and_ DCDV _for_ TSMR _are in \(\mathsf{P}\) if the distinguished candidate is in the last position of the agenda._
However, the complexity of DCDV increases if the distinguished candidate is not the last one in the agenda.
**Theorem 12**.: DCDV _for_ TSMR _is_ \(\mathsf{W[2]}\)-hard _with respect to the number of deleted votes. This holds as long as the distinguished candidate is not the last one in the agenda._
Proof.: The reduction is the same as the one in the proof of Theorem 7 with only the difference that \(q\) is the distinguished candidate. The correctness hinges upon the fact that no matter which at most \(k\) votes are deleted, \(q\) beats all candidates in \(R\), which leaves \(p\) the unique candidate preventing \(q\) from winning and, moreover, this holds as long as \(q\) is not the last one in the agenda.
Parameterizing by the dual parameter of the solution size yields the same result.
**Theorem 13**.: DCDV _for_ TSMR _is_ \(\mathsf{W[2]}\)-hard _with respect to the number of votes not deleted. This holds as long as the distinguished candidate is not the last one in the agenda._
Proof.: The reduction is the same as the one in the proof of Theorem 8 with only the difference that \(q\) is the distinguished candidate. The correctness arguments are the same as in the proof of Theorem 12.
For destructive control by modifying candidates, we have polynomial-time solvability results, regardless of the positions of the distinguished candidate in the agenda.
**Theorem 14**.: DCAC _for_ TSMR _is in \(\mathsf{P}\)._
Proof.: Let \(I=((C\cup D,V),p,\rhd,k)\) be an instance of DCAC. We assume that \(k\geq 1\) and \(p\) is the winner of \((C,V)\), since otherwise \(I\) can be solved trivially. Our algorithm goes as follows.
As \(p\) wins \((C,V)\), \(p\) is not beaten by any of her predecessors, and each successor \(c\in C\setminus\{p\}\) of \(p\) is beaten by at least one of \(c\)'s predecessors. If there exists \(c\in D\) which is before \(p\) in the agenda and beats \(p\), we conclude that \(I\) is a Yes-instance because \(p\) does not win \((C\cup\{c\},V)\). Additionally, if there exists \(c\in D\) so that \(p\rhd c\), and \(c\) is not beaten by any of her predecessors in \(C\), we also conclude that \(I\) is a Yes-instance, since \(p\) does not win \((C\cup\{c\},V)\). If neither of the two cases occurs, then no matter which unregistered candidates are added, \(p\) remains the winner. Therefore, in this case, we conclude that \(I\) is a No-instance.
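The two checks of the proof can be sketched as follows, reusing the `beats` helper from the CCDC sketch above. As in the proof, the function assumes that \(k\geq 1\) and that \(p\) wins the election restricted to the registered candidates; since pairwise majorities do not depend on which candidates are present, adding a single candidate suffices whenever control is possible at all.

```python
def dcac_check(votes, C, D, agenda, p, k):
    """Destructive control by adding candidates (Theorem 14), as a sketch.

    votes  : rankings over C + D, most preferred first
    agenda : ordering of C + D; C are registered, D unregistered candidates
    Assumes k >= 1 and that p wins the registered election on C.
    """
    if k < 1:
        return False
    pos = {c: i for i, c in enumerate(agenda)}
    for c in D:
        # Case 1: c would precede p in the agenda and beats p.
        if pos[c] < pos[p] and beats(votes, c, p):
            return True
        # Case 2: c would succeed p and is beaten by no registered predecessor.
        if pos[c] > pos[p] and not any(
            beats(votes, d, c) for d in C if pos[d] < pos[c]
        ):
            return True
    return False
```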
The following result is a consequence of Theorem 10.
**Corollary 3**.: DCDC _for_ TSMR _is in \(\mathsf{P}\)._
## 4 Possible and Necessary Winner
In this section, we study Possible Winner and Necessary Winner for TSMR. Bredereck et al. [8] showed that, except for Necessary Winner for the successive rule, which is polynomial-time solvable, all other cases of the two problems for the successive and the amendment rules are computationally hard (\(\mathsf{NP}\)-hardness for Possible Winner and coNP-hardness for Necessary Winner). We show below that TSMR behaves the same as the successive rule in terms of the complexity of determining necessary and possible winners, though the proofs of these results for the two rules are different.
**Theorem 15**.: Possible Winner _for_ TSMR _is_ NP-hard_, even if the given agenda is complete and the distinguished candidate is the first one in the agenda._
Proof.: We prove the theorem via a reduction from RBDS. Let \((G,\kappa)\) be an instance of RBDS where \(G\) is a bipartite graph with the partition \((R,B)\). We assume that \(G\) does not contain any isolated vertices, and all vertices in \(R\) have the same degree \(\ell\) where \(\ell\geq 1\). We create an instance of Possible Winner for TSMR as follows. Let \(C=R\cup\{p,q\}\) and let \(\rhd=(p,q,\overrightarrow{R})\). We create five groups of votes as follows, where only the first group contains partial votes:
* for each \(b\in B\), one partial vote \(\succ_{b}\) with the following partial preference \[\left(\overleftarrow{R}[\Gamma_{G}(b)]\right)\ p\ \left(\overleftarrow{R}\setminus\Gamma_{G}(b)\right)\] and \[q\ \left(\overleftarrow{R}\setminus\Gamma_{G}(b)\right);\]
* a multiset \(V_{1}\) of \(|B|\) votes with the preference \(\overleftarrow{R}\ q\ p\);
* a multiset \(V_{2}\) of \(2\ell+\kappa\) votes with the preference \(q\ \overleftarrow{R}\ p\);
* a multiset \(V_{3}\) of \(\ell+2\kappa+1\) votes with the preference \(\overleftarrow{R}\ p\ q\);
* a multiset \(V_{4}\) of \(\ell+\kappa\) votes with the preference \(p\ q\ \overleftarrow{R}\).
Let \(V(B)=\{\succ_{b}:b\in B\}\) be the set of the \(|B|\) partial votes in the first group. Let \(V\) be the multiset of the above \(2|B|+4\ell+4\kappa+1\) votes, and let \(V(\overline{B})=V\setminus V(B)\). The instance of Possible Winner is \(((C,V),p,\rhd)\). Clearly, the above construction can be done in polynomial time. We show below that the RBDS instance is a Yes-instance if and only if the constructed Possible Winner instance is a Yes-instance.
\((\Rightarrow)\) Suppose that there is a subset \(B^{\prime}\subseteq B\) such that \(|B^{\prime}|=\kappa\) and \(B^{\prime}\) dominates \(R\). We complete each \(\succ_{b}\) where \(b\in B\) as follows:
* if \(b\in B^{\prime}\), we complete it as \(q\ \left(\overleftarrow{R}\left[\Gamma_{G}(b)\right]\right)\ p\ \left(\overleftarrow{R}\setminus\Gamma_{G}(b)\right)\),
* otherwise, we complete it as \(\left(\overleftarrow{R}\left[\Gamma_{G}(b)\right]\right)\ p\ q\ \left(\overleftarrow{R}\setminus\Gamma_{G}(b)\right)\).
It is fairly easy to verify that with respect to the completion \(p\) beats \(q\), and \(q\) beats all candidates in \(R\). Then, by the definition of the agenda, \(p\) is the TSMR winner with respect to the above completion of \((C,V)\).
\((\Leftarrow)\) Suppose that there is a completion \(V^{\prime}\) of \(V(B)\) so that \(p\) wins the completion \(\mathcal{E}=(C,V(\overline{B})\cup V^{\prime})\) of \((C,V)\). Observe that in all completions of \((C,V)\), everyone in \(R\) beats all her predecessors in \(R\cup\{p\}\). Then, by the definition of the agenda, and the fact that \(p\) wins \(\mathcal{E}\), it holds that (1) \(q\) beats all candidates in \(R\), and (2) \(q\) is beaten by \(p\) in \(\mathcal{E}\). As \(V(\overline{B})\) contains exactly \(2\ell+3\kappa+1\) votes (those in \(V_{3}\cup V_{4}\)) ranking \(p\) before \(q\), Condition (2) implies that there are at least \(|B|-\kappa\) votes in \(V^{\prime}\) ranking \(p\) before \(q\). Let \(B^{\prime}\) be the subset of \(B\) corresponding to votes in \(V^{\prime}\) ranking \(p\) before \(q\), and let \(B^{\prime\prime}=B\setminus B^{\prime}\). Clearly, \(|B^{\prime\prime}|\leq\kappa\). We show below that Condition (1) implies that \(B^{\prime\prime}\) dominates \(R\). For the sake of contradiction, assume that there exists \(r\in R\) not dominated by any vertex in \(B^{\prime\prime}\). In other words, all the \(\ell\) neighbors of \(r\) in \(G\) are contained in \(B^{\prime}\). This implies that there are \(\ell\) votes in \(V^{\prime}\) (the \(\ell\) completions of votes corresponding to the \(\ell\) neighbors of \(r\)) ranking \(r\) before \(q\). Together with the \(|B|+\ell+2\kappa+1\) votes in \(V(\overline{B})\) ranking \(r\) before \(q\) (those from \(V_{1}\cup V_{3}\)), we have \(|B|+2\ell+2\kappa+1\) votes ranking \(r\) before \(q\), implying that \(r\) beats \(q\) in \(\mathcal{E}\). However, this is impossible since otherwise \(r\) beats all her predecessors in \(\mathcal{E}\) which contradicts that \(p\) wins \(\mathcal{E}\). This completes the proof that \(B^{\prime\prime}\) dominates \(R\). Then, from \(|B^{\prime\prime}|\leq\kappa\), we know that the RBDS instance is a Yes-instance.
Our reduction in the proof of Theorem 15 is completely different from those used in [8] for showing the NP-hardness of Possible Winner for the successive and the amendment rules. In fact, their reductions are from the Independent Set and Vertex Cover problems, while our reduction is from RBDS. Moreover, in their reductions for Possible Winner under the successive and the amendment rules the distinguished candidate is respectively the penultimate and the third candidates in the agenda. Our reduction can be adapted to show the NP-hardness of Possible Winner for TSMR when the distinguished candidate is the \(i\)-th candidate in the agenda for every constant \(i\), by adding \(i-1\) dummy candidates before \(p\) in the agenda, and ranking all of them below all the other candidates in all votes.
Notice that Possible Winner for TSMR becomes polynomial-time solvable if the given agenda is complete and \(p\) is the last one in the agenda. This follows from Observation 1 and the polynomial-time solvability of determining if a partial election can be completed so that a candidate becomes a (weak) Condorcet winner [28].6 The algorithm in [28] can also be trivially adapted to show that Possible Winner for the amendment rule becomes polynomial-time solvable if the given agenda is complete and \(p\) is in the top-2 positions, and their algorithm also applies to determining whether a particular candidate can be made a weak Condorcet winner. So, there is a radical complexity shift for the amendment rule as the distinguished candidate moves from the second place to the third place in the agenda. Our next result also reveals a similar complexity shift for TSMR as \(p\) moves from the last position just one position up.
Footnote 6: The result in [28] is for Condorcet winner but the algorithm also accommodates weak Condorcet winner.
**Theorem 16**.: Possible Winner _for_ TSMR _is_ NP-hard _even when the given agenda is complete with the distinguished candidate being the penultimate candidate in the agenda._
Proof.: We prove the theorem via a reduction from RBDS. Let \((G,\kappa)\) be an instance of RBDS where \(G=(B\cup R,A)\) is a bipartite graph and \(1\leq\kappa\leq|B|\). Similar to the previous proofs, we assume that every red vertex has degree exactly \(\ell\) where \(\ell>0\) in the graph \(G\). We construct an instance of Possible Winner as follows. Let \(C=R\cup\{p,q,q^{\prime}\}\) and let \(\rhd=(q^{\prime},\overrightarrow{R},p,q)\). We create five groups of votes where only the first group contains partial votes.
* For every \(b\in B\), we create one partial vote \(\succ_{b}\) with the following partial preference \[\left(\overrightarrow{R}\setminus\Gamma_{G}(b)\right)\ q^{\prime}\] and \[q\ p\ \left(\overrightarrow{R}[\Gamma_{G}(b)]\right).\] Let \(V(B)\) be the set of the \(|B|\) partial votes corresponding to \(B\).
* We create a multiset \(V_{1}\) of \(|B|+1\) votes with the preference \[q^{\prime}\ q\ \overrightarrow{R}\ p.\]
* We create a multiset \(V_{2}\) of \(2\kappa\) votes with the preference \[q\ p\ \overrightarrow{R}\ q^{\prime}.\]
* We create a multiset \(V_{3}\) of \(\kappa\) votes with the preference \[q\ p\ q^{\prime}\ \overrightarrow{R}.\]
* Finally, we create a multiset \(V_{4}\) of \(\kappa\) votes with the preference \[\overrightarrow{R}\ p\ q^{\prime}\ q.\]
Let \(V\) be the multiset of the above \(2|B|+4\kappa+1\) votes, and let \(V(\overline{B})=V\setminus V(B)\). The instance of Possible Winner is \(((C,V),p,\rhd)\) which can be constructed in polynomial time. In the following, we prove that the RBDS instance is a Yes-instance if and only if the constructed instance of Possible Winner is a Yes-instance.
\((\Rightarrow)\) Suppose that there is a subset \(B^{\prime}\subseteq B\) such that \(|B^{\prime}|=\kappa\), and \(B^{\prime}\) dominates \(R\). We complete each vote \(\succ_{b}\in V(B)\) as follows.
* if \(b\in B^{\prime}\), we complete it as \[\left(\overrightarrow{R}\setminus\Gamma_{G}(b)\right)\ q^{\prime}\ q\ p\ \left(\overrightarrow{R}[\Gamma_{G}(b)]\right),\]
* otherwise, we complete it as \[q\ p\ \left(\overrightarrow{R}\setminus\Gamma_{G}(b)\right)\ q^{\prime}\ \left( \overrightarrow{R}[\Gamma_{G}(b)]\right).\]
It is easy to verify that after completing votes as above, \(p\) beats all her predecessors in \(\rhd\), and \(q\) is beaten by her predecessor \(q^{\prime}\), which implies that \(p\) is the TSMR winner of the completion.
\((\Leftarrow)\) Assume that there is a completion \(V^{\prime}\) of \(V(B)\) so that \(p\) wins the election \(\mathcal{E}=(C,V(\overline{B})\cup V^{\prime})\). Observe that no matter how we complete the votes, \(q\) beats all her predecessors except \(q^{\prime}\). As \(p\) wins \(\mathcal{E}\), it must be that \(q^{\prime}\) beats \(q\) in \(\mathcal{E}\). This implies that there are at least \(\kappa\) partial votes in \(V(B)\) which are completed so that \(q^{\prime}\) is ranked before \(q\). There is only one such completion for each partial vote \(\succ_{b}\in V(B)\), i.e., the completion with the preference
\[\left(\overrightarrow{R}\setminus\Gamma_{G}(b)\right)\ q^{\prime}\ q\ p\ \ \left( \overrightarrow{R}[\Gamma_{G}(b)]\right).\]
Let \(B^{\prime}\subseteq B\) be such that the partial votes corresponding to \(B^{\prime}\) are completed this way. As just discussed, \(|B^{\prime}|\geq\kappa\). Without loss of generality, let us assume that \(|B^{\prime}|=\kappa+t\) for some nonnegative integer \(t\). Observe further that as \(p\) wins \(\mathcal{E}\) and \(|V|\) is odd, \(p\) beats all candidates in \(R\). For every \(r\in R\), there are in total \(3\kappa\) votes in \(V(\overline{B})\) (precisely, votes in \(V_{2}\cup V_{3}\)) which rank \(p\) before \(r\). This implies there are at least \(|B|-\kappa+1\) completions of partial votes in \(V(B)\) which rank \(p\) before \(r\). Then, from \(|B\setminus B^{\prime}|=|B|-\kappa-t\), it follows that there are at least \(t+1\) completions of partial votes corresponding to \(B^{\prime}\) where \(p\) is ranked before \(r\). By the definitions of these completions, \(p\) is ranked before \(r\) in a completion corresponding to some \(b\in B^{\prime}\) if and only if \(r\) is a neighbor of \(b\) in \(G\). Therefore, every \(r\in R\) has at least \(t+1\) neighbors in \(B^{\prime}\) in the graph \(G\). Then, by removing any arbitrary \(t\) vertices from \(B^{\prime}\), we obtain a \(\kappa\)-subset of \(B\) that dominate \(R\), and hence the RBDS instance is a Yes-instance.
It would be interesting to see if a similar complexity shift also applies to the successive rule. This amounts to determining the complexity of Possible Winner for the successive rule when the agenda is complete with the distinguished candidate being the last one. We leave it as an open question.
In contrast to the hardness of Possible Winner, we show that Necessary Winner is polynomial-time solvable.
**Theorem 17**.: Necessary Winner _for TSMR is in \(\mathsf{P}\)._
Proof.: Let \(I=((C,V),p,\rhd)\) be an instance of Necessary Winner. We determine if there is a completion of \((C,V)\) and a completion of the agenda \(\rhd\) so that \(p\) is not the TSMR winner of the completion. Note that \(p\) is not the winner if and only if
1. either some of her predecessors beats her,
2. or some of her successors, say \(c\), is not beaten by any of the predecessors of \(c\).
We consider first if there is a completion leading to the occurrence of Case 1. For this purpose, let \(B=\{c\in C\setminus\{p\}:(p,c)\not\in\rhd\}\) be the set of all candidates that can be predecessors of \(p\) in some completion of \(\rhd\). We consider candidates in \(B\) one by one, and for each considered \(c\in B\), we greedily complete the preference profile to determine if there exists at least one completion so that \(c\) beats \(p\). More precisely, for every partial vote \(\succ\in V\) such that \((p,c)\not\in\succ\), we complete it so that \(c\) is ranked before \(p\). If in the completion of \((C,V)\) obtained this way \(c\) beats \(p\), we conclude that \(I\) is a No-instance.
If we cannot draw the conclusion that \(I\) is a No-instance above, we consider whether it is possible to make the second case happen. To this end, we enumerate all candidates which can be successors of \(p\) in some completion of the partial agenda. More precisely, these candidates are those in \(B^{\prime}=\{c\in C\setminus\{p\}:(c,p)\not\in\rhd\}\). For each enumerated \(c\in B^{\prime}\), we compute the minimum set \(A_{c}\) of candidates that are necessarily the predecessors of \(c\) under the restriction that \(p\) is before \(c\) in the agenda, and then we greedily complete the preference profile to check if it can be completed so that \(c\) is not beaten by anyone in \(A_{c}\). More precisely, for each enumerated \(c\in B^{\prime}\), we compute \(A_{c}=\{c^{\prime}\in C:(c^{\prime},c)\in\rhd\}\), and for each partial vote \(\succ\in V\), we complete \(\succ\) so that \(c\) is ranked as high as possible, i.e., we complete
so that \(c\) is ranked below all candidates in \(\{c^{\prime}\in C:(c^{\prime},c)\in\succ\}\) and is above all the other candidates. If in the completion \(c\) is not beaten by anyone from \(A_{c}\cup\{p\}\), we conclude that \(I\) is a No-instance.
If none of the above enumerations provides us a conclusion that \(I\) is a No-instance, we conclude that \(I\) is a Yes-instance.
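The first enumeration of the algorithm can be sketched as follows. Partial votes are encoded as transitively closed sets of ordered pairs, which is our own encoding choice; the analogous greedy completion for the second case (the successors of \(p\)) is omitted but follows the same pattern.

```python
def case1_refutes_necessary_winner(partial_votes, partial_agenda, candidates, p):
    """Check whether some potential predecessor of p can be made to beat p.

    partial_votes  : list of sets of pairs (a, b), transitively closed,
                     meaning "a is preferred to b" in that vote
    partial_agenda : set of pairs (a, b) meaning "a precedes b" in every
                     completion of the agenda
    Returns True iff p is certainly not a necessary winner because of Case 1.
    """
    n = len(partial_votes)
    for c in candidates:
        if c == p or (p, c) in partial_agenda:
            continue                       # c can never precede p
        # Greedy completion: rank c above p in every vote that allows it.
        best_for_c = sum(1 for v in partial_votes if (p, c) not in v)
        if best_for_c > n - best_for_c:    # c beats p in that completion
            return True
    return False
```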
## 5 Conclusion
We have studied the (parameterized) complexity of many well-motivated voting problems under the recently proposed voting rule TSMR, with respect to the solution size and the dual parameters. We obtained fruitful results including polynomial-time solvability results, NP-hardness results, and W[2]-hardness results. Remarkably, many of our hardness results hold even when the distinguished candidate is the first or the last one in the agenda. Our exploration offers a complete picture of the complexity of these problems under TSMR, enabling us to compare TSMR with the successive rule and the amendment rule. See Table 1 for more details. Our results indicate that TSMR resists most of the control problems, but is vulnerable to Agenda Control and Coalition Manipulation. In addition, we showed that for TSMR Possible Winner is NP-hard, while Necessary Winner can be solved in polynomial time. Compared with previous works, our study suggests that TSMR behaves at least as well as the other two salient sequential rules regarding resistance to strategic voting problems and the complexity of calculating possible and necessary winners.
It should be pointed out that our exploration is a purely theoretical analysis. Whether these problems are hard to solve in specific practical settings demands further investigation.
Another significant topic for future research is to investigate if restricting the preference domains radically changes the complexity. We refer to [18, 27] for a comprehensive survey on many restricted preference domains.
|
2305.01486 | ARBEx: Attentive Feature Extraction with Reliability Balancing for
Robust Facial Expression Learning | In this paper, we introduce a framework ARBEx, a novel attentive feature
extraction framework driven by Vision Transformer with reliability balancing to
cope against poor class distributions, bias, and uncertainty in the facial
expression learning (FEL) task. We reinforce several data pre-processing and
refinement methods along with a window-based cross-attention ViT to squeeze the
best of the data. We also employ learnable anchor points in the embedding space
with label distributions and multi-head self-attention mechanism to optimize
performance against weak predictions with reliability balancing, which is a
strategy that leverages anchor points, attention scores, and confidence values
to enhance the resilience of label predictions. To ensure correct label
classification and improve the models' discriminative power, we introduce
anchor loss, which encourages large margins between anchor points.
Additionally, the multi-head self-attention mechanism, which is also trainable,
plays an integral role in identifying accurate labels. This approach provides
critical elements for improving the reliability of predictions and has a
substantial positive effect on final prediction capabilities. Our adaptive
model can be integrated with any deep neural network to forestall challenges in
various recognition tasks. Our strategy outperforms current state-of-the-art
methodologies, according to extensive experiments conducted in a variety of
contexts. | Azmine Toushik Wasi, Karlo Šerbetar, Raima Islam, Taki Hasan Rafi, Dong-Kyu Chae | 2023-05-02T15:10:01Z | http://arxiv.org/abs/2305.01486v3 | ARBEX: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning
###### Abstract
In this paper, we introduce a framework ARBEx, a novel attentive feature extraction framework driven by Vision Transformer with reliability balancing to cope against poor class distributions, bias, and uncertainty in the facial expression learning (FEL) task. We reinforce several data pre-processing and refinement methods along with a window-based cross-attention VIT to squeeze the best of the data. We also employ learnable anchor points in the embedding space with label distributions and multi-head self-attention mechanism to optimize performance against weak predictions with reliability balancing, which is a strategy that leverages anchor points, attention scores, and confidence values to enhance the resilience of label predictions. To ensure correct label classification and improve the model's discriminative power, we introduce anchor loss, which encourages large margins between anchor points. Additionally, the multi-head self-attention mechanism, which is also trainable, plays an integral role in identifying accurate labels. This approach provides critical elements for improving the reliability of predictions and has a substantial positive effect on final prediction capabilities. Our adaptive model can be integrated with any deep neural network to forestall challenges in various recognition tasks. Our strategy outperforms current state-of-the-art methodologies, according to extensive experiments conducted in a variety of contexts.
Facial expression learning, Reliability balancing, Bias and uncertainty, Multi-head attention.
## 1 Introduction
One of the most universal and significant methods that people communicate their emotions and intentions is through the medium of their facial expressions [42]. In recent years, facial expression learning (FEL) has garnered growing interest within the area of computer vision due to the fundamental importance of enabling computers to recognize interactions with humans and their emotional affect states. While FEL is a thriving and prominent research domain in human-computer interaction systems, its applications are also prevalent in healthcare, education, virtual reality, smart robotic systems, etc [35, 29, 36].
The volume and quantity of large-scale benchmark FEL databases have significantly expanded in the past two decades [20, 26, 31, 12], resulting in considerable improvement of recognition accuracy of some Convolutional Neural Network (CNN) methods, which integrated landmarks [52], prior knowledge [24, 8], or image samples with optical flows [37] for enhancement of the interpretability and performance [7]. By separating the disturbances brought on by different elements, such as position, identity, lighting, and so on, several FEL approaches [4, 20, 34, 48] have also been developed to learn holistic expression aspects.
However, despite its recent outstanding performance, FEL is still considered a difficult task due to a few reasons: (1) **Global information.** Existing FEL methods fail to acknowledge global factors of input images due to the constraints of convolutional local receptive fields, (2) **Interclass similarity.** Several expression categories frequently include similar images with little differences between them, (3) **Intra-class disparity.** Images from the same expression category might differ significantly from one another. For example, complexion, gender, image background, and an individual's age vary between instances, and (4) **Sensitivity of scales.** Variations in image quality and resolution can often compromise the efficacy of deep learning networks when used without necessary precaution. Images from in-the-wild datasets and other FEL datasets come in a wide range of image sizes. Consequently, it is essential for FEL to provide consistent performance across scales [39].
Fig. 1: A synopsis of **ARBEx**. **Feature Extraction** provides feature maps to generate initial predictions. Confidence distributions of initial labels are mostly inconsistent, unstable and unreliable. The **Reliability Balancing** approach aids in stabilizing the distributions and addressing inconsistent and unreliable labeling.
In view of these difficulties and with the ascent of Transformers in the computer vision research field [3], numerous Transformer-based FEL techniques have been developed which have achieved state-of-the-art (SOTA) results. Kim et al. [15] further developed Vision Transformer (ViT) to consolidate both global and local features so ViTs can be adjusted to FEL tasks. Furthermore, [39] addresses scale sensitivity. Transformer models with multi-channel cross-attentive feature extraction, such as POSTER [56] and POSTER++ [29], tackle all of the aforementioned FEL issues; they use a multi-level cross-attention network powered by ViT to extract features. This architecture has surpassed many existing works in terms of performance owing to its cross-fusion function, multi-scale feature extraction, and landmark-to-image branching method. The downside of this complex design is that it easily overfits, does not employ heavy augmentation methods, and does not deal with bias, uncertainty, and poor label distributions, which are problems commonly recognized in facial expression classification.
To address these issues, we provide a novel reliability balancing approach in which we place anchor points of different classes in the embedding space learned from [29]. These anchors have fixed labels and are used to calculate how similar the embeddings are to each label. We also add multi-head self-attention over the embeddings to find crucial components with designated weights, increasing model reliability and robustness. This approach yields improved label distributions for the expressions along with stable confidence scores for properly labeling unreliable predictions. Since large multi-level feature extraction procedures overfit quickly, we also introduce heavy data augmentation and a robust training batch selection method to mitigate this risk.
In summary, our method offers unbiased and evenly distributed data to the image encoder, resulting in accurate feature maps. These feature maps are then utilized for drawing predictions with the assistance of the reliability balancing section, ensuring robust outcomes regardless of potential bias and imbalance in the data and labels.
### _Our Contributions_
Our contributions are summarized into four folds:
* We propose **ARBEx**, a novel framework consisting of multi-level attention-based feature extraction with reliability balancing for robust FEL, together with extensive data preprocessing and refinement methods to fight against biased data and poor class distributions.
* We propose **adaptive anchors in the embedding space and multi-head self-attention to increase reliability and robustness** of the model by correcting erroneous labels, providing more accurate and richer supervision for training the deep FEL network. We combine **relationships between anchors and weighted values from the attention mechanism to stabilise class distributions** for poor predictions, effectively mitigating the issues of similarity between different classes.
* Our streamlined data pipeline ensures **well-distributed quality input and output embeddings**, utilizing the full power of the Window-based Cross-Attention Vision Transformer providing robust feature maps to identify facial expressions in a confident manner.
* Empirically, our **ARBEx** method is rigorously evaluated on diverse in-the-wild FEL databases. Experimental outcomes exhibit that our method consistently surpasses most of the state-of-the-art FEL systems.
## 2 Related Works
In this section, we highlighted relevant works in FEL with respect to Transformers, uncertainty, and attention networks.
### _Facial Expression Learning (FEL)_
Classic works related to FEL are applied in an extensive variety of settings, mainly in the computer vision and psychological science domains. In simple words, FEL is the task of labeling the expression in a facial image, and it consists of three phases, namely facial detection, feature extraction, and expression recognition [42]. In recent times, FEL systems have become more efficient and optimized through deep learning based algorithms, where self-supervised feature extraction [50] has been introduced. To extract global and local features from the detected face, Weng _et al._[46] implement a multi-branch network. Xue _et al._[49] build relation-aware local-patch representations using a retention module to explore extensive relations on the manifold of local features based on multi-head self-attention and Transformer frameworks. Recently, both Li _et al._[23] and Wang _et al._[41] suggest region-based attention networks to extract discriminative features, which perform well for robust pose- and occlusion-aware FEL.
### _Transformers in FEL_
According to recent works [32], ViT [11] exhibits remarkable resilience against severe disruption and occlusion. To handle the shortcomings of FEL, such as poor quality samples, different backdrops and annotator's various contexts, Li _et al._[22] introduce Mask Vision Transformer (MVT) to provide a mask that can remove complicated backdrops and facial occlusion, as well as an adaptive relabeling mechanism to fix inaccurate labels in real-world FEL archives. Addressing the inadequate performance of Transformers in recognizing the subtlety of expression in videos, Liu _et al._[25] develop a novel Expression Snippet Transformer (EST) which successfully models minuscule intra/inter snippet visual changes and effectively learns the long-range spatial-temporal relations. While Transformers have proven to perform well in FEL tasks, it still has vulnerabilities when
dealing with multimodal (2D + 3D) datasets, as it needs more data. To combat this problem, Li _et al._[21] create a resilient lightweight multimodal facial expression vision Transformer for multimodal FEL data. Hwang _et al._[14] propose Neural Resizer, a method that helps Transformers by balancing the noisiness and imbalance through data-driven information compensation and downscaling. Zhang _et al._[53] develop a Transformer-based multimodal information fusion architecture that utilizes dynamic multimodal features to fully leverage emotional knowledge from diverse viewpoints and static vision points.
### _Uncertainty in FEL_
Uncertainties mainly refer to cryptic expressions and imprecise or conflicting annotations in FEL tasks. To eliminate uncertain annotations, Fan _et al._[13] introduce mid-level representation enhancement (MRE) and graph-embedded uncertainty suppressing (GUS). To both clean noisy annotations and classify facial images, Viet _et al._[18] introduce a multitasking network architecture, and Wen _et al._[44] implement center loss to enforce intra-class compactness and extract discriminative features that reduce uncertain predictions. For determining facial expressions with noisy annotations, the Max-Feature-Map activation function is adopted by Wu _et al._[47].
### _Attention Networks in FEL_
Despite FEL gaining widespread recognition in the field of computer vision, two major problems- pose alterations and occlusion have not been properly addressed in terms of automatic expression detection. Wang et al. [40] develop a system where Facial Expression Recognition (FER) datasets are annotated to include pose and real-world occlusion features. They also preset a Region Attention Network to encapsulate face areas and poses in an adaptive manner. Furthermore, a region-based loss function is introduced to induce larger attention weights. Another work that deals with such issues is by Zhao et al. [54] where they propose a global multi-scale and local attention network (MA-Net) which consists of a multi-scale based local attention component with a feature pre-extractor that aids in focusing more on salient details. Inter-class similarity and models lacking high-order interactions among local aspects are some other problems present in current FER systems. Wen et al. present Distract your Attention Network (DAN) [45] with integral sections. They are Feature Clustering Network (FCN) which retrieves robust features by using a large-margin learning
Fig. 2: Pipeline of **ABEx. Heavy Augmentation** is applied to the input images and **Data Refinement** method selects training batch with property distributed classes for each epoch. **Window-Based Cross-Attention VII** framework uses multi-level feature extraction and integration to provide embeddings (**Feature Vectors**). **Linear Reduction Layer** reduces the feature vector size for fast modeling. **MLP** predicts the primary labels and **Confence** is calculated from label distribution. **Reliability balancing** receives embeddings and processes in two ways. Firstly, it places anchors in the embedding space. It improves prediction probabilities by utilizing **trainable anchors** for searching similarities in embedding space. On the other way, **Multi-head self-attention** values are used to calculate label correction and confidence. **Weighted Average** of these two are used to calculate the final label **correction**. Using label correction, primary label distribution and confidence, final corrected label distribution is calculated, making the model more reliable.
goal; a Multi-head cross Attention Network (MAN), which initializes a variety of attention heads to focus on multiple facial features at the same time and develops attention maps on these areas; and an Attention Fusion Network (AFN), which integrates multiple attention maps from various regions into a single, unified map. Fernandez et al. [30] propose Facial Expression Recognition with Attention Net (FERAtt) which utilizes Gaussian space representation to recognize facial expressions. The architecture focuses on facial image correction, which employs a convolutional-based feature extractor in combination with an encoder-decoder structure, and facial expression categorization, which is in charge of getting an embedded representation and defining the facial expression.
## 3 Approach
In our comprehensive approach, we propose a rigorous feature extraction strategy that is supported by a ViT with a reliability balancing mechanism to tackle the difficulties of FEL. Its cutting-edge framework is composed of a variety of components that function together to provide solutions that are accurate and reliable. We start by scaling the input images before initiating the augmentation procedure in order to achieve better augmentation. Image scaling is followed by rotation, color enhancement, and noise balancing. After extensive augmentation, the images are randomly cropped for optimal outcomes. Our pipeline meticulously addresses different biases and overfitting possibilities that may exist in the training data by randomly selecting a few images from each video and assembling them. Furthermore, we randomly select a set of images representing each expression for every epoch. The overall selection process and its parameters vary across datasets based on their label distribution, number of classes, and number of images.
In our approach, the cross-attention ViT is used in the feature extraction process, which is aimed at tackling common FEL issues, including scale sensitivity, intra-class discrepancy, and inter-class similarity. We employ a pre-trained landmark extractor to locate different facial landmarks on a particular face. Afterward, we use a pre-trained image backbone model to extract features from the image accordingly. We utilize multiple feature extractors to detect low-level to high-level features in the image using different facial landmarks. After feature extraction, the collected multi-level feature information is integrated. We use a cross-attention mechanism for linear computation; it integrates multi-level features and provides feature vector embeddings using an optimised version of POSTER++ [29]. This comprehensive feature extraction framework provides correctness and dependability in the final output vector of length \(768\) by combining a cross-attention mechanism with substantial feature extraction. We also use an additional linear reduction layer to decrease the feature vector size to \(128\). Primary label distributions are generated from logits produced by a Multi-Layer Perceptron. Multi-Layer Perceptrons (MLPs) include a variety of hidden layers that enable them to process information with great precision and accuracy. We calculate a confidence value based on the primary label distribution using normalized entropy to evaluate the reliability [17] of these predictions.
We introduce a novel reliability balancing method to address the limitations of modern FEL models. Modern FEL models still have several limitations, especially when it comes to making precise predictions for classes whose images are quite similar. This issue leads to a biased and unreliable model. Our reliability balancing method can increase the prediction capability for unbalanced and erroneous predictions, thereby improving the performance of the model. We achieve enhanced reliability and confidence by placing multiple learnable anchors in the embedding space and using a multi-head self-attention mechanism, which help identify the closest neighbors of erroneous predictions and improve the prediction ability. The use of anchor spaces and attention values has proven to be highly effective in stabilizing label distributions, resulting in better performance overall. We obtain additional regularization whenever possible by implementing dropout layers for more robustness. Our approach ensures that the model is resilient even in the presence of noisy or inadequate data, minimizing possible bias and overfitting. The resulting model, integrating extensive feature extraction with reliability balancing, is remarkably precise and able to make credible predictions even in the context of ambiguity. The overall pipeline is illustrated in Fig. 2.
### _Problem Formulation_
Let \(x^{i}\) be the \(i\)-th instance variable in the input space \(\mathcal{X}\) and \(y^{i}\in\mathcal{Y}\) be the label of the \(i\)-th instance with \(\mathcal{Y}=\{y_{1},y_{2}\dots y_{N}\}\) being the label set. Let \(\mathcal{P}^{n}\) be the set of all probability vectors of size \(n\). Furthermore, let \(l^{i}\in\mathcal{P}^{N}\) be the discrete label distribution of \(i\)-th instance. Additionally, let \(e=p(x;\theta_{p})\) be the embedding output of the Window-Based Cross-Attention ViT (explained in 3.2) network \(p\) with parameters \(\theta_{p}\) and let \(f(e;\theta_{f})\) be the logit output of the MLP classification head network \(f\) with parameters \(\theta_{f}\).
### _Window-Based Cross-Attention ViT_
We use a complex image encoder to capture distinctive patterns from the input images. We obtain feature embedding vectors in our proposed pipeline with a refined and optimised version of POSTER++ [29], a window-based cross-attention ViT network.
We employ a window-based cross-attention mechanism to achieve linear computation. We extract features with the image backbone and facial landmark detectors. We use IR50 [43] as the image backbone and MobileFaceNet [6] as the facial landmark detector. For each level, the image features \(X_{img}\in\mathcal{R}^{N\times D}\) are first divided, where \(N\) represents the number of tokens and \(D\) denotes the feature dimensions. These divided image features are transformed into many non-overlapping windows, \(z_{img}\in\mathcal{R}^{M\times D}\), where \(z_{img}\) contains \(M\) tokens. Then, the landmark feature \(X_{lm}\in\mathcal{R}^{C\times H\times W}\) is down-sampled, where \(C\) is the number of channels in the attention network and \(H\) and \(W\) are the height and width of the image. The down-sampled features are converted to the window size, yielding a smaller representation of the image denoted by \(z_{lm}\in\mathcal{R}^{c\times h\times w}\), where \(c=D\) and \(h\times w=M\). These features are reshaped in accordance with the shape of \(z_{img}\).
The cross-attention with \(I\) heads in a local window can be formulated as follows at this point:
\[q=z_{lm}w_{q},k=z_{img}w_{k},v=z_{img}w_{v} \tag{1}\]
\[o^{(i)}=softmax(q^{(i)}k^{(i)T}/\sqrt{d}+b)v^{(i)},i=1,\dots,I \tag{2}\]
\[o=[o^{(1)},\dots,o^{(I)}]w_{o} \tag{3}\]
where \(w_{q}\), \(w_{k}\), \(w_{v}\) and \(w_{o}\) are the matrices used for mapping the landmark-to-image features, and \(q,k,v\) denote the query matrix for landmark stream, and key, and value matrices for the image stream, respectively from different windows used in the window-based attention mechanism. [\(\cdot\)] represents the merge operation where the images patches are combined to identify the correlations between them and lastly, the relative position bias is expressed as \(b\in\mathcal{R}^{I\times I}\) which aids in predicting the placement between landmarks and image sectors.
We use the equations above for calculating the cross-attention for all the windows. This method is denoted as Window-based Multi-head CrosS-Attention (W-MCSA). Using the equations below, the Transformer encoder for the cross-fusion can be calculated as:
\[X^{\prime}_{img}=W\text{-}MCSA_{(img)}+X_{img} \tag{4}\]
\[X_{img,O}=MLP(Norm(X^{\prime}_{img}))+X^{\prime}_{img} \tag{5}\]
where \(X^{\prime}_{img}\) is the combined image feature using W-MCSA, \(X_{img,O}\) is the output of the Transformer encoder, and \(Norm(\cdot)\) represents a normalization operation over the full image with all windows combined. Using window information and dimensions (\(z_{img},M,D,C,H,W\), etc.), we extract and combine window-based feature information into \(Xo_{i}\) (the \(i\)-th level window-based combined features of each image) from \(X_{img,O}\) (the extracted features of all windows of each image together).
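The following minimal PyTorch sketch illustrates the window-based multi-head cross-attention of Eqs. (1)-(3) for a single window, with the landmark tokens acting as queries and the image tokens as keys and values. The tensor shapes, the scalar per-head bias standing in for the relative position bias \(b\), and the toy dimensions are illustrative assumptions rather than the exact POSTER++ implementation.

```python
import torch
import torch.nn as nn

class WindowCrossAttention(nn.Module):
    def __init__(self, dim, num_heads):
        super().__init__()
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.w_q = nn.Linear(dim, dim, bias=False)   # landmark stream -> query
        self.w_k = nn.Linear(dim, dim, bias=False)   # image stream    -> key
        self.w_v = nn.Linear(dim, dim, bias=False)   # image stream    -> value
        self.w_o = nn.Linear(dim, dim, bias=False)   # merge heads, Eq. (3)
        # hypothetical per-head scalar standing in for the relative position bias b
        self.bias = nn.Parameter(torch.zeros(num_heads, 1, 1))

    def forward(self, z_lm, z_img):                  # both (B, M, dim), one window
        B, M, _ = z_img.shape
        def split(x):                                # (B, M, dim) -> (B, heads, M, head_dim)
            return x.view(B, M, self.num_heads, self.head_dim).transpose(1, 2)
        q, k, v = split(self.w_q(z_lm)), split(self.w_k(z_img)), split(self.w_v(z_img))
        # Eq. (2): scaled dot-product attention per head, plus bias
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5 + self.bias, dim=-1)
        o = (attn @ v).transpose(1, 2).reshape(B, M, -1)   # concatenate heads
        return self.w_o(o)

out = WindowCrossAttention(64, 4)(torch.randn(2, 49, 64), torch.randn(2, 49, 64))
print(out.shape)   # torch.Size([2, 49, 64])
```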
We introduce a Vision Transformer to integrate the obtained features at multiple scales \(Xo_{1},...,Xo_{i}\). Our attention mechanism is able to capture long-range dependencies as it combines information tokens of all scale feature maps like POSTER++ [29]. The method is described as:
\[Xo=[Xo_{1},...,Xo_{i}] \tag{6}\]
\[Xo^{\prime}=MHSA(Xo)+Xo \tag{7}\]
\[Xo_{out}=MLP(Norm(Xo))+Xo^{\prime} \tag{8}\]
where, [\(\cdot\)] denotes concatenation, \(MHSA(\cdot)\) denotes multi-head self-attention mechanism, \(MLP(\cdot)\) is the multi-layer perceptron.
Output of the multi-scale feature combination module \(Xo_{out}\), which is equal to feature embedding \(e\), is the final output of the encoder network denoted by \(p(x;\theta_{p})\).
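A compact sketch of the multi-scale integration in Eqs. (6)-(8): window-combined features from the different levels are concatenated along the token axis and passed through multi-head self-attention and an MLP with residual connections. The layer sizes and the final pooling into the embedding \(e\) are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

dim, heads = 768, 8
mhsa = nn.MultiheadAttention(dim, heads, batch_first=True)
norm = nn.LayerNorm(dim)
mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

Xo_levels = [torch.randn(2, 49, dim) for _ in range(3)]   # Xo_1 ... Xo_i
Xo = torch.cat(Xo_levels, dim=1)                          # Eq. (6): concatenation
Xo_prime = mhsa(Xo, Xo, Xo)[0] + Xo                       # Eq. (7): MHSA + residual
Xo_out = mlp(norm(Xo)) + Xo_prime                         # Eq. (8): MLP + residual
embedding = Xo_out.mean(dim=1)                            # pooled feature embedding e (assumed pooling)
print(embedding.shape)                                    # torch.Size([2, 768])
```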
### _Reliability Balancing_
Majority of Facial Expression Learning datasets are labeled using only one label for each sample. Inspired by [10, 17], we provide an alternative approach, in which, we learn and improve label distributions utilizing a label correction approach. We calculate a label distribution primarily that uses the embedding \(e\) directly into the MLP network. Subsequently, the reliability balancing section employs label correction techniques to stabilize the primary distribution. This results in improved predictive performance through more accurate and reliable labeling.
**Primary Label Distribution.** From sample \(x\), using the \(p\) network we can generate the corresponding embedding \(e=p(x;\theta_{p})\) and using the \(f\)-network consisting MLP, we can generate the corresponding discrete primary label distribution:
\[l=softmax(f(e;\theta_{f})) \tag{9}\]
We use the information contained in the label distribution with label corrections during training to improve the model performance.
**Confidence Function.** To evaluate the credibility of predicted probabilities, a confidence function is designed. Let \(C:\mathcal{P}^{N}\rightarrow[0,1]\), be the confidence function. \(C\) measures the certainty of a prediction made by the classifier using normalized entropy function H(l). The functions are defined as:
\[C(l)=1-H(l) \tag{10}\]
\[H(l)=-\frac{\sum_{i}l^{i}\log(l^{i})}{\log(N)} \tag{11}\]
For a uniform distribution, where all probabilities are equal, the normalized entropy is 1 and the confidence value is 0. For a one-hot distribution, where one value equals 1 and the others equal 0, the normalized entropy is 0 and the confidence value is 1.
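A small helper capturing Eqs. (10)-(11); it uses the \(\log N\) normalization so that a uniform distribution maps to confidence 0 and a one-hot distribution to confidence 1, with a tiny epsilon guarding \(\log 0\). The function name is illustrative.

```python
import numpy as np

def confidence(l, eps=1e-12):
    """Confidence C(l) = 1 - H(l), with H the normalized entropy of Eq. (11)."""
    l = np.asarray(l, dtype=float)
    H = -np.sum(l * np.log(l + eps)) / np.log(len(l))
    return 1.0 - H

print(confidence([0.125] * 8))        # uniform over 8 classes -> ~0.0
print(confidence([1.0] + [0.0] * 7))  # one-hot distribution   -> ~1.0
```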
### _Label Correction_
The conundrum of label accuracy, distribution stability, and reliability has been a mainstream problem in FEL. The novel approach we propose to resolve this is a combination of two distinct measures of label correction: Anchor Label Correction and Attentive Correction. By leveraging geometric similarities and a state-of-the-art multi-head attention mechanism, we design predicted labels that are not only accurate but also stable and reliable.
#### 3.4.1 Anchor Label Correction
**Anchor Notations.** We define anchor \(a^{i,j}\)\((i\in\{1,2\dots,N\},j\in\{1,2\dots K\})\) to be a point in the embedding space. Let \(\mathcal{A}\) be a set of all anchors. During training we use \(K\) trainable anchors for each label, with \(K\) being a hyperparameter. We assign another label distribution \(m^{i,j}\in\mathcal{P}^{N}\) to anchor \(a^{i,j}\), where \(m^{i,j}\) is defined as:
\[m^{i,j}_{k}=\begin{cases}1,\text{ if }k=i\\ 0,\text{ otherwise}\end{cases}\]
Fig. 3: Data flow in the Window-Based Cross-Attention ViT network
Intuitively, here it means anchors \(a^{1,1},a^{1,2}\dots a^{1,K}\) are labeled as belonging to class 1, anchors \(a^{2,1},a^{2,2}\dots a^{2,K}\) are labeled as belonging to class 2 and so on.
**Geometric Distances and Similarities.** To correct the final label and stabilize the distribution we use the geometric information about similarity between the embeddings and a fixed number of learnable points in the embedding space called anchors.
The similarity score \(s^{ij}(e)\) is a normalized measure of similarity between an embedding \(e\) and an anchor \(a^{ij}\in\mathcal{A}\).
The distance between embedding \(e\) and anchor \(a\) for each batch and class is defined as:
\[d(e,a)=\sqrt{\sum_{dim_{e}}|a-e|^{2}} \tag{12}\]
Here, \(dim_{e}\) is the dimension of embedding \(e\). Distances \(|a-e|^{2}\) are reduced over the last dimension \(dim_{e}\) and element-wise square root is taken for stabilizing values.
The similarity score \(s^{ij}\) is then obtained by normalizing distances as softmax:
\[s^{ij}(e)=\frac{\exp(-\frac{d(e,a^{ij})}{\delta})}{\sum_{i}^{N}\sum_{j}^{K}\exp (-\frac{d(e,a^{ij})}{\delta})} \tag{13}\]
where, \(\delta\) is a hyperparameter, which is used in the computation of softmax to control the steepness of the function. The default value used for \(\delta\) is 1.0.
**Correction.** From similarity scores we can calculate the anchor label correction term as follows:
\[t_{g}(e)=\sum_{i}^{N}\sum_{j}^{K}s^{ij}(e)m^{ij} \tag{14}\]
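The anchor label correction of Eqs. (12)-(14) can be sketched in a few lines of NumPy: distances from an embedding to the \(N\times K\) anchors are converted into softmax similarity scores, which then average the anchors' one-hot label distributions. The sizes (\(N=8\), \(K=10\), 128-dimensional embeddings) and random values are illustrative only.

```python
import numpy as np

N, K, dim, delta = 8, 10, 128, 1.0
rng = np.random.default_rng(0)
anchors = rng.normal(size=(N, K, dim))                  # a^{i,j}
m = np.repeat(np.eye(N)[:, None, :], K, axis=1)         # m^{i,j}: one-hot label of anchor (i, j)
e = rng.normal(size=dim)                                # embedding of one sample

d = np.sqrt(np.sum((anchors - e) ** 2, axis=-1))        # Eq. (12), shape (N, K)
logits = -d / delta
s = np.exp(logits - logits.max())                       # softmax over all N*K anchors, Eq. (13)
s /= s.sum()
t_g = np.sum(s[..., None] * m, axis=(0, 1))             # Eq. (14): distribution over the N classes
print(t_g.sum())                                        # ~1.0
```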
#### 3.4.2 Attentive Correction
**Multi-head Self-Attention.** For multi-head attention [38], Let a query with query embeddings \(q\in\mathcal{R}^{d_{Q}}\), key embeddings \(k\in\mathcal{R}^{d_{K}}\), and value embeddings \(v\in\mathcal{R}^{d_{V}}\) is given. With the aid of independently learned projections, they can be modified with \(h\), which is the attention head. These parameters are then supplied to attention pooling. Finally, these outputs are altered and integrated using another linear projection. The process is described as follows:
\[h_{i}=f(W_{i}^{(q)}q,W_{i}^{(k)}k,W_{i}^{(v)}v)\in\mathcal{R}^{p_{V}}, \tag{15}\]
where \(W_{i}^{(Q)}\in\mathcal{R}^{d_{Q}\times p_{Q}},W_{i}^{(K)}\in\mathcal{R}^{d_{K }\times p_{K}},W_{i}^{(V)}\in\mathcal{R}^{d_{V}\times p_{V}}\) are trainable parameters, and \(f\) is the attentive pooling and \(h_{i}(i=1,2,...,n_{heads})\) is the attention head. The output obtained through learnable features, \(W_{out}\in\mathcal{R}^{p_{out}\times hp_{out}}\) can be categorized as:
\[W_{out}\begin{bmatrix}h_{1}\\ \vdots\\ h_{n_{heads}}\end{bmatrix}\in\mathcal{R}^{p_{out}} \tag{16}\]
As we are using self-attention, all inputs (\(q,k,v\) denoting query, key and value parameters respectively) are equal to the embedding \(e\).
**Correction.** To additionally correct and stabilize the label distributions, we use an attention-based similarity function. The embedding \(e\) is passed through the multi-head self-attention layer to obtain the attentive correction term \(t_{a}\). The general formula is:
\[t_{a}=softmax(W_{out}) \tag{17}\]
\(t_{a}\) is reshaped as required.
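A minimal sketch of the attentive correction in Eqs. (15)-(17): the embedding, reshaped into a short token sequence (an assumed layout), is passed through multi-head self-attention and a linear head playing the role of \(W_{out}\), followed by a softmax. Layer sizes are illustrative, not the exact ARBEx configuration.

```python
import torch
import torch.nn as nn

B, emb_dim, N = 4, 128, 8
tokens, tok_dim = 8, emb_dim // 8                 # split the embedding into tokens (assumed layout)
attn = nn.MultiheadAttention(tok_dim, num_heads=4, batch_first=True)
w_out = nn.Linear(emb_dim, N)                     # plays the role of W_out in Eq. (16)

e = torch.randn(B, emb_dim)
x = e.view(B, tokens, tok_dim)                    # self-attention: q = k = v = e
h, _ = attn(x, x, x)
t_a = torch.softmax(w_out(h.reshape(B, emb_dim)), dim=-1)   # Eq. (17), reshaped to (B, N)
print(t_a.shape, t_a.sum(dim=-1))
```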
#### 3.4.3 Final Label correction
To combine the correction terms, we use weighted sum, with weighting being controlled by the confidence of label corrections.
\[t=\frac{c_{g}}{c_{g}+c_{a}}t_{g}+\frac{c_{a}}{c_{g}+c_{a}}t_{a} \tag{18}\]
where \(c_{g}=C(t_{g})\) and \(c_{a}=C(t_{a})\).
Finally, to obtain the final label distribution \(L_{final}\), we use a weighted sum of label distribution \(l\) and label correction \(t\).
\[L_{final}=\frac{c_{l}}{c_{l}+c_{t}}l+\frac{c_{t}}{c_{l}+c_{t}}t \tag{19}\]
where \(c_{l}=C(l)\) and \(c_{t}=C(t)\).
The label with maximum value in final corrected label distribution \(L_{final}\) is provided as corrected label or final predicted label.
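The confidence-weighted fusion of Eqs. (18)-(19) reduces to a few lines; in the sketch below, `t_g`, `t_a` and `l` are placeholder distributions standing in for the anchor correction, the attentive correction and the primary MLP output.

```python
import numpy as np

def confidence(p, eps=1e-12):
    p = np.asarray(p, dtype=float)
    return 1.0 - (-np.sum(p * np.log(p + eps)) / np.log(len(p)))

def fuse(p, q):
    cp, cq = confidence(p), confidence(q)
    return (cp * p + cq * q) / (cp + cq)

l   = np.array([0.40, 0.35, 0.15, 0.10])   # primary label distribution (illustrative)
t_g = np.array([0.10, 0.70, 0.10, 0.10])   # anchor correction term
t_a = np.array([0.20, 0.50, 0.20, 0.10])   # attentive correction term

t = fuse(t_g, t_a)                          # Eq. (18)
L_final = fuse(l, t)                        # Eq. (19)
print(L_final, int(np.argmax(L_final)))    # corrected distribution and predicted class
```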
### _Loss Function_
Loss function used to train the model consists of three terms such as class distribution loss, anchor loss, and center loss.
**Class Distribution Loss (\(\mathcal{L}_{cls}\)):** To make sure each example is classified correctly, we use the negative log-likelihood loss between the corrected label distribution \(L^{i}\) and label \(y^{i}\):
\[\mathcal{L}_{cls}=-\sum_{i}^{m}\sum_{j}^{N}y_{j}^{i}\log L_{j}^{i} \tag{20}\]
**Anchor Loss (\(\mathcal{L}_{a}\)):** In order to amplify the discriminatory capacity of the model, we want to make margins between anchors large so that we add an additional loss term:
\[\mathcal{L}_{a}=-\sum_{i}\sum_{j}\sum_{k}\sum_{l}|a^{ij}-a^{kl}|_{2}^{2} \tag{21}\]
We include the negative term in front because we want to maximize this loss. The loss is also normalized for standard uses.
**Center Loss (\(\mathcal{L}_{c}\)):** To make anchors good representation of their class, we want to make sure anchors and embeddings of the same class stay close in the embedding space. To ensure that, we add an additional error term:
\[\mathcal{L}_{c}=\min_{k}\left|e^{i}-a^{y^{i},k}\right|_{2}^{2} \tag{22}\]
**Total Loss (\(\mathcal{L}_{total}\)):** Our final loss function can be defined as:
\[\mathcal{L}_{total}=\lambda_{cls}\mathcal{L}_{cls}+\lambda_{a}\mathcal{L}_{a}+ \lambda_{c}\mathcal{L}_{c} \tag{23}\]
with \(\lambda_{cls},\lambda_{a},\lambda_{c}\) being hyperparameters, used to keep the loss functions in same scale.
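A minimal PyTorch sketch of the three loss terms in Eqs. (20)-(23). The mean-based normalization of each term and the unit \(\lambda\) weights are illustrative assumptions; `L_final` holds the corrected distributions, `y` the integer labels, `anchors` the trainable anchor tensor and `emb` the embeddings.

```python
import torch

def total_loss(L_final, y, emb, anchors, lam_cls=1.0, lam_a=1.0, lam_c=1.0):
    eps = 1e-12
    # Class distribution loss, Eq. (20): negative log-likelihood of the true class
    # under the corrected label distribution (mean over the batch).
    L_cls = -torch.log(L_final[torch.arange(len(y)), y] + eps).mean()
    # Anchor loss, Eq. (21): negative mean squared pairwise anchor distance,
    # pushing anchors apart (mean used as the normalization).
    flat = anchors.reshape(-1, anchors.shape[-1])
    L_a = -torch.cdist(flat, flat).pow(2).mean()
    # Center loss, Eq. (22): each embedding close to the nearest anchor of its class.
    own = anchors[y]                                               # (B, K, dim)
    L_c = ((emb[:, None, :] - own) ** 2).sum(-1).min(dim=1).values.mean()
    return lam_cls * L_cls + lam_a * L_a + lam_c * L_c             # Eq. (23)

N, K, dim, B = 8, 10, 128, 4
anchors = torch.randn(N, K, dim, requires_grad=True)
L_final = torch.softmax(torch.randn(B, N), dim=-1)
y = torch.randint(0, N, (B,))
emb = torch.randn(B, dim)
print(total_loss(L_final, y, emb, anchors))
```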
## 4 Experiments
### _Datasets_
**Aff-Wild2**[16] has 600 annotated audiovisual videos with 3 million frames for categorical and dimensional affect and action units models. It has 546 videos for expression recognition, with 2.6 million frames and 437 subjects. Experts annotated it frame-by-frame for the six basic expressions - anger, disgust, fear, happiness, sadness, and surprise - along with the neutral state and an 'other' emotional category.
**RAF-DB**[19, 20] contains a whopping 30,000 distinct images that encompass 7 unique labels for expressions. The images have been meticulously labeled by 40 individual annotators and boast a diverse range of attributes, including subsets of emotions, landmark locations, bounding boxes, race, age range, and gender attributions.
**JAFFE**[27, 28] dataset comprises facial expressions from ten Japanese women, featuring seven prearranged expressions and multiple images for each individual depicting the respective expressions. The dataset encompasses a total of 213 images, with each image rated on six facial expressions by 60 Japanese viewers. The images are stored in an uncompressed 8-bit grayscale TIFF format, exhibit a resolution of 256x256 pixels, and contain no compression.
**FERG-DB**[1] database is yet another benchmark repository that features 2D images of six distinct, stylized personages that were produced and rendered using the sophisticated MAYA software. Comprising an impressive quota of 55,767 meticulously annotated facial expression images, this mammoth database is categorized into seven distinct emotion classes: disgust, surprise, sadness, fear, joy, neutral, and anger.
**FER+**[2] annotations have bestowed a novel set of labels upon the conventional Emotion FER dataset. Each image in FER+ has undergone rigorous scrutiny from a pool of 10 taggers, thereby generating high-quality ground truths for still-image emotions that surpass the original FER labels. It contains seven main labels and one for exclusions.
### _Data Distribution Adjustments_
#### 4.2.1 Augmentation
Sample augmentation entails artificially amplifying the training set by crafting altered replicas of a dataset employing extant data, expediting the discernment of significant attributes from the data. This technique is also particularly useful for achieving robust training data. In a typical FEL problem, the usual data preprocessing and augmentation steps include image resizing, scaling, rotating, padding, flipping, cropping, color augmentation and image normalization.
In this study, we are utilizing multiple image processing techniques, including Image Resizing and Scaling, Random Horizontal Flip, Random Crop, etc.
**Image Resizing and Scaling.** Bi-linear interpolation is used as image resizing method. It uses linear interpolation in both directions to resize an image. This process is repeated until the final result is achieved [33].
Suppose we seek to evaluate the function \(f_{s}\) at the coordinates \((x,y)\). We possess the function value of \(f_{s}\) at the quadrilateral vertices \(Q_{11}=\left(x_{1},y_{1}\right),Q_{12}=\left(x_{1},y_{2}\right),Q_{21}=\left(x _{2},y_{1}\right)\), and \(Q_{22}=\left(x_{2},y_{2}\right)\). Here is the equation for interpolation,
\[\begin{split}& f_{s}(x,y)=\frac{1}{\left(x_{2}-x_{1}\right)\left(y_ {2}-y_{1}\right)}\left[\begin{array}{cc}x_{2}-x&x-x_{1}\end{array}\right]\\ &\\ &\left[\begin{array}{cc}f_{s}\left(Q_{11}\right)&f_{s}\left(Q_{12}\right)\\ f_{s}\left(Q_{21}\right)&f_{s}\left(Q_{22}\right)\end{array}\right]\left[ \begin{array}{c}y_{2}-y\\ y-y_{1}\end{array}\right]\end{split}\end{split} \tag{24}\]
The operation is executed on all necessary pixels until the complete image is suitably rescaled.
**Random Horizontal Flip.** The random flip operation encompasses an arbitrary flipping of the input image with a designated probability. Consider \(A\in R^{m\times n}\) to be the provided torch tensor. The matrix \(A=A_{ij}\), where \(i\in\{1,\dots,m\}\) signifies the row and \(j\in\{1,\dots,n\}\) corresponds to the column of the image. Horizontal Flip equation \(\Longrightarrow A_{i(n+1-j)}\). This operation exchanges the columns of \(A\) such that the initial column of \(A\) corresponds to the final column of \(A_{i(n+1-j)}\), while the ultimate column of \(A\) is equivalent to the first column of \(A_{i(n+1-j)}\).
**Random Crop.** Random cropping entails excising a section of the input image at a serendipitous location. A torch tensor image is expected to have a shape of [..., H, W], where "..." denotes any number of leading dimensions. In cases where non-static padding is implemented, the input should comprise at most two leading dimensions.
#### 4.2.2 Data Refinement
Inconsistent class distribution can cause bias and overfitting. Some datasets may have excess data on certain faces, introducing bias towards specific facial data. Providing the model with equally distributed information from all classes and faces can help it accurately distinguish between them and avoid overfitting on common data. Additionally, refining the training datasets can ensure the effective distribution of classes, preventing biases and enhancing the model's performance.
This refinement process is initiated during every epoch, each of which entails a unique set of data for training purposes. Our methodology for refining data comprises two sections. During the training stage, a total of \(N\) pre-processed, cropped, and aligned images are selected in a stochastic manner from each video or group of faces, based
Fig. 4: Examples of training samples in different datasets
on the dataset. These images are then aggregated into a pool, from which \(M\) images per expression are randomly selected for training purposes. Thus, a batch of (\(M\times\) number of classes) images is assembled randomly during every epoch, ensuring balanced training information and counteracting potential biases and overfitting.
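A possible sketch of this per-epoch refinement: up to \(N\) frames are sampled from each video, pooled, and then \(M\) images per expression class are drawn (with replacement when a class is scarce) so that every epoch sees a class-balanced set. The function name, toy labels and sizes below are hypothetical.

```python
import random
from collections import defaultdict

def refine_epoch(videos, labels, n_per_video=512, m_per_class=500, seed=None):
    """videos: list of lists of frame ids; labels: dict frame_id -> class."""
    rng = random.Random(seed)
    pool = []
    for frames in videos:                              # stage 1: per-video sampling
        pool += rng.sample(frames, min(n_per_video, len(frames)))
    by_class = defaultdict(list)
    for f in pool:
        by_class[labels[f]].append(f)
    epoch_set = []
    for cls, frames in by_class.items():               # stage 2: class-balanced draw
        epoch_set += rng.choices(frames, k=m_per_class) if len(frames) < m_per_class \
                     else rng.sample(frames, m_per_class)
    rng.shuffle(epoch_set)
    return epoch_set

videos = [[f"v{v}_f{i}" for i in range(600)] for v in range(3)]
labels = {f: i % 8 for v in videos for i, f in enumerate(v)}
print(len(refine_epoch(videos, labels, seed=0)))       # m_per_class * number of classes
```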
### _Implementation Details_
For each dataset, we take the cropped and aligned images exclusively. We resize them to 256\(\times\)256 and take a random crop of 224\(\times\)224. To deal with overfitting and the imbalance of data in particular categories of expression, we pre-process the data using heavy augmentation methods. For the data refinement, we consider 512 images for each video/face; the images from all of them are then combined to create an unbiased set. For training, 500 images are taken for each class category from the set.
The image embeddings are collected from the cross-attention ViT network. Three loss functions are combined to train our model: anchor loss keeps the anchors far apart from each other, center loss keeps the embeddings close to their anchors, and class distribution loss classifies the classes correctly. The number of training epochs is 1000. In order to optimize our model, we utilize the ADAM optimizer with an initial learning rate of 0.0003, and the learning rate is scheduled using exponential decay with \(\gamma\) of 0.995. An MLP consisting of 2 hidden layers of size 64 is used for primary prediction. Each layer is followed by a ReLU activation, a dropout layer, and a batch normalization layer, except the last one. The dropout layers have a drop probability of 0.5 for regularization.
### _Evaluation Metric_
Throughout the experiments, accuracy is utilized as the primary evaluation metric, which is a fundamental concept that measures the correctness of predictions made by a model.
In the case of multi-class classification, accuracy measures the proportion of correct classifications (\(n_{correct}\)) and the total number of classified terms ( \(n_{total}\)). The equation is:
\[\text{Accuracy}\,=\frac{n_{correct}}{n_{total}} \tag{25}\]
### _Ablation Studies_
In order to demonstrate the efficacy of our approach, we undertake a series of ablation studies aimed at assessing the impact of critical parameters and components on the ultimate performance outcomes. The Aff-Wild2 dataset is utilized as the primary dataset throughout experimental procedures to enable a comprehensive evaluation of the effectiveness of our proposed method. It includes emotion information of 8 different classes.
**Number of Anchors \(K\) vs. Accuracy.** The presented findings in Table I reveal that the proposed approach attains optimal recognition accuracy when the number of anchors is set to a range of 8-10. The data shows a gradual rise in accuracy until reaching the certain range of \(K\), beyond which it experiences a sharp decline. A small number of anchors fail to effectively model expression similarities while an excessive number of anchors introduces redundancy and noise in the embedding space, resulting in a decline in performance. Subsequently, for our subsequent experiments, we have opted to set the number of anchors at 8-10.
**\(K\) for Different Noise vs. Accuracy.**
The presented findings in Table II demonstrate that as we increase the level of noise, our model's accuracy decreases. This reduction in accuracy is due to the impact of noise on the clarity and complexity of the data, which makes it harder for our model to make accurate predictions. However, we can improve our model's performance by increasing the value of K, which enables our model to take into account more neighboring points when making decision boundaries, thus reducing the influence of noisy or outlier data points.
In particular, we observe that as we increase K, there is a modest but consistent improvement in accuracy for each level of noise. For example, at a noise level of 20, our model's accuracy improves from 54.99% with K=0 to 60.33% with K=10, representing a significant improvement of 5.34% points. However, we should bear in mind that increasing K also increases computational complexity and memory usage, so we need to find a balanced tradeoff between model accuracy and practical considerations. We should also avoid excessively high values of K, which may lead to oversmoothing and loss of detail in the classification decision boundaries.
**\(K\) for Different Label Smoothing Terms vs. Accuracy.**
Table III demonstrates the influence of label smoothing on model accuracy at various K settings. When smoothing terms from 0 to 50 are examined, accuracy measurements are provided as percentages. Regardless of the smoothing term used, the findings show an evident pattern where an increase in K values typically results in an improvement in model accuracy. Nonetheless, depending on the smoothing term used, this improvement varies in scope.
When examining the data, we found that the baseline model's accuracy is 68.92% without smoothing. As K increases, accuracy increases in steps, peaking at 71.25% when K=10. Applying smoothing methods also follows a similar pattern, with maximum accuracy of 71.80% at smoothing term = 5. More smoothing leads to decreased accuracy. With K=10, max accuracy of 71.89% is seen at smoothing term = 5, declining to a minimum of 51.20% at smoothing term = 40. Smoothing terms from 5 to 20 have quite similar accuracy values, making 10 and 11 reasonable options to reduce overconfidence while discovering significant patterns. We conclude that a smoothing term of 11 is the best option for our model considering all relevant aspects.
**Different Anchor Loss Setups vs. Performance.** From table IV, we observe how the anchor loss adjustment hyperparameter (\(\lambda_{a}\)) affects the model's performance.
The ideal configuration has the highest precision (90.7%), accuracy (89.6%) and F1 Score (89.4%) but a lower recall (89.1%) than the maximum (90.3%). Higher emphasis on \(\lambda_{a}\) decreases model performance, but lower emphasis keeps it quite stable. When the anchor loss is not applied (\(\lambda_{a}=0\)) in label correction, it results in higher recall because it focuses on cross-entropy more, but all the other metrics fall due to
imbalance in distribution.
### _Comparison with State-of-the-Art Methods_
Table V compares the accuracy of multiple state-of-the-art facial expression learning methods over five separate datasets -- AffWild2, RAF-DB, JAFFE, FERG-DB, and FER+ (explained in 4.1). In this study, the models SCN [42], RAN [41], RUL [51], EfficientFace [55], POSTER [56], POSTER++ [29], and ARBEx are compared.
Upon investigation of the results, it is apparent that ARBEx outperforms all other models across all datasets, attaining the highest accuracy scores for each dataset. Specifically, ARBEx earns an accuracy score of 72.48% on the AffWild2 [16] dataset, which is significantly higher than POSTER++, which has an accuracy score of 69.18%. ARBEx outperforms every other model in the study, with accuracy scores on the RAF-DB [19, 20], FERG-DB [1] and JAFFE [27, 28] datasets of 92.47%, 98.18% and 96.67%, respectively. Finally, on the FER+ [2] dataset, ARBEx acquires an accuracy score of 93.09%, which significantly outperforms every other model tested. Our novel reliability balancing section reduces all kinds of biases, resulting in exceptional performance in all circumstances.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \(K\) & \multicolumn{10}{c}{Noise} \\
 & 0 & 5 & 10 & 15 & 20 & 25 & 30 & 35 & 40 & 50 \\ \hline
0 & 68.92 & 68.41 & 67.70 & 63.69 & 54.99 & 50.36 & 41.18 & 36.94 & 34.12 & 30.92 \\ \hline
1 & 70.74 & 70.52 & 70.02 & 66.51 & 59.81 & 51.18 & 44.00 & 37.76 & 35.94 & 33.79 \\ \hline
2 & 70.95 & 70.61 & 70.23 & 66.72 & 60.02 & 51.39 & 44.21 & 37.97 & 36.15 & 34.00 \\ \hline
3 & 71.03 & 70.64 & 70.31 & 66.80 & 60.10 & 51.47 & 44.29 & 38.05 & 36.23 & 34.08 \\ \hline
4 & 71.11 & 70.65 & 70.39 & 66.88 & 60.18 & 51.55 & 44.37 & 38.13 & 36.31 & 34.16 \\ \hline
5 & 71.16 & 70.70 & 70.44 & 66.93 & 60.25 & 51.60 & 44.42 & 38.18 & 36.36 & 34.21 \\ \hline
6 & 71.18 & 70.71 & 70.46 & 66.95 & 60.25 & 51.62 & 44.44 & 38.20 & 36.38 & 34.23 \\ \hline
7 & 71.21 & 70.71 & 70.49 & 66.98 & 60.28 & 51.65 & 44.47 & 38.23 & 36.41 & 34.26 \\ \hline
8 & 71.24 & 70.72 & 70.52 & 67.01 & 60.31 & 51.66 & 44.49 & 38.26 & 36.43 & 34.29 \\ \hline
9 & 71.24 & 70.73 & 70.52 & 67.01 & 60.32 & 51.68 & 44.50 & 38.26 & 36.44 & 34.29 \\ \hline
10 & **71.25** & 70.73 & 70.53 & 67.02 & 60.33 & 51.69 & 44.51 & 38.27 & 36.45 & 34.30 \\ \hline \hline \end{tabular}
\end{table} TABLE II: \(K\) for Different Noise vs. Accuracy (%). [KEY: Best in bold]

\begin{table}
TABLE III: \(K\) for Different Label Smoothing Terms vs. Accuracy (%). [Values not recoverable from the extracted text; representative figures are quoted in Sec. 4.5.]
\end{table}

\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(\lambda_{a}\) & Precision & Accuracy & F1 Score & Recall \\ \hline
Ideal (x1) & **90.7** & **89.6** & **89.4** & 89.1 \\ \hline
Higher (x100) & 41.3 & 47.9 & 38.7 & 47.0 \\ \hline
Lower (x0.01) & 80.5 & 82.3 & 80.5 & 81.1 \\ \hline
No Effect (x0) & 87.3 & 88.5 & 88.4 & **90.3** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Different Anchor Loss Setups (\(\lambda_{a}\)) vs. Performance (%). [KEY: Best in bold]

\begin{table}
TABLE V: Comparison of Accuracy (%) with SOTAs (SCN, RAN, RUL, EfficientFace, POSTER, POSTER++, ARBEx) on AffWild2, RAF-DB, JAFFE, FERG-DB, and FER+. [Full values not recoverable from the extracted text; ARBEx scores are quoted in Sec. 4.6.]
\end{table}
### _Visualization Analysis_
#### 4.7.1 Confidence Probability Distributions
Some working examples of reliability balancing method are added in Fig. 5. It demonstrates how the method enhances the accuracy and reliability of predictions by stabilizing probability distributions. The primary probabilities generated by the model exhibit considerable variation, ranging from 0.529 to 0.0026. Due to intra-class similarity, disparity, and label ambiguity issues within images, primary probabilities are not always reliable in predicting accurate labels. In Fig. 5, it is evident that the maximum primary probability exceeds 0.4 for most of the cases, despite the associated labels being erroneous and making the model unreliable. Upon implementing the correction method, we notice two different phenomena.
**Boost in confidence values of accurate labels.** We observe a rise in maximum confidence value in probability distribution in some cases (Label 2, 5, 7 in Fig. 5) after applying reliability balancing. Increased confidence probability ensures accurate predictions.
**Decrease in confidence in faulty labels.** In some other cases, the confidence levels of faulty predictions are decreased by reliability balancing. In these situations, incorrect maximum values are reduced to a range of 0.15-0.25 mostly (Label 0, 1, 3 in Fig. 5). The correct maximum values stay in a range of 0.2-0.3, while also providing the correct labels. These findings further support the vital role of reliability balancing and stabilization techniques.
By implementing the corrective measures afforded by reliability balancing, the maximum and minimum probabilities increase to 0.5429 and 0.0059, respectively, across a study sample of 60 images, stabilizing the distribution. Notably, the standard deviation of the corrected predictions (0.0881) is lower than that of the primary predictions (0.1316), providing strong evidence for enhanced stability and balance and demonstrating the efficacy of the applied reliability balancing method. Hence, the reliability balancing strategy supports the model in all circumstances, from extremely uncertain to extremely confident scenarios, whenever the primary model draws poor conclusions.
#### 4.7.2 Clustering Embeddings
The t-distributed stochastic neighbor embedding (t-SNE) plot in Fig. 6 visualizes the difference between classes in the embedding space, with each color denoting one class. The **Davies-Bouldin score**[9] gauges the mean resemblance between a given cluster and its most akin cluster; lower scores imply a better clustering outcome. The **Calinski-Harabasz score**[5] assesses the ratio of variance between clusters to variance within clusters; a higher score suggests an optimal clustering solution.
In the figures, we observe some uniformly spaced groups with reliable classifications and some noises denoting problematic circumstances with inter-class similarity and disparity issues. The scores indicate that ARBEx outperforms the other models in terms of both Davies Bouldin Score (1.969) and Calinski Harabasz Score (1227.8). These results indicate that ARBEx produces a better clustering outcome compared to POSTER++ (Davies Bouldin Score of 1.990 and Calinski Harabasz Score of 1199.5) and SCN (2.534 and 915.2). Based on the plots and scores, it is noticeable that ARBEx has embeddings that are well dispersed and more discriminating than POSTER++ and SCN.
#### 4.7.3 Study of Different Loss Functions
Fig. 7 demonstrates the effects of different loss function setups in the training stage of our experiment. Anchor loss dominance causes the model to drop its performance after some initial good epochs, conveying that the model starts overfitting on anchors, ignoring true labels. Relying more on similarities rather than the actual prediction performance, this setup fails to fulfill the criteria. The other setups are quite stable and close. The ideal combination used in the study helps the model to train faster and better.
## 5 Conclusion
In this paper, we have presented a novel approach **ARBEx** for FEL, which leverages an extensive attentive feature extraction framework with reliability balancing to mitigate issues arising from biased and unbalanced data. Our method combines heavy augmentation and data refinement processes with a cross-attention window-based Vision Transformer (ViT) to generate feature embeddings, enabling effective handling of inter-class similarity, intra-class disparity, and label ambiguity. Our unique reliability balancing strategy combines trainable anchor points in the embedding space and multi-head self-attention mechanism with label distributions and confidence to stabilize the distributions and maximize performance against poor projections. Experimental analysis across multiple datasets demonstrates the superior effectiveness of our proposed ARBEx method, outperforming state-of-the-art FEL models, thereby highlighting its potential to significantly advance the field of facial expression learning.
## Acknowledgments
This work was partly supported by (1) Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT)
Fig. 7: Study of training progress on different setups using Accuracy (%) score. The red line shows the optimal model with perfect loss combination, blue line shows anchor loss dominant model, indigo colored line shows the model with no label correction with anchors, the grey line shows the model with Cross-Entropy Loss only and the yellow line shows where Cross-Entropy Loss is dominant.
(No.2020-0-01373, Artificial Intelligence Graduate School Program (Hanyang University)) and (2) the Bio & Medical Technology Development Program of the National Research Foundation (NRF) funded by the Korean government (MSIT) (No. NRF-2021M3E5D2A01021156).
|
2301.05317 | Next-to-leading-order solution to Kerr-Newman black hole superradiance | The superradiant instabilities of Kerr-Newman black holes with charged or
uncharged massive spin-0 fields are calculated analytically to the
next-to-leading order in the limit of $\alpha\sim r_g \mu \ll 1$. A missing
factor of $1/2$ in the previous leading-order result is identified. The
next-to-leading order result has a compact form and is in good agreement with
existing numerical calculations. The percentage error increases with $\alpha$,
from a few percent for $\alpha\sim 0.1$ to about $50\%$ for $\alpha\sim 0.4$.
Massive neutral scalars too heavy to be produced with Kerr black hole
superradiance may exist in the superradiant region of Kerr-Newman black holes. | Shou-Shan Bao, Qi-Xuan Xu, Hong Zhang | 2023-01-12T22:06:53Z | http://arxiv.org/abs/2301.05317v3 | # Next-to-leading-order solution to Kerr-Newman black hole superradiance
###### Abstract
The superradiant instabilities of Kerr-Newman black holes with charged or uncharged massive spin-0 fields are calculated analytically to the next-to-leading order in the limit of \(\alpha\sim r_{g}\mu\ll 1\). A missing factor of 1/2 in the previous leading-order result is identified. The next-to-leading order result has a compact form and is in good agreement with existing numerical calculations. The percentage error increases with \(\alpha\), from a few percent for \(\alpha\sim 0.1\) to about 50% for \(\alpha\sim 0.4\). Massive neutral scalars too heavy to be produced with Kerr black hole superradiance may exist in the superradiant region of Kerr-Newman black holes.
## I Introduction
Ultralight boson condensate could form around a rotating black hole (BH) if the boson's Compton wavelength is comparable to the size of the BH horizon. With the proper choice of parameters, such scalar condensate can continuously extract energy and angular momentum from the BH until the BH spin is below some critical value and/or nonlinear effects become important[1; 2; 3]. This phenomenon is known as BH superradiance [4; 5]. There exist numerous works on various bosons, including spin-0 [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26], spin-1 [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38] and spin-2 [39; 40] fields. In this work, we focus on the ultralight scalars. The superradiance of other types of bosons could be found in the comprehensive review [41].
The scalar superradiance, especially with a Kerr BH, is important in phenomenology. Such BH-condensate systems have been widely studied for constraining the scalar properties and for the possible observation of the gravitational wave (GW) emission. It has been shown that the BH evolves along the Regge trajectories on the mass-spin plot if the superradiant effect is strong [1; 17]. Consequently, there are "holes" on the Regge plot in which BHs cannot reside. Combining with the observed BH spin distribution, favored and unfavored scalar mass ranges can be identified [42; 43; 44]. On the other hand, with the continuous GW generated by the BH-condensate, works have been done to study the possibility of resolving these systems from the backgrounds [1; 13; 14; 15; 18; 19; 45]. The positive frequency drift [13; 27] and the beatlike pattern [46] have been proposed to distinguish them from other monochromatic GW sources, such as neutron stars. The unresolved BH-condensate systems have also been carefully studied as stochastic backgrounds for GW detectors [18; 19].
The phenomenological study of BH superradiance depends on the accurate determination of the bound state's eigenfrequency. For Kerr BHs, the numerical continued fraction method was first proposed by Leaver for massless scalars [47]. It is later developed for massive scalars in Ref. [8] and further refined in Ref. [10]. In the small \(\alpha\sim r_{g}\mu\) limit, an analytic approximation was obtained by Detweiler [6]. Nonetheless, these two solutions are not consistent with each other. The problem is recently resolved in our previous work by including the next-to-leading-order (NLO) contribution to the analytic approximation [21]. A power-counting strategy is also proposed which facilitates the NLO calculation.
In Ref. [48], Damour _et al._ have shown that the superradiance can also be realized with a charged massive scalar field in Kerr-Newman spacetime. In comparison, it does not attract as much attention as that for Kerr BHs. It may be because the Kerr-Newman BH (KNBH) is unlikely to play important roles in astrophysics [49; 50; 51]. Nonetheless, as pointed out in Ref. [52], the KNBH provides an ideal testing ground for studying the interplay between gravity and electrodynamics. In the previous studies of scalar superradiance with KNBHs, Detweiler's method has been applied to obtain the leading-order (LO) analytic approximation at the \(\alpha\ll 1\) limit [53; 54; 55]. The numerical solution has also been achieved using the 3-term continued fraction method [56]. The parameter space of the KNBH superradiance is also probed by analyzing the existence of the potential well [57; 58; 59].
In this work, we refine the power-counting strategy in our previous work and apply it to calculate the NLO contribution of the KNHB superradiance. A compact NLO expression for \(\alpha\ll 1\) is obtained which could be straightforwardly applied to phenomenological study. The scalar field can be either neutral or charged. By comparing to the existing numerical results, the percentage error of the NLO approximation increases with \(\alpha\), from a few percent for \(\alpha\sim 0.1\) to about 50% for \(\alpha\sim 0.4\). In comparison, the LO approximation does not agree with the numerical results qualitatively (see Fig. 3 below).
This paper is organized as follows. In Sec. II, we briefly review the Klein-Gordon equation to be solved and obtain the superradiance condition from its solution at the outer horizon. Detweiler's method is applied to derive
the LO and NLO analytic expressions in Sec. III. In Sec. IV, the obtained analytic expressions are compared to the existing numerical calculation. Some effects relevant to phenomenology are also discussed. Finally, we summarize our results in Sec. V.
## II Scalars in Kerr-Newman spacetime
The spacetime around a KNBH with mass \(M\), angular momentum \(J\) and charge \(Q\) can be expressed in Boyer-Lindquist coordinates [60],
\[\begin{split} ds^{2}=&-\left(1-\frac{2r_{g}r-Q^{2}}{ \Sigma^{2}}\right)dt^{2}+\frac{\Sigma^{2}}{\Delta}dr^{2}+\Sigma^{2}d\theta^{2 }\\ &+\left[(r^{2}+a^{2})+\frac{(2r_{g}r-Q^{2})a^{2}\sin^{2}\theta}{ \Sigma^{2}}\right]\sin^{2}\theta d\varphi^{2}\\ &-\frac{2(2r_{g}r-Q^{2})a\,\sin^{2}\theta}{\Sigma^{2}}dtd\varphi,\end{split} \tag{1}\]
with
\[a =J/M, \tag{2a}\] \[r_{g} =GM,\] (2b) \[\Sigma^{2} =r^{2}+a^{2}\cos^{2}\theta,\] (2c) \[\Delta =r^{2}-2r_{g}r+a^{2}+Q^{2}. \tag{2d}\]
The equation \(\Delta=0\) gives two event horizons at \(r_{\pm}=r_{g}\pm b\) with \(b=\sqrt{r_{g}^{2}-a^{2}-Q^{2}}\). In this work, we only consider the KNBHs with \(r_{g}^{2}-a^{2}-Q^{2}\geq 0\).
To study the superradiance of a scalar field close to a BH, one needs to solve the combined Einstein and Klein-Gordon field equations, which is a very difficult task, especially because the existence of the scalar perturbs the spacetime around the BH. Nonetheless, it has been shown that this perturbation could be safely ignored due to the tiny energy-stress tensor of the scalar cloud for Kerr BH [17]. We assume the same situation happens for the KNBHs. We further assume the self-interaction of the scalar field can also be ignored. Then the problem reduces to solving the Klein-Gordon equation on the stationary Kerr-Newman background,
\[(\nabla^{\alpha}-iqA^{\alpha})(\nabla_{\alpha}-iqA_{\alpha})\phi-\mu^{2}\phi=0, \tag{3}\]
where \(\mu\) and \(q\) are the mass and electric charge of the scalar field, respectively. The vector \(A_{\alpha}\) is the background electromagnetic potential,
\[A_{\alpha}=\frac{Qr}{\Sigma^{2}}\left(-1,0,0,a\sin^{2}\theta\right). \tag{4}\]
For complex scalars, \(\phi\) can be written with the separation of variables,
\[\phi(t,r,\theta,\varphi)=\sum_{l,m}\int d\omega R_{lm}(r)S_{lm}(\theta)e^{im \varphi}e^{-i\omega t}. \tag{5}\]
Inserting it into Eq. (3), one obtains the angular equation,
\[\begin{split}&\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin \theta\frac{dS_{lm}}{d\theta}\right)+\\ &\left[-a^{2}(\mu^{2}-\omega^{2})\cos^{2}\theta-\frac{m^{2}}{\sin ^{2}\theta}+\Lambda_{lm}\right]S_{lm}=0,\end{split} \tag{6}\]
where \(\Lambda_{lm}\) is the eigenvalue. Its solution \(S_{lm}(\theta)\) is called the spheroidal harmonic function, whose properties can be found in Ref. [61]. The corresponding radial equation is [62],
\[\Delta\frac{d}{dr}\left(\Delta\frac{dR_{lm}}{dr}\right)+U(r)R_{lm}=0, \tag{7}\]
with
\[\begin{split} U(r)=&[\omega(a^{2}+r^{2})-am-qQr]^{ 2}\\ &+\Delta[2am\omega-\mu^{2}r^{2}-a^{2}\omega^{2}-\Lambda_{lm}]. \end{split} \tag{8}\]
These are the equations for the complex scalar field. For real scalars, one should set \(q=0\) in Eq. (3) and choose only the real part on the right side of Eq. (5). In the rest of this paper, we focus on the equations for the complex scalars. The case for the real scalars can then be simply obtained by choosing \(q=0\).
To obtain a constraint on the parameters that allow superradiance, we change to the tortoise coordinates,
\[dr_{*}=\frac{r^{2}+a^{2}}{\Delta}dr, \tag{9}\]
with which the interesting region \(r\in(r_{+},+\infty)\) corresponds to \(r_{*}\in(-\infty,+\infty)\). We also define,
\[R_{*}(r_{*})=\sqrt{r^{2}+a^{2}}R(r). \tag{10}\]
Then Eq. (7) can be rewritten into a Schrodinger-like equation,
\[\frac{d^{2}R_{*}(r_{*})}{dr_{*}^{2}}-V(r)R_{*}(r_{*})=0, \tag{11}\]
where the effective potential is,
\[\begin{split} V(r)=&-\left(\omega-\frac{am+qQr}{a ^{2}+r^{2}}\right)^{2}+\frac{\Delta\mu^{2}}{a^{2}+r^{2}}\\ &-\frac{\Delta}{(a^{2}+r^{2})^{2}}\left[2am\omega-\Lambda_{lm}+a ^{2}(\mu^{2}-\omega^{2})\right]\\ &+\frac{\Delta[\Delta+2r(r-r_{g})]}{(a^{2}+r^{2})^{3}}-\frac{3 \Delta^{2}r^{2}}{(a^{2}+r^{2})^{4}}.\end{split} \tag{12}\]
In the region close to the outer horizon \(r_{+}\), the potential has the asymptotic form,
\[\lim_{r\to r_{+}}V(r)=-(\omega-\omega_{c})^{2}+\mathcal{O}(r-r_{+}), \tag{13}\]
where the critical frequency is defined as
\[\omega_{c}=\frac{ma+qQr_{+}}{r_{+}^{2}+a^{2}}=\frac{ma+qQr_{+}}{2r_{g}r_{+}-Q^{2}}. \tag{14}\]
Inserting this asymptotic expression of \(V(r)\) into Eq. (11), one gets the solution at the outer horizon,
\[\underset{r_{*}\rightarrow-\infty}{\lim}R_{*}(r_{*})=d_{1}e^{-i(\omega-\omega _{c})r_{*}}+d_{2}e^{i(\omega-\omega_{c})r_{*}}, \tag{15}\]
where the first term is the wave falling into the outer horizon, and the second term is the wave escaping from the outer horizon; \(d_{1}\) and \(d_{2}\) are their respective amplitudes. Physically, nothing can escape from the horizon, indicating \(d_{2}=0\). The superradiance requires the phase velocity and the group velocity to be in opposite directions, which leads to the superradiance condition for a KNBH,
\[\mathrm{Re}(\omega)<\omega_{c}. \tag{16}\]
From Eq. (14), we can see that with \(Q\) fixed, this condition is more relaxed (strict) compared to the superradiance condition of a Kerr BH if the charges of the scalar and the BH have the same sign (different signs).
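As a quick numerical check of Eqs. (14) and (16), the following minimal Python sketch evaluates the outer horizon and the critical frequency for given BH parameters (in units with \(G=c=1\)) and tests whether a given \(\mathrm{Re}\,\omega\) lies in the superradiant regime. The function names and the example parameter values are illustrative assumptions.

```python
import math

def critical_frequency(rg, a, Q, m, q):
    """Critical frequency omega_c of Eq. (14); requires rg^2 >= a^2 + Q^2."""
    b = math.sqrt(rg**2 - a**2 - Q**2)
    r_plus = rg + b                                    # outer horizon
    return (m * a + q * Q * r_plus) / (r_plus**2 + a**2)

def is_superradiant(omega_re, rg, a, Q, m, q):
    return omega_re < critical_frequency(rg, a, Q, m, q)   # Eq. (16)

# Example: rg = 1, near-extremal spin, modest charge, m = 1 neutral scalar.
print(critical_frequency(1.0, 0.9, 0.3, 1, 0.0))
print(is_superradiant(0.1, 1.0, 0.9, 0.3, 1, 0.0))
```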
## III Analytic solution at \(\alpha\ll 1\)
In the small \(\alpha\) limit, the asymptotic matching method first proposed in Ref. [6] gives a reasonable approximation of the complex eigenfrequency \(\omega\). In a previous work, we have further calculated the NLO contribution for Kerr BH superradiance [21]. The NLO result has a much better agreement with the numerical solutions compared to the LO approximation. In the current work, we apply the method to KNBHs. In this section, we first repeat the LO approximation in Ref. [56]. A missing factor of \(1/2\) is identified. We then continue to calculate the NLO contribution. The calculation is valid for both real and complex scalar fields. For a real scalar field, one simply sets \(q=0\) throughout.
### Leading-order approximation
We first formally introduce the power-counting parameter \(\alpha\sim r_{g}\mu\) for the expansion. The scaling of other parameters are \(\mathrm{Re}\,\omega\sim\mu\sim q\) and \(a\sim Q\sim r_{+}\sim r_{-}\sim r_{g}\). Unlike some previous calculations in which \(\alpha\) is defined to be \(r_{g}\mu\), here we leave \(\alpha\) as a power-counting parameter, which could be \(r_{g}\mu\) or any other quantity with the same scaling. In the limit \(r\rightarrow+\infty\), the derivative term in Eq. (7) divided by \(\Delta^{2}\) can be written into a familiar form,
\[\frac{1}{\Delta}\frac{d}{dr}\left(\Delta\frac{dR}{dr}\right)\approx\frac{d^{2 }R}{dr^{2}}+\frac{2}{r}\frac{dR}{dr}=\frac{1}{r}\frac{d^{2}}{dr^{2}}(rR). \tag{17}\]
The second term on the left side of Eq. (7) divided by \(\Delta^{2}\) can be expanded in powers of \(r_{g}/r\). Keeping terms up to \(r_{g}^{2}/r^{2}\), the radial function at large \(r\) limit (\(r\gg r_{g}\)) can be simplified as
\[\frac{d^{2}}{dr^{2}}(rR)+\left[(\omega^{2}-\mu^{2})+\frac{2(2r_{g}\omega^{2}-r _{g}\mu^{2}-qQ\omega)}{r}-\frac{l^{\prime}(l^{\prime}+1)}{r^{2}}+\mathcal{O}( r^{-3})\right]rR=0, \tag{18}\]
where
\[\begin{split} l^{\prime}(l^{\prime}+1)=&\Lambda_{lm }+4r_{g}^{2}(\mu^{2}-3\omega^{2})+a^{2}(\omega^{2}-\mu^{2})\\ &+Q^{2}(2\omega^{2}-q^{2}-\mu^{2})+8r_{g}qQ\omega.\end{split} \tag{19}\]
The \(l^{\prime}\) is related to the orbital angular number by
\[l^{\prime}=l+\epsilon. \tag{20}\]
Here \(\epsilon\sim\mathcal{O}(\alpha^{2})\) plays the role of a regulator and cannot be simply dropped.
For a confined profile, the real part of \(\omega\) is less than the boson mass \(\mu\). The physical solution is the one that decays exponentially at large \(r\). It is more convenient to define,
\[\kappa =\sqrt{\mu^{2}-\omega^{2}}, \tag{21}\] \[\lambda =\frac{2r_{g}\omega^{2}-r_{g}\mu^{2}-qQ\omega}{\kappa},\] (22) \[y =\kappa r,\] (23) \[u(y) =yR\left(\frac{y}{\kappa}\right). \tag{24}\]
Then Eq. (18) can be rewritten as
\[\frac{d^{2}u(y)}{dy^{2}}+\left[-1+\frac{2\lambda}{y}-\frac{l^{\prime}(l^{ \prime}+1)}{y^{2}}\right]u(y)=0. \tag{25}\]
The two solutions are Whittaker functions, and only one of them has the correct behavior at \(r\rightarrow+\infty\) required by the bound states. The solution with the correct behavior can be further written in terms of confluent hypergeometric functions. Finally, the radial function at large \(r\) is
\[R(r)=e^{-\kappa r}(2\kappa r)^{l^{\prime}}U(l^{\prime}+1-\lambda,2l^{\prime}+2;2 \kappa r), \tag{26}\]
up to an arbitrary normalization.
The bound states only exist if \(\lambda>0\). The superradiance conditon in Eq. (16) gives another constraint \(2r_{g}\omega<(ma+qQr_{+})/r_{+}\). Combining these two inequalities, one can obtain,
\[\begin{split} 0&<2r_{g}\omega^{2}-r_{g}\mu^{2}-qQ\omega\\ &<\left(\frac{ma}{r_{+}}+qQ\right)\omega-r_{g}\mu^{2}-qQ\omega \\ &=\frac{ma}{r_{+}}\omega-r_{g}\mu^{2}.\end{split} \tag{27}\]
So there is no superradiant bound state if \(m\leq 0\). It also shows that Reissner-Nordstrom BHs could not hold bounded scalar clouds [55]. The minimum KNBH spin \(a\) allowing superradiant instability is approximately \(r_{g}r_{+}\mu/m\).
Next, we look at Eq. (7) in the small \(r\) limit. For BH superradiance, the inner boundary is the outer horizon \(r=r_{+}\). It is more convenient to write the radial function in terms of \(z=(r-r_{+})/2b\),
\[z(z+1)\frac{d}{dz}\left[z(z+1)\frac{dR}{dz}\right]+U(z)R=0, \tag{28}\]
where \(U(z)\) can be written as an expansion of z,
\[\begin{split} U(z)&=p^{2}+z\left[\frac{4r_{g}r_{+} \omega}{b}\left(r_{+}\omega-\frac{am}{2r_{+}}-\frac{Q^{2}\omega}{2r_{g}}\right) -(\Lambda_{lm}+r_{+}^{2}\mu^{2}+a^{2}\omega^{2})+\frac{qQ}{b}(am+r_{+}qQ-a^{2} \omega-3r_{+}^{2}\omega)\right]\\ &\quad+z^{2}(a^{2}\omega^{2}-\Lambda_{lm}+2\mu^{2}a^{2}-3\mu^{2} r_{+}^{2}+6r_{+}^{2}\omega^{2}+2Q^{2}\mu^{2}+q^{2}Q^{2}-6r_{+}qQ\omega)\\ &\quad+4z^{3}b\left[r_{g}\mu^{2}+2r_{+}(\omega^{2}-\mu^{2})-qQ \omega\right]+4z^{4}b^{2}(\omega^{2}-\mu^{2}),\end{split} \tag{29}\]
in which,
\[p=\frac{(r_{+}^{2}+a^{2})}{2b}(\omega-\omega_{c}). \tag{30}\]
Note that both \(p\) and \(r_{g}\omega_{c}\) scale as \(\mathcal{O}(\alpha^{0})\).
In the limit of small \(\alpha\), \(\Lambda_{lm}\) has the expanded form \(\Lambda_{lm}=l(l+1)+\mathcal{O}(\alpha^{4})\). At the LO of \(\alpha\), we get the radial equation in the limit \((r-r_{+})\ll\max(1/\omega,1/\mu)\),
\[z(z+1)\frac{d}{dz}\left[z(z+1)\frac{dR}{dz}\right]+\left[p^{2}-l^{\prime}(l^{ \prime}+1)z(1+z)\right]R=0. \tag{31}\]
At LO, \(l^{\prime}\) should be replaced by \(l\). Nonetheless, the \(\epsilon\) in \(l^{\prime}\) plays the role of a regulator in the intermediate steps. It will be set to zero at the end.
The general solution of Eq. (31) is a linear combination of two associated Legendre functions, and the physical solution is the one with the ingoing wave at \(r\to r_{+}\). After changing the variable back to \(r\), the solution of the radial function is,
\[R(r)=\left(\frac{r-r_{+}}{r-r_{-}}\right)^{-ip}{}_{2}F_{1}\left(-l^{\prime},l^ {\prime}+1;1-2ip;-\frac{r-r_{+}}{2b}\right), \tag{32}\]
up to an arbitrary normalization.
Next, we apply the matching method first proposed in [6] and further developed recently in Ref. [21]. The solution of Eq. (26) is only valid in the limit \(r\gg r_{g}\), while the solution in Eq. (32) requires \(r\ll r_{g}\alpha^{-2}\), since terms proportional to \(z^{3}\) and \(z^{4}\) have been neglected. They have an overlapped region in the limit \(\alpha\ll 1\). In this region, the two solutions are expected to have the same behavior. The behavior of Eq. (26) in the overlapped region is obtained by looking at its small \(r\) limit, which is
\[\frac{(2\kappa)^{l^{\prime}}\Gamma(-2l^{\prime}-1)}{\Gamma(-l^{\prime}-\lambda )}r^{l^{\prime}}+\frac{(2\kappa)^{-l^{\prime}-1}\Gamma(2l^{\prime}+1)}{\Gamma (l^{\prime}+1-\lambda)}r^{-l^{\prime}-1}. \tag{33}\]
On the other hand, the behavior of Eq. (32) in the overlapped region is obtained by looking at its large \(r\) limit,
which is 1
Footnote 1: Without the regulator \(\epsilon\), the ratio \(\Gamma(-2l-1)/\Gamma(-l)\) in Eq. (34) is ill-defined and needs to be handled with great caution. In comparison, the calculation with \(\epsilon\) is more straightforward. More discussion can be found in the Appendix of Ref. [21].
\[\frac{(2b)^{-l^{\prime}}\Gamma(2l^{\prime}+1)}{\Gamma(l^{\prime}+1) \Gamma(l^{\prime}+1-2ip)}r^{l^{\prime}}+\frac{(2b)^{l^{\prime}+1}\Gamma(-2l^{ \prime}-1)}{\Gamma(-l^{\prime}-2ip)\Gamma(-l^{\prime})}r^{-l^{\prime}-1}. \tag{34}\]
The ratio of the coefficients of the \(r^{l^{\prime}}\) and \(r^{-l^{\prime}-1}\) should be the same for the two solutions in the overlap region. The obtained equation is the eigenequation of \(\omega\). It can be solved perturbatively by the observation that the second term in the expression (33) must be suppressed at small \(r\), indicating \(l^{\prime}+1-\lambda\) is very close to zero or some negative integer,
\[l^{\prime}+1-\lambda=-n-\delta\lambda, \tag{35}\]
where \(|\delta\lambda|\ll 1\) and \(n\) is zero or a positive integer. Following the convention in the literature, we also define \(\bar{n}=n+l+1\). Then the above relation is re-expressed as \(\lambda=\bar{n}+\epsilon+\delta\lambda\). At LO of \(\alpha\), it reduces to \(\lambda=\bar{n}+\delta\lambda\). Combining with the definition of \(\lambda\) in Eq. (22), \(r_{g}\kappa\) scales as \(\alpha^{2}\), which is important in power-counting. Since \(|\delta\lambda|\ll 1\), one could solve for \(\delta\lambda\) perturbatively with expressions (33) and (34).
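For completeness, the identity underlying this perturbative step is the expansion of the \(\Gamma\) function near its poles. Writing \(l^{\prime}+1-\lambda=-n-\delta\lambda\) and taking \(\epsilon\to 0\) for brevity,

\[\frac{1}{\Gamma(l^{\prime}+1-\lambda)}=\frac{1}{\Gamma(-n-\delta\lambda)}\simeq(-1)^{n+1}\,n!\,\delta\lambda,\qquad|\delta\lambda|\ll 1,\]

so the coefficient of \(r^{-l^{\prime}-1}\) in expression (33) is proportional to \(\delta\lambda\); equating the ratio of the two coefficients with that of expression (34) then determines \(\delta\lambda\) order by order.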
The LO calculation of \(\delta\lambda\) for Kerr BHs was completed in Ref. [6], with the regulator \(\epsilon\) set to zero from the beginning. Recently, we have confirmed a missing factor of \(1/2\) in that result [21], which was first identified in Ref. [34]. The missing factor is conjectured to come from the mistreatment of \(\Gamma\) functions with negative integer arguments. The correct formula is provided in the Appendix of Ref. [21]. This subtle calculation turns out to be straightforward with the regulator \(\epsilon\) kept in the intermediate steps. More details can be found in Ref. [21]. For KNBHs, the first LO calculation of \(\delta\lambda\) was completed in Ref. [56]. It followed the same steps as Ref. [6] and missed the factor \(1/2\) as well. After the correction, the LO result of \(\delta\lambda\) is
\[\delta\lambda^{(0)}= -ip\left(4\kappa b\right)^{2l+1}\frac{(n+2l+1)!(l!)^{2}}{n!\left[ (2l)!(2l+1)!\right]^{2}}\prod_{j=1}^{l}(j^{2}+4p^{2}), \tag{36}\]
where the superscript \((0)\) indicates that it is the LO result. It scales as \(\mathcal{O}(\alpha^{4l+2})\).
The eigenfrequency \(\omega\) can be expressed in terms of \(\delta\lambda\) with Eqs. (22) and (35). Defining \(\omega=\omega_{0}+\omega_{1}\delta\lambda^{(0)}\) in Eq. (22) and expanding it to the linear term of \(\delta\lambda^{(0)}\), one arrives at
\[\lambda =\frac{r_{g}(2\omega_{0}^{2}-\mu^{2})-qQ\omega_{0}}{\sqrt{\mu^{2} -\omega_{0}^{2}}}\] \[\quad+\frac{r_{g}\omega_{0}\omega_{1}(3\mu^{2}-2\omega_{0}^{2})-q Q\mu^{2}\omega_{1}}{(\mu^{2}-\omega_{0}^{2})^{3/2}}\delta\lambda^{(0)}+\mathcal{O} \left((\delta\lambda^{(0)})^{2}\right). \tag{37}\]
On the other hand, we have \(\lambda=\bar{n}+\delta\lambda^{(0)}\) from Eq. (35). Then it is straightforward to get,
\[\frac{r_{g}(2\omega_{0}^{2}-\mu^{2})-qQ\omega_{0}}{\sqrt{\mu^{2} -\omega_{0}^{2}}}=\bar{n}, \tag{38a}\] \[\frac{r_{g}\omega_{0}\omega_{1}(3\mu^{2}-2\omega_{0}^{2})-qQ\mu^{ 2}\omega_{1}}{(\mu^{2}-\omega_{0}^{2})^{3/2}}=1. \tag{38b}\]
Note that in getting Eq. (38a), we have ignored the \(\epsilon\) which could be traced back to the \(l^{\prime}\) in Eq. (35). This omission leads to an error in \(r_{g}\omega_{0}\) at the order of \(\mathcal{O}(\alpha^{5})\). Solving \(\omega_{0}\) perturbatively from Eq. (38a), one arrives at
\[\frac{\omega_{0}^{(0)}}{\mu}=1-\frac{1}{2}\left(\frac{r_{g}\mu-qQ}{\bar{n}} \right)^{2}+\mathcal{O}(\alpha^{4}). \tag{39}\]
Then the \(\omega_{1}\) could be expressed in terms of \(\omega_{0}\) from Eq. (38b) and expanded in powers of \(\alpha\),
\[\frac{\omega_{1}^{(0)}}{\mu}=\frac{(r_{g}\mu-qQ)^{2}}{\bar{n}^{3}}+\mathcal{O }(\alpha^{4}). \tag{40}\]
Since both \(\omega_{0}\) and \(\omega_{1}\) are real, \(\omega_{0}\) and \(\omega_{1}\delta\lambda^{(0)}\) are the leading terms of the real and imaginary parts of \(\omega\), respectively. Note that the imaginary part of \(\omega\) scales as \(\mathcal{O}(\alpha^{4l+5})\).
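For a quick numerical estimate, the LO result can be assembled directly from Eqs. (30), (36), (39) and (40). The Python sketch below does this in units \(r_{g}=1\); the horizon radii, \(b=(r_{+}-r_{-})/2\) and the critical frequency \(\omega_{c}=(ma+qQr_{+})/(r_{+}^{2}+a^{2})\) are not restated in this section, so their forms here are assumptions following the standard Kerr–Newman conventions.

```python
# Minimal sketch (not the paper's code): leading-order superradiant rate for a
# charged scalar around a Kerr-Newman BH, transcribing Eqs. (30), (36), (39)
# and (40).  Units: r_g = G M = 1.  The expressions for r_pm, b and omega_c
# below are assumptions (standard KN conventions), not restated in the text.
import math

def lo_eigenfrequency(mu, a, Q, q, l, m, n, rg=1.0):
    r_plus  = rg + math.sqrt(rg**2 - a**2 - Q**2)      # outer horizon (assumed)
    r_minus = rg - math.sqrt(rg**2 - a**2 - Q**2)      # inner horizon (assumed)
    b       = 0.5 * (r_plus - r_minus)                 # assumed: b = (r_+ - r_-)/2
    omega_c = (m*a + q*Q*r_plus) / (r_plus**2 + a**2)  # assumed critical frequency
    nbar    = n + l + 1

    # Eq. (39): LO real part; Eq. (40): LO prefactor omega_1
    omega0 = mu * (1.0 - 0.5*((rg*mu - q*Q)/nbar)**2)
    omega1 = mu * (rg*mu - q*Q)**2 / nbar**3

    kappa = math.sqrt(mu**2 - omega0**2)               # Eq. (21) at omega = omega_0
    p     = (r_plus**2 + a**2) * (omega0 - omega_c) / (2*b)   # Eq. (30)

    # Eq. (36): delta_lambda^(0) = -i p * (real, positive factor)
    factor = ((4*kappa*b)**(2*l + 1)
              * math.factorial(n + 2*l + 1) * math.factorial(l)**2
              / (math.factorial(n)
                 * (math.factorial(2*l) * math.factorial(2*l + 1))**2)
              * math.prod(j**2 + 4*p**2 for j in range(1, l + 1)))
    dlam0 = complex(0.0, -p) * factor

    # omega ~ omega_0 + omega_1 * delta_lambda^(0)
    return omega0, omega1 * dlam0.imag

# Example: the n = 0, l = m = 1 mode of a neutral scalar (q = 0)
re_w, im_w = lo_eigenfrequency(mu=0.3, a=0.9, Q=0.4, q=0.0, l=1, m=1, n=0)
print(re_w, im_w)
```

For parameters in the superradiant regime (\(\omega_{0}<\omega_{c}\)), the returned imaginary part is positive, signalling the instability.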
### Next-to-leading-order approximation
In a previous work, we have carefully studied the superradiance of a real scalar field around a Kerr BH [21]. The LO eigenfrequency \(\omega\) obtained in Ref. [6] has an error as large as \(160\%\) compared to the numerical result. After correcting the missing factor \(1/2\), the convergence is improved, with the error \(\lesssim 80\%\). Besides the large discrepancy, the LO result also has some strange behaviors. Since the LO result is the leading term in the Taylor series of the exact \(\omega\) at \(\alpha=0\), it is expected to converge to the exact \(\omega\) as \(\alpha\) approaches zero. Nonetheless, the relative error seems to be a nonzero constant for small \(\alpha\), reaching as large as \(30\%\) at \(\alpha=0.07\) for \(a=0.99\). This discrepancy at small \(\alpha\) calls into question the power-counting strategy. Moreover, the discrepancy at small \(\alpha\) increases quickly with the BH spin parameter \(a\).
These problems are solved by adding the NLO correction of \(\omega\)[21]. Below we follow the same steps for the KNBHs. The key observation is that the first term in the square bracket in Eq. (29), which scales as \(\alpha^{2}\), is enhanced by a factor of \(1/b\). For BHs with large spin \(a\) and/or charge \(Q\), this term can be as important as the LO contribution. Other NLO contributions are also added for consistency.
The first NLO correction appears as \(\epsilon\) in the asymptotic radial wave function at large \(r\), which is given in Eq. (26). It can be calculated from the definition of \(l^{\prime}\) in Eq. (19),
\[\epsilon=\frac{-8r_{g}^{2}\mu^{2}+Q^{2}\mu^{2}+8r_{g}qQ\mu-q^{2}Q^{2}}{2l+1}+ \mathcal{O}(\alpha^{4}). \tag{41}\]
The second NLO contribution is from the asymptotic radial wave function at small \(r\). The potential \(U(z)\) in Eq. (29) can be approximated by \(p^{2}-l^{\prime}(l^{\prime}+1)z(1+z)+zd\), where \(d\) is defined as
\[d= (4r_{g}\mu-2qQ)p-2(4r_{g}-r_{+})r_{g}\mu^{2}\] \[+2\mu qQ(4r_{g}-r_{+})-q^{2}Q^{2}+\mathcal{O}(\alpha^{3}).\]
Up to an arbitrary normalization, the corresponding radial function at the NLO is
\[R(r)= \frac{(r-r_{-})\sqrt{d-p^{2}}}{(r-r_{+})^{ip}}{}_{2}F_{1}\Big{(}- l^{\prime}-ip+\sqrt{d-p^{2}}, \tag{42}\] \[l^{\prime}+1-ip+\sqrt{d-p^{2}};1-2ip;-\frac{r-r_{+}}{2b}\Big{)}.\]
In the \(r\to+\infty\) limit, the asymptotic behavior of this function is
\[\frac{(2b)^{-l^{\prime}-ip+\sqrt{d-p^{2}}}\Gamma(2l^{\prime}+1) \Gamma(1-2ip)}{\Gamma(l^{\prime}+1-ip-\sqrt{d-p^{2}})\Gamma(l^{\prime}+1-ip+ \sqrt{d-p^{2}})}r^{l^{\prime}}\] \[+\frac{(2b)^{l^{\prime}+1-ip+\sqrt{d-p^{2}}}\Gamma(-2l^{\prime}-1 )\Gamma(1-2ip)}{\Gamma(-l^{\prime}-ip-\sqrt{d-p^{2}})\Gamma(-l^{\prime}-ip+ \sqrt{d-p^{2}})}r^{-l^{\prime}-1}. \tag{43}\]
Following matching steps similar to those above, the NLO contribution to \(\delta\lambda\) can be obtained after some algebra,
\[\delta\lambda^{(1)}=\left(\frac{d}{2\epsilon}-\frac{\epsilon}{2}-ip\right) \frac{\left(4\kappa b\right)^{2l^{\prime}+1}\Gamma(n+2l^{\prime}+2)\Gamma_{pd} }{n!\left[\Gamma(2l^{\prime}+1)\Gamma(2l^{\prime}+2)\right]^{2}}, \tag{44}\]
where the superscript (1) indicates it is the NLO result, and the \(\Gamma_{pd}\) is defined as
\[\Gamma_{pd}=\frac{\left|\Gamma(l^{\prime}+1+ip+\sqrt{d-p^{2}}) \Gamma(l^{\prime}+1+ip-\sqrt{d-p^{2}})\right|^{2}\Gamma(1+2\epsilon)\Gamma(1-2 \epsilon)}{\Gamma(1-ip-\sqrt{d-p^{2}}-\epsilon)\Gamma(1+ip+\sqrt{d-p^{2}}+ \epsilon)\Gamma(1-ip+\sqrt{d-p^{2}}-\epsilon)\Gamma(1+ip-\sqrt{d-p^{2}}+ \epsilon)}. \tag{45}\]
The last NLO contribution is from \(\omega_{0}\) and \(\omega_{1}\). Defining \(\omega=\omega_{0}^{(1)}+\omega_{1}^{(1)}\delta\lambda^{(1)}\), the expansion of \(\lambda\) in Eq. (37) is still valid, only with \(\delta\lambda^{(0)}\) replaced by \(\delta\lambda^{(1)}\). Combining with \(\lambda=\bar{n}+\epsilon+\delta\lambda^{(1)}\), one could follow the same steps as in the LO calculation and obtain,
\[\frac{\omega_{0}^{(1)}}{\mu}= 1-\frac{1}{2}\left(\frac{r_{g}\mu-qQ}{\bar{n}}\right)^{2} \tag{46a}\] \[+\frac{(r_{g}\mu-qQ)^{2}}{8\bar{n}^{4}}\left[3(r_{g}\mu-qQ)(5r_{g }\mu-qQ)+8\bar{n}\epsilon\right]\] \[+\mathcal{O}(\alpha^{6}),\] \[\frac{\omega_{1}^{(1)}}{\mu}= \frac{(r_{g}\mu-qQ)^{2}}{\bar{n}^{3}}\] \[-\frac{3(r_{g}\mu-qQ)^{2}}{2\bar{n}^{5}}\left[(r_{g}\mu-qQ)(5r_{ g}\mu-qQ)+2\bar{n}\epsilon\right]\] \[+\mathcal{O}(\alpha^{6}).\]
Finally, we discuss a subtle problem related to the \(\omega\) dependence in the definition of \(p\). In the calculation of \(\delta\lambda^{(1)}\), the \(\omega\) in \(p\) should be replaced by \(\omega_{0}^{(0)}\), rather than \(\omega_{0}^{(1)}\). Here we explain the reason. In deriving the small-\(r\) asymptotic form of the radial function, we approximate \(U(z)\) in Eq. (29) by \(p^{2}-l^{\prime}(l^{\prime}+1)z(1+z)+zd\). The coefficients of \(z\) and \(z^{2}\) are accurate at \(\mathcal{O}(\alpha^{2})\) and \(\mathcal{O}(\alpha^{0})\), respectively. At \(z\sim\mathcal{O}(\alpha)\), these two terms are at the same order, \(\mathcal{O}(\alpha^{4})\). Consequently, we only need to keep the terms in \(p^{2}\) up to \(\mathcal{O}(\alpha^{4})\), which then leads to \(\omega=\omega_{0}^{(0)}\) in \(p\). In comparison to the numerical calculation, this choice of \(\omega\) gives a satisfactory NLO result. Using \(\omega_{0}^{(1)}\) in \(p\) is not as satisfactory, due to partially including higher-order contributions.
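As a companion to the expressions above, the sketch below evaluates the regulator \(\epsilon\) of Eq. (41) and the NLO coefficients \(\omega_{0}^{(1)}\) and \(\omega_{1}^{(1)}\) of Eqs. (46), again in units \(r_{g}=1\). The remaining ingredient for \(\mathrm{Im}(\omega)\), the complex \(\delta\lambda^{(1)}\) of Eqs. (44)-(45), involves \(\Gamma\) functions of complex argument and is not reproduced here.

```python
# Minimal sketch: NLO regulator epsilon (Eq. 41) and the NLO expansion
# coefficients omega_0^(1), omega_1^(1) (Eqs. 46).  Units: r_g = 1.
# Im(omega) at NLO additionally requires delta_lambda^(1) from Eqs. (44)-(45).
def nlo_omega_coefficients(mu, Q, q, l, n, rg=1.0):
    nbar = n + l + 1
    x = rg*mu - q*Q                      # combination appearing throughout

    # Eq. (41): regulator epsilon ~ O(alpha^2)
    eps = (-8*rg**2*mu**2 + Q**2*mu**2 + 8*rg*q*Q*mu - q**2*Q**2) / (2*l + 1)

    # Eq. (46a): NLO real part of the eigenfrequency
    omega0 = mu * (1.0 - 0.5*(x/nbar)**2
                   + (x**2/(8*nbar**4)) * (3*x*(5*rg*mu - q*Q) + 8*nbar*eps))

    # Eq. (46b): NLO prefactor multiplying delta_lambda^(1)
    omega1 = mu * (x**2/nbar**3
                   - (3*x**2/(2*nbar**5)) * (x*(5*rg*mu - q*Q) + 2*nbar*eps))
    return eps, omega0, omega1

print(nlo_omega_coefficients(mu=0.2, Q=0.3, q=0.0, l=1, n=0))
```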
## IV Results
The eigenfrequency of the Kerr BH superradiance has been studied in Refs. [8; 10; 21]. In comparison, the case for Kerr-Newman BH has two more parameters, the BH charge \(Q\) and the scalar charge \(q\). In this section, we first study the superradiance of a neutral scalar field, focusing on the effect of \(Q\). Then we consider the superradiance of a charged scalar field. Comparisons with the numerical calculations in the literature are also provided.
### Neutral scalar fields
In the following study of neutral scalar superradiance, we adopt the NLO \(\delta\lambda^{(1)}\) in Eq. (44), where the scalar charge \(q\) is set to zero. The \(\omega_{0}^{(1)}\) and \(\omega_{1}^{(1)}\) in Eqs. (46) are used. Then the NLO eigenfrequency is \(\omega=\omega_{0}^{(1)}+\omega_{1}^{(1)}\delta\lambda^{(1)}\).
The BH charge \(Q\) cannot be chosen arbitrarily. In our derivation, we have implicitly assumed the KNBH has horizons, which requires \(|Q|\leq\sqrt{r_{g}^{2}-a^{2}}\). In addition,
neutral scalars cannot distinguish the sign of the BH charge. Mathematically, this means the BH charge \(Q\) can only appear in the formulas as \(Q^{2}\). So it is sufficient to consider only positive \(Q\).
The superradiance condition in Eq. (16) with \(q=0\) has the same form as the Kerr BH. The effect of the BH charge \(Q\) is hidden in \(r_{+}=r_{g}+\sqrt{r_{g}^{2}-a^{2}-Q^{2}}\). Keeping the BH mass \(M\) and spin \(a\) fixed, larger charge \(Q\) results in a larger upper limit of \(\text{Re}(\omega)\). Thus massive scalars too heavy to be produced with Kerr BH superradiance may exist in the superradiant region of KNBHs.
Figure 1 shows the imaginary part of \(\omega\) as a function of \(r_{g}\mu\). For comparison, the curves for Kerr BHs are also shown, labeled with \(Q=0\). All curves have the same qualitative behavior. With an increasing value of \(r_{g}\mu\), they first increase, then drop rapidly to below zero after reaching the maxima. There are three effects of the BH charge \(Q\). Firstly, the superradiant region of \(r_{g}\mu\) is enlarged with larger \(Q\). Correspondingly, the peak of the curve moves to the right with increasing \(Q\). The maximum \(r_{g}\mu\) with positive \(\text{Im}(\omega)\) is quite accurately determined by \(\mu=\omega_{c}\). Secondly, the maximum \(\text{Im}(\omega)\) increases with larger \(Q\). Fixing the BH spin to be \(a=0.9\), the maximum values of \(r_{g}\text{Im}(\omega)\) with \(Q=0\) are \(2.088\times 10^{-8}\), \(2.427\times 10^{-9}\) and \(1.029\times 10^{-10}\) for \(l=m=1,2,3\), respectively. The numbers for \(Q=0.43\) are \(1.476\times 10^{-7}\), \(2.006\times 10^{-8}\) and \(8.760\times 10^{-10}\), which are larger than the \(Q=0\) cases by factors of 7.07, 8.26 and 8.51. For BHs with spin \(a=0.7\), the maximum \(Q\) is 0.71. The enhancement factors are 90.29, 269.91, and 707.16, for \(l=m=1,2,3\), respectively. Finally, in the ranges of small \(r_{g}\mu\) before reaching the round peaks of the \(Q=0\) curves, the charge \(Q\) turns out to impede the growth of the scalar clouds. We define a factor \(s(Q)\) as
\[s(Q)=\frac{\text{Im}\,\omega(Q)}{\text{Im}\,\omega(Q=0)}. \tag{47}\]
In Fig. 2, we show \(s(Q)\) as a function of \(r_{g}\mu\), for two different BH spins and several values of \(Q\). Interestingly, the suppression factor varies slowly with \(r_{g}\mu\). It decreases with increasing \(Q\), reaching the minimum value \(\sim 0.8\) for \(a=0.9\) and \(\sim 0.5\) for \(a=0.7\).
In Ref. [56], the authors claim that when \(a\gtrsim 0.997r_{g}\)
Figure 1: The imaginary part of NLO eigenfrequency with \(q=0\) as a function of \(r_{g}\mu\). Only the curves with \(n=0\) are shown. In the top (bottom) panel, the BH spin \(a\) is 0.9 (0.7). In both panels, from left to right, the three bunches correspond to \(l=m=1,2,3\), respectively. In each bunch, the curves with different colors correspond to different values of the BH charge \(Q\).
Figure 2: Factor \(s(Q)\) with \(q=0\) as a function of \(r_{g}\mu\) for BH spin \(a=0.9\) (upper panel) and \(a=0.7\) (lower panel). The vertical dashed line in each panel labels the value of \(r_{g}\mu\) where \(\text{Im}\,\omega(Q=0)\) reaches its maximum value for the corresponding spin parameter \(a\).
the maximum value of \(\operatorname{Im}\omega\) decreases as \(Q\) grows. We do not observe the same behavior. For any spin parameter \(a\), the peak value of \(\operatorname{Im}\omega\) from the NLO approximation increases monotonically with \(Q\).
### Charged scalar fields
In this part, we study the superradiance of KNBHs under charged scalar perturbation. The NLO eigenfrequency is given by \(\omega=\omega_{0}^{(1)}+\omega_{1}^{(1)}\delta\lambda^{(1)}\), with the NLO \(\delta\lambda^{(1)}\) in Eq. (44), and the \(\omega_{0}^{(1)}\) and \(\omega_{1}^{(1)}\) in Eqs. (46). Note that the \(\omega\) in \(p\) should take the form of \(\omega_{0}^{(0)}\) in Eq. (39), as explained at the end of Sec. III.2. We also compare the NLO results to the LO ones. The latter is given by \(\omega=\omega_{0}^{(0)}+\omega_{1}^{(0)}\delta\lambda^{(0)}\), with the expressions defined in Eqs. (36), (39) and (40). The \(\omega\) in \(p\) is replaced by \(\mu\) for consistency.
Figure 3 shows the comparison of the LO and NLO approximations to the numerical results taken from Fig. 6 in Ref. [53]. The NLO approximation agrees much better with the numerical results. In particular, the average percentage errors of the NLO results for the points in Fig. 3 are \(6.7\%,9.9\%,20.7\%\) and \(48.3\%\) for \(r_{g}\mu=0.1,0.2,0.3\) and \(0.41\), respectively. These numbers can be used as estimates of the accuracy of the NLO approximation for different values of \(\alpha\). Moreover, the convergence of the NLO results is better for a smaller value of \(r_{g}\mu\), validating the power-counting strategy. On the contrary, the LO results do not seem to converge to the numerical result at small \(r_{g}\mu\), which is also observed for Kerr BHs [21]. The reason for the poor convergence of the LO result is explained at the beginning of Sec. III.2. A caveat is that the curves for the LO approximations in Fig. 3 are not the same as those in Ref. [53]. The latter misses a factor of \(1/2\).
Table 1 shows the comparison of the NLO results and the numerical solutions for five more parameter sets in the literature. They are the most unstable modes with different parameters. The percentage error of the NLO approximation varies from \(14\%\) to \(29\%\) compared to the numerical results.
Next, we analyze the effect of \(q\). In the formulas, \(q\) and \(Q\) appear only in the combinations \(qQ\) and \(Q^{2}\). So it is sufficient to consider the case with \(Q>0\), and with \(q\) being either positive or negative. There are two constraints for the existence of the superradiant bound states. The superradiance requires \(\omega<\omega_{c}\) in Eq. (16). The existence of the bound states gives the second constraint \(\lambda>0\) from Eq. (22), which is approximately \(r_{g}\mu-qQ>0\).
If the scalar and the KNBH at the center have opposite charges, i.e. \(qQ<0\), the scalar cloud is more tightly bound. In this case, the second constraint above is automatically satisfied. Figure 4 shows the imaginary part of \(\omega\) as a function of \(r_{g}\mu\) in the \(n=0\), \(l=m=1\) bound state, with BH spin \(a=0.9\) and charge \(Q=0.01\). The scalar charge \(q\) varies from \(-45\) to \(0\). The region of superradiance shrinks when \(q\) is more negative, as a consequence of \(\omega_{c}\) decreasing with \(q\) for fixed \(Q\). The peak value of \(\mathrm{Im}(\omega)\) seems to be smaller with decreasing \(q\). Nonetheless, a more careful study shows
\begin{table}
\begin{tabular}{c c c c} \hline \hline Case & Type & \(\operatorname{Im}(\omega)\) & \(\%\) error \\ \hline \multirow{3}{*}{A} & LO & 5.623\(\times 10^{-9}\) & 74.9\% \\ & NLO & 2.882\(\times 10^{-8}\) & 28.5\% \\ & Numerical & 2.243\(\times 10^{-8}\) & - \\ \hline \multirow{3}{*}{B} & LO & 1.224\(\times 10^{-8}\) & 92.9\% \\ & NLO & 1.981\(\times 10^{-7}\) & 14.1\% \\ & Numerical & 1.736\(\times 10^{-7}\) & - \\ \hline \multirow{3}{*}{C} & LO & 1.264\(\times 10^{-8}\) & 92.9\% \\ & NLO & 2.041\(\times 10^{-7}\) & 14.1\% \\ & Numerical & 1.788\(\times 10^{-7}\) & - \\ \hline \multirow{3}{*}{D} & LO & 1.263\(\times 10^{-8}\) & 92.9\% \\ & NLO & 2.041\(\times 10^{-7}\) & 14.1\% \\ & Numerical & 1.788\(\times 10^{-7}\) & - \\ \hline \multirow{3}{*}{E} & LO & 1.27\(\times 10^{-8}\) & 88.8\% \\ & NLO & 1.39\(\times 10^{-7}\) & 22.7\% \\ \cline{1-1} & Numerical & 1.13\(\times 10^{-7}\) & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of the NLO approximations of \(\operatorname{Im}(\omega)\) with the numerical results from Ref. [56] (cases A to D) and from Ref. [53] (case E). All cases are with \(n=0\) and \(l=m=1\). The numbers below assume \(r_{g}=1\) for compacity. The percentage error is calculated by taking the difference between the approximation and the numerical result, then dividing it by the numerical result.
Figure 3: Comparison of the numerical result and the analytic approximations for \(n=0\), \(l=m=1\), \(a=0.98\), and \(Q=0.01\), with \(r_{g}\) chosen to be \(1\) for compacity. The imaginary part of \(\omega\) is plotted as a function of the scalar field charge \(q\). The dashed (solid) curves are the LO (NLO) approximations and the scattered dots are numerical results taken from Fig. 6 in Ref. [53]. The curves with different colors correspond to different values of \(\mu\), labeled above the corresponding curves with the same color.
that the maximum \(\mathrm{Im}(\omega)\) occurs at some small but nonzero \(|q|\) (see Table 2).
If the charges of the scalar and the KNBH have the same sign, i.e. \(qQ>0\), the scalar cloud is less tightly bound. The second constraint above gives \(r_{g}\mu>qQ\) for the existence of bound states. Figure 5 shows the imaginary part of \(\omega\) as a function of \(r_{g}\mu\) in the \(n=0\), \(l=m=1\) bound state, with BH spin \(a=0.9\) and charge \(Q=0.01\). With larger positive \(q\), the superradiant region shrinks and the peak is lower as well.
## V Conclusion
In this work, we have studied the scalar superradiant instability of the KNBH and obtained the LO and NLO expressions of the superradiant rate in the regime of \(\alpha\ll 1\). The calculation is based on the matching method, which was proposed by Detweiler for Kerr BHs in Ref. [6] and developed in our previous work [21]. In this paper, we further refine the power-counting strategy and apply it to the KNBH.
The LO scalar superradiant rate for the KNBH has been calculated previously in Ref. [53]. With our refined power-counting strategy, a similar result is obtained but with an extra overall factor of \(1/2\). We conjecture the factor comes from the mistreatment of the \(\Gamma\) functions with negative integer arguments, similar to the case of Kerr BHs. More analysis can be found in our previous work [21].
We compare the LO and NLO results with the existing numerical calculations in the literature. The LO results are smaller than the numerical solutions by an order of magnitude. On the contrary, the percentage error of the NLO result ranges from a few percent to about \(50\%\), depending on the value of \(\alpha\) (see Fig. 3 and Table 1). In particular, the error of the NLO result decreases for a smaller value of \(\alpha\), validating our power-counting strategy.
The obtained NLO expression has a compact form and can be straightforwardly applied to phenomenological studies of KNBH superradiance as well as of ultralight scalars, either neutral or charged. Besides the superradiance condition \(\mathrm{Re}(\omega)<m\Omega_{H}\), as for Kerr BHs, there is another condition \(r_{g}\mu>qQ\) for the existence of bound states. For neutral scalars, a larger BH charge \(Q\) leads to a larger superradiant range of \(r_{g}\mu\) as well as a larger maximum superradiant rate (see Fig. 1). Thus massive neutral scalars too heavy to be produced with Kerr BH superradiance may exist in the superradiant region of KNBHs. The situation is different for charged scalars. For fixed BH spin \(a\) and charge \(Q\), increasing the scalar charge \(q\) always leads to a narrower superradiant range of \(r_{g}\mu\) (see Figs. 4 and 5). Interestingly, the maximum
\begin{table}
\begin{tabular}{c c c} \hline (a,Q) & q & \(\mathrm{Im}(\omega)\) \\ \hline & -2.5 & 2.10313\(\times 10^{-8}\) \\ (0.9, 0.01) & -2.25 & 2.10329\(\times 10^{-8}\) \\ & -2.2 & 2.10329\(\times 10^{-8}\) \\ & -2 & 2.10268\(\times 10^{-8}\) \\ \hline & -1.25 & 2.10814\(\times 10^{-8}\) \\ (0.9, 0.02) & -1.1 & 2.10831\(\times 10^{-8}\) \\ & -1 & 2.10815\(\times 10^{-8}\) \\ & -0.75 & 2.10682\(\times 10^{-8}\) \\ \hline & -3 & 4.14247\(\times 10^{-10}\) \\ (0.7, 0.01) & -2.8 & 4.14270\(\times 10^{-10}\) \\ & -2.75 & 4.14260\(\times 10^{-10}\) \\ & -2.5 & 4.14104\(\times 10^{-10}\) \\ \hline & -1.5 & 4.14863\(\times 10^{-10}\) \\ (0.7, 0.02) & -1.4 & 4.14888\(\times 10^{-10}\) \\ & -1.25 & 4.14726\(\times 10^{-10}\) \\ & -1 & 4.13927\(\times 10^{-10}\) \\ \hline \end{tabular}
\end{table}
Table 2: The maximum value of \(\mathrm{Im}(\omega)\) obtained by varying \(q\), with \(a\) and \(Q\) fixed. The numbers below assume \(r_{g}=1\) for compacity.
Figure 4: The imaginary part of NLO eigenfrequency as a function of \(r_{g}\mu\) with different negative values of \(q\). Other parameters are \(n=0\), \(l=m=1\), \(a=0.9\) and \(Q=0.01\).
Figure 5: The imaginary part of NLO eigenfrequency as a function of \(r_{g}\mu\) with different positive values of \(q\). Other parameters are \(n=0\), \(l=m=1\), \(a=0.9\) and \(Q=0.01\).
superradiant rate occurs at a small negative scalar charge \(q\) (see Table 2). We have no explanation for this observation.
###### Acknowledgements.
This work is supported in part by the National Natural Science Foundation of China (NSFC) under Grant No. 12075136 and the Natural Science Foundation of Shandong Province under Grant No. ZR2020MA094.
|
2308.09026 | LesionMix: A Lesion-Level Data Augmentation Method for Medical Image
Segmentation | Data augmentation has become a de facto component of deep learning-based
medical image segmentation methods. Most data augmentation techniques used in
medical imaging focus on spatial and intensity transformations to improve the
diversity of training images. They are often designed at the image level,
augmenting the full image, and do not pay attention to specific abnormalities
within the image. Here, we present LesionMix, a novel and simple lesion-aware
data augmentation method. It performs augmentation at the lesion level,
increasing the diversity of lesion shape, location, intensity and load
distribution, and allowing both lesion populating and inpainting. Experiments
on different modalities and different lesion datasets, including four brain MR
lesion datasets and one liver CT lesion dataset, demonstrate that LesionMix
achieves promising performance in lesion image segmentation, outperforming
several recent Mix-based data augmentation methods. The code will be released
at https://github.com/dogabasaran/lesionmix. | Berke Doga Basaran, Weitong Zhang, Mengyun Qiao, Bernhard Kainz, Paul M. Matthews, Wenjia Bai | 2023-08-17T14:56:08Z | http://arxiv.org/abs/2308.09026v1 | # LesionMix: A Lesion-Level Data Augmentation Method for Medical Image Segmentation
###### Abstract
Data augmentation has become a de facto component of deep learning-based medical image segmentation methods. Most data augmentation techniques used in medical imaging focus on spatial and intensity transformations to improve the diversity of training images. They are often designed at the image level, augmenting the full image, and do not pay attention to specific abnormalities within the image. Here, we present LesionMix, a novel and simple lesion-aware data augmentation method. It performs augmentation at the lesion level, increasing the diversity of lesion shape, location, intensity and load distribution, and allowing both lesion populating and inpainting. Experiments on different modalities and different lesion datasets, including four brain MR lesion datasets and one liver CT lesion dataset, demonstrate that LesionMix achieves promising performance in lesion image segmentation, outperforming several recent Mix-based data augmentation methods. The code will be released at [https://github.com/dogabasaran/lesionmix](https://github.com/dogabasaran/lesionmix).
Keywords:Data augmentation Lesion populating Lesion inpainting Image synthesis Lesion image segmentation.
## 1 Introduction
Availability of labelled medical imaging data has been a long-term challenge for developing robust machine learning methods for medical image segmentation. In particular, when dealing with lesions or abnormality detection, datasets often follow a long-tail distribution [11, 21], which means there can be a variety of categories for abnormal cases but with each category only containing very few samples. In medical imaging, most data augmentation methods are developed at the image level, aiming to increase the diversity of the full image [8]. They often lack the capability to model specific abnormalities in the images. Recently, several disease-specific augmentation methods have been proposed for brain tumors, multiple sclerosis, and skin lesions [19, 2, 1]. Unfortunately, the majority
of these methods are either disease or organ-specific, or are difficult to train and implement due to their complexity.
In this work, we propose LesionMix, a novel and simple lesion-level data augmentation method for medical image segmentation. LesionMix is able to populate lesions with various properties, including shape, location, intensity and lesion load, as well as inpaint existing lesions by using a dual-branch iterative 3D framework. With LesionMix, we are able to train lesion segmentation models in a low-data setting, even if there are very few samples of lesion images. We perform a comprehensive evaluation of LesionMix using different imaging modalities and datasets, including four brain MR lesion datasets and one liver CT lesion dataset. Experiments show that LesionMix achieves promising lesion segmentation performance on various datasets and outperforms several state-of-the-art (SOTA) data augmentation methods.
### Related Works
#### 1.1.1 Non-generative data augmentation.
Traditional data augmentation (TDA) techniques are widely used for training medical image segmentation models [12]. TDA include flipping, rotating, scaling, intensity changes, and elastic deformations. These augmentations do not dramatically change the lesion properties, such as the shape and location of lesions with respect to the surrounding tissue. Zhang et al. proposed CarveMix, derived from CutMix [26], which uses a lesion-aware Mix-based technique to carve lesion regions from one image and insert them into another image [28]. Zhu et al. developed another Mix-based data augmentation method, SelfMix, which performs augmentation by mixing tumours with non-tumour regions [29]. Lebbos et al. introduced semantic mixing for rare lesions in ultrasound images [16]. Zhang et al. presented ObjectAug, an object-level augmentation method for semantic image segmentation [27]. These methods provide valuable insights for lesion-level data augmentation. However, they directly mix the original lesion masks for augmentation without augmenting individual lesion volumes, and pay no attention to the location of the augmentation or to the lesion load of the augmented images.
#### 1.1.2 Generative data augmentation.
Generative methods provide an alternative way for data augmentation by performing abnormality synthesis. Salem et al. synthesises multiple sclerosis lesions using an encoder-decoder U-Net structure [22]. Bissoto, Jin, and Li et al. utilise generative adversarial networks (GANs) to synthesise skin lesions or brain tumours [17, 5, 14]. Reinhold et al. creates lesions with a predetermined lesion load using a structural casual model [20]. Xia et al. employs an adversarial framework for subject-specific pathological to healthy image synthesis, referred to as pseudo-healthy synthesis [25]. Similarly, Basaran et al. performs lesion image synthesis and pseudo-healthy synthesis by using cyclic attention-based generators [3]. Lin et al. proposes InsMix, a data augmentation method for nuclei segmentation, by employing a Copy-Paste-Smooth principle with a smooth-GAN for achieving contextual smoothness [18].
While generative augmentation methods have potential for diverse abnormality generation, they are often disease-specific and not easy to extend to different applications and datasets.
### Contributions
There are three main contributions of this work: (1) We propose a novel non-deep data augmentation method, which augments images at the lesion level and accounts for lesion shape, location, intensity as well as load distribution. (2) The method is easy to implement and can be added to a segmentation pipeline to complement traditional data augmentations. (3) It is generic and can be applied to datasets of various modalities (MRI, CT, etc).
## 2 Method
### LesionMix
The objective is to develop an efficient and easy-to-implement augmentation method that is aware of lesions in medical images, accounting for the spatial and load distribution of the lesions. Figure 1 illustrates the proposed LesionMix method, which consists of two branches, namely for lesion populating and lesion inpainting. LesionMix takes a lesion image, X, and its corresponding lesion mask, Y, as input, and generates an augmented lesion image, X', and lesion mask, Y', as output, which achieves a target lesion load, \(\rm{v_{tar}}\). If the target lesion load, \(\rm{v_{tar}}\), is greater than the current load, \(\rm{v_{cur}}\), the lesion load is increased via the populating branch. Otherwise, the lesion load is decreased via the inpainting branch. To generate diverse lesion samples, lesion-level augmentations are performed during populating. To maintain the fidelity of the samples, lesions are augmented according to learnt spatial and load distributions.
Figure 1: Illustration of the augmentation process of LesionMix. It consists of lesion populating (top) and lesion inpainting (bottom) branches to iteratively augment images to a desired lesion load.
### Lesion populating
#### 2.2.1 Lesion-level augmentation.
Given the input image, X, and its lesion mask, Y, a lesion is randomly selected and augmented. We apply 3D spatial augmentations, brightness augmentations (multiplicative), and Gaussian noise augmentations. Lesion-level spatial augmentations include flipping, rotating, resizing, and elastic deformation. Augmentations are applied by extracting the selected 3D lesion volume, applying the augmentation, and inserting the augmented lesion back into the image. By iteratively inserting lesions into the images, augmented lesions can overlap one another, allowing for unique lesion formations. Augmentation parameters are set empirically and provided in Table 1.
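As an illustration of how such lesion-level augmentation can be realised, a minimal sketch is given below using generic NumPy/SciPy operations. It is not the authors' implementation: the interpolation orders, the omission of elastic deformation and the restriction of noise to the lesion region are simplifying assumptions, and only the flip/rotate/resize/intensity steps of Table 1 are shown.

```python
# Minimal sketch (not the authors' code): extract one 3D lesion, apply random
# flip / rotation / resizing within the ranges of Table 1, and return the
# augmented lesion intensity patch F and mask patch M.
# Elastic deformation is omitted for brevity.
import numpy as np
from scipy import ndimage

def augment_lesion(image, mask, rng=np.random.default_rng()):
    """image, mask: 3D arrays; mask is binary for a single selected lesion."""
    # Bounding box of the selected lesion
    coords = np.argwhere(mask)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    F = image[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] * \
        mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    M = mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]].astype(float)

    # Flipping in each dimension with p = 0.5
    for axis in range(3):
        if rng.random() < 0.5:
            F, M = np.flip(F, axis), np.flip(M, axis)

    # Rotation in a random plane, angle in [1, 89] degrees, p = 0.5
    if rng.random() < 0.5:
        angle = rng.uniform(1, 89)
        axes = tuple(rng.choice(3, size=2, replace=False))
        F = ndimage.rotate(F, angle, axes=axes, reshape=True, order=1)
        M = ndimage.rotate(M, angle, axes=axes, reshape=True, order=0)

    # Resizing: per-dimension scale factor in [0.5, 1.8]
    zoom = rng.uniform(0.5, 1.8, size=3)
    F = ndimage.zoom(F, zoom, order=1)
    M = ndimage.zoom(M, zoom, order=0)

    # Intensity augmentation: brightness in [0.9, 1.1] plus N(0, 1) noise
    # (noise restricted to the lesion voxels -- a simplifying assumption)
    F = F * rng.uniform(0.9, 1.1) + rng.standard_normal(F.shape) * (M > 0)
    return F, (M > 0.5).astype(np.uint8)
```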
The original image, lesion mask, and the augmented lesion region are mixed using the following equations to generate the augmented image and mask,
\[\mathrm{X}^{\mathrm{\text{'}}}=\mathrm{X}\odot(1-\mathrm{M})+\mathrm{F}\odot \mathrm{M} \tag{1}\]
\[\mathrm{Y}^{\mathrm{\text{'}}}=\mathrm{Y}\odot(1-\mathrm{M})+\mathrm{M}, \tag{2}\]
where F denotes the augmented lesion intensity image, M denotes the mask of the augmented lesion region, and \(\odot\) denotes element-wise multiplication. To generate X' we use a soft mask M, in which the boundary pixels of the lesion mask are weighted by 0.66 and the inner pixels are weighted by 1. This allows the lesion boundary to blend more naturally with the input image. Lesion populating can be performed iteratively. At each iteration, lesion-level augmentation is applied to a randomly selected lesion and inserted into the image, until the lesion load, \(\mathrm{v}_{\mathrm{cur}}\), reaches the target lesion load, \(\mathrm{v}_{\mathrm{tar}}\).
#### 2.2.2 Lesion likelihood map.
The augmented lesion is inserted into the original image at a location sampled from a spatial heatmap, termed the lesion likelihood map, which describes the probability that a lesion appears at a specific spatial location in the anatomy. The map is learnt by summing the labels of the images
\begin{table}
\begin{tabular}{c c} \hline \hline Augmentation & Details \\ \hline Flipping & In X,Y,Z dimensions, \(p=0.5\) for each dimension \\ Rotating & In X,Y,Z dimensions, \(p=0.5\) for each dimension, range =[1\({}^{\circ}\), 89\({}^{\circ}\)] \\ Resizing & Dimension multiplication, range=[0.5, 1.8] \\ Elastic deformation[6] & Random deformation grid, \(\sigma\) range= [3, 7] \\ Brightness & Intensity value multiplication, range = [0.9, 1.1] \\ Gaussian noise & Addition of \(\mathcal{N}(0,1)\) \\ Inpainting & Fast marching method \\ \hline \hline \end{tabular}
\end{table}
Table 1: Data augmentation parameters for LesionMix. \(p\) denotes the probability of augmentation. For augmentations where a range is given, the parameter is determined by selecting a value by uniformly sampling from the range.
to produce a lesion heatmap and normalising it into a probability map. The map is computed once for each organ dataset before model training. For brain datasets, augmented lesions can occur in both the white matter and the gray matter. Although white matter lesions may be more common, gray matter lesions have been recorded in the clinical literature [6].
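A minimal sketch of how such a map can be built and sampled is shown below; the exact normalisation and sampling scheme used by the authors is not specified beyond this description, so the choices here are illustrative.

```python
# Minimal sketch: build a lesion likelihood map by summing (co-registered)
# lesion label maps and normalising, then sample an insertion location from it.
import numpy as np

def build_likelihood_map(label_maps):
    """label_maps: list of binary 3D arrays in a common (e.g. MNI) space."""
    heat = np.sum(np.stack(label_maps, axis=0), axis=0).astype(float)
    return heat / heat.sum()               # probability map, sums to 1

def sample_location(prob_map, rng=np.random.default_rng()):
    # Draw a flat voxel index with probability proportional to the map value
    flat_index = rng.choice(prob_map.size, p=prob_map.ravel())
    return np.unravel_index(flat_index, prob_map.shape)   # (i, j, k) voxel
```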
### Lesion inpainting
Given the input image and lesion mask, a 3D lesion volume is randomly selected. 2D axial slices of the volume are inpainted using the fast marching method [24], which fills in the intensities within the lesion mask with neighbouring intensities from the normal region, formulated by,
\[I(p)=\frac{\sum_{q\in N(p)}w(p,q)[I(q)+\nabla I(q)(p-q)]}{\sum_{q\in N(p)}w(p,q)}, \tag{3}\]
where \(I\) denotes the intensity, \(p\) denotes a pixel within the lesion mask, \(q\in N(p)\) denotes pixels in the neighbourhood of \(p\) that belong to the normal region, \(\nabla I(q)\) denotes the image gradient at \(q\) and \(w(p,q)\) denotes a weighting function determined by the distance and direction from \(q\) to \(p\)[24]. After inpainting, we insert the inpainted slices back into the original image. 2D inpainting is implemented on axial slices of the lesions, due to its simplicity of implementation and fast computation.
\[\text{X'}=G(f(\text{X},\text{M}))\odot\partial\text{M}+f(\text{X},\text{M}) \odot(1-\partial\text{M}) \tag{4}\]
\[\text{Y'}=\text{Y}-\text{M}, \tag{5}\]
where \(f(\text{X},\text{M})\) denotes the inpainting function using fast marching, \(G\) denotes the Gaussian blurring function, and \(\partial\text{M}\) denotes the boundary of the lesion mask. Lesion inpainting can be performed iteratively for randomly selected lesions, until the lesion load, \(\text{v}_{\text{cur}}\), reaches the target lesion load, \(\text{v}_{\text{tar}}\).
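A minimal sketch of the slice-wise inpainting step is given below. It uses OpenCV's fast-marching inpainting (`cv2.INPAINT_TELEA`), consistent with the method of Ref. [24]; the 8-bit rescaling and the single-voxel boundary used for blurring are illustrative assumptions rather than the authors' exact choices.

```python
# Minimal sketch of Eqs. (3)-(5): inpaint the selected lesion slice-by-slice
# with the fast marching method, then Gaussian-blur the lesion boundary in 3D.
import numpy as np
import cv2
from scipy import ndimage

def inpaint_lesion(X, Y, M, sigma=1.0, radius=3):
    """X: 3D image, Y: full lesion mask, M: mask of the selected lesion."""
    filled = X.copy()
    lo, hi = X.min(), X.max()
    for z in range(X.shape[2]):                 # axial slices (axis 2 assumed)
        if not M[:, :, z].any():
            continue
        # cv2.inpaint expects an 8-bit image, so rescale and map back
        sl8 = np.uint8(255 * (X[:, :, z] - lo) / (hi - lo + 1e-8))
        out = cv2.inpaint(sl8, M[:, :, z].astype(np.uint8),
                          radius, cv2.INPAINT_TELEA)
        filled[:, :, z] = out.astype(float) / 255 * (hi - lo) + lo

    # Eq. (4): blur only the lesion boundary for 3D continuity
    boundary = M.astype(bool) ^ ndimage.binary_erosion(M.astype(bool))
    blurred = ndimage.gaussian_filter(filled, sigma=sigma)
    X_out = np.where(boundary, blurred, filled)
    Y_out = Y - M                               # Eq. (5)
    return X_out, Y_out
```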
### Lesion load distribution
Unlike other Mix-based methods, LesionMix allows for the generation of datasets with varying lesion load distribution. The load distribution, \(P(v)\), is a probability distribution function for lesion volume, \(v\). It characterises the degree of severity of the disease. We experiment with six different lesion load distributions: low, medium, high, uniform, Gaussian, and Real. Real denotes the real distribution learnt from the data. The other five are parametric distribution functions, with parameters described in Figure 2.
For each image to be augmented, we sample the target lesion load, \(\mathrm{v}_{\mathrm{tar}}\), from the distribution and apply lesion populating or inpainting iteratively to achieve this target. If \(\mathrm{v}_{\mathrm{tar}}\) is lower than \(\mathrm{v}_{\mathrm{cur}}\), lesion inpainting is applied; if \(\mathrm{v}_{\mathrm{tar}}\) is greater than \(\mathrm{v}_{\mathrm{cur}}\), lesion populating is applied. Examples of inpainting and populating, corresponding to the low-load and high-load settings respectively, are shown in Figure 3. The overall LesionMix procedure is summarised in Algorithm 1.
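Sampling the target load reduces to drawing from one of the distributions defined in the caption of Figure 2; a minimal sketch is given below, where `loads` denotes the per-image lesion volumes of the training set and the clipping of the Gaussian draw at zero is an added assumption.

```python
# Minimal sketch of target lesion-load sampling, following the percentile-based
# definitions in the caption of Figure 2.  `loads` is the array of per-image
# lesion volumes observed in the training set ("Dataset Load").
import numpy as np

def sample_target_load(loads, kind="uniform", rng=np.random.default_rng()):
    pct = lambda p: np.percentile(loads, p)
    if kind == "low":
        return rng.uniform(pct(5), pct(25))
    if kind == "medium":
        return rng.uniform(pct(37.5), pct(62.5))
    if kind == "high":
        return rng.uniform(pct(75), pct(95))
    if kind == "uniform":
        return rng.uniform(pct(5), pct(95))
    if kind == "gaussian":
        # Clipping at zero is an assumption of this sketch
        return max(0.0, rng.normal(loads.mean(), loads.std()))
    if kind == "real":
        return rng.choice(loads)
    raise ValueError(f"unknown load distribution: {kind}")
```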
### Properties of LesionMix
We compare LesionMix with other Mix-based data augmentation methods, including CutMix [26], CarveMix [28] and SelfMix [29]. LesionMix offers greater control in augmentation, compared to CarveMix and SelfMix. LesionMix is spatially-aware, utilising the lesion likelihood map for sampling insertion locations, and thus mixes the lesion with different backgrounds. LesionMix performs shape and intensity augmentations at the lesion level, thus increasing sample diversity. Apart from populating lesions, it is also able to inpaint lesions and control the lesion load distribution. We summarise these properties in Table 2, and present a qualitative comparison against CutMix and CarveMix in Figure 4.
Figure 2: Illustration of the brain lesion datasets’ lesion load distribution (light blue), named Dataset Load, and the six load distributions (dark blue) for augmentation used by LesionMix. Low load is a uniform distribution sampled between 5 and 25 percentiles of the Dataset Load. Medium load is a uniform distribution sampled between 37.5 and 62.5 percentiles of the Dataset Load. High load is a uniform distribution sampled between 75 and 95 percentiles of the Dataset Load. Uniform load is a uniform distribution sampled between 5 and 95 percentiles of the Dataset Load. Gaussian load samples from a Gaussian distribution with the same mean and variance as the Dataset load. Real load samples directly from the Dataset load distribution. This process is repeated for the LiTS dataset.
\begin{table}
\begin{tabular}{l c c c c} \hline Property & CutMix [26] & CarveMix [28] & SelfMix [29] & LesionMix \\ \hline Lesion-aware & & \(\surd\) & \(\surd\) & \(\surd\) \\ Spatially-aware & & & & \(\surd\) \\ Lesion-background mixing & & & \(\surd\) & \(\surd\) \\ Lesion-level augmentation & & & & \(\surd\) \\ Lesion inpainting & & & & \(\surd\) \\ Lesion load control & & & & \(\surd\) \\ \hline \end{tabular}
\end{table}
Table 2: Qualitative comparison of the properties of LesionMix with other Mix-based augmentation methods.
Figure 3: Original image, annotation and augmented data by LesionMix with low and high load image examples for both brain (first three rows) and liver (fourth row) datasets. Red denotes brain or liver lesions. Green denotes the liver. Low load example demonstrates performance of inpainted lesions, indicated by yellow arrows. High load example shows populated lesions, indicated by blue arrows.
```
Input:  training images and annotations {(X_1, Y_1), ..., (X_N, Y_N)};
        the desired number of augmented images T; the desired load distribution P(v)
Output: augmented training data {(X'_1, Y'_1), ..., (X'_T, Y'_T)}

for t = 1, 2, ..., T do
    Sample target load from the distribution, v_tar ~ P(v)
    if v_cur < v_tar then
        while v_cur < v_tar do
            1) Randomly select a lesion from (X_i, Y_i)
            2) Sample a lesion location from the lesion likelihood map
            3) Apply lesion-level augmentations and generate F and M
            4) Apply mixing in Eq. 1 and 2
        end while
    else
        while v_cur > v_tar do
            1) Randomly select a lesion from (X_i, Y_i) and extract axial slices
            2) Apply inpainting in Eq. 4 and 5 and reinsert slices
        end while
    end if
end for
return (X'_i, Y'_i)
```
**Algorithm 1** LesionMix: Lesion-level augmentation
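The outer loop of Algorithm 1 can be written compactly as below; `populate_once` and `inpaint_once` are hypothetical placeholders for a single populating or inpainting iteration (they are not functions from the released code), and the voxel-count load and iteration cap are simplifying assumptions.

```python
# Minimal sketch of the outer loop in Algorithm 1.  `populate_once` and
# `inpaint_once` are hypothetical callables for one populating / inpainting
# iteration; each must return (X, Y) with the lesion load changed accordingly.
def lesionmix(X, Y, v_tar, populate_once, inpaint_once, max_iter=100):
    v_cur = float(Y.sum())                # current lesion load (voxel count)
    if v_cur < v_tar:
        for _ in range(max_iter):
            if v_cur >= v_tar:
                break
            X, Y = populate_once(X, Y)    # add one augmented lesion
            v_cur = float(Y.sum())
    else:
        for _ in range(max_iter):
            if v_cur <= v_tar or v_cur == 0:
                break
            X, Y = inpaint_once(X, Y)     # remove (inpaint) one lesion
            v_cur = float(Y.sum())
    return X, Y
```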
## 3 Experiments
### Data
As a generic method for lesion data augmentation, LesionMix is evaluated on brain lesion MR images and liver lesion CT images.
Figure 4: Original image, annotation and augmented data by Mix-based methods (lesions: red, liver: green). CutMix produces discontinuities in the image. CarveMix can place lesions outside the organ, indicated by arrows.
**Brain lesion data.** Four brain lesion datasets are used, the MICCAI 2008 multiple sclerosis (MS) lesion dataset (MS2008, \(n\)=20) [23], ISBI 2015 longitudinal MS lesion dataset (MS2015, \(n\)=21) [7], MICCAI 2016 MS lesion dataset (MS2016, \(n\)=15) [9], and MICCAI 2017 white matter hyperintensity dataset (WMH2017, \(n_{train}\)=60, \(n_{test}\)=110) [15]. We use the WMH2017 training set for training a lesion segmentation network with the proposed augmentation method and evaluate its performance on the WMH2017 test set and MS2008, MS2015, MS2016 datasets. For all datasets, FLAIR images are used and resampled to \(1\times 1\times 1\) mm\({}^{3}\) voxel spacing, followed by brain extraction using FSL [13] and rigid registration into the MNI space [10].
#### 3.1.3 Liver lesion data.
We use the MICCAI 2017 liver tumor segmentation dataset (LiTS) [4]. The training set contains CT scans for 131 subjects, which are split into batch 1 (\(n\)=28) and batch 2 (\(n\)=103) by the challenge organisers. We use the LiTS batch 1 dataset for training a liver lesion segmentation network and evaluate its performance on the LiTS batch 2 dataset. The in-plane image resolution ranges from 0.56mm to 1.0mm, and 0.45mm to 6.0mm in slice thickness. The LiTS dataset has high variance of data size and organ shape, therefore we use the normalised label map of the liver as the lesion likelihood map. This ensures the placed lesion is within the liver.
### Implementation details
The proposed method is implemented in PyTorch. All augmentation methods are evaluated with the same segmentation model, nnU-Net with the 3D full-resolution configuration, and trained for 1,000 epochs on NVIDIA Tesla T4 GPUs.
### Results
#### 3.3.1 Lesion load distribution.
We perform an ablation study to select the optimal lesion load distribution for LesionMix. We simulate a low-data setting by selecting just one training image from WMH2017, perform data augmentation using LesionMix to generate 100 augmented images, and train a segmentation network. Table 3 reports the lesion segmentation performance when six different lesion load distributions are used, and compared against the _None_ method, which is trained with a single image without augmentation.
#### 3.3.2 Comparison to other data augmentation methods.
Following the ablation study, we choose the uniform lesion load distribution for the remaining experiments. We compare LesionMix to SOTA data augmentation methods, including traditional data augmentations (TDA), which come default with nnU-Net [12], CutMix [26] and CarveMix [28]. TDA includes rotation, scaling, mirroring, elastic deformation, intensity perturbation and simulation of low resolution. We add CutMix, CarveMix, or the proposed LesionMix onto TDA. We re-implement CutMix [26] for 3D medical images, and use the public code for CarveMix [28].
We are unable to compare against SelfMix [29] due to the unavailability of public code. For fair comparison, all methods use nnU-Net as the segmentation network and augment the WMH2017 training set for brain lesions and the LiTS batch 1 dataset for liver lesions by five times. We conduct experiments using different fractions of the training data. Table 4 reports the lesion segmentation Dice scores for different data augmentation methods. LesionMix improves lesion segmentation over SOTA methods in the majority of experiments. We notice greater statistical significance in experiments with smaller dataset sizes. We present example segmentations against benchmark methods in Figure 5.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Test set & None & Low & Medium & High & Uniform & Gaussian & Real \\ \hline MS2008 & \(16.46_{15.43}\) & \(22.90_{18.59}\) & \(22.23_{17.63}\) & \(23.57_{18.72}\) & \(\mathbf{24.07_{19.04}}\) & \(22.22_{17.59}\) & \(22.66_{17.84}\) \\ MS2015 & \(37.58_{16.05}\) & \(\mathbf{40.51_{9.05}}\) & \(37.09_{12.20}\) & \(40.37_{15.64}\) & \(39.38_{15.68}\) & \(36.64_{14.34}\) & \(37.68_{14.21}\) \\ MS2016 & \(24.35_{20.07}\) & \(36.75_{21.33}\) & \(38.10_{20.37}\) & \(49.08_{18.05}\) & \(\mathbf{50.72_{18.96}}\) & \(41.16_{20.04}\) & \(50.32_{16.43}\) \\ WMH2017 & \(39.30_{25.16}\) & \(49.79_{24.88}\) & \(51.59_{23.80}\) & \(58.99_{21.26}\) & \(\mathbf{59.42_{21.25}}\) & \(53.23_{22.85}\) & \(55.24_{21.56}\) \\ \hline LiTS & \(3.34_{4.94}\) & \(9.40_{6.25}\) & \(12.18_{10.34}\) & \(13.33_{5.94}\) & \(\mathbf{13.65_{8.02}}\) & \(12.88_{7.20}\) & \(11.99_{7.82}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mean and standard deviations of lesion segmentation Dice scores (%), when different lesion load distributions are used for LesionMix. Best results are in bold.
Figure 5: Qualitative comparison of segmentation performance when 10% of dataset size is used. Models with LesionMix detect more lesions and segment them more accurately.
## 4 Conclusion
We present LesionMix, a simple lesion-level data augmentation method. It is aware of the lesion likelihood distribution and produces augmented data with varying lesion load. LesionMix improves segmentation performance against other Mix-based augmentation methods across datasets of different modalities and organs. It is modality- and organ-agnostic and can serve as a useful tool for medical image segmentation.
#### Acknowledgements
This work is supported by the UKRI CDT in AI for Healthcare [http://ai4health.io](http://ai4health.io) (Grant No. EP/S023283/1).
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Size & Test set & TDA [12] & CutMix [26] & CarveMix [28] & LesionMix \\ \hline \multirow{6}{*}{100\%} & MS2008 & \(36.72^{*}_{18.07}\) & \(37.67_{19.09}\) & \(36.69^{*}_{17.82}\) & \(\mathbf{38.30_{17.67}}\) \\ & MS2015 & \(71.59_{11.14}\) & \(\mathbf{72.81_{7.53}}\) & \(71.33_{9.96}\) & \(72.33_{11.41}\) \\ & MS2016 & \(57.85_{17.50}\) & \(63.22_{15.04}\) & \(\mathbf{65.85_{14.26}}\) & \(65.62_{14.56}\) \\ & WMH2017 & \(79.13_{10.59}\) & \(80.14_{9.69}\) & \(79.74_{9.94}\) & \(\mathbf{80.95_{9.45}}\) \\ \cline{2-6} & LiTS & \(61.93_{24.30}\) & \(58.39^{*}_{29.40}\) & \(60.20_{29.04}\) & \(\mathbf{63.51_{24.97}}\) \\ \hline \multirow{6}{*}{50\%} & MS2008 & \(35.11_{21.16}\) & \(35.83_{19.55}\) & \(32.71^{*}_{19.95}\) & \(\mathbf{36.04_{18.93}}\) \\ & MS2015 & \(70.59_{11.14}\) & \(71.77_{7.16}\) & \(67.44^{*}_{9.78}\) & \(\mathbf{71.82_{7.40}}\) \\ & MS2016 & \(55.75^{***}_{17.82}\) & \(57.90^{**}_{14.24}\) & \(62.27_{16.64}\) & \(\mathbf{62.64_{16.26}}\) \\ & WMH2017 & \(73.65_{17.30}\) & \(74.26_{12.69}\) & \(72.45^{*}_{18.25}\) & \(\mathbf{75.65_{17.60}}\) \\ \cline{2-6} & LiTS & \(52.40_{29.21}\) & \(49.60^{*}_{31.32}\) & \(51.98_{27.99}\) & \(\mathbf{52.59_{29.18}}\) \\ \hline \multirow{6}{*}{25\%} & MS2008 & \(34.35_{21.49}\) & \(33.86_{20.21}\) & \(30.64^{*}_{21.17}\) & \(\mathbf{34.51_{20.04}}\) \\ & MS2015 & \(66.09^{**}_{8.82}\) & \(67.69^{*}_{6.10}\) & \(67.12^{*}_{8.13}\) & \(\mathbf{70.73_{8.20}}\) \\ \cline{1-1} & MS2016 & \(55.72^{**}_{18.78}\) & \(58.24_{15.67}\) & \(60.64_{17.67}\) & \(\mathbf{60.94_{16.66}}\) \\ \cline{1-1} & WMH2017 & \(71.50^{*}_{17.88}\) & \(72.02_{14.29}\) & \(72.12_{16.95}\) & \(\mathbf{73.92_{15.81}}\) \\ \cline{1-1} \cline{2-6} & LiTS & \(29.88^{***}_{26.52}\) & \(36.35^{*}_{30.12}\) & \(36.77_{29.45}\) & \(\mathbf{39.25_{30.01}}\) \\ \hline \multirow{6}{*}{10\%} & MS2008 & \(28.77^{**}_{17.77}\) & \(31.32_{18.28}\) & \(29.57^{*}_{21.66}\) & \(\mathbf{32.28_{22.11}}\) \\ & MS2015 & \(63.97^{**}_{9.97}\) & \(64.97^{*}_{10.15}\) & \(58.10^{***}_{13.35}\) & \(\mathbf{67.13_{10.73}}\) \\ \cline{1-1} & MS2016 & \(43.07^{**}_{17.03}\) & \(49.58^{***}_{13.65}\) & \(50.34^{***}_{13.71}\) & \(\mathbf{61.02_{15.56}}\) \\ \cline{1-1} & WMH2017 & \(71.16_{16.70}\) & \(70.18^{*}_{14.79}\) & \(67.80^{**}_{19.97}\) & \(\mathbf{72.03_{15.48}}\) \\ \cline{1-1} \cline{2-6} & LiTS & \(24.23^{***}_{25.76}\) & \(27.99^{*}_{29.26}\) & \(20.53^{***}_{26.84}\) & \(\mathbf{30.12_{27.47}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Mean and standard deviations of lesion segmentation Dice scores (%), at different sizes of training data. Best results are in bold. Asterisks indicate statistical significance (\({}^{*}\): p\(\leq\) 0.05, \({}^{**}\): p \(\leq\) 0.01, \({}^{***}\): p \(\leq\) 0.005) when using a paired Student’s \(t\)-test comparing LesionMix’s performance to baseline methods. |
2310.13234 | Confronting the thermodynamics knowledge gap: A short course on
computational thermodynamics in Julia | Computational elements in thermodynamics have become increasingly important
in contemporary chemical-engineering research and practice. However,
traditional thermodynamics instruction provides little exposure to
computational thermodynamics, leaving students ill-equipped to engage with the
state-of-the-art deployed in industry and academia. The recent rise of
easy-to-use open-source thermodynamic codes presents an opportunity for
educators to help bridge this gap. In this work, we present a short course that
was developed and rolled-out using the Clapeyron.jl package, the material of
which is all openly available on GitHub. The course can serve as a foundation
for others to similarly integrate computational material in thermodynamics
education. The course is structured into three sections. Section one serves as
a refresher and covers core material in numerical methods and thermodynamics.
Section two introduces a range of thermodynamic models such as
activity-coefficient models and cubic equations of state, outlining their
implementation. In section three the focus is moved to deployment, guiding
students on how to implement computational-thermodynamics methods covering
volume solvers, saturation solvers, chemical-stability analysis and flash
problems. In a pilot study conducted with both undergraduate and graduate
students, participants found the material engaging and highly relevant to their
chemical-engineering education. | Luc Paoli, Pavan K. Inguva, Andrew J. Haslam, Pierre J. Walker | 2023-10-20T02:44:42Z | http://arxiv.org/abs/2310.13234v1 | Confronting the thermodynamics knowledge gap: A short course on computational thermodynamics in Julia
###### Abstract
Computational elements in thermodynamics have become increasingly important in contemporary chemical-engineering research and practice. However, traditional thermodynamics instruction provides little exposure to computational thermodynamics, leaving students ill-equipped to engage with the state-of-the-art deployed in industry and academia. The recent rise of easy-to-use open-source thermodynamic codes presents an opportunity for educators to help bridge this gap. In this work, we present a short course that was developed and rolled-out using the Clapeyron.jl package, the material of which is all openly available on GitHub. The course can serve as a foundation for others to similarly integrate computational material in thermodynamics education. The course is structured into three sections. Section one serves as a refresher and covers core material in numerical methods and thermodynamics. Section two introduces a range of thermodynamic models such as activity-coefficient models and cubic equations of state, outlining their implementation. In section three the focus is moved to deployment, guiding students on how to implement computational-thermodynamics methods covering volume solvers, saturation solvers, chemical-stability analysis and flash problems. In a pilot study conducted with both undergraduate and graduate students, participants found the material engaging and highly relevant to their chemical-engineering education.
## 1 Introduction
The application of thermodynamics in industry and research is becoming increasingly diverse and sophisticated with advances in computational capabilities, available resources, theoretical understanding, and use-cases [1, 2, 3, 4]. The expanding role that thermodynamics plays in the progress of science and engineering is not surprising considering its centrality in describing physical systems. Nevertheless, it places a challenge on educators to provide suitably advanced thermodynamics instruction to ensure students are well-equipped to meet their professional needs after graduation. Accordingly, the continued development of education opportunities and resources for thermodynamics remains as important as ever.
One of the most common tasks encountered is the use of various thermodynamic models and methods to analyze systems of interest, to generate estimates for various properties (e.g., phase diagrams or activity coefficients), or as part of a process / physics-based simulation at a larger scale. In that regard, there are several currently available software packages of varying degrees of functionality that have been used in professional and/or teaching settings e.g., ASPEN [5], PYroMAT [6, 7], CoolProp [8], XSEOS [9], teqp [10], and custom code [11]. A key component that is implicit in the use of these models and software packages is the computational aspect of thermodynamics, yet this is often neglected in thermodynamics courses, even at a graduate level. Often, one of the major selling points of many software packages is the abstraction of the numerical implementation of the model and algorithms, which can be complex, thus enabling users to rapidly employ advanced thermodynamic models in their workflow (e.g., see [10, 12]).
In conceptualizing an advanced teaching course, it is imperative to not treat the various models and packages as "black boxes" which "magically" generate the desired answer [13, 14], as the objective of such advanced courses is to help students develop deeper insights and become informed practitioners. Therefore, course content and packages employed should not only allow students to engage meaningfully with the theory and use of
various thermodynamic models (including state-of-the-art models where practicable), they should also provide the opportunity to explore various back-end details such as the numerics and algorithms needed to compute the desired result. Other considerations that should be taken into account in relation to software for use in educational settings include availability and ease-of-use [14, 15, 16].
In this work, we outline the development and roll-out of a short course for introducing computational thermodynamics to advanced undergraduate / graduate students. The course is intended to be a stand-alone course that can be delivered on an ad-hoc basis, though we intend to integrate components of the course into a graduate thermodynamics module. The course was designed to be modular so that self-directed learners and/or educators can adapt specific components for their own needs or develop an expanded course using the material presented as a foundation. All course material and code are currently publicly available in a GitHub repository and additional contributions are planned to expand the material offered.
The remainder of this article is laid out as follows: the context in which the course is / can be taught is provided in section 2. After a short discussion on the computational environment (section 3), the structure of the course is provided in section 4. The results and feedback obtained from a pilot run of the course is presented and discussed in section 5. Finally, the overall findings of this article are summarised in section 6.
## 2 Course Context
The short course was developed to serve a dual role of introducing advanced undergraduate students and graduate students to advanced thermodynamic modelling tools and real-world applications of the statistical thermodynamic theories. Students taking this course should already be familiar with the fundamentals of thermodynamics (for example, the laws of thermodynamics, Maxwell relations, and phase equilibria) and, perhaps, some thermodynamic models (such as the ideal gas, van der Waals equation of state, and ideal-solution models), as well as having taken some form of introductory coding class. The latter is not necessarily a strong requirement as the course material is delivered in such a way that the students only do minimal coding. Within the context of the Imperial College undergraduate degree program, by the end of the second year, students will have taken two courses on thermodynamics, covering all of the topics mentioned previously, and one introductory course on MATLAB coding, which we deem more than sufficient for a student intending to take part in this course.
For the benefit of those students who are not familiar with all of the above pre-requisites or require a refresher for any of the material, the first section of the course provides sufficient introductory content. Conversely, to suit participants who are already familiar with large sections of this course, or instructors who wish to use only specific sections in their own class, the course is designed to be modular, allowing the user to engage with the material flexibly.
## 3 Computational Environment
The main software package used in this course is the recently developed open-source fluid thermodynamics toolkit Clapeyron.jl [12] which is implemented in the Julia language. The release of Clapeyron.jl has enabled users to employ a large variety of thermodynamic models (ranging from activity-coefficient models and standard cubics to the state-of-the-art SAFT equations) in an easy-to-use and extensible manner. The package has several built-in capabilities for thermophysical-property estimation and is currently under active further development to incorporate additional capabilities to make it the go-to thermodynamics toolkit. Clapeyron.jl has garnered significant interest from a diverse group of users with over 135 stars on GitHub.
Although the Julia language is less mature than alternatives such as Python and MATLAB, it has several features and packages that make it appealing for scientific computing while also being well-suited in an educational setting [17, 18]. In terms of ease-of-use, Julia has a similar syntax to MATLAB and Python which helps reduce challenges students may have with reading and writing code [19]. Julia also has native package-management tools that can negate the need for setting up environments, reducing time spent introducing students to the coding environment.
One particular package we would like to highlight is Pluto.jl [22], which provides _reactive_ notebooks. For the presentation of a course developed previously on introducing partial-differential-equation solvers[15], a combination of ready-made Python scripts for code and accompanying pdf files was used for the course material. From student feedback, two main challenges were noted: some students experienced difficulties in setting up the Python environment needed to run the scripts; and the lack of active and structured learning activities hampered student engagement with the material. Pluto.jl can elegantly resolve both these issues. For this course, we decided to integrate both the lecture notes and codes for a given section within a single file. Illustrated visually in figure 1, within Pluto notebooks, we are able to provide both the code needed to generate the results as well as the text explaining these results. As seen in figure 1, the explanatory text along with the main function
call and plotting code are embedded seamlessly. Pluto has the advantage of avoiding clutter in the notebook by allowing some sections of the code (e.g., plotting) to be hidden; hidden text can be revealed if desired by selecting the designated symbol (highlighted in red in figure 1).
Although similar functionalities are available in more commonly used _interactive_ environments such as Jupyter notebooks, which have already found widespread use in education[23, 24, 25], the reactive nature of Pluto has one major additional benefit to users and educators. In the example shown in figure 1, students are invited to modify the provided code and re-run the cell. In doing so, the entire notebook 'reacts' to update every other cell block, including the hidden plotting code. As a result, the figure will automatically update based on the changes they have made. Interactive notebooks by virtue of their sequential nature do not allow for such dynamic behaviour [26]. We believe reactive notebooks allow students to engage more intently with the lecture material and, thereby, foster independent learning[27]. Further details on setting up the computational environment for this course are provided in the online repository.
## 4 Course Structure
Progression was used as the guiding principle when designing the course, to help accelerate students' learning in a short period of time [28]. The course is divided into three sections, each of which is delivered in a three-hour lecture (including two ten-minute breaks). An outline of the course structure is presented in figure 2. The first section serves as an introduction and provides a refresher on the fundamentals of thermodynamics; important numerical methods (such as Newton's algorithm and automatic differentiation) are outlined. In the second section five classes of equations of state are examined, providing students with a high-level overview of each class, as well as their respective advantages and disadvantages. The final section builds on material from the previous two sections and examines various computational thermodynamic problems, such as root-finding algorithms, stability analysis and flash algorithms. Details on theory and numerical methods for each problem are also covered.
### Section 1: Fundamentals
The first section of the course is intended to serve as a review to ensure participants are familiar with the pre-requisites of the course. Two main topics are covered: thermodynamics; and numerical methods. In the thermodynamics review, we revisit fundamental concepts such as thermodynamic state variables (temperature, pressure, volume, number of moles etc.) and state functions. We emphasise the point that from a state function
Figure 1: Example taken from the Pluto notebook used in the course. Explanatory text is here highlighted in yellow; example code is highlighted in green. To reveal the hidden code used to generate the embedded chart, students would simply select the symbol highlighted in red. The chart represents the isobaric heat capacity of carbon dioxide obtained using different equations of state.[20, 21] Students are invited to vary which species and equation of state is used to examine how the ideal contribution plays a role in determining real properties.
expressed in terms of three state variables, one can obtain a complete thermodynamic description of the system using derivatives (e.g., Maxwell relations).
Having set the context for how thermodynamic properties can be generated, the core numerical methods that will be utilised are then briefly covered. We first introduce automatic differentiation, which can be used to obtain numerically exact derivatives of arbitrary functions. Its use in many computational applications, including machine learning, is highlighted to students. Subsequently, root-finding algorithms (mainly Newton-Raphson and fixed-point iterations) and optimisation are discussed. At this stage, students are expected only to be aware of the existence of these methods, how they are implemented (conceptually), and their advantages and limitations. Students should recognise that errors arising during the implementation of thermodynamic models and methods could originate from the numerical methods used and not just from a limitation in their thermodynamics knowledge.
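To make these ideas concrete, a minimal sketch of a Newton-Raphson solver driven by automatic differentiation (via the ForwardDiff.jl package) is shown below; it is purely illustrative and is not the exercise distributed with the course notebooks.

```julia
using ForwardDiff

# Newton-Raphson iteration with the derivative supplied by automatic
# differentiation rather than a hand-coded expression.
function newton(f, x0; tol=1e-12, maxiter=100)
    x = x0
    for _ in 1:maxiter
        fx = f(x)
        abs(fx) < tol && return x
        x -= fx/ForwardDiff.derivative(f, x)
    end
    return x
end

# Toy usage: molar volume of an ideal gas at 1 bar and 298.15 K, found by
# solving p(V) - p0 = 0; the same pattern reappears in the volume solvers
# of section 4.3.1.
R, T, p0 = 8.314, 298.15, 1.0e5
V = newton(v -> R*T/v - p0, 1e-3)    # ≈ 0.0248 m³/mol
```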
With the exception of a short exercise on how to use automatic differentiation in Julia, this section of the course was delivered through PowerPoint slides only. The remainder of the allocated lecture time was spent ensuring all students had working notebooks, to allow them to actively take part in the remaining two sections.
### Section 2: Implementing thermodynamic models
In this section five classes of thermodynamic models are discussed; these are intended to cover the range of models that are typically encountered in academia or industry. The order in which they are taught was chosen so as to start with more-familiar approaches (ideal gas and ideal solution) and gradually approach more-advanced models such as SAFT and empirical equations of state.
#### 4.2.1 Ideal-gas equation
We felt that students would be most comfortable starting with something with which they should all be familiar; the obvious choice is therefore the ideal-gas equation,
\[pV=nRT, \tag{1}\]
where \(p\) is pressure, \(V\) is volume, \(n\) is the number of moles, \(R\) is the universal gas constant and \(T\) is temperature, we quickly highlight that the equation alone does not constitute the full picture of an ideal gas. By integrating (1) with respect to volume in order to obtain the Helmholtz free energy, \(A\),
\[A=-\int pdV=-nRT\ln V+c(n,T), \tag{2}\]
we notice that there are missing contributions corresponding to the translational, rotational and vibrational modes of motion. These contributions are vital when obtaining real properties such as heat capacities and speeds of sound, particularly in the gas phase [29]. We illustrate that these missing contributions can be obtained using ideal-heat-capacity correlations, with which students may already be familiar from mass- and energy-balances. The importance of these contributions is highlighted in the case of real gases (shown in the embedded chart in figure 1). Although there are no exercises in this section, students are invited to vary the species and conditions for which this chart is generated in order to appreciate the importance of these contributions.
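The point can be made concrete in a few lines of Julia: differentiating the incomplete Helmholtz function of equation 2 twice with respect to temperature yields a zero isochoric heat capacity, whereas including the translational (monatomic) ideal-gas contribution recovers \(C_{V}=\tfrac{3}{2}nR\). The sketch below is illustrative only; all constant terms have been dropped since they do not affect the temperature derivatives.

```julia
using ForwardDiff

# A = -nRT ln V alone gives Cv = -T (∂²A/∂T²)_V = 0, i.e. no heat capacity;
# adding the translational ideal-gas contribution, whose thermal part behaves
# as -(3/2) nRT ln T, recovers Cv = (3/2) nR. (Constants omitted throughout.)
R, n, V = 8.314, 1.0, 0.024
A_incomplete(T)    = -n*R*T*log(V)
A_translational(T) = -n*R*T*(log(V/n) + 1.5*log(T))

Cv(A, T) = -T*ForwardDiff.derivative(t -> ForwardDiff.derivative(A, t), T)

Cv(A_incomplete, 300.0)      # 0.0
Cv(A_translational, 300.0)   # ≈ (3/2)R ≈ 12.5 J/(mol·K)
```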
Figure 2: Course structure summary with bullets providing a brief description of each subsection. Dotted purple arrows denote concepts carried over between subsections of the course.
Enabled by the simplicity of the ideal-gas model, we take the opportunity to introduce students to other important concepts in computational thermodynamics. For instance, while correlations provide simple models to address typically complex properties, one does need to be careful to only use these for _interpolation_ as the accuracy of these models can degrade quickly when _extrapolating_. We also introduce students to the concept of group-contribution approaches, which can be very useful when trying to model species for which no (molecular) parameters exist. While such approaches are more common in relation to other models discussed later in the course, it is helpful to first introduce these approaches in the context of a more-familiar model.
#### 4.2.2 Activity coefficient models
Although some students may already be familiar with the ideal-solution model, we begin this section by describing the assumptions implicit in this approach, and how they lead to the more-familiar Raoult's law,
\[p_{i}=x_{i}p_{\text{sat},i} \tag{3}\]
where \(p_{i}\), \(p_{\text{sat},i}\) and \(x_{i}\) are the partial pressure, saturation pressure and molar composition of species \(i\), respectively. It is then easy to introduce the concept of activity coefficients (\(\gamma_{i}\)) as deviations from the ideal-solution model, motivating the interest in activity-coefficient models. We also highlight some of the limitations of activity-coefficient models, primarily being that they can only be used to obtain mixture properties.
Having now established the ideal-solution model, it is then simple to explain the local-composition model, from which all the activity models we present in this section are derived. We use the Wilson model[30] where the key interaction parameter, \(\Lambda_{ij}\), between two species \(i\) and \(j\) is given by:
\[\Lambda_{ij}=\frac{v_{j}}{v_{i}}\exp\left(-\frac{\Delta\lambda_{ij}}{RT}\right) \tag{4}\]
where \(v_{i}\) is the molar volume of species \(i\) and \(\Delta\lambda_{ij}\) is the interaction energy between \(i\) and \(j\). We then highlight the two primary effects captured by local-composition models: size effects (\(v_{j}/v_{i}\)) and enthalpic effects (\(\Delta\lambda_{ij}\)). As a result, it is straightforward to explain the assumptions that underlie the other models (NRTL[31] and UNIQUAC[32]) discussed in this section, allowing us to focus on the advantages and limitations of each. One of the key messages we highlight is that, to decide which of these approaches is 'best' for a given application, one needs to compare the predictions to experimental data.
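To illustrate how compact such a model is in code, a minimal multicomponent Wilson implementation is sketched below; the \(\Lambda\) matrix shown is a placeholder rather than a set of fitted parameters.

```julia
# Multicomponent Wilson model:
#   ln γᵢ = 1 - ln(Σⱼ xⱼΛᵢⱼ) - Σₖ xₖΛₖᵢ / (Σⱼ xⱼΛₖⱼ),
# with Λᵢⱼ built from eq. (4). Setting every Λᵢⱼ = 1 recovers the ideal
# solution (all γᵢ = 1).
function wilson_lngamma(x, Λ)
    N = length(x)
    S = [sum(x[j]*Λ[k, j] for j in 1:N) for k in 1:N]     # Σⱼ xⱼ Λₖⱼ
    return [1 - log(S[i]) - sum(x[k]*Λ[k, i]/S[k] for k in 1:N) for i in 1:N]
end

# Placeholder binary parameters (illustrative only, not fitted values):
Λ = [1.0  0.45;
     0.85 1.0]
lnγ = wilson_lngamma([0.3, 0.7], Λ)
γ = exp.(lnγ)
```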
Finally, as we have already introduced the concept of group-contribution approaches in relation to ideal-gas models, it is straightforward to now introduce the student to perhaps the most-commonly used activity coefficient model: UNIFAC[33]. We also highlight another issue students may encounter when using thermodynamic models, particularly UNIFAC: distinct models that are referred to by the same name in the literature, exemplified here by the two versions of UNIFAC (the 'original' and Dortmund versions). We emphasise the importance of verifying the source of parameters, as well as their predictive accuracy.
#### 4.2.3 Cubic equations of state
As cubic equations of state are the most-common class of thermodynamic models used in industry, we dedicate a significant portion of section two to them. To give some physical insight into these equations, we
Figure 3: Example of exercise given in the cubic equation of state section. Each student is tasked with completing the code at the commented sections. A textbox below the code block will dynamically update based on the student’s progress.
begin with a phenomenological discussion of the van der Waals equation[34], and remind students how the parameters can be obtained from the critical pressure and temperature of any species, allowing for a wide range of applicability. From here, we highlight the limitations of the van der Waals equation and how the subsequent cubic equations of state represent improvements. For conciseness, we introduce only the Redlich-Kwong (RK)[35], Soave-Redlich-Kwong (SRK)[36], and Peng-Robinson (PR)[37] equations but do point out that many others have been developed. In the case of the latter two equations of state, we also introduce the concept of an \(\alpha\)-function and its relationship to the acentric factor (\(\omega\)). We then emphasise one of the key advantages of cubic equations of state: they have a simple and universal form,
\[p=\frac{nRT}{V-nb}-\frac{n^{2}a\alpha(T)}{(V+r_{1}nb)(V+r_{2}nb)}\,, \tag{5}\]
where \(\alpha(T)\) is the \(\alpha\)-function and \(a\) and \(b\) are substance-specific parameters. The coefficients \(r_{1}\) and \(r_{2}\) are equation of state-specific. We then give students the opportunity to implement their own generalised cubic equation of state. We provide them with a code template with parts left incomplete (shown in figure 3). They are then asked to define the coefficients \(r_{1}\) and \(r_{2}\) for the van der Waals, SRK and PR equations of state and write out the residual Helmholtz free energy corresponding to equation 5, accounting for each equation of state. Aside from just gaining experience in coding, there are implicit learning objectives to this exercise. In implementing a generalised cubic equation of state, the students make use of the multiple-dispatch feature in Julia and, in order to ensure that the automatic derivatives of the Helmholtz free energy are correct, they will have to explicitly define all the state variables (forgetting to do so is an easy mistake to make when implementing any equation of state). A text box is added below the code block which provides real-time feedback based on their current implementation and provides hints based on what part of their code is incorrect. This feature arises from the reactive nature of Pluto notebooks.
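A possible shape of the completed exercise is sketched below; this is not the solution distributed in the notebooks, and the \((r_{1},r_{2})\) pairs follow directly from equation 5.

```julia
R = 8.314

# Generalised cubic of eq. (5); the pair (r1, r2) selects the equation of
# state: van der Waals (0, 0), SRK (0, 1), PR (1 + √2, 1 - √2).
pressure(V, T, n, a, b, α, r1, r2) =
    n*R*T/(V - n*b) - n^2*a*α/((V + r1*n*b)*(V + r2*n*b))

# Residual Helmholtz free energy, A_res = -∫ (p - p_ideal) dV from ∞ to V;
# the van der Waals case (r1 = r2 = 0) requires the limiting form.
function A_res(V, T, n, a, b, α, r1, r2)
    repulsive  = n*R*T*log(V/(V - n*b))
    attractive = r1 == r2 ? -n^2*a*α/V :
                 n*a*α/(b*(r2 - r1))*log((V + r1*n*b)/(V + r2*n*b))
    return repulsive + attractive
end

# Example call using the PR form with placeholder a, b, α values:
pressure(3.0e-4, 350.0, 1.0, 0.40, 2.7e-5, 1.0, 1 + √2, 1 - √2)
```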
Subsequently, we illustrate one of the key benefits of cubic equations of state: modularity. We highlight how one can improve the modelling capabilities of cubic equations of state by replacing the \(\alpha\)-functions and adding a volume-translation correction. The impact on real properties is also illustrated with multiple examples.
Having covered how one can use cubic equations of state to model pure species, we now introduce students to the concept of mixing rules to model mixtures of species. We illustrate how these mixing rules map the pure-component parameters to mixture parameters in order to represent the mixture. We cover two classes of mixing rules. The first is the standard van der Waals one-fluid mixing rule, which is perhaps the most common one used in industry. It is emphasised that this mixing rule is generally inaccurate without introducing a binary interaction parameter, which typically needs to be fitted to experimental data. The second class of mixing rule we introduce is that of the EoS/\(G^{E}\) mixing rules. The previously-established effectiveness of activity models in modelling mixture phase equilibrium is used to motivate the concept of mixing rules which 'pair' an activity-coefficient model with a cubic equation of state (such as the Huron-Vidal[40] and Wong-Sandler[41] mixing rules).
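The first of these classes is simple enough to sketch in a few lines; the parameter values below are placeholders for a hypothetical binary mixture rather than fitted values.

```julia
# van der Waals one-fluid mixing rules:
#   a_mix = Σᵢ Σⱼ xᵢ xⱼ √(aᵢaⱼ)(1 - kᵢⱼ),   b_mix = Σᵢ xᵢ bᵢ,
# where kᵢⱼ is the binary interaction parameter fitted to experimental data.
function vdw1f_mixing(x, a, b, k)
    N = length(x)
    a_mix = sum(x[i]*x[j]*sqrt(a[i]*a[j])*(1 - k[i, j]) for i in 1:N, j in 1:N)
    b_mix = sum(x .* b)
    return a_mix, b_mix
end

# Placeholder parameters (illustrative only):
x = [0.4, 0.6]
a = [0.37, 1.50]
b = [3.0e-5, 9.0e-5]
k = [0.0 0.05; 0.05 0.0]
a_mix, b_mix = vdw1f_mixing(x, a, b, k)
```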
Finally, uniting all of the previous concepts, we introduce the final and most-modern class of cubic equations of state, the predictive cubics[38, 42]. These equations of state combine everything the students will have encountered up until now: \(\alpha\)-functions; volume-translation corrections; and mixing rules. We highlight that these represent the gold-standard for cubic equation of state modelling in industry. However, to emphasise the importance of validating these models against experimental data, we challenge students to try to develop their own cubic equation of state, using all the tools they have learnt up until now, to improve upon an existing
Figure 4: Pressure–composition \((p,xy)\) diagram of benzene+methanol at 433.15 K obtained using PSRK and a custom cubic equation of state.[38, 39] Each student is invited to attempt to surpass the accuracy of PSRK by developing their own cubic equation of state, varying the \(\alpha\)-function, mixing rule and activity coefficient model.
predictive cubic equation of state. As shown in figure 4, obtaining a better model is not necessarily challenging with the tools students have at their disposal. Note that this figure and, indeed, all the subsequent charts presented in this and the following section, is shown here exactly as seen by the students in the Pluto notebook.
#### 4.2.4 SAFT equations of state
The Statistical Associating Fluid Theory (SAFT)[46, 47] is not typically covered at any level within chemical-engineering curricula, primarily due to the complexity of its derivation. However, in the past few years, SAFT equations of state have been shown to describe experimental data more accurately for a larger range of properties and species than cubic equations of state. As part of this course, we introduce students to SAFT equations of state by providing a very high-level description of how these equations represent species. Starting from the van der Waals equation, we highlight the improvements made with each term added within the SAFT equation. Before going into the individual SAFT variants, we summarise the overall SAFT approach. Some details are provided on the implementation of SAFT equations, primarily the association term which requires special treatment in order to be used. We believe that a simple, high-level understanding is sufficient for students to confidently use SAFT equations in their own work; should they want to find out more, we provide references and invite them to take an in-depth statistical-thermodynamics course provided within the college.
The remainder of this section is focussed on introducing the more-common SAFT equations of state, explaining their differences at a high level and giving their respective advantages and disadvantages. We emphasise that, as with the activity-coefficient models, to decide which approach is best, one needs to compare predictions to experimental data. However, in the case of SAFT equations, as shown in figure 5, a lot of the model predictions are of comparable accuracy. As a final point, we add that there is another factor to take into account when considering whether to use a SAFT equation: with their increased complexity, the computational cost of these approaches becomes significant, meaning that the students would need to weigh the added accuracy of these models with this additional cost.
#### 4.2.5 Empirical equations of state
The last class of equations of state we cover in this course is empirical equations of state. Like the ideal-heat-capacity correlations introduced in section 4.2.1, these approaches are purely intended for interpolations. However, unlike the approaches discussed previously, while limited to relatively few systems, these equations are extremely accurate for the range of conditions in which they were regressed. As such, if students ever need to model a substance for which there already is an empirical equation of state, they might be better off using this, instead of one of the approaches discussed hitherto. As there is a large range of these models, we only provide two examples: IAPWS-95[48], for water, and GERG-2008[45], for liquified natural-gas systems. We use these to illustrate the only real limitation of these models: extrapolation beyond the conditions over which they were
Figure 5: Vapour–liquid envelope of methanol obtained using different equations of state[21, 37, 43, 44, 45], demonstrating the significant improvement of SAFT equations over cubics, but the minimal difference between individual SAFT-type equations.
parameterised. In short, this subsection is intended to make students aware that these empirical equations of state exist and in what cases they might wish to use them.
### Section 3: Employing thermodynamic models
Having now developed a good understanding of the equations of state and thermodynamic models, in this section students are introduced to the computational methods used to obtain more-familiar and useful properties. One of the changes made in content presentation and delivery in this section, relative to section two, is that larger code blocks are left visible for the students to examine, given that the focus is now on the numerical implementation rather than just a high-level understanding.
#### 4.3.1 Volume Solvers
The simplest and perhaps most-useful method with which the students should be familiar is that for obtaining the volume (\(V\)) of a system at a given temperature (\(T_{0}\)), pressure (\(p_{0}\)) and composition (\(\mathbf{n}_{0}\)):
\[p(V,T_{0},\mathbf{n}_{0})=p_{0}\,. \tag{6}\]
In the first instance, for simplicity of illustration, we focus on using the van der Waals equation of state, although we also highlight that any cubic equation of state could be used. The power of cubic equations of state becomes immediately apparent as equation 6 can be re-arranged to solve for the roots of a cubic polynomial:
\[Z^{3}-\left(1+\frac{bp_{0}}{RT_{0}}\right)Z^{2}+\left(\frac{ap_{0}}{(RT_{0})^ {2}}\right)Z-\frac{abp_{0}^{2}}{(RT_{0})^{3}}=0\,, \tag{7}\]
where \(Z=p_{0}V/(n_{0}RT_{0})\) is the compressibility factor. The above equation can be solved analytically, obtaining all three roots. In the case where there is only one real root, no further considerations need to be made. However, if more than one real root is obtained, the most stable root is selected as that which corresponds to the lowest Gibbs free energy. The results from this procedure are demonstrated within a reactive (\(p,V\)) diagram of carbon dioxide; students are invited to vary the temperature of the plotted isotherm (as shown in figure 6a) and observe the resulting changes in the diagram. The metastable and unstable sections of the isotherm are included so as to illustrate the pressure construction (dashed black line) where the fluid will phase split.
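A minimal sketch of this procedure for the van der Waals equation is shown below; the roots of the monic cubic are obtained here from the eigenvalues of its companion matrix, and the carbon-dioxide parameters are approximate values used purely for illustration.

```julia
using LinearAlgebra

# Compressibility-factor roots of the van der Waals cubic, eq. (7), with
# A = a·p₀/(R·T₀)² and B = b·p₀/(R·T₀). When several real roots exist, the
# most stable one minimises ln φ = Z - 1 - ln(Z - B) - A/Z (lowest Gibbs
# free energy at the specified T₀ and p₀).
function vdw_Z(p0, T0, a, b)
    R = 8.314
    A, B = a*p0/(R*T0)^2, b*p0/(R*T0)
    c0, c1, c2 = -A*B, A, -(1 + B)
    roots = eigvals([0 0 -c0; 1 0 -c1; 0 1 -c2])   # companion-matrix roots
    Z = [real(z) for z in roots if abs(imag(z)) < 1e-10 && real(z) > B]
    lnϕ(z) = z - 1 - log(z - B) - A/z
    return Z[argmin(lnϕ.(Z))]
end

# Assumed vdW parameters for carbon dioxide (a ≈ 0.366 Pa·m⁶/mol², b ≈ 4.3e-5
# m³/mol) at 300 K and 50 bar:
Z = vdw_Z(50e5, 300.0, 0.366, 4.3e-5)
V = Z*8.314*300.0/50e5      # molar volume
```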
We note that this problem only becomes more challenging when using equations of state with more-complex functional forms. We consider the SAFT equations of state, which must be solved numerically. However, instead of solving equation 6, we remind students of the definition of the isothermal compressibility, \(\beta_{T}\):
\[\beta_{T}=-\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_{T}\,; \tag{8}\]
if we assume that \(\beta_{T}\) is constant, and integrate between states 1 and 2, we obtain:
\[\ln V_{2}=\ln V_{1}-\beta_{T}(p_{2}-p_{1})\,. \tag{9}\]
Figure 6: Key illustrations used when discussing volume solvers within the Pluto notebooks.
If one were to set \(p_{2}=p_{0}\), \(\beta_{T}=\beta_{T}(V_{1})\) and \(p_{1}=p(V_{1})\), then \(\ln V_{2}\) would be our next-best guess for the volume, starting from some initial guess \(V_{1}\), resulting in a recursive relationship:
\[\ln V_{i+1}=\ln V_{i}-\beta_{T}(V_{i})\cdot(p_{0}-p(V_{i}))\,. \tag{10}\]
We point out that this iterative scheme is equivalent to solving equation 6 with respect to the logarithm of the volume. It is preferable to solve equation 6 with this approach as it offers greater numerical stability, particularly in the liquid phase. However, as with any iterative scheme, we need good initial guesses for each phase. For the vapour phase, the natural choice would simply be the volume of an ideal gas at the same conditions:
\[V_{0}=\frac{n_{0}RT_{0}}{p_{0}}\,. \tag{11}\]
However, the situation for the liquid phase is more complicated. Thankfully, and the reason why we choose to use the SAFT equations as an example here, we can use physical intuition to obtain an initial guess for the liquid volume. We know that the minimum volume taken up by a species will be at least the summed total volume of the molecules. In the case of SAFT equations, the total volume taken up by one mole of a pure species is given by:
\[V_{\text{min.}}=\frac{\pi}{6}N_{\Lambda}m\sigma^{3}\,, \tag{12}\]
where \(N_{\Lambda}\) is Avogadro's number and the parameter \(\sigma\) represents the diameter of each of the \(m\) spherical segments comprising the model molecule. One can then set the initial guess of the liquid-phase volume to be some (low) multiple of this lower bound (we choose a multiple of 1.25). We do highlight that optimising this initial guess is of importance when accelerating the convergence of the algorithm (as demonstrated in figure 6b). Naturally, one must solve for both the vapour and liquid roots to then determine which of the two phases is stable.
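A minimal sketch of this iterative scheme is given below. Any pressure function can be supplied; the course applies the scheme to SAFT models, whereas a Peng-Robinson \(p(V)\) with approximate carbon-dioxide-like parameters is used here purely for illustration (the values of \(a\), \(b\) and \(\alpha\) below are assumptions, not reference data).

```julia
using ForwardDiff

# Iterative volume solver of eqs. (8)-(12): updates on ln V with β_T
# evaluated from an automatic derivative of the supplied p(V).
function solve_volume(p, p0, V0; tol=1e-12, maxiter=100)
    lnV = log(V0)
    for _ in 1:maxiter
        V  = exp(lnV)
        βT = -1/(V*ForwardDiff.derivative(p, V))   # eq. (8)
        Δ  = -βT*(p0 - p(V))                       # eq. (10)
        lnV += Δ
        abs(Δ) < tol && break
    end
    return exp(lnV)
end

R, T, p0 = 8.314, 280.0, 30e5
a, b, α = 0.40, 2.67e-5, 1.06                      # illustrative values only
p(V) = R*T/(V - b) - a*α/((V + (1 + √2)*b)*(V + (1 - √2)*b))

V_vap = solve_volume(p, p0, R*T/p0)    # ideal-gas initial guess, eq. (11)
V_liq = solve_volume(p, p0, 1.25*b)    # ~1.25x the co-volume, cf. eq. (12)
```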
We finish this exercise by pointing out that for more-complicated equations of state, such as the empirical equations of state, there are sometimes more than three roots at a given pressure and temperature. In such cases, it is important to ensure that all roots have been found before deciding which is most stable, as there is no guarantee that our initial guesses will lead to the two most-stable solutions.
#### 4.3.2 Saturation solvers for pure components
One of the most-important classes of property for chemical engineers to obtain from equations of state is the saturation properties of pure components. Solving for the saturation conditions at a given temperature is more complicated than the volume solvers mentioned previously as it involves solving for the volume (\(v^{\text{liq.}}\) and \(v^{\text{vap.}}\)) of each phase using the conditions for phase equilibrium between two phases:
\[p^{\text{liq.}}-p^{\text{vap.}} =0\,,\] \[\mu^{\text{liq.}}-\mu^{\text{vap.}} =0\,, \tag{13}\]
where the superscripts liq. and vap. denote properties relating to the liquid and vapour phases, respectively. Inherently, this problem amounts to simply using the Newton-Raphson method to converge the volumes, allowing us to illustrate important concepts when solving such problems. For example, we re-iterate the importance of initial guesses, using the approaches we developed in the volume solver, or using empirical correlations. In addition, we also highlight the value of re-scaling the variables such that the numerical solver is more stable (including solving for the logarithm of the volumes rather than the absolute values, and normalising equation 13 by the characteristic length scales of pressure and energy).
Students will now be able to trace the saturation curve. However, we point out that, for some equations of state, such as SAFT-type equations, the end-point of the saturation curve, i.e., the critical point, isn't known _a priori_; this must be solved for numerically. We remind students of the definition for the critical point of a pure substance:
\[\left(\frac{\partial p(v,T)}{\partial v}\right)_{T} =0\,,\] \[\left(\frac{\partial^{2}p(v,T)}{\partial v^{2}}\right)_{T} =0\,. \tag{14}\]
Students can re-use the tricks we highlighted in relation to the saturation solver to solve the above system of equations. As a result, the student can now trace the entire vapour-liquid envelope.
#### 4.3.3 Chemical Stability Analysis
While the previous exercises pertained to single-component systems, for most applications, chemical engineers will be dealing with mixture systems. The phase space of mixtures becomes more complex as we introduce wide regions where the system may exist in two or more phases. As such, if we wish to study the properties of a system at a given set of conditions \((p_{0},T_{0},\mathbf{z}_{0})\), we must first answer the question: "Does a phase split occur?". This problem is non-trivial and involves carrying out chemical-stability analysis. The standard approach was developed by Michelsen[49] and involves minimising what is referred to as the Gibbs tangent plane distance (\(TPD\)):
\[TPD(\mathbf{x})=\sum_{i}x_{i}(\mu_{i}(\mathbf{x})-\mu_{i}(\mathbf{z}_{0}))\,, \tag{15}\]
where \(x_{i}\) is the mole fraction of species \(i\) in a candidate phase. The function is demonstrated visually in figure 7a. If \(TPD<0\), then we know that a phase split will occur.
Once again, we take the students through the process of re-defining the problem in a way that is more numerically stable. We also take the opportunity to illustrate that, unless a global optimiser is used, there is a chance that the chemical-stability analysis might fail. As shown in figure 7b, if a local optimiser is used, conditions near the black marker would not be seen as unstable as, despite being locally stable, there exists a tangent plane with a lower Gibbs free energy (denoted by the dashed line in the figure).
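The essence of the test can be sketched in a few lines for a liquid described by an activity-coefficient model, for which the reduced tangent plane distance becomes \(tpd(\mathbf{x})=\sum_{i}x_{i}\left[\ln(x_{i}\gamma_{i}(\mathbf{x}))-\ln(z_{i}\gamma_{i}(\mathbf{z}))\right]\). The two-suffix Margules model and its parameter value below are illustrative choices only, chosen to be strong enough to drive a liquid-liquid split.

```julia
# Two-suffix Margules model: ln γ₁ = A x₂², ln γ₂ = A x₁² (A = 2.6 is a
# made-up value); tpd(x) = Σᵢ xᵢ[ln(xᵢγᵢ(x)) - ln(zᵢγᵢ(z))].
lnγ(x, A) = (A*x[2]^2, A*x[1]^2)

function tpd(x, z, A)
    γx, γz = lnγ(x, A), lnγ(z, A)
    return sum(x[i]*(log(x[i]) + γx[i] - log(z[i]) - γz[i]) for i in 1:2)
end

# Coarse scan over trial compositions: any negative value means the feed z
# is unstable and will split into two liquid phases.
z, A = [0.5, 0.5], 2.6
trial = 0.01:0.01:0.99
unstable = any(tpd([w, 1 - w], z, A) < 0 for w in trial)
minimum(tpd([w, 1 - w], z, A) for w in trial)     # most negative tpd found
```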
#### 4.3.4 Isothermal Flash Problem
Having now established when a phase split will occur, the next step is to determine what the various phases will be. Balancing both simplicity and rigour, we choose to focus solely on the isothermal flash problem, the backbone of multiple vital processes in chemical engineering.
To build up the necessary tools to perform the flash calculations, we derive the relationship between \(K\)-factors and fugacity coefficients from the definition for chemical equilibrium. From here, we impose the mass-balance equation. As a result, we obtain the venerable Rachford-Rice[50] equation:
\[f(\beta)=\sum_{i}\frac{(K_{i}-1)z_{i}}{1+\beta(K_{i}-1)}=0\,, \tag{16}\]
where \(\beta\) is the phase fraction and \(K_{i}\) is the \(K\)-factor for species \(i\). Solving equation 16 is at the heart of almost all flash algorithms. As such, we task the students with solving the Rachford-Rice equation. In introducing the problem, we highlight one of the benefits of this equation: it is monotonically decreasing. The implementation of the solver is left to the students to fill-in, within the sample code. The exercise is presented the same way as the previous exercise shown in figure 3.
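A minimal sketch of such a solver is given below; the feed composition and \(K\)-factors are made-up values used purely for illustration, and the bounds assume at least one \(K\)-factor above and one below unity (i.e. a genuine two-phase problem).

```julia
# Rachford-Rice solver for eq. (16). f(β) is monotonically decreasing, so
# Newton's method with the analytic derivative converges rapidly; the update
# is clamped to the pole-free interval of f.
function rachford_rice(z, K; tol=1e-12, maxiter=100)
    f(β)  =  sum(z[i]*(K[i] - 1)/(1 + β*(K[i] - 1)) for i in eachindex(z))
    df(β) = -sum(z[i]*(K[i] - 1)^2/(1 + β*(K[i] - 1))^2 for i in eachindex(z))
    βlo, βhi = 1/(1 - maximum(K)) + 1e-10, 1/(1 - minimum(K)) - 1e-10
    β = 0.5
    for _ in 1:maxiter
        Δ = f(β)/df(β)
        β = clamp(β - Δ, βlo, βhi)
        abs(Δ) < tol && break
    end
    return β
end

# Made-up feed and K-factors (a single flash iteration; in a full flash the
# K-factors would then be updated from the new phase compositions):
z, K = [0.4, 0.4, 0.2], [2.5, 0.8, 0.3]
β = rachford_rice(z, K)
x = z ./ (1 .+ β .* (K .- 1))    # liquid composition
y = K .* x                       # vapour composition
```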
Students will thus be able to obtain the phase fraction, for a given set of \(K\)-factors, representing the first iteration of the flash algorithm. For most real systems, \(K\)-factors depend on the composition in each phase. Accordingly, once we've solved for the phase fraction, we must update the \(K\)-factors and verify if the composition
Figure 7: Key illustrations used when discussing chemical stability analysis within the Pluto notebooks.
of each phase has changed significantly. If so, we must solve the Rachford-Rice equation again with the updated \(K\)-factors, leading to an iterative scheme.
While the implementation we present uses a simple successive-substitution iterative scheme, we highlight that more-advanced flash algorithms, such as the Michelsen flash algorithm[49], use methods such as Newton-Raphson or Anderson Acceleration to improve the convergence (shown visually in figure 8b). We highlight that, while these algorithms may require fewer iterations to converge, it is important to consider the computational cost of each iteration as a Newton-Raphson step can be significantly more costly than an Anderson Acceleration step.
By this point, students will have a strong foundation in flash algorithms, allowing them to comfortably explore the literature on this topic at their own leisure. We provide multiple references on more-advanced flash algorithms for them to explore independently.
## 5 Course pilot: results and feedback
A remote pilot run of the course took place in the summer of 2022 with 16 participants. As the course is intended for students who have already had at least one introductory thermodynamics class, both undergraduate (beyond second year) and graduate students were invited to participate. A pre-course and a post-course survey were delivered. Both surveys were vetted and approved by the Ethics Research Committee at Imperial College London and were anonymised with the assistance of the _StudentShapers_ programme[51]. Some questions presented in the second survey were repeats of questions presented in the first, so as to evaluate the change in student self-efficacy. The pre-course survey contained questions designed to determine students' general coding proficiency prior to taking the course. The post-course survey contained more questions related to the students' experience and thoughts on the course. Open-ended questions were also added to the post-course survey in order to allow students to fully express any feedback they might have.
### Introduction and code familiarisation
One of the main factors motivating the decision to use the Julia language and Pluto notebooks for this course was to reduce the friction students encounter when coding and/or using new software, which had been an issue in
Figure 8: Key illustrations used when discussing the isothermal flash problem within the Pluto notebooks.
Figure 9: Summary of student response when asked to qualify their confidence in programming using different languages.
prior work [15]. When asked at the start of the course how confident students felt in programming with the MATLAB, Python, and Julia languages, the majority were either not at all or only slightly confident with using Julia, as shown in figure 9. In comparison, students were mostly quite or extremely confident in using MATLAB and/or Python. The relative lack of confidence in using Julia is not surprising considering that it is a newer language and many students would not have encountered its use. In anticipation of students' lack of familiarity, a detailed set of instructions for installing and using the code environment was prepared and made available on the Github repository. We also offered a one-hour diagnostic and troubleshooting session prior to the course - which, surprisingly, no student attended. In contrast to our previous experience with Python, students mostly did not find either setting up the environment or learning the syntax of Julia to be a significant activation barrier (see figure 9). The latter may not be surprising as the syntax used in MATLAB (the language with which most students were familiar) is quite similar to that used in Julia. The former speaks to the simplicity with which the Julia package manager, and Pluto, abstract away the complexities typically experienced when setting up a coding environment, demonstrating the benefits of using Julia in an educational setting. One student commented on the similarity between MATLAB and Julia, as well as their experience with the Pluto notebooks:
_"I enjoyed the interactive notebooks which allowed for a better learning experience. Reminded me of MATLAB on-ramp course, which was a good introduction/helper course."_
The lack of documentation, which was not a significant issue with Python, has become more of a barrier. This is primarily due to how comparatively young Julia is as a language. As a result, packages in Julia often have less comprehensive documentation and support than their Python/MATLAB equivalents. Other common sources of help for coders, such as online forums like StackOverflow [52, 53] and blog posts, are similarly less developed for Julia. Nonetheless, we believe that this issue will improve over time considering the rapid development and adoption of Julia by many groups such as educators and researchers. Another limitation of introducing students to coding, which has persisted from our prior work and is also highlighted in figure 10, is the lack of aim / problems to solve. This is something we hope to have resolved with this and future courses we develop.
### Content and structure
Overall, the feedback for the course content and structure was quite positive. This is summarised in figure 11. As we can see, the main area for improvement is the instructions provided prior to the course, which came in the form of an e-mail providing information and links to the online repository where students could obtain more details on setting up the coding environment. Had the course been held in-person, perhaps it would have been more instructive to formally set up an in-person session for this. However, we were restricted by the remote delivery of the course. Nevertheless, as demonstrated in the previous section, this did not seem to be a significant issue as most students successfully set up the coding environment and became familiarised with the syntax quickly, despite over half (\(N=8\)) not using code frequently.
As evident from figure 11, the arrangement of content and purpose of each section was very well received, highlighting the value of using progression as a guiding principle in the course development[28]. We are particularly happy that most students agreed that the examples given were relevant to chemical-engineering
Figure 10: Student ratings of aspects of programming in the Julia language which present an activation barrier. Symbols represent individual responses and solid lines represent the mean response.
problems. To this end, one of the more-emblematic responses we received from students, after being asked which part of the course they enjoyed most, was the following:
_"The history of EoS as it really makes you understand how far we have come."_
This was indeed one of the main motivations for the development of this course: bringing students up-to-speed with the modern class of equations of state available today[4]. This is made even more apparent in figure 12, which illustrates that, when students were asked whether or not the course improved their confidence with regards to mastering and applying the concepts of computational thermodynamics, nearly all reported an increase. Based on one student's feedback:
_"...the code template made it easy to actually solve the exercises despite lacking the necessary knowledge to set up the problem in code.",_
It appears that the integration of code templates and explanatory text was effective in helping students develop a good understanding of computational thermodynamics, despite not necessarily having a strong computational background. Interestingly, one of the pieces of feedback we received from students was that we
Figure 11: Student responses to various aspects of the course delivery.
Figure 12: ‘Before-and-after” student responses related to their self-perceived confidence in aspects of computational thermodynamics before and after the course.
_"Could have more activity sessions..."_. The number of activity sessions (reactive exercises) was restricted to just two since we intended to develop an accelerated introduction to computational thermodynamics. However, to compensate for this, we embedded many examples within the Pluto notebooks, so that students might have the opportunity to examine the code after the course, by which time they would have a stronger computational-thermodynamics background. Nevertheless, clearly the students felt that, when interacting with code, exercises were more effective as they felt more comfortable when under the supervision of the teaching staff.
Another improvement that was suggested related to the pace of the course. With three, three-hour long lectures (including two ten-minute breaks), one per day, it was apparent each day that by the final hour, students had a harder time concentrating. As such, in future implementations of the course, it would be advised to extend the duration to five or six sessions in order to spread out the material and give more opportunities for questions and interaction with the Pluto notebooks.
## 6 Conclusions
We have developed and rolled-out an introductory course on computational thermodynamics for both undergraduate and graduate students using the open-source thermo-fluids package, Clapeyron.jl, and the reactive notebooks provided by Pluto.jl. Through the course, students have been able to develop a sufficiently high level of competency in the use of a state-of-the-art thermodynamics software package and a familiarity with concepts and methods in computational thermodynamics. This result is demonstrated by the overall increase in students' self-efficacy and confidence. The course thus helps position students to become informed users and also obtain a sufficient foundation to engage with the thermodynamics literature. Two key components enable such rapid and effective learning: first, the use of progression as a guiding philosophy in course design, where content and exercises are structured to progressively reach the required level of complexity and sophistication; second, the use of the Julia language and reactive notebooks.
In a previous course developed by the authors [15], we used template Python scripts for code implementation. While the use of the Python language presented no issues due to its familiar syntax, setting up the computational environment was a "pain point" for students and the use of static scripts limited the opportunities for student engagement during the course. In this course, the use of the Julia language and Pluto notebooks successfully addressed these two issues while also not presenting any significant challenges from the perspective of introducing a new coding language. Setup was much more straightforward, even with none of the participants attending the pre-course trouble-shooting session. The use of reactive notebooks also enabled integration of course content with code in an environment that facilitates student engagement and exploration.
This course represents an important first step in addressing a pertinent gap in advanced thermodynamics instruction. Providing students with the opportunity to understand how thermodynamic theory is employed in modern applications is important for their own practice and also to accelerate the promulgation of developments in thermodynamics to the rest of the scientific and engineering community. The modular nature of the course content enables self-directed learners and other educators to pick and choose components that are relevant to them and also to easily develop new material for their own use/courses. However, due to the comparatively small number of students who participated in the pilot study, further evaluation is needed to determine whether the proposed course structure and delivery format will be effective for a larger cohort or if a similar quality of learning experience can be achieved when incorporating parts of the course into a larger thermodynamics module.
## Availability of Code
All the course notes and code can be found at the following repository: [https://github.com/ClapeyronThermo/introduction-to-computational-thermodynamics](https://github.com/ClapeyronThermo/introduction-to-computational-thermodynamics)
## Acknowledgments
The authors would like to thank the students who participated in this course and provided their valuable feedback. This project was supported by funding from StudentShapers (Imperial College London) to enable partnership with students. |
2302.04559 | Lattice Supersymmetry and Holography | Over the last twenty years, work based on lattice supersymmetry has generated
many new results and insights into the non-perturbative nature of string
theory, quantum black holes, and gravity. This endeavor is a broad research
program encompassing lattice field theory, supersymmetry, string theory, and
quantum gravity. In this volume, we look at a selected subset of the topics
covering recent progress in lattice supersymmetry and holography. | Anosh Joseph | 2023-02-09T10:50:29Z | http://arxiv.org/abs/2302.04559v1 | # Lattice Supersymmetry and Holography
###### Abstract
Over the last twenty years, work based on lattice supersymmetry has generated many new results and insights into the non-perturbative nature of string theory, quantum black holes, and gravity. This endeavor is a broad research program encompassing lattice field theory, supersymmetry, string theory, and quantum gravity. In this volume, we look at a selected subset of the topics covering recent progress in lattice supersymmetry and holography.
## 1 Introduction
Supersymmetric theories play a prominent role in our efforts to understand string theory, strong dynamics, quantum gravity, and various extensions to the Standard Model of particle physics. The holographic duality conjecture reveals the connections between theories of quantum gravity living on curved spacetimes and quantum field theories without gravity living on the boundaries of such spacetimes. This conjecture also provides promising directions for studying the nature of quantum gravity and black holes.
It is possible to describe certain black hole geometries in terms of the world-volume theories of the D-branes that compose them. These are the maximally supersymmetric Yang-Mills theories in various spacetime dimensions, with many colors, taken in the 't Hooft limit and at finite temperatures. These theories can be strongly coupled in the regimes in which they describe string theory backgrounds, including black holes. Solving these field theories at strong coupling and finite temperature would allow us to directly study the quantum properties of the dual black holes, including their thermodynamic features. As the 't Hooft coupling and the number of colors are reduced, classical and quantum string corrections become more prominent. These less understood limits can be investigated using a lattice formulation - a first-principles definition of quantum field theories. Simulations of lattice discretized field theories can validate the holographic duality conjecture and provide new insight into the non-perturbative structure of string theory and quantum gravity.
We hope that this special issue would be beneficial to beginning researchers and practitioners in string theory, quantum field theory, and lattice field theory who would want to contribute to this exciting interdisciplinary topic of lattice supersymmetry and its usefulness in testing and validating the holographic duality conjecture.
## 2 A Brief Overview of Contributions
Below we provide a brief overview of the contributions within this special issue.
In Ref. [1], David Schaich reviews the recent progress and near future prospects in lattice investigations of supersymmetric field theories and some of the challenges that remain to be overcome. He focuses on the progress in three areas: supersymmetric Yang-Mills (SYM) theories in fewer than four spacetime dimensions, four-dimensional \(\mathcal{N}=1\) SYM theory and maximally supersymmetric Yang-Mills theory in four dimensions. He also highlights supersymmetric QCD (SQCD) and the sign problem as significant challenges that will be important to address in future work.
In Ref. [2], Yuhma Asano outlines some basics of the matrix model conjecture and the gauge/gravity duality conjecture for the matrix models. He reviews various numerical evidence provided for the gauge/gravity duality conjecture for the BFSS and BMN matrix models and their flavored cousins, the Berkooz-Douglas (BD) and Kim-Lee-Yi (KLY) matrix models.
Masanori Hanada and Hiromasa Watanabe review the basic properties of partial deconfinement in Ref. [3] and discuss its applications. The confinement-deconfinement transition in gauge theory plays an important role in physics, including describing thermal phase transitions in the dual gravitational theory. In the scenario of partial deconfinement, there is an intermediate phase in which the color degrees of freedom split into the confined and deconfined sectors. The partially deconfined phase is dual to the small black hole that lies between the large black hole and graviton gas. A better understanding of partial deconfinement may explain how gravity emerges from the degrees of freedom of the field theory.
In Ref. [4], Gabriel Bliard, Ilaria Costa, and Valentina Forini review the lattice study of the Green-Schwarz gauge-fixed string action describing the worldsheet fluctuations about the minimal surface holographically dual to the null cusp Wilson loop. A numerical study of this system using the Monte Carlo method helps evaluate the cusp anomaly of \(\mathcal{N}=4\) super Yang-Mills. They comment on the discretization, numerical explorations, and challenges for the non-perturbative study of this benchmark model of gauge-fixed worldsheet actions.
In Ref. [5], Raghav Jha proposes additional tests of holography by studying supersymmetric Wilson loops in \(p+1\)-dimensional maximally supersymmetric Yang-Mills (SYM) theories on a lattice, taken in the large-\(N\) limit. In the dual gravity description, this computation involves calculating the area of a fundamental string worldsheet in certain Type II supergravity backgrounds. Even though thermodynamic observables have been computed on the lattice using Monte Carlo methods, and the results agree with the supergravity results in various dimensions, more needs to be done for the gauge-invariant operators, such as the Wilson loop. Jha provides analytical predictions for these loops for various non-conformal D\(p\)-brane background cases, with \(p<3\), in the large \(N\) limit. He also comments on how these can be computed on non-orthogonal lattices for various supersymmetric models.
Daisuke Kadoh and Naoya Ukita in Ref. [6] propose a supersymmetric gradient flow for four-dimensional \(\mathcal{N}=1\) SQCD. They gave expressions for the flow equation in the superfield formalism and the component fields formalism in the Wess-Zumino gauge. They also discuss a simplified flow using the gradient of supersymmetric Yang-Mills (SYM) action instead of SQCD action to define a gauge multiplet flow.
## 3 Future Research Directions
In this section, we briefly outline the various research directions the effort of lattice supersymmetry and holography can take in the near future.
* For the case of the \(\mathcal{N}=4\) Yang-Mills in four dimensions, there is a famous holographic prediction for the Coulomb coefficient \(C(\lambda)\) that it is proportional to \(\sqrt{\lambda}\) up to \(\mathcal{O}(1/\sqrt{\lambda})\) corrections. For the \(N=\infty\) planar limit more general analytic results have also been obtained. It would be interesting to search for this behavior in more detail, although some promising preliminary results have already been reported.
* A more detailed understanding of the non-trivial scaling dimension of the simplest conformal primary operator of the four-dimensional \(\mathcal{N}=4\) SYM theory, the Konishi operator is much needed. Preliminary lattice results have already been obtained and are consistent with existing perturbation theory results.
* It would be interesting to study the behavior of the four-dimensional \(\mathcal{N}=4\) SYM theory around the \(S\)-dual point, where the 't Hooft coupling takes the form \(\lambda_{\rm sd}=4\pi N\).
* Another direction is to adjust the scalar potential to study the four-dimensional \(\mathcal{N}=4\) SYM on the Coulomb branch of the moduli space. In this context, the \(S\)-duality connects the masses of the U(1)-charged elementary '\(W\) bosons' and the magnetically charged topological 't Hooft-Polyakov monopoles. They can be accessed from lattice calculations with appropriate boundary conditions.
* Non-perturbative lattice calculations can be used to study the free energy of four-dimensional \(\mathcal{N}=4\) SYM theory. The weak-coupling perturbative prediction and the strong coupling holographic calculation differ by a famous factor of \(3/4\). A lattice setup can be used to interpolate between these two coupling regimes.
* In the case of maximally supersymmetric Yang-Mills in three dimensions, one can explore, through lattice investigations, the phase transition between the 'D2 phase' and the spatially deconfined 'D0 phase' dual to a localized black hole geometry.
* A more detailed and careful non-perturbative investigation of the finite temperature phase diagrams in two-dimensional SYM theories is still needed. These theories also possess rich zero-temperature dynamics, such as the 'meson' spectrum and spontaneous symmetry breaking, that are important to explore non-perturbatively.
* In the context of the BMN model, the critical temperature of the confinement transition can be predicted by perturbative calculations in the weak-coupling regime and by a dual supergravity calculation for strong coupling. An open research direction would be non-perturbatively connecting the transition temperatures by mapping out the intermediate coupling regime, where perturbative and holographic approaches are unreliable.
* There has been excellent progress in validating and testing holography in the context of BFSS and BMN matrix models. It would be interesting to extend these investigations into more exotic models such as the KLY matrix model.
* Partial deconfinement can be understood as the coexisting phenomenon in the space of the color degrees of freedom. Historically, it was proposed for the SYM theory as the dual of the small black hole phase. An open research direction would be to generalize the idea of partial deconfinement to systems with finite \(N\) by utilizing their chiral symmetry.
We hope this volume will contribute to constructive discussions on advances in lattice supersymmetry and holography and stimulate further studies.
## Acknowledgements
The author was supported in part by the Start-up Research Grant (No. SRG / 2019 / 002035) from the Science and Engineering Research Board (SERB), Government of
India, and in part by the Indian Institute of Science Education and Research (IISER) - Mohali.
## Data Availability Statement
No data are associated with the manuscript.
|
2303.04670 | EvConv: Fast CNN Inference on Event Camera Inputs For High-Speed Robot
Perception | Event cameras capture visual information with a high temporal resolution and
a wide dynamic range. This enables capturing visual information at fine time
granularities (e.g., microseconds) in rapidly changing environments. This makes
event cameras highly useful for high-speed robotics tasks involving rapid
motion, such as high-speed perception, object tracking, and control. However,
convolutional neural network inference on event camera streams cannot currently
perform real-time inference at the high speeds at which event cameras operate -
current CNN inference times are typically closer in order of magnitude to the
frame rates of regular frame-based cameras. Real-time inference at event camera
rates is necessary to fully leverage the high frequency and high temporal
resolution that event cameras offer. This paper presents EvConv, a new approach
to enable fast inference on CNNs for inputs from event cameras. We observe that
consecutive inputs to the CNN from an event camera have only small differences
between them. Thus, we propose to perform inference on the difference between
consecutive input tensors, or the increment. This enables a significant
reduction in the number of floating-point operations required (and thus the
inference latency) because increments are very sparse. We design EvConv to
leverage the irregular sparsity in increments from event cameras and to retain
the sparsity of these increments across all layers of the network. We
demonstrate a reduction in the number of floating operations required in the
forward pass by up to 98%. We also demonstrate a speedup of up to 1.6X for
inference using CNNs for tasks such as depth estimation, object recognition,
and optical flow estimation, with almost no loss in accuracy. | Sankeerth Durvasula, Yushi Guan, Nandita Vijaykumar | 2023-03-08T15:47:13Z | http://arxiv.org/abs/2303.04670v1 | # Ev-Conv: Fast CNN Inference on Event Camera Inputs
###### Abstract
Event cameras capture visual information with a high temporal resolution and a wide dynamic range. This enables capturing visual information at fine time granularities (e.g., microseconds) in rapidly changing environments. This makes event cameras highly useful for high-speed robotics tasks involving rapid motion, such as high-speed perception, object tracking, and control. However, convolutional neural network inference on event camera streams cannot currently perform real-time inference at the high speeds at which event cameras operate--current CNN inference times are typically closer in order of magnitude to the frame rates of regular frame-based cameras. Real-time inference at event camera rates is necessary to fully leverage the high frequency and high temporal resolution that event cameras offer. This paper presents Ev-Conv, a new approach to enable fast inference on CNNs for inputs from event cameras. We observe that consecutive inputs to the CNN from an event camera have only _small differences_ between them. Thus, we propose to perform inference on the difference between consecutive input tensors, or the _increment_. This enables a significant reduction in the number of floating-point operations required (and thus the inference latency) because _increments_ are very sparse. We design Ev-Conv to leverage the irregular sparsity in increments from event cameras and to retain the sparsity of these increments across all layers of the network. We demonstrate a reduction in the number of floating operations required in the forward pass by up to \(98\%\). We also demonstrate a speedup of up to \(1.6\times\) for inference using CNNs for tasks such as depth estimation, object recognition, and optical flow estimation, with almost no loss in accuracy.
Software Architecture for Robotics and Automation, Computer Architecture for Robotics and Automation, Software, Middleware and Programming Environments
Code: [https://github.com/utcsz/evconv](https://github.com/utcsz/evconv)
## I Introduction
Event-based cameras have emerged as a promising method to generate visual information for robot vision as they capture high-speed changes, are resilient to motion blur, have high dynamic range, and consume low power. As a result of the high frequencies at which event cameras capture changes in the environment, they are useful for several robotics tasks which require fast perception of the environment. For example, event cameras are used to detect and dodge fast-moving objects on a quadrotor with fast independent object motion estimation [14, 23] and high-speed 3D reconstruction on scenes with fast-moving objects [12]. Event cameras are composed of a set of pixels that register changes in the intensity of light falling on them. These changes are then emitted asynchronously as a stream of packets of the form \((u_{x},u_{y},t,p)\), containing the pixel coordinate \((u_{x},u_{y})\), the timestamp \(t\) at which the intensity change occurred, and the polarity \(p=+1/-1\), corresponding to an increase or decrease in intensity. In contrast to frame-based cameras, which capture frames at a rate of 30 or 60 frames per second, event cameras stream packets at a rate of 1-10 MHz, which allows them to detect any environmental changes almost instantly.
Convolutional neural networks have shown state-of-the-art accuracy in a number of vision tasks on inputs from event cameras, such as image classification [28], object detection [15, 13], human pose estimation [16, 32], depth estimation [9, 38, 24], optical flow estimation [36, 6], independent object motion estimation [14, 23, 15] and semantic segmentation [1, 29]. A fundamental challenge with using CNN inference on event camera inputs is its high latency. The processing time of CNN inference makes it challenging to use in high-speed real-time applications. For example, a forward pass for end-to-end inference on EVFlowNet takes 30 ms on a GTX 1050 GPU [36] and 9.5 ms on a desktop RTX 3060. High inference latencies negate the benefits of using event cameras.
As a result of event cameras capturing changes in intensities at high frequencies, the changes between consecutive input features to the CNN are often small. However, processing each input still requires the full set of dense computations during CNN inference. Existing methods for CNN inference on event camera inputs treat each input as an independent image that requires end-to-end dense tensor computations, incurring large inference latency. We propose Ev-Conv, a new approach to accelerate CNN inference for event camera inputs where we perform inference on the difference between consecutive input tensors (referred to as _increment tensors_) instead of the input tensors themselves. We leverage the sparsity in increment tensors to accelerate CNN inference by skipping unnecessary computations on zero values.
We identify two major challenges in effectively leveraging the sparsity of the increment tensors for event camera inputs. First, the sparsity in increment tensors tends to be highly irregular, as the pixels that register changes tend to be more scattered. Existing methods to enable speedup on sparse increment tensors exploit the property that large portions of the scene remain the same across frames when the camera isn't moving, leading to the sparsity of increment tensors being regular [3, 19, 27, 33]. We introduce a sparse convolution operation to leverage this sparsity for speedup. As GPUs are optimized for dense computations, we introduce an approach to skip computations of convolution filters on a block of the tensor when all elements of the block are \(0\). Second, the sparsity level significantly reduces at the deeper layers of
the network for the following reasons: (i) Convolution and matrix multiply operations spread the distribution of 0s in input tensors. We however observe that a large percentage of the tensor elements tends to be numerically small. We can significantly increase the sparsity of increment tensors by introducing a rounding-off mechanism that sets small elements to \(0\). (ii) CNN architectures have downsampling operations to extract high-level semantics at low spatial resolution levels. Through downsampling, the tensors become denser as the spatial resolution decreases. To address this problem, we propose a technique called _delayed integration_ to preserve sparsity of increment tensors. We test the idea with a proposed Delayed UNet architecture and demonstrate significant floating point operation reductions but with similar accuracy as the original CNN.
The faster inference we obtain when using Ev-Conv makes it more suitable for robotics vision applications that use CNNs to process inputs from event cameras. We evaluate Ev-Conv for a range of tasks that use event camera inputs on different types of CNN architectures, including depth estimation, optical flow estimation, and object recognition. We demonstrate that Ev-Conv is able to reduce the number of floating-point operations by up to \(98\%\) and provide a speedup of up to \(1.6\times\), with similar accuracy to the original network.
## II Related Work
**DNN Architectures for Event Cameras.** Recently proposed DNN architectures [26, 25, 20] enable faster inference on event camera streams by using smaller networks with fewer weights. Graph neural networks (GNNs) [24], for example, represent events as nodes in a graph, and this representation requires less computation as fewer events are processed at each step. AEGNN [24] enables a 200-fold reduction in the number of floating-point operations required in this manner. However, despite this, inference using GNNs is orders of magnitude slower than using CNNs with similar numbers of parameters [24], making them impractical for real-time inference in real-world tasks. EventNet [26] uses a PointNet [21]-like architecture with a more efficient encoding of the event stream input. This encoding enables real-time inference; however, it discards the time stamps of the events. This results in lower accuracy and effectiveness of the network itself for various tasks [36].
**Sparse DNN architectures.** Given the high frequency at which event camera streams capture changes in the environment, events are sparse in time and also in their locations in the scene being captured. Some encodings for event camera streams, such as event histograms [11] and event queues [31], enable the input tensors to a CNN to be expressed as _sparse tensors_. Some approaches [30, 35, 7] exploit the sparsity in the input tensors to implement sparse convolutions with fewer floating-point computations compared to their dense counterparts. For example, for sparse event stream encodings, submanifold sparse convolutions [13] can reduce floating point operations by \(10\times\). However, these sparse operations require irregular accesses to memory and despite significantly reducing the number of floating point operations, there is no reduction in overall inference latency [35, 4]. Other networks such as SBNet can leverage sparsity effectively to generate lower inference latencies. However, SBNet was designed for LiDAR point cloud inputs and event camera inputs do not have regularity in sparsity.
**Accelerating inference on video streams.** Several recent works [3, 34, 19, 27, 33] leverage the similarity between consecutive frames in a video to accelerate inference. These methods leverage the typical case where large parts of the scene are static and unchanged across consecutive frames in the video. Thus convolution operations need to be performed for the non-static portions of the scene only. However, these approaches are not effective with event camera streams which are typically used in more dynamic environments where there is no guarantee of large static sections. The high temporal resolution detection of changes in intensity leads to more unstructured and irregular sparsity in the CNN inputs. DeltaCNN [19] also leverages the sparsity in _deltas_ between consecutive video frames. However, DeltaCNN is not effective with event camera inputs as the sparsity in the input increments is far more irregular. In addition, these works do not address the decrease in sparsity due to the upsampling in commonly used CNN architectures such as UNets. We propose Ev-Conv, an approach designed to leverage the irregular sparsity seen in event camera streams to generate low latency inference.
## III Method
### _Difference Between Consecutive Convolution Inputs_
An event camera measures the change in intensity of light falling on each pixel asynchronously. Every individual event contains the following information: \((u_{x},u_{y},t,p)\): pixel coordinate, timestamp, and polarity. CNN inference on event inputs at time \(\tau\) encodes all the events occurring within a time window of \(\Delta\) (typically \(50\) milliseconds) before \(\tau\). The events in this window are grouped into a tensor representation to be used as input to the neural network. Commonly used types of event encodings for CNNs include: _event-voxels_[9, 13, 38], _event-count_[13] and _most recent timestamp_[36].
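As an illustrative sketch (not taken from the released code), an event-count encoding of a packet stream into a small tensor could be built as follows; the resolution, window bounds, and the one-channel-per-polarity convention are placeholder choices rather than the exact encoding used in the experiments.

```python
import numpy as np

def event_count_encoding(events, height, width, t_start, t_end):
    """Accumulate (x, y, t, p) events in [t_start, t_end) into a 2-channel count histogram.

    Channel 0 counts positive-polarity events, channel 1 negative-polarity events.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, t, p in events:
        if t_start <= t < t_end:
            channel = 0 if p > 0 else 1
            frame[channel, y, x] += 1.0
    return frame

# Three synthetic events inside a 50 ms window.
events = [(10, 5, 0.001, +1), (10, 5, 0.020, -1), (3, 7, 0.049, +1)]
tensor = event_count_encoding(events, height=180, width=240, t_start=0.0, t_end=0.05)
print(tensor.sum())  # 3.0
```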
#### III-A1 Increment tensors and increment layers
We define the _increment_ tensor as the difference between the tensor at the current inference step and the previous inference step. For an input tensor \(\mathbf{x}(n)\) to an operator of the CNN at inference step \(n\), we define the increment tensor of \(\mathbf{x}(n)\) as \(\mathbf{x}_{\uparrow}(n)\) as:
\[\mathbf{x}_{\uparrow}(n)=\mathbf{x}(n)-\mathbf{x}(n-1) \tag{1}\]
Alongside each layer in the original CNN, we implement an _increment layer_ that receives an input increment. A corresponding _output increment tensor_\(\mathbf{y}_{\uparrow}(n)\) is generated, which should be the difference between two consecutive outputs \(\mathbf{y}(n)\) and \(\mathbf{y}(n-1)\) of the layer. We aim to evaluate the increment in the CNN output by replacing each forward pass layer with its increment layer. The output of an operator in Ev-Conv with increment layers is an increment tensor over the previous output. To compute the final output \(\mathbf{y}\) after \(m\) inference steps, we sum the previous \(m\) increment outputs and the output tensor at time \(n-m\):
\[\mathbf{y}(n)=\mathbf{y}(n-m)+\sum_{i=n-m+1}^{n}\mathbf{y}_{\uparrow}(i) \tag{2}\]
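The bookkeeping of Eqns 1-2 can be sketched as follows (illustrative only; the wrapped layer and tensor shapes are placeholders, and propagating the increment directly through the layer is exact only for linear layers such as a bias-free convolution, cf. Eqns 9-10 below).

```python
import torch

class IncrementWrapper:
    """Feed a layer with input increments and accumulate its output increments (Eqns 1-2)."""

    def __init__(self, layer):
        self.layer = layer
        self.prev_input = None  # x(n-1)
        self.output_acc = None  # y(n-m) plus the sum of later output increments

    def step(self, x):
        if self.prev_input is None:
            self.output_acc = self.layer(x)            # first step: one dense forward pass
        else:
            x_inc = x - self.prev_input                # increment tensor, Eqn 1
            y_inc = self.layer(x_inc)                  # exact for linear layers (Eqns 9-10)
            self.output_acc = self.output_acc + y_inc  # accumulate, Eqn 2
        self.prev_input = x
        return self.output_acc

# Two nearly identical consecutive inputs through a bias-free convolution.
conv = torch.nn.Conv2d(2, 8, kernel_size=3, padding=1, bias=False)
wrapper = IncrementWrapper(conv)
x0 = torch.randn(1, 2, 64, 64)
x1 = x0 + 0.01 * torch.randn(1, 2, 64, 64)
y0, y1 = wrapper.step(x0), wrapper.step(x1)
print(torch.allclose(y1, conv(x1), atol=1e-5))  # True: incremental result matches dense
```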
#### III-A2 Forward pass with increment tensors
Treating encoded event tensors as individual inputs (similarly to images) leads to many expensive dense tensor computations in the forward pass. However, at high inference frequencies (1kHz), there is a large degree of overlap between consecutive inputs. Thus, two consecutive time windows have similar input tensors \(\mathbf{x}(n)\) and \(\mathbf{x}(n-1)\). In other words, \(\mathbf{x}(n)-\mathbf{x}(n-1)\) is sparse, and many elements have small absolute values. The input increment tensors must be sparse to reduce the number of floating-point operations performed by each increment layer, enabling us to perform sparse operations to obtain the output increment. We identify two challenges in performing inference with increment tensors:
1. **Insufficient sparsity of increment tensors in deeper layers.** We find that the sparsity of the increment tensors is irregular and decreases significantly towards deeper layers in the network. This is because convolution and matrix multiplication operations on sparse tensors do not necessarily produce sparse outputs and increase the spread of nonzero values, making it infeasible to leverage sparsity for faster execution.
2. **Drop in sparsity in encoder-decoder architectures on upsampling.** A number of CNNs used in computer vision have an encoder-decoder structure. The encoders encode the input feature tensor into a dense intermediate tensor. The subsequent upsampling operations in the decoder produce more dense tensors, making it infeasible to accelerate increment layers.
### _Sparsification layer_
To address the insufficient sparsity problem mentioned in Section III-A2, we devise a mechanism to retain the sparsity in the increment tensors using a rounding-off mechanism. Typically, due to the similarity between the two consecutive inputs, the elements of the input increments to each operator are not only sparse but also have very small values. To further improve sparsification, we round off to zero all elements whose absolute values are smaller than a threshold. To perform this sparsification, we add a new layer, the _sparsification layer_, at various points in the CNN.
At \(n=0\), we round off the elements to the nearest multiple of a parameter \(k\). Thus all elements whose absolute values are smaller than \(k/2\) are rounded off to zero. This can be expressed mathematically as:
\[\mathbf{y}_{\uparrow}(0)=k\left\lfloor 0.5+\frac{\mathbf{x}_{\uparrow}(0)}{k}\right\rfloor \tag{3}\]
The difference between \(\mathbf{y}_{\uparrow}(0)\) and \(\mathbf{x}_{\uparrow}(0)\), i.e., the residual resulting from the rounding operation, is stored in \(\delta(0)\) (Eqn 4).
\[\delta(0)=\mathbf{y}_{\uparrow}(0)-\mathbf{x}_{\uparrow}(0) \tag{4}\]
This residual \(\delta(0)\) is the error between the elements in \(\mathbf{y}_{\uparrow}(0)\) and the true increment \(\mathbf{x}_{\uparrow}(0)\). At subsequent inference steps, this residual should be added to future inputs to correct the error in the output tensor increments. Therefore, at \(n=1\), we have:
\[\mathbf{x}_{corrected}(1)=\delta(0)+\mathbf{x}_{\uparrow}(1) \tag{5}\]
This corrected input tensor \(\mathbf{x}_{corrected}\) is now sparsified using the same mechanism as in Eqn 3. Therefore, at inference step \(n\), we have the following update equations to produce a sparse increment \(\mathbf{y}_{\uparrow}\) from an input increment \(\mathbf{x}_{\uparrow}\):
\[\mathbf{x}_{corrected}(n)=\delta(n-1)+\mathbf{x}_{\uparrow}(n) \tag{6}\]
\[\mathbf{y}_{\uparrow}(n)=k\left\lfloor 0.5+\frac{\mathbf{x}_{corrected}(n)}{k}\right\rfloor \tag{7}\]
\[\delta(n)=\delta(n-1)+\mathbf{x}_{corrected}(n)-\mathbf{y}_{\uparrow}(n) \tag{8}\]
Eqns 6 and 7 show the operations performed on the increment tensor \(\mathbf{x}_{\uparrow}\) to produce the increment tensor \(\mathbf{y}_{\uparrow}\) by rounding off small values of the input to \(0\). \(\delta(n)\) stores the round-off error.
This sequence of steps ensures that the elements of \(\delta\) remain small. The error due to rounding off in the output of our network varies proportionally with the values in \(\delta\). We define a hyperparameter, the thresholding parameter \(t_{p}\), which we use to calculate \(k\). The value of \(k\) is computed as \(k=t_{p}\|\mathbf{x}\|\), where \(\mathbf{x}\) is the rolling average of the input to the sparsification layer and \(\|\cdot\|\) is the \(L_{2}\) norm.
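A sketch of this round-off-with-residual update is given below (again illustrative, not the released implementation); the rolling-average momentum is an assumed detail, and the residual is kept as the difference between the corrected input and the emitted output so that it stays bounded.

```python
import torch

class SparsificationLayer:
    """Round small increment entries to zero and carry the round-off residual forward."""

    def __init__(self, t_p=0.1, momentum=0.9):
        self.t_p = t_p
        self.momentum = momentum
        self.delta = None         # accumulated round-off residual
        self.rolling_norm = None  # rolling average of the input norm, used for k = t_p * ||x||

    def __call__(self, x_inc):
        if self.delta is None:
            self.delta = torch.zeros_like(x_inc)
            self.rolling_norm = x_inc.norm()
        else:
            self.rolling_norm = self.momentum * self.rolling_norm \
                + (1.0 - self.momentum) * x_inc.norm()
        k = torch.clamp(self.t_p * self.rolling_norm, min=1e-12)
        x_corr = self.delta + x_inc                # Eqn 6: add back the stored residual
        y_inc = k * torch.floor(0.5 + x_corr / k)  # Eqn 7: round to the nearest multiple of k
        self.delta = x_corr - y_inc                # residual kept for the next step
        return y_inc
```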
By adding sparsification layers before each pair of convolutional layers of our YOLE [2] DNN for object detection (Figure 1(a)), we can retain sparsity of the increment tensors at the deeper layers. Our mechanism ensures that each operator in the forward pass receives a sparse increment as an input, which results in a dramatic reduction in the number of floating-point operations required. We perform an experiment to measure the sparsity in the increment tensors in a YOLE [2] network trained to detect objects on the Caltech101 dataset using increment layers. Figure 1(b) depicts the sparsity in increment tensors for layers at various depths in the network. As depicted in the figure, before correction, there is a significant drop in sparsity towards the deeper layers of the network.
### _Mitigating Reduced Sparsity on Upsampling_
Many CNNs in computer vision, such as feature pyramid networks and UNets, have an encoder-decoder structure [22, 10]. The encoder consists of a sequence of downsampling operations to create a high-level abstract feature that has lower spatial resolution than the input. This downsampling process leads to _dense_ intermediate tensors. In the decoder, the upsampling operations take these dense intermediate tensors as inputs and output more dense tensors. As a result, the
Fig. 1: Sparsity of intermediate tensors with sparsification layers inserted at different points of the network.
sparse convolution operators do not improve inference speed on processing these dense tensors in the decoder.
In order to benefit from the sparse convolution operation, we propose a _delayed integration_ technique and implement it in a delayed UNet architecture shown in Figures 2(a) and 2(b). In the original UNet, the output of an upsampling layer in the decoder is fed into the next decoder level (labelled with dashed red arrows in Figure 2(a)). In our proposed delayed UNet, the outputs of upsampling layers in the decoder are not fed into the subsequent levels of decoders. Instead, the upsampled outputs are concatenated with the upsampled predictions towards the end (thus the name _delayed_). These concatenated tensors are then fed through two convolution layers (Figure 2(b)). Thus, the decoder layers do not receive the dense tensors from the lowest spatial resolution level and can benefit from sparse convolutions. We include the predictions explicitly as inputs to the final two convolutions since the predictions have been trained with an auxiliary loss and reflect the network's prediction of optical flow at different spatial resolutions.
### _Implementation of Increment Layers_
In this section, we describe the implementation of each of the increment layers corresponding to each operation for the forward pass. At inference step \(n\), each operator takes as input an increment tensor \(\mathbf{x}_{\uparrow}(n)\) (as defined in Section III-A1) and a mask tensor \(\mathbf{x}_{m}(n)\). Each element of the mask indicates whether a region of the input tensor can be skipped by the operator because it is entirely comprised of zeros. In CNNs where the input increment tensor and the mask tensor have dimensions \(\mathbb{R}^{H\times W\times C}\) (i.e., height, width, and channel), each element of the mask summarizes the sparsity of a smaller section of \(\mathbb{R}^{h\times w\times 1}\) elements in the input tensor, as depicted in Figure 3. For each operator, we estimate the following quantities: the output increment tensor \(\mathbf{y}_{\uparrow}\) and the output mask \(\mathbf{y}_{m}\). We drop the reference to a specific inference step \(n\) when there is no ambiguity.
We denote linear operations, including **convolution** and **matrix multiplication** operations, with \(\theta_{L}\). For the increment tensor, we directly compute the increment in the output using increment in the input, as seen in Eqn 9 and 10.
\[\theta_{L}(\mathbf{x}(n))=\theta_{L}(\mathbf{x}(n-1)+\mathbf{x}_{ \uparrow}(n))=\theta_{L}(\mathbf{x}(n-1))+\theta_{L}(\mathbf{x}_{\uparrow}(n)) \tag{9}\] \[\mathbf{y}_{\uparrow}(n)=\theta_{L}(\mathbf{x}_{\uparrow}(n)) \tag{10}\]
We give an overview of our implementation of sparse convolution in Section III-D.
**Addition.** We denote two inputs to an addition operation with subscripts 1 and 2 respectively. The output increment is implemented simply as a sum of two input increment tensors using a dense-to-sparse tensor addition (Eqn 11). This value is initialized at the beginning of the inference. The mask of the result of the addition of two sparse tensors can be computed with a bitwise OR operation (\(\mid\)) as shown in Eqn 12.
\[\mathbf{y}_{\uparrow}=\mathbf{x}_{1,\uparrow}+\mathbf{x}_{2,\uparrow} \tag{11}\]
\[\mathbf{y}_{m}=\mathbf{x}_{1,m}\mid\mathbf{x}_{2,m} \tag{12}\]
**Activation.** The increment operator for a non-linear function such as an activation (\(\theta_{a}\)) is computed as the difference between the value of the activation at step \(n\) and the value computed at step \(n-1\). We introduce an additional tensor \(\mathbf{x}_{acc}\), which is an estimate of the input in the previous run, calculated as the sum of the corresponding increment tensors up to the current inference step. Using this, the difference between outputs of \(\theta_{a}\) can be computed as shown in Eqn 13. The output mask is equal to the input mask (Eqn 14). The estimate of the input to the operator is updated as shown in Eqn 15.
\[\mathbf{y}_{\uparrow}=\theta_{a}(\mathbf{x}_{\uparrow}+\mathbf{x}_{acc})- \theta_{a}(\mathbf{x}_{acc}) \tag{13}\]
\[\mathbf{y}_{m}=\mathbf{x}_{m} \tag{14}\]
\[\mathbf{x}_{acc}\leftarrow\mathbf{x}_{acc}+\mathbf{x}_{\uparrow} \tag{15}\]
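A minimal sketch of such an increment activation layer (with the accumulated input kept as internal state; the choice of ReLU is just an example, not the only activation used in the paper) is:

```python
import torch

class IncrementActivation:
    """Increment layer for a pointwise nonlinearity theta_a (Eqns 13-15); ReLU by default."""

    def __init__(self, theta_a=torch.relu):
        self.theta_a = theta_a
        self.x_acc = None  # running sum of input increments (estimate of the previous input)

    def __call__(self, x_inc, x_mask):
        if self.x_acc is None:
            self.x_acc = torch.zeros_like(x_inc)
        y_inc = self.theta_a(self.x_acc + x_inc) - self.theta_a(self.x_acc)  # Eqn 13
        y_mask = x_mask                                                      # Eqn 14
        self.x_acc = self.x_acc + x_inc                                      # Eqn 15
        return y_inc, y_mask
```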
**Elementwise Multiply.** Elementwise multiplication maintains two additional tensors \(\mathbf{x}_{1,acc}\) and \(\mathbf{x}_{2,acc}\) which stores the sum of the increments over previous inference runs. The operation is implemented according to the Eqn 16. This requires two dense-to-sparse multiplications and two dense-to-sparse tensor additions. The mask of the output can be computed using a bitwise OR operation (Eqn 17). After we perform the multiplication, the quantities \(\mathbf{x}_{1,acc}\) and \(\mathbf{x}_{2,acc}\) are updated according to Eqn 18.
\[\mathbf{y}_{\uparrow}=(\mathbf{x}_{1,acc}+\mathbf{x}_{1,\uparrow})\, \mathbf{x}_{2,\uparrow}+\mathbf{x}_{2,acc}\,\mathbf{x}_{1,\uparrow} \tag{16}\]
\[\mathbf{y}_{m}=\mathbf{x}_{1,m}\mid\mathbf{x}_{2,m} \tag{17}\]
\[\mathbf{x}_{1/2,acc}\leftarrow\mathbf{x}_{1/2,acc}+\mathbf{x}_{1/2,\uparrow} \tag{18}\]
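The corresponding sketch for the elementwise multiply (Eqns 16-18), with both accumulators held as internal state, could look like the following; the class name and interface are placeholders.

```python
import torch

class IncrementMultiply:
    """Increment layer for the elementwise product of two streams (Eqns 16-18)."""

    def __init__(self):
        self.x1_acc = None
        self.x2_acc = None

    def __call__(self, x1_inc, x2_inc, x1_mask, x2_mask):
        if self.x1_acc is None:
            self.x1_acc = torch.zeros_like(x1_inc)
            self.x2_acc = torch.zeros_like(x2_inc)
        y_inc = (self.x1_acc + x1_inc) * x2_inc + self.x2_acc * x1_inc  # Eqn 16
        y_mask = x1_mask | x2_mask                                      # Eqn 17
        self.x1_acc = self.x1_acc + x1_inc                              # Eqn 18
        self.x2_acc = self.x2_acc + x2_inc
        return y_inc, y_mask
```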
**Convolution on Sparse Inputs.** Our convolution layer receives as input an increment tensor and a mask tensor of size \(H\times W\) and \(C\) channels. The mask of the input contains information on all the locations in the input where \(h\times w\) sections of the tensor are \(0\). When computing convolution over the input with a filter of size \(k\), we can skip applying the convolution filter over all \(k\times k\) regions within each \(h\times w\) section of the image of a particular channel. This skipping of sets of filter computations results in faster convolution.
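The block mask of Figure 3 can be sketched as below (illustrative; the 8x8 block size is a placeholder, and the convolution is computed densely here only as a reference, with the comments indicating which computations a sparse GPU kernel would skip):

```python
import torch
import torch.nn.functional as F

def block_mask(x_inc, h=8, w=8):
    """One boolean per (h x w, all-channels) block: True if the block has any nonzero entry."""
    nonzero = (x_inc != 0).any(dim=1, keepdim=True).float()
    return F.max_pool2d(nonzero, kernel_size=(h, w), stride=(h, w)).bool()

# Toy increment: mostly zeros with a few scattered nonzero pixels.
x_inc = torch.zeros(1, 4, 64, 64)
x_inc[0, 0, 3, 5] = 0.7
x_inc[0, 2, 40, 41] = -0.2
mask = block_mask(x_inc)
print(f"fraction of active blocks: {mask.float().mean().item():.3f}")

# Dense reference output; a sparse GPU kernel would apply the filters only where the
# receptive field overlaps an active block and would write zeros elsewhere.
weight = torch.randn(8, 4, 3, 3)
y_inc = F.conv2d(x_inc, weight, padding=1)
```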
Fig. 3: Compute mask on each 3D layout tensor
Fig. 2: Delayed UNet architecture
### _Drift Errors_
Repeated inferences on incremental inputs make Ev-Conv prone to drift errors. We perform an experiment and measure the difference between the output of the incremental and the regular version of a YOLE [2] CNN fine-tuned on the N-Caltech [17] dataset. Figure 4(a) shows the error in the element with the maximum absolute difference between the real output and the output of the network computed with increments. We see that as the number of iterations increases, this maximum absolute difference compounds over time. To address this problem, we re-initialize each accumulated tensor (\(\mathbf{x}_{acc}\) in activations and pointwise multiplications in Section III-D) by performing a regular forward pass. We call this run a refresh step. We run this step every N inferences to reduce drift errors, as demonstrated in Figure 4(b).
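The refresh schedule can be sketched as follows (the `dense_model`, `incremental_model`, `reset_state`, and `step` interfaces are hypothetical placeholders, not the released API):

```python
class EvConvRunner:
    """Alternate sparse incremental passes with periodic dense 'refresh' passes."""

    def __init__(self, dense_model, incremental_model, refresh_every=100):
        self.dense_model = dense_model              # regular CNN forward pass
        self.incremental_model = incremental_model  # stack of increment layers
        self.refresh_every = refresh_every
        self.step_count = 0

    def __call__(self, x):
        if self.step_count % self.refresh_every == 0:
            # Refresh step: a regular forward pass re-initialises all accumulated tensors,
            # bounding the drift error that builds up over repeated incremental inferences.
            y = self.dense_model(x)
            self.incremental_model.reset_state(x, y)  # hypothetical reset hook
        else:
            y = self.incremental_model.step(x)        # hypothetical sparse incremental pass
        self.step_count += 1
        return y
```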
## IV Experiments
We perform all our experiments on a desktop with an Intel 11700K CPU and an RTX 3060 GPU. We evaluate Ev-Conv on three computer vision applications that use different CNN architectures: real-time monocular depth estimation, optical flow estimation and object recognition. We insert our sparsification layer ahead of each convolution operation. We evaluate Ev-Conv for depth estimation tasks on two neural network architectures, a Conv-LSTM UNet architecture based on E2Depth [9] and a Bimodal ConvGRU based on RAMNET [38], on the DENSE [5] dataset. These networks use a spatio-temporal voxel grid to encode event inputs into input tensors. The DENSE dataset is synthetically produced using an event camera with a resolution of 346x260 and a mean readout rate of 24 kHz. For object recognition, we evaluate Ev-Conv on the N-Caltech101 [17] and N-Cars [28] datasets with the YOLE CNN [2]. These datasets are produced with an event camera with a resolution of 240x180 and a mean event readout rate of 2.7 kHz. Ev-Conv for optical flow estimation is evaluated on different neural network architectures: a recurrent convolution-UNet architecture [8], a UNet architecture based on EVFlowNet [36], a simple sequence of convolutions based on FireFlowNet [18], and our delayed UNet architecture, on the MVSEC dataset [37]. The MVSEC dataset consists of many sequences of outdoor and indoor scenes captured using an event camera with a resolution of 346x260 and a mean readout rate of 185-270 kHz. In all of our experiments, we use a 50 ms time window to capture events, shifted forward in time by 1 ms between inferences.
### _Inference Latency and Required MFLOPs_
From Table I, we observe that Ev-Conv provides a speedup of up to \(1.6\times\) on the depth estimation task by leveraging the sparsity in the input increments. Furthermore, we observe a significant reduction in the number of floating-point operations required in the forward pass, by as much as \(97\%\). Ev-Conv achieves a similar log-RMSE error compared to the original network. On the object recognition task (Table II), we observe an inference speedup of about \(1.1\times\), resulting from a large reduction in the number of floating-point operations by about \(89\%\). We observe no change in accuracy when using Ev-Conv compared to the baseline model. For the optical flow task, we demonstrate a significant reduction in floating-point operations in the forward pass, eliminating over 90% of the required operations. However, the Ev-Conv implementation sees a slight slowdown compared to the PyTorch implementation for RecEVFlowNet. For the optical flow task, we also compare the average endpoint error (AEE) (Table III) of the baseline and the models using Ev-Conv. We observe that our delayed model achieves similar accuracy to EVFlowNet. Despite the large reduction in floating-point operations, the improvement in latency is much smaller in comparison. The reason for this is that we compare our hand-written kernel implementations with highly-optimized cuDNN implementations for dense tensor computations, which invoke specialized optimization techniques (device-specific intrinsics) to achieve better performance. As a number of these techniques are not directly accessible to the user and require expert domain knowledge, we operate with a weaker baseline implementation.
### _Ablation Study_
#### IV-B1 Changing the inference timestep
Our framework performs inference over a time window of events. This window is shifted forwards in time each time inference is performed. The inference speed depends on the sparsity of the observed increment tensor, which in turn depends on the inference rate. Figure 5 shows the effect of varying the length of the inference timestep on inference speed.
#### IV-B2 Changing the threshold parameter
On the depth estimation task, Figure 6 shows how the log-RMSE error changes as the threshold parameter increases. Figure 7 shows the log-depth estimates of the network at different sparsification thresholds \(t_{p}\). We observe that the depth maps are visually similar to the baseline.
| **Network** | **Latency** | **MFLOPs** | **Accuracy (N-Caltech)** | **Accuracy (N-Cars)** |
| --- | --- | --- | --- | --- |
| YOLE [2] | 26 ms | 58.9 | 69.5% | 92.4% |
| **Ev-Conv-YOLE** | 24.6 ms | 6.8 | 69.5% | 92% |

TABLE II: Evaluation of Ev-Conv for object recognition on N-Caltech [17] and N-Cars [28] datasets
Fig. 4: Drift error in inference output
Figure 8(a) shows how the accuracy of the predictions varies with an increasing threshold parameter. We see that the accuracy drops marginally (less than \(1\%\)) on both the N-Caltech and N-Cars datasets when the threshold parameter is set to \(0.1\). Figure 8(b) shows how the Average Endpoint Error (AEE) varies with an increasing threshold parameter on different MVSEC dataset sequences. We see only a minor degradation of the AEE compared to the original network when the threshold parameter is set to \(0.1\).
Figure 9(a) shows the percentage of FLOPs required for the forward pass, relative to the baseline, as the threshold parameter changes. On the depth estimation task, with the threshold parameter set at \(0.1\), we observe over a \(97\%\) reduction in the raw number of floating-point operations required, and a \(90\%\) reduction for the object detection task. Figure 9(b) depicts the normalized change in inference latency as the threshold parameter increases, relative to the baseline. For the depth estimation task, we see a \(1.6\times\) and \(1.5\times\) speedup in inference latency over our implementation, with virtually no change in the RMSE-log error in depth when the threshold parameter is set to \(0.1\). For the optical flow task, despite a significant reduction in floating-point operations, we observe that our best inference latency is slower by about \(1.05\times\). For the object detection task, we see an \(8\%\) reduction in the inference latency with the threshold parameter set to 0.1.
### _Comparison to prior works_
DeltaCNN [19] leverages the sparsity of the difference between consecutive input tensors for faster inference on videos. We adapted DeltaCNN to the event-based tasks for comparison. The convolution operation in DeltaCNN can skip computations if all elements of the receptive field of a convolution filter are 0 throughout the channel dimension. For example, if the receptive field of the convolution is 3x3, a 3x3xC block is required to be 0. Although high, the sparsity of increments of event camera inputs is very irregular. Our implementation is able to handle irregular sparsity by requiring only a 6x6 tile to be 0 in order to skip computations
Fig. 5: Normalized latency on varying the inference timestep
Fig. 8: Ev-Conv on changing the threshold parameter
Fig. 6: log-RMSE error on changing the threshold parameter
Fig. 7: Depth maps computed by Ev-Conv-ConvLSTM-UNet on a single frame of DENSE town-007 dataset [5].
Fig. 9: Inference speed and % reduction of FLOPs on changing the threshold parameter
(as indicated in Section VI). Additionally, DeltaCNN skips the cooperative memory fetch of the input tensor when all elements to be loaded are 0. We perform an experiment where we use DeltaCNN for inference on event camera input encodings, shown in Figure 10. On incorporating this memory load optimization, our inference speeds improve further, as seen under the 'ours + conditioned fetch' label.
AsyNet [13] uses submanifold sparse convolution to leverage sparsity in event encodings, while AEGNN [24] uses a graph neural network over event encodings. We compare the inference speed of these works with our work. We find that AsyNet performs inference \(2.31\times\) slower, and AEGNN \(2.96\times\) slower, than our CNN implementation, due to the irregular memory accesses involved in performing inference with submanifold sparse convolutions and graph neural networks on the GPU.
## V Conclusion
We introduced Ev-Conv, a new approach to achieving fast CNN inference on event camera inputs to enable high-speed robotics perception such as high-speed motion estimation, event-based object detection, object tracking, etc. We observe that at high inference rates, the difference between consecutive inputs to the CNN is small. Ev-Conv thus performs inference on the difference between consecutive inputs, or _increments_, rather than the event camera stream itself. Ev-Conv leverages the sparsity of the increments to effectively accelerate inference by reducing the number of floating-point operations required. Ev-Conv is designed to retain the sparsity of the inputs to all layers of the CNN without sacrificing the accuracy of the network. Ev-Conv is able to reduce the floating-point operations by up to \(98\%\) and provide up to \(1.6\times\) faster inference speed on depth estimation, optical flow, and object recognition.
|
2307.07856 | Time reversal invariance and ontology | Albert and Callender have challenged the received view that theories like
classical electrodynamics and non-relativistic quantum mechanics are time
reversal invariant. They claim that time reversal should correspond to the mere
reversal of the temporal order of the instantaneous states, without any
accompanying change of the instantaneous state as in the standard view. As
such, Albert and Callender claim, these theories are not time reversal
invariant. The view of Albert and Callender has been much criticized, with many
philosophers arguing that time reversal may correspond to more than the
reversal of the temporal order. In this paper, we will not so much engage with
that aspect of the debate, but rather deflate the disagreement by exploiting
the ontological underdetermination. Namely, it will be argued that with a
suitable choice of ontology, these theories are in fact time reversal invariant
in the sense of Albert and Callender, in agreement with the standard view. | Ward Struyve | 2023-07-15T17:29:35Z | http://arxiv.org/abs/2307.07856v1 | # Time reversal invariance and ontology
###### Abstract
Albert and Callender have challenged the received view that theories like classical electrodynamics and non-relativistic quantum mechanics are time reversal invariant. They claim that time reversal should correspond to the mere reversal of the temporal order of the instantaneous states, without any accompanying change of the instantaneous state as in the standard view. As such, Albert and Callender claim, these theories are not time reversal invariant. The view of Albert and Callender has been much criticized, with many philosophers arguing that time reversal may correspond to more than the reversal of the temporal order. In this paper, we will not so much engage with that aspect of the debate, but rather deflate the disagreement by exploiting the ontological underdetermination. Namely, it will be argued that with a suitable choice of ontology, these theories are in fact time reversal invariant in the sense of Albert and Callender, in agreement with the standard view.
## 1 Introduction
Physics textbooks state that theories like Newtonian mechanics, classical electrodynamics and non-relativistic quantum mechanics are time reversal invariant. Albert [1] and Callender [2] disagree. Albert claims that only the former is time reversal invariant, while the other two are not [1, p. 14]:
And so [classical electrodynamics] is not invariant under time reversal. Period.
And neither (it turns out) is quantum mechanics, and neither is relativistic quantum field theory, and neither is general relativity, and neither is supergravity, and neither is supersymmetric quantum string theory, and neither (for that matter) are any of the candidates for a fundamental theory that anybody has taken seriously since Newton. And everything everybody has always said to the contrary... is wrong.
Callender discusses just non-relativistic quantum mechanics [2], but also arrives at the conclusion -- for the same reason as Albert -- that this theory is not time reversal invariant.
To explain the disagreement, let us consider Albert, who gives a detailed discussion of what he takes time reversal to mean. Consider first Newtonian mechanics. In this case, Albert actually agrees with the standard conclusion that the theory is time reversal invariant, but for different reasons. For Albert, the collection of positions of the particles at a time forms the instantaneous state. The temporal sequence of these instantaneous states forms a history. Albert takes the time reversal of a history to be just the history run backwards. That is, the temporal order of instantaneous states is reversed. It is as if a video of the motion of the particles is played backwards. Newtonian mechanics is time reversal invariant because the time-reversed of each dynamically allowed history is also dynamically allowed. That is, time reversal is a symmetry by turning solutions to Newton's dynamics into solutions. In the example of the video, the time reversal invariance entails that we would not be able to tell on the basis of Newton's dynamics whether the video is played backwards or not, since both evolutions are allowed by the dynamics.
Also according to the standard view Newtonian mechanics is time reversal invariant, but the story is a bit different. First of all, in addition to the particle positions, also the instantaneous velocities are included in the instantaneous state. In this way, the instantaneous state determines a unique solution to the Newtonian dynamics. Second, according to the standard view, the time reversal amounts to reversing the temporal order of the instantaneous states together with flipping the sign of the velocities at each time. So the time reversal is more than just reversing the temporal order of the instantaneous states. For Albert, the velocities also flip sign under time reversal, but that is because they are the rates of change of the positions and hence a time reversal of the positions induces a sign flip of the velocities. In any case, despite these differences, the conclusion is the same: Newtonian mechanics is time reversal invariant.
Disagreement arises in the case of classical electrodynamics. In this case, the electric and magnetic field are included in the instantaneous state, both according to Albert and standard textbooks. So there is no disagreement concerning the electromagnetic part of the instantaneous state. However, according to the standard view, the magnetic field flips sign under time reversal, like the velocities in Newtonian mechanics. But for Albert the magnetic field should not change sign under time reversal [1, p. 20]:
Magnetic fields are not the sorts of things that any proper time reversal transformation can possibly turn around. Magnetic fields are not--either logically or conceptually--the rates of change of anything.
As such Albert concludes that electrodynamics is not time reversal invariant.
The story in non-relativistic quantum mechanics is similar. There is no disagreement that the instantaneous state is given by the wave function. But standard time reversal involves more than merely reversing the temporal order of states, namely it also involves taking the complex conjugate of the wave function. Merely reversing the temporal order
does not correspond to a symmetry of the theory and so Albert and Callender conclude that the theory is not time reversal invariant.
So Albert's analysis differs from the standard one on two accounts. First, there is the different notion of instantaneous state. In essence, Albert takes the instantaneous state at a time to be determined by the fundamental ontology (that is, by what exists on the fundamental level according to the theory). For example, the ontology may be one of, say, particles or fields, so that the instantaneous state at a time consists of particle positions or field configurations at that time. On the other hand, on the standard account, the instantaneous state is such that it determines a unique solution to the equations of motion and as such may contain more variables than Albert's instantaneous state. For example, it may also include particle velocities, field velocities,.... Second, there is the notion of time reversal invariance, which is just the temporal order reversal of instantaneous states for Albert, whereas there might be an additional (involutive) state transformation at each instant according to the standard account. The examples of electrodynamics and quantum mechanics show that the second difference is essential for the different conclusion concerning the question of time reversal invariance.
This issue is important because if a theory is not time reversal invariant, then it could be argued that time has an objective direction according to that theory (while it has no bearings on issues like the arrow of time [1, 3]). There is a large body of interesting literature defending the standard time reversal transformations contra Albert and Callender [4, 5, 6, 7, 8, 9, 10, 11]. In particular, efforts are made to make precise the notion of time reversal invariance. From the standard textbook account one may get the impression that the time reversal transformation of the instantaneous state is somewhat arbitrary and is chosen so as to make the theory time reversal invariant. It is a virtue of the Albert and Callender account that it does not depend on desiderata of this kind. Two other precise notions that stand out are that of 'active time reversal' [5, 8, 12, 13] and a notion of Malament sometimes called 'geometric time reversal' [6, 7, 8]. An active transformation is defined through the notion of a passive transformation. Given geometric quantities that are expressed in a coordinate system with time \(t\), a passive transformation expresses these quantities in a different coordinate system with time \(t^{\prime}=-t\). An active transformation keeps the coordinate system fixed but transforms the quantities as in the passive transformation. Geometric time reversal differs from active time reversal. In short, geometric time reversal corresponds to flipping just the temporal orientation, but keeping the geometrical objects fixed. Representations of these geometrical objects may depend on the temporal orientation and may hence change under geometric time reversal. As in the case of active time reversal, the way the instantaneous state transforms depends on the type of geometrical objects that exist. This makes that the transformation may be non-trivial so that time reversal may amount to more than the mere temporal order reversal of Albert and Callender.
In this paper, we will not so much engage in the discussion on which is the better notion of time reversal invariance. Rather, the goal of the paper is to show that theories like electrodynamics and quantum mechanics can be considered time reversal invariant according to the Albert-Callender notion, provided a suitable ontology is chosen.
Namely, the ontology of these theories is underdetermined (especially when it comes to field ontologies). Different ontologies yield different instantaneous states. So whether a theory counts as time reversal invariant depends on what is considered to be the ontology. Ontologies can be found such that the theories are time reversal invariant in the Albert-Callender sense.
The role of ontology has been emphasized before, notably in [8, 14]. Arntzenius and Greaves show that different ontologies exist for which electrodynamics is time reversal invariant under geometric time reversal. They also consider Albert's view on electrodynamics. But while they consider it as internally coherent, they do not further pursue it because the electric and magnetic fields are regarded as a suitable ontology for a Newtonian space-time, but not for a relativistic one. Allori [14] compares different views including those of Albert and Malament and argues that the difference lies (in part) in the choice of ontology.
Since the Albert-Callender notion of time reversal invariance seems to involve mere reversal of temporal order, it seems stronger than the other notions of active and geometric time reversal. For theories that are standardly considered to be time-reversal invariant, ontologies may be found such that these theories are invariant under temporal order reversal. But not necessarily the other way around. For example, the standard model of particle physics is actually not time reversal invariant according to the standard notion. (It is merely CPT-invariant, that is, invariant under the joint transformation of charge conjugation, parity and time reversal.) But no choice of ontology will make the theory invariant under the temporal order reversal of Albert and Callender.
The ontologies we consider for electrodynamics and non-relativistic quantum mechanics also make the theories invariant under active and geometric time reversal. Unlike for electrodynamics, this has to our knowledge not yet been achieved for non-relativistic quantum mechanics. Roberts discusses the latter in detail and defends the usual transformation of the wave function under time reversal, but on different grounds [9, 10].
The outline of the paper is as follows. In the next section, we start with introducing the relevant notions. Then, in section 3, we will consider the ontological implications concerning time reversal invariance in the case of a scalar field. In section 4, we will consider ontologies for classical electrodynamics and quantum mechanics which make these theories time reversal invariant in the Albert-Callender sense. With these choices of ontology, the time reversal transformation happens to coincide with the standard one (just as it does in the case of Newtonian mechanics). There are also examples for which this is not the case. In section 5, we will illustrate this with scalar electrodynamics, which describes a scalar field interacting with an electromagnetic field. An ontology will be presented for which the Albert and Callender notion of time reversal does not coincide with the standard one, but rather with the joint transformation of time reversal and charge conjugation. Finally, in section 6, a comparison is made with active and geometric time reversal in the case of electrodynamics. We conclude in section 7.
## 2 Instantaneous state and time reversal
Let us first formalize some notions, following Albert [1]. The instantaneous state at a certain time \(t\) is denoted by \(S(t)\). As mentioned before, for Albert, the instantaneous state at a time is determined by the ontology. For example, in the case of a particle ontology, the instantaneous state at a time is given by the particle positions at that time. (There might be non-dynamical variables such as for example the charges or masses of particles which may be part of the state, but which do not play a role in the arguments concerning time reversal invariance. Therefore, we will only explicitly include the dynamical variables in the state specification.) One may consider all kinds of other quantities such as velocities, accelerations, momenta, angular momenta, energies, etc., but these are not fundamental quantities; they can be derived from the trajectories of the particles. On the other hand, the standard notion of instantaneous state (in this context) includes extra variables at that time such that for a deterministic theory the instantaneous state together with the equations of motion determines a unique solution. For example, in the case of a particle ontology these variables could be the velocities or the momenta. We will add the subscripts \(a\) and \(s\) and write \(S_{a}(t)\) and \(S_{s}(t)\) to refer to respectively Albert's notion and the standard notion of state. (Albert elaborates more on the notion, but this suffices for our purposes.)
For a given history \(t\to S(t)\), the time-reversed history is denoted by \(t\to T(S)(t)\). For Albert, the time-reversed history is \(t\to T_{a}(S_{a})(t)\), with \(T_{a}(S_{a})(t)=S_{a}(-t)\).1 So time reversal is merely a reversal of the temporal order of the instantaneous states. It also induces a transformation of the non-fundamental quantities like velocities and momenta, etc., which may amount to more than their mere order reversal. According to the standard notion, given a history \(t\to S_{s}(t)\), the time-reversed history is \(T_{s}(S_{s})(t)=S_{s}^{T}(-t)\), where the superscript \(T\) denotes some additional involutive operation on each instantaneous state in addition to the order reversal.
Footnote 1: We consider only time-translation invariant theories so that there is nothing special about \(t=0\) in the definition of a time-reversed history.
A theory is time reversal invariant if for each dynamically allowed history, that is, each possible solution to the equations of motion, its time-reversed history is also dynamically allowed.
Let us give some examples. First consider Newtonian mechanics, where the ontology is given by point-particles with positions \(({\bf X}_{1},\ldots,{\bf X}_{n})\). The equations of motion read
\[m_{k}\frac{d^{2}{\bf X}_{k}}{dt^{2}}=-{\mathbf{\nabla}}_{k}V({\bf X}_{1},\ldots,{ \bf X}_{n}), \tag{1}\]
with \(m_{k}\) the mass of the \(k\)-th particle and \(V\) a potential which depends on just the positions (not on time or the velocities). According to the standard notion, the instantaneous state at a time \(t\) is the collection of positions and velocities \({\bf V}_{k}\) at that time, where
\[{\bf V}_{k}(t)=\frac{d{\bf X}_{k}(t)}{dt}, \tag{2}\]
so that
\[S_{s}(t)=({\bf X}_{1}(t),\ldots,{\bf X}_{n}(t),{\bf V}_{1}(t),\ldots,{\bf V}_{n}( t)). \tag{3}\]
The time reversal operation is
\[T_{s}:S_{s}(t)\to S_{s}^{T}(-t)=({\bf X}_{1}(-t),\ldots,{\bf X}_{n}(-t),-{\bf V}_{ 1}(-t),\ldots,-{\bf V}_{n}(-t)). \tag{4}\]
Since this operation maps solutions to solutions, Newtonian mechanics is time reversal invariant according to the standard notion.2
Footnote 2: For this conclusion to obtain, it is important that the potential does not depend on time or the particle velocities.

According to Albert, the instantaneous state at time \(t\) is
\[S_{a}(t)=({\bf X}_{1}(t),\ldots,{\bf X}_{n}(t)). \tag{5}\]
It does not include the instantaneous velocities, but these are determined by the collection of instantaneous states, that is, as the rates of change of the positions, by (2). The time reversal transformation is
\[T_{a}:S_{a}(t)\to S_{a}(-t)=({\bf X}_{1}(-t),\ldots,{\bf X}_{n}(-t)), \tag{6}\]
with the induced transformation of the velocities as in (4). The upshot is that also according to Albert, Newtonian mechanics is time reversal invariant.
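To see this explicitly, write \({\bf X}^{\prime}_{k}(t)={\bf X}_{k}(-t)\) for the time-reversed trajectories. Then
\[m_{k}\frac{d^{2}{\bf X}^{\prime}_{k}(t)}{dt^{2}}=m_{k}\frac{d^{2}{\bf X}_{k}}{ds^{2}}\bigg|_{s=-t}=-\mathbf{\nabla}_{k}V({\bf X}_{1}(-t),\ldots,{\bf X}_{n}(-t))=-\mathbf{\nabla}_{k}V({\bf X}^{\prime}_{1}(t),\ldots,{\bf X}^{\prime}_{n}(t)),\]
so the order-reversed history again satisfies (1), while the velocities \(d{\bf X}^{\prime}_{k}(t)/dt=-{\bf V}_{k}(-t)\) automatically pick up the sign flip appearing in (4).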
Let us now turn to classical electrodynamics. Consider an ontology given by point-particles together with an electric and magnetic field \({\bf E}({\bf x},t)\) and \({\bf B}({\bf x},t)\).3
Footnote 3: The electric and magnetic field can also be written in terms of the electromagnetic tensor
\[F^{\mu\nu}=\begin{pmatrix}0&E_{1}&E_{2}&E_{3}\\ -E_{1}&0&B_{3}&-B_{2}\\ -E_{2}&-B_{3}&0&B_{1}\\ -E_{3}&B_{2}&-B_{1}&0\end{pmatrix}. \tag{7}\]
The laws of motion are given by the Lorentz force law4
Footnote 4: We assume units such that \(c=\hbar=1\) throughout.
\[\frac{d}{dt}\left(m_{r,k}\frac{d{\bf X}_{k}}{dt}\right)=e_{k}\left[{\bf E}({ \bf X}_{k},t)+\frac{d{\bf X}_{k}}{dt}\times{\bf B}({\bf X}_{k},t)\right], \tag{8}\]
where \(m_{r,k}\) is the relativistic mass of the \(k\)-th particle and \(e_{k}\) its charge, together with Maxwell's equations
\[\mathbf{\nabla}\cdot{\bf E}=\rho,\qquad\mathbf{\nabla}\cdot {\bf B}=0, \tag{9}\]
\[\mathbf{\nabla}\times{\bf E}=-\frac{\partial{\bf B}}{\partial t}, \qquad\mathbf{\nabla}\times{\bf B}={\bf J}+\frac{\partial{\bf E}}{ \partial t}, \tag{10}\]
where \(\rho({\bf x},t)=\sum_{k}e_{k}\delta({\bf x}-{\bf X}_{k}(t))\) and \({\bf J}({\bf x},t)=\sum_{k}e_{k}\frac{d{\bf X}_{k}(t)}{dt}\delta({\bf x}-{\bf X }_{k}(t))\) are respectively the charge density and the charge current. According to the standard account, the instantaneous state is
\[S_{s}(t)=({\bf X}_{1}(t),\ldots,{\bf X}_{n}(t),{\bf V}_{1}(t),\ldots,{\bf V}_{ n}(t),{\bf E}({\bf x},t),{\bf B}({\bf x},t)) \tag{11}\]
and under time reversal
\[T_{s}:S_{s}(t)\to S_{s}^{T}(-t)=({\bf X}_{1}(-t),\ldots,{\bf X}_{n}(-t),-{\bf V}_ {1}(-t),\ldots,-{\bf V}_{n}(-t),{\bf E}({\bf x},-t),-{\bf B}({\bf x},-t)). \tag{12}\]
It is crucial that the magnetic field flips sign under this operation as it guarantees the invariance of the equations of motion.
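Indeed, under (12) Faraday's law is preserved: writing \({\bf E}^{\prime}({\bf x},t)={\bf E}({\bf x},-t)\) and \({\bf B}^{\prime}({\bf x},t)=-{\bf B}({\bf x},-t)\), we have
\[\mathbf{\nabla}\times{\bf E}^{\prime}({\bf x},t)=(\mathbf{\nabla}\times{\bf E})({\bf x},-t)=-\frac{\partial{\bf B}}{\partial t}({\bf x},-t)=-\frac{\partial{\bf B}^{\prime}({\bf x},t)}{\partial t},\]
so that the first equation of (10) holds for the primed fields; leaving the magnetic field unchanged would instead produce the opposite sign.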
Albert takes the instantaneous state to be
\[S_{a}(t)=({\bf X}_{1}(t),\ldots,{\bf X}_{n}(t),{\bf E}({\bf x},t),{\bf B}({\bf x },t)) \tag{13}\]
and under time reversal
\[T_{a}:S_{a}(t)\to S_{a}(-t)=({\bf X}_{1}(-t),\ldots,{\bf X}_{n}(-t),{\bf E}({ \bf x},-t),{\bf B}({\bf x},-t)). \tag{14}\]
There is no sign flip of the magnetic field; it is not the rate of change of anything. As such, Albert concludes that the equations of motion are not time reversal invariant; the transformation (14) is not a symmetry of the equations of motion (it does not map solutions to solutions). The standard time reversal transformation (12) is still a symmetry of the equations of motion, but for Albert this does not amount to time reversal symmetry.
In non-relativistic quantum mechanics the situation is similar and is detailed by Callender [2]. To avoid the interpretational issues that arise in this context, we will regard the Schrodinger equation as just a classical field equation. For simplicity, we will also consider just a single "particle". The field ontology is then represented by the wave function \(\psi({\bf x},t)\) and the Schrodinger equation is
\[{\rm i}\frac{\partial\psi({\bf x},t)}{\partial t}=-\frac{1}{2m}\nabla^{2}\psi ({\bf x},t)+V({\bf x})\psi({\bf x},t). \tag{15}\]
The instantaneous state is
\[S_{s}(t)=S_{a}(t)=\psi({\bf x},t). \tag{16}\]
The Schrodinger equation is invariant under the standard time reversal operation
\[T_{s}:S_{s}(t)\to S_{s}^{T}(-t)=\psi^{*}({\bf x},-t), \tag{17}\]
but not under the mere reversal of temporal order of instantaneous states
\[T_{a}:S_{a}(t)\to S_{a}(-t)=\psi({\bf x},-t). \tag{18}\]
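To see the difference explicitly, let \(\psi({\bf x},t)\) solve (15) and set \(\tilde{\psi}({\bf x},t)=\psi^{*}({\bf x},-t)\). Complex conjugation of (15) followed by the substitution \(t\to-t\) gives

\[{\rm i}\frac{\partial\tilde{\psi}({\bf x},t)}{\partial t}=-\frac{1}{2m}\nabla^{2}\tilde{\psi}({\bf x},t)+V({\bf x})\tilde{\psi}({\bf x},t),\]

since the sign flips coming from conjugation and from \(t\to-t\) cancel on the left-hand side, so \(\tilde{\psi}\) is again a solution. By contrast, \(\psi({\bf x},-t)\) satisfies (15) with the sign of the right-hand side reversed and is in general not a solution, which is why (18) fails to be a symmetry.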
## 3 Ontological underdetermination
There tends to be an underdetermination in the ontology of physical theories. Newtonian mechanics is usually regarded as a theory about point-particles. But one could consider different possible ontologies. In particular, there are ontologies for which the theory is no longer invariant under mere temporal order reversal. For example, take the
ontology to be given by point-particles, with positions \({\bf X}_{k}\), endowed with vectors \({\bf P}_{k}\) at the particle locations, and take the dynamics to be given by
\[m_{k}\frac{d{\bf X}_{k}}{dt}={\bf P}_{k},\qquad\frac{d{\bf P}_{k}}{dt}=-\mathbf{\nabla}_{k}V({\bf X}_{1},\ldots,{\bf X}_{n}). \tag{19}\]
This is of course recognized as the phase space formulation of Newtonian mechanics. But while the ontology is (usually) still taken to consist of just the point-particles, with (19) corresponding to a particular way of expressing the particle dynamics, we consider here an alternative ontology given by the point-particles together with the vectors \({\bf P}_{k}\). That the \({\bf P}_{k}\) are related to the rates of change of the positions is not taken as a kinematical fact, but as a dynamical fact.5 With this ontology, the instantaneous state is \(S_{a}(t)=({\bf X}_{1}(t),\ldots,{\bf X}_{n}(t),{\bf P}_{1}(t),\ldots,{\bf P}_{ n}(t))\) and the theory is no longer invariant under mere temporal order reversal.
Footnote 5: To appreciate the difference, consider a different (physically unmotivated) dynamics for which this relation no longer holds, like \(d{\bf X}_{k}/dt=0\), \(d{\bf P}_{k}/dt=0\).
Consider now the theory of a real scalar field \(\phi({\bf x},t)\), satisfying the Klein-Gordon equation
\[\partial_{\mu}\partial^{\mu}\phi+m^{2}\phi=0. \tag{20}\]
The ontology is given by the scalar field \(\phi({\bf x},t)\). But one could also consider a phase space representation, with an ontology given by the fields \((\phi({\bf x},t),\pi({\bf x},t))\), satisfying
\[\frac{\partial\phi}{\partial t}=\pi,\qquad\frac{\partial\pi}{\partial t}= \nabla^{2}\phi-m^{2}\phi. \tag{21}\]
As with the alternative ontology for Newtonian mechanics above, the fields \(\phi({\bf x},t)\) and \(\pi({\bf x},t)\) should be taken as ontologically independent fields. That the field \(\pi\) happens to agree with the velocity \(\partial\phi/\partial t\) is merely a dynamical fact and not a kinematical one.
The scalar field theory can also be written in terms of a 5-component spinor \(\psi({\bf x},t)\) which satisfies the Kemmer equation [15, 16]
\[{\rm i}\beta^{\mu}\partial_{\mu}\psi-m\psi=0. \tag{22}\]
This is a Dirac-like equation, which is manifestly Lorentz invariant, just as the Klein-Gordon equation (20). This theory is completely equivalent to the Klein-Gordon theory. (In a particular representation of the Kemmer matrices \(\beta^{\mu}\), the Kemmer equation implies \(\psi=(\partial_{\mu}\phi,m\phi)^{T}\).) Despite the equivalence, this form of the theory is hardly used, due to its greater complexity. However, this is not a reason not to consider an ontology in terms of the Kemmer spinor. Actually, when it comes to spin-1/2 particles, the Dirac equation which is a first-order differential equation (which is the analogue of (22)) is the one commonly used, instead of the somewhat simpler second-order Van der Waerden equation for a two-component spinor [17] (which is the analogue of (20)).
So for the scalar field theory, we have considered three possible candidates for the ontology and hence for the instantaneous state \(S_{a}\). Namely,
\[S_{a}^{(1)}(t)=\phi({\bf x},t),\qquad S_{a}^{(2)}(t)=(\phi({\bf x},t),\pi({\bf x },t)),\qquad S_{a}^{(3)}(t)=\psi({\bf x},t). \tag{23}\]
Only the first one yields time reversal invariance under temporal order reversal. The time reversal operation \(T_{a}\) in this case also corresponds to the standard one. According to the standard notion, the theory is time reversal invariant for all these choices of ontology. (Similarly, in the case of a (free) spin-1/2 particle, for which the state \(S_{a}\) can be taken to be a Dirac spinor or a Van der Waerden spinor, only the latter will amount to invariance under temporal order reversal.)
So whether a theory is invariant under temporal order reversal depends on the choice of ontology. In the case of a classical field theory, different ontologies seem possible with no clear physical preference. (Even the requirement of manifest Lorentz invariance leaves options \(S_{a}^{(1)}\) and \(S_{a}^{(3)}\).) In the next section, we will show that the underdetermination of the ontology in the case of classical electrodynamics and non-relativistic quantum mechanics can be exploited to choose one such that the theory is invariant under mere temporal order reversal.
## 4 Ontology of electrodynamics and quantum mechanics
Maxwell's equations imply6
Footnote 6: The action of \(1/\nabla^{2}\) is defined in terms of the Green function of the Laplacian as \((1/\nabla^{2})f({\bf x})=-\frac{1}{4\pi}\int d^{3}yf({\bf y})/|{\bf x}-{\bf y}|\). The expression (24) follows if the fields fall off sufficiently fast at spatial infinity.
\[{\bf B}=-\frac{1}{\nabla^{2}}\mathbf{\nabla}\times\left({\bf J}+ \frac{\partial{\bf E}}{\partial t}\right). \tag{24}\]
This expression can be used to eliminate the magnetic field from Maxwell's equations and the Lorentz force law. Maxwell's equations are then expressed as
\[\mathbf{\nabla}\cdot{\bf E}=\rho,\qquad\frac{\partial^{2}{\bf E}}{ \partial t^{2}}-\nabla^{2}{\bf E}=-\mathbf{\nabla}\rho-\frac{\partial {\bf J}}{\partial t}. \tag{25}\]
The resulting formulation of the theory is completely equivalent to the original one, where the latter can be obtained using (24) as a definition of the magnetic field. The number of equations of motion is halved, but the equations have become second order in the time derivatives rather than first order. The reformulation suggests that the ontology of the electromagnetic field can be taken to be just the electric field, so that
\[S_{a}(t)=({\bf X}_{1}(t),\ldots,{\bf X}_{n}(t),{\bf E}({\bf x},t)), \tag{26}\]
with the field equations given by (25). This theory is invariant under mere temporal order reversal. There is no problem with the magnetic field because it is simply not part of the ontology (and the theory). The magnetic field could be defined as in (24), in terms of the particles and the electric field. From that definition it follows that under temporal order reversal of the state \(S_{a}\), the magnetic field will flip sign. So on this view the magnetic field does play the role of a velocity, since it is a linear combination of the
rates of change of the particle positions (through the charge current) and the electric field.
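Explicitly, under \(T_{a}\) one has \({\bf X}_{k}(t)\to{\bf X}_{k}(-t)\) and \({\bf E}({\bf x},t)\to{\bf E}({\bf x},-t)\), so that \({\bf J}({\bf x},t)\to-{\bf J}({\bf x},-t)\) and \(\partial{\bf E}/\partial t\to-(\partial{\bf E}/\partial t)({\bf x},-t)\). The definition (24) then yields

\[{\bf B}({\bf x},t)\to-\frac{1}{\nabla^{2}}\mathbf{\nabla}\times\left(-{\bf J}({\bf x},-t)-\frac{\partial{\bf E}}{\partial t}({\bf x},-t)\right)=-{\bf B}({\bf x},-t),\]

which is exactly the sign flip needed for the Lorentz force law (8) and Maxwell's equations (9)-(10) to keep their form.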
Note that we could also have eliminated the electric field in terms of the magnetic field. But then the resulting theory would not be invariant under temporal order reversal. So there is no technical reason why it is more natural to assume the ontology to be given by the electric field rather than the magnetic field.
Nevertheless, there is an issue with this ontology. It is an ontology that is suitable for Newtonian space-time, but not so much for a Minkowski space-time, which is the natural space-time in this context due to the Lorentz invariance of electrodynamics (see also [8]). So rather than having a 3-vector as constituting the fundamental ontology, it would be more desirable to have Lorentz-covariant objects, like the electromagnetic tensor (7). However, such an ontology does not make the theory invariant under mere temporal order reversal. The same holds for the ontology considered by Malament [6], where the electromagnetic field is a map from tangent lines to forces. We will discuss this further in section 6 when comparing to other notions of time reversal.
A manifestly Lorentz invariant theory that is invariant under temporal order reversal could be obtained by completely removing the fields from the ontology, so that only the particles remain. This is attempted in the Wheeler-Feynman theory [18, 19]. To see how this theory is obtained, consider the covariant form of electrodynamics
\[\partial_{\mu}F^{\mu\nu}(x)=\sum_{k}j^{\nu}_{k}(x),\qquad m_{k}\frac{d^{2}X^{ \mu}_{k}(s_{k})}{ds_{k}^{2}}=e_{k}F^{\mu}{}_{\nu}(X_{k}(s_{k}))\frac{dX^{\nu}_ {k}(s_{k})}{ds_{k}}, \tag{27}\]
where \(X^{\mu}_{k}(s_{k})\) is the world line of the \(k\)-th particle, parameterized by its proper time \(s_{k}\), \(A^{\mu}(x)\) is the electromagnetic potential and \(F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}\) is the electromagnetic tensor, with \(E_{i}=F_{0i}\) and \(B_{i}=-\epsilon_{ijk}F^{jk}/2\) (see (7)), and \(j^{\mu}_{k}(x)=e_{k}\int ds\frac{dX^{\mu}_{k}(s)}{ds}\delta(x-X_{k}(s))\) the charge current produced by the \(k\)-th charge. Assuming the Lorenz gauge \(\partial_{\mu}A^{\mu}=0\), the Maxwell equations can be written as
\[\Box A^{\mu}(x)=\sum_{k}j^{\mu}_{k}(x). \tag{28}\]
The potential can be decomposed as \(A^{\mu}_{F}+A^{\mu}_{M}\) with \(A^{\mu}_{F}\) a field satisfying the free Maxwell equations \(\Box A^{\mu}_{F}(x)=0\) and
\[A^{\mu}_{M}(x)=\frac{1}{\Box}\sum_{k}j^{\mu}_{k}(x), \tag{29}\]
where \(1/\Box\) denotes a convolution with a Green's function \(G\) of the d'Alembertian. There are various choices for \(G\); one could take the retarded Green's function \(G_{-}\), the advanced one \(G_{+}\) or linear combinations.7 Different choices imply different free fields \(A^{\mu}_{F}\). Wheeler and Feynman chose \(G=(G_{+}+G_{-})/2\). Eq. (29) is then taken as a definition of \(A^{\mu}_{M}\), rather than as (part of) a dynamical equation. In the Lorentz force law in (27), the
self-force is subtracted to avoid infinities (so that the sum ranges over \(l\neq k\) in (30)). Furthermore, it is assumed there are no free fields, so that \(A_{F}^{\mu}=0\). In this way, the fields are completely eliminated from the theory. There are just particles, satisfying the equations of motion
\[m_{k}\frac{d^{2}X_{k}^{\mu}(s_{k})}{ds_{k}^{2}}=e_{k}\sum_{l\neq k}\left[\partial ^{\mu}\frac{1}{\Box}j_{l\nu}(X_{k}(s_{k}))-\partial_{\nu}\frac{1}{\Box}j_{l}^{ \mu}(X_{k}(s_{k}))\right]\frac{dX_{k}^{\nu}(s_{k})}{ds_{k}}. \tag{30}\]
The instantaneous state is now
\[S_{a}(t)=({\bf X}_{1}(t),\ldots,{\bf X}_{n}(t)) \tag{31}\]
and it can easily be checked that the theory is invariant under temporal order reversal.8
Footnote 8: To establish this time reversal invariance, it is crucial that the Green’s function \(G=(G_{+}+G_{-})/2\) is chosen in (29), since this function satisfies \(G({\bf x},t;{\bf x}^{\prime},t^{\prime})=G({\bf x},-t;{\bf x}^{\prime},-t^{ \prime})\). For any other choice of \(G\), the dynamics (30) will not be time reversal invariant. In addition, the transformation law of \(A_{M}\) (32) only holds for the Wheeler-Feynman choice. It is also crucial for the time reversal invariance that the free field is assumed zero. If it is not zero and assumed to be part of the ontology, then while the free Maxwell equations \(\Box A_{F}^{\mu}=0\) are invariant under temporal order reversal, the Lorentz force law will not be.
There are no fields, but if one defines \(A^{\mu}\) through (29) (with the Wheeler-Feynman choice of Green's function), the time reversal transformation \(T_{a}\) on the state \(S_{a}\) induces the following transformation of \(A_{M}^{\mu}\)
\[A_{M}^{0}({\bf x},t)\to A_{M}^{0}({\bf x},-t),\qquad A_{M}^{i}({\bf x},t) \rightarrow-A_{M}^{i}({\bf x},-t). \tag{32}\]
This transformation agrees with the standard transformation of the vector potential found in textbooks. It further induces the standard transformations of the electric and magnetic field if one defines them through the usual definitions, including a sign flip of the magnetic field.9
Footnote 9: Interestingly, in his defense of the standard notion of time reversal invariance, Earman also considers \(A_{M}^{\mu}\)[4]. He proposes to take (28) as ‘the definition of the four-potential arising from [the current]’. This could be read in the sense considered here, namely that there is no independent reality for the field. However, Earman seems to have had merely the intention of showing that if one accepts the usual transformation of the particles, then one should also accept the usual transformation of the vector potential. But this is akin to stating that the vector potential should transform the usual way, just to make the theory time reversal invariant. Because if \(A_{M}^{\mu}\) (or the electromagnetic field) is taken as part of the ontology, then (28) should be taken as a law.
There is debate about the empirical adequacy of the theory, but the important point for our purposes is that it is a theory that is manifestly Lorentz invariant and that it is invariant under temporal order reversal. Other choices of ontologies may be possible that achieve this and perhaps also include a free field.
Allori [14] considers yet another option to have invariance under temporal order reversal, which she attributes to Horwich [20]. On this view, the electric and magnetic field are not part of the fundamental ontology. There is just a particle ontology, like in the Wheeler-Feynman theory. But unlike in the latter, the electromagnetic field still appears in Horwich's account of the theory. But the field has a nomological character
rather than an ontological one, that is, the field merely plays a role in the dynamics of the particles. The particles are said to constitute the 'primitive ontology'. The fields then transform the way they do under time reversal just to have the primitive ontology transform the right way. The approach we consider here is different. We have not relegated some parts of the standard ontology to the nomological domain, but rather we have eliminated them completely (in particular also from the dynamics). For example, in our first proposal, the magnetic field was no longer part of the theory, neither on the ontological nor on the nomological level.
The non-relativistic Schrodinger equation can be dealt with similarly. Writing \(\psi=\psi_{r}+{\rm i}\psi_{i}\), with \(\psi_{r}\) and \(\psi_{i}\) real, the Schrodinger equation (15) amounts to the following set of coupled differential equations
\[\partial_{t}\psi_{r}=-H\psi_{i}, \tag{33}\]
\[\partial_{t}\psi_{i}=H\psi_{r}, \tag{34}\]
where \(H=-\frac{1}{2m}\nabla^{2}+V({\bf x})\) is the Hamiltonian operator. Taking the time derivative of (33) and using (34), leads to10
Footnote 10: This equation has been considered by a number of people, including Schrödinger himself [21]. This equation is also encountered in the reduced phase space formulation of the Schrödinger theory [22]. In the reduced phase space formulation, \(\psi_{r}\) and \(\psi_{i}\) become canonically conjugate variables. An inverse Legendre transformation leads to the Lagrangian for just \(\psi_{r}\), whose Euler-Lagrange equation corresponds to (35). Actually, the second-order equations (25) for electromagnetism could be obtained similarly since the electric and magnetic field are (approximately) canonically conjugate. The analogy between the second-order equations for electromagnetism and non-relativistic quantum mechanics also shows up if the Riemann-Silberstein vector \({\bf F}={\bf E}+{\rm i}{\bf B}\) is used for the electromagnetic field [23]. The free Maxwell equations imply the Schrödinger-like equation \({\rm i}\partial{\bf F}/\partial t=\mathbf{\nabla}\times{\bf F}\), together with the constraint \(\mathbf{\nabla}\cdot{\bf F}=0\). Eliminating the complex part of \({\bf F}\) then corresponds to eliminating the magnetic field.
\[\frac{\partial^{2}\psi_{r}}{\partial t^{2}}=-H^{2}\psi_{r}. \tag{35}\]
In this way the imaginary part is eliminated. Rather than taking the ontology to be given by \(\psi\), it can be taken to be just \(\psi_{r}\), satisfying the real wave equation (35). The theory is still equivalent to the Schrodinger equation, by defining 11
Footnote 11: The inverse of the Hamiltonian operator \(1/H\) can be defined in terms of the Green’s function for \(H\). We have hereby assumed that the inverse of \(H\) does indeed exist. It might not exist if the spectrum includes zero. However, if the spectrum is bounded from below, this can easily be taken care of by shifting the spectrum, via the Hamiltonian \(H^{\prime}=H+E\), with \(E\) a constant such that the spectrum of \(H^{\prime}\) no longer includes zero. On the level of the Schrödinger equation this shift leads to an equivalent theory, since it merely entails a phase shift of the solutions, given by \(\psi^{\prime}={\rm e}^{-{\rm i}Et}\psi\).
\[\psi_{i}=\frac{1}{H}\partial_{t}\psi_{r}. \tag{36}\]
The equation (35) is second order in the time derivative, so that the theory is time reversal invariant under mere temporal order reversal.12 The usual time reversal transformation (17) for \(\psi\) is recovered since the definition (36) for the imaginary part \(\psi_{i}\)
entails that it transforms as the time derivative of \(\psi_{r}\). So \(\psi_{i}\) roughly plays the role of a field velocity.
Footnote 12: That is, \(\psi_{i}\) is a function of the time derivative of \(\psi_{r}\), and hence transforms like a velocity under temporal order reversal.
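Concretely, if \(\psi_{r}({\bf x},t)\) solves (35), then so does \(\psi_{r}({\bf x},-t)\), since only second time derivatives appear. The definition (36) then induces

\[\psi_{i}({\bf x},t)\to\frac{1}{H}\,\partial_{t}\big[\psi_{r}({\bf x},-t)\big]=-\frac{1}{H}(\partial_{t}\psi_{r})({\bf x},-t)=-\psi_{i}({\bf x},-t),\]

so that \(\psi=\psi_{r}+{\rm i}\psi_{i}\to\psi_{r}({\bf x},-t)-{\rm i}\psi_{i}({\bf x},-t)=\psi^{*}({\bf x},-t)\), in agreement with (17).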
## 5 Time reversal invariance with different time reversal transformations
In the previous section, examples of ontologies were provided that make electrodynamics and the non-relativistic Schrodinger equation invariant under mere temporal order reversal. This was done by respectively removing the magnetic field and the imaginary part of the wave function from the ontology. In these cases, the transformation was in agreement with the standard time reversal transformation. That is, the temporal order reversal of the fundamental variables was just the standard time reversal transformation, as was the induced transformation of the non-fundamental variables. However, this need not always be the case. The transformations may disagree, yet yield time reversal invariance under both notions. Consider for example a complex scalar field \(\phi\) satisfying the Klein-Gordon equation (20), which describes a charged spinless field. According to the standard notion of time reversal, the field should transform as \(\phi({\bf x},t)\rightarrow\phi^{*}({\bf x},-t)\), whereas under mere temporal order reversal, taking \(S_{a}(t)=\phi({\bf x},t)\), \(T_{a}:\phi({\bf x},t)\rightarrow\phi({\bf x},-t)\). So there is disagreement about what counts as time reversal. Yet, both transformations are symmetries of the Klein-Gordon equation (they map solutions to solutions) and hence according to both notions the theory is time reversal invariant. (The same is true for the Van der Waerden equation that was mentioned in section 3. With the Van der Waerden spinor as ontology, the theory is invariant under temporal order reversal, even though this does not amount to the standard time reversal operation.)
The previous example can be extended to include an electromagnetic field. In terms of the scalar field \(\phi\) and the vector potential \(A^{\mu}\), this theory (scalar electrodynamics) has the equations of motion
\[D_{\mu}D^{\mu}\phi+m^{2}\phi=0,\qquad\partial_{\mu}F^{\mu\nu}=j^{\nu}, \tag{37}\]
where \(D_{\mu}=\partial_{\mu}+{\rm i}eA_{\mu}\) is the covariant derivative and
\[j^{\mu}={\rm i}e\left[\phi^{*}D^{\mu}\phi-\phi(D^{\mu}\phi)^{*}\right] \tag{38}\]
is the charge current. The theory is invariant under the standard time reversal operation
\[\phi({\bf x},t)\to\phi^{*}({\bf x},-t),\qquad A^{0}({\bf x},t)\to A^{0}({\bf x},-t),\qquad A^{i}({\bf x},t)\to-A^{i}({\bf x},-t). \tag{39}\]
But taking \(\phi\) and \(A^{\mu}\) as the ontology does not make this theory invariant under temporal order reversal.
Consider now the temporal gauge \(A_{0}=0\). Then the equations of motion are
\[\frac{\partial^{2}\phi}{\partial t^{2}}-{\bf D}\cdot{\bf D}\phi+m^{2}\phi=0, \quad\Box{\bf A}+\mathbf{\nabla}(\mathbf{\nabla}\cdot{\bf A})={\bf j},\quad-\mathbf{\nabla }\cdot\frac{\partial{\bf A}}{\partial t}=j_{0}, \tag{40}\]
with now \({\bf D}=\mathbf{\nabla}-{\rm i}e{\bf A}\) and the charge density and 3-current respectively given by
\[j_{0}={\rm i}e\left(\phi^{*}\frac{\partial\phi}{\partial t}-\phi\frac{\partial \phi^{*}}{\partial t}\right),\qquad{\bf j}={\rm i}e\left[\phi{\bf D}\phi^{*}- \phi^{*}{\bf D}\phi\right]. \tag{41}\]
Taking the state to be
\[S_{a}(t)=\left(\phi({\bf x},t),{\bf A}({\bf x},t)\right), \tag{42}\]
then it is readily checked that the theory is invariant under temporal order reversal \(T_{a}:S_{a}(t)\to S_{a}(-t)\). Nevertheless, this is a symmetry different from the standard time reversal symmetry (39). The transformation \(T_{a}\) considered here actually corresponds to the joint time reversal (T) and charge conjugation (C) in the standard picture. Namely, under charge conjugation, one has \(\phi\to\phi^{*}\) and \(A^{\mu}\to-A^{\mu}\). So in this case, the temporal order reversal transformation \(T_{a}\) amounts to the joint \(TC\) transformation of the standard picture. (This explains why under \(T_{a}\) the charge density in (41) flips sign while the spatial current does not. That is, they transform as \(j_{0}({\bf x},t)\to-j_{0}({\bf x},-t)\), \({\bf j}({\bf x},t)\to{\bf j}({\bf x},-t)\), which is precisely the joint \(TC\) transformation of the current.) A similar point is made in [8, 12, 13], where it is argued that the joint CPT transformation is really a PT transformation.
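To verify this, note that under \(T_{a}\) each time derivative in (41) picks up a minus sign, so that

\[j_{0}({\bf x},t)\to{\rm i}e\left(-\phi^{*}({\bf x},-t)\,(\partial_{t}\phi)({\bf x},-t)+\phi({\bf x},-t)\,(\partial_{t}\phi^{*})({\bf x},-t)\right)=-j_{0}({\bf x},-t),\]

while \({\bf j}\), which contains no time derivatives, simply goes to \({\bf j}({\bf x},-t)\). This is consistent with the invariance of (40): the equation determining \(j_{0}\) is first order in time derivatives, whereas the equation sourced by \({\bf j}\) is second order.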
## 6 Active and geometric time reversal
So far we have been dealing with the Albert and Callender notion of time reversal which amounts to the mere reversal of the temporal order of the instantaneous states. It is worth comparing this notion with those of active and geometric time reversal that were mentioned in the introduction. The difference comes about especially in the case of an ontology with tensorial objects in space-time. Such objects are the natural ontological objects to consider in a Lorentz invariant theory like electrodynamics. However, under mere reversal of temporal order, they do not transform in a way to make electrodynamics time reversal invariant. (This is why we have achieved time-reversal invariance in section 4 only by considering an ontology in terms of the electric field as a spatial 3-vector or by considering the Wheeler-Feynman ontology in terms of just particles.) To see this, consider first a 4-vector field \(V^{\mu}(x)\). Under mere temporal order reversal, the vector field transforms as
\[V^{\mu}({\bf x},t)\to V^{\mu}({\bf x},-t). \tag{43}\]
So the instantaneous state at a time \(t\) is taken to be the vector field at that time and the temporal order of these states is reversed. As a concrete example, consider the vector potential \(A^{\mu}\). Transforming \(A^{\mu}\) as in (43) does not amount to a symmetry of electrodynamics (and neither does it imply the transformation (14) for the electric field since it will flip sign). The same conclusion holds for the electromagnetic tensor \(F^{\mu\nu}\). Under mere temporal order reversal, \(F^{\mu\nu}({\bf x},t)\to F^{\mu\nu}({\bf x},-t)\), and as such the electric and magnetic field \(E_{i}=F_{0i}\) and \(B_{i}=-\epsilon_{ijk}F^{jk}/2\) transform as in (14), without the sign flip for the magnetic field. But as discussed before this is not a symmetry.
Let us now turn to active time reversal [5, 8, 12, 13]. For this we first need to consider the passive transformation which is obtained by taking a different coordinate system with \(t^{\prime}=-t\). In terms of the new coordinates, the vector field \(V^{\mu}(x)\) reads
\[V^{\prime\mu}({\bf x},t^{\prime})=(-V^{0}({\bf x},-t),V^{i}({\bf x},-t)). \tag{44}\]
The active transformation then keeps the coordinate system fixed but changes the vector field \(V^{\mu}(x)\) to a different one with the same form as in the passive transformation:
\[V^{\mu}({\bf x},t)\to(-V^{0}({\bf x},-t),V^{i}({\bf x},-t)). \tag{45}\]
Applying this to the vector potential, this transformation induces the following transformation of the electric and magnetic field [5]:
\[{\bf E}({\bf x},t)\to-{\bf E}({\bf x},-t),\qquad{\bf B}({\bf x},t)\to{\bf B}({ \bf x},-t). \tag{46}\]
This is not the standard transformation given in (12). Actually, it corresponds to the joint transformation of standard time reversal and charge conjugation (as also encountered in the previous section). The same holds for the electromagnetic tensor \(F^{\mu\nu}\). Under active time reversal,
\[F^{00}({\bf x},t)\to F^{00}({\bf x},-t),\qquad F^{0i}({\bf x},t)\to-F^{0i}({ \bf x},-t),\qquad F^{ij}({\bf x},t)\to F^{ij}({\bf x},-t) \tag{47}\]
and hence again (46) follows. Electrodynamics with point charges is then invariant under active time reversal if the world lines also have an intrinsic direction which flips under time reversal, in accordance with Feynman's view of anti-particles [8]. If instead of point charges, matter consists of a complex scalar field \(\phi({\bf x},t)\), then under active time reversal \(\phi({\bf x},t)\to\phi({\bf x},-t)\), which again amounts to the joint transformation of the standard time reversal and charge conjugation transformations, given respectively by \(\phi({\bf x},t)\to\phi^{*}({\bf x},-t)\) and \(\phi({\bf x},t)\to\phi^{*}({\bf x},t)\). These observations were used to argue that the CPT theorem is actually a PT theorem, removing the mystery why charge conjugation should have anything to do with space-time symmetries [12, 13].
Malament's geometric time reversal [6] employs the notion of a temporal orientation, which is represented by a time-like vector field \(\tau^{\mu}\). Geometrical objects are not changed under time reversal, unlike in the case of temporal order reversal and active time reversal, but merely the temporal orientation is flipped: \(\tau^{\mu}\to-\tau^{\mu}\). Representations of the geometrical objects may depend on the temporal orientation and may
transform non-trivially as a result. Take for example the charge current. Malament considers the fundamental geometrical object to be \(J\) which is a linear map from tangent lines to scalars, representing the charge densities. Using the temporal orientation, this map can be represented by a 4-vector field \(J^{\mu}\). Namely, a (time-like) tangent line determines two unit tangent vectors \(\xi^{\mu}\) and \(-\xi^{\mu}\), where \(\xi^{\mu}\) is future-directed relative to \(\tau^{\mu}\), that is, \(\xi^{\mu}\tau_{\mu}>0\), and \(-\xi^{\mu}\) is past-directed relative to \(\tau^{\mu}\). The 4-vector field \(J^{\mu}\) is then defined as the map from the future-directed (relative to \(\tau\)) tangent vectors to scalars, given by \(\xi^{\mu}\to J^{\mu}\xi_{\mu}\). Under geometric time reversal, the geometrical object \(J\) is held fixed, while \(\tau^{\mu}\rightarrow-\tau^{\mu}\). As a result, the vector field representation \(J^{\mu}\) changes as \(J^{\mu}\rightarrow-J^{\mu}\). On Malament's view the electromagnetic tensor is again a representation of some more fundamental object, namely a map \(F\) from tangent lines to forces. Together with a temporal orientation this map \(F\) determines a tensor \(F^{\mu\nu}\), which is understood as a map from future-directed tangent vectors to forces. Under geometric time reversal, \(F^{\mu\nu}\rightarrow-F^{\mu\nu}\). Together with the transformation of the charge current, this implies that Maxwell's equations are invariant under geometric time reversal. What does geometric time reversal entail for the transformations of the electric and magnetic field? As Malament explains, to define the electric and magnetic field, one needs to introduce a volume element \(\epsilon_{\mu\nu\alpha\beta}\) and a 'frame' which is represented by a constant vector field \(\eta^{\mu}\), and which determines a space-time splitting, with the surfaces normal to \(\eta^{\mu}\) corresponding to the spatial slices. The electric and magnetic field are then defined as \(E^{\mu}=F^{\mu\nu}\eta_{\nu}\) and \(B^{\mu}=\epsilon^{\mu\nu\alpha\beta}\eta_{\nu}F_{\alpha\beta}\) (which are vectors tangential to the spatial hyperplanes). Malament argues that time reversal flips the sign of both \(\eta^{\mu}\) and \(\epsilon_{\mu\nu\alpha\beta}\), so that \(E^{\mu}\to E^{\mu}\) and \(B^{\mu}\rightarrow-B^{\mu}\), which amounts to the standard transformations of the electric and magnetic fields.
Arntzenius and Greaves offer also another possible ontology for which electrodynamics is invariant under geometric time reversal [8]. It is an ontology suggested by the Feynman picture where anti-particles are regarded as particles moving backwards in time. Arntzenius and Greaves take the ontology to be given by the electromagnetic tensor \(F^{\mu\nu}\) which, unlike Malament's choice, is now regarded as fundamentally a map from four-vectors to four-vectors. So the tensor does not depend on the temporal orientation and hence is invariant under geometric time reversal. The other difference with Malament is that the world lines of the particles are assumed to carry a direction, which may or may not align with the temporal orientation, but which in any case is independent of it and hence also does not change under geometric time reversal.
The theories considered in section 4 that were invariant under temporal order reversal are also invariant under active and geometric time reversal. Consider for example, electrodynamics with only the electric field and the point-particles as the ontology. The electric field is taken to be a spatial 3-vector rather than as derived from a vector potential or electromagnetic tensor and hence implies the active time reversal transformation \({\bf E}({\bf x},t)\rightarrow{\bf E}({\bf x},-t)\). This transformation agrees with the temporal order reversal. Likewise for the positions of the particles. So the theory is invariant under active time reversal. Under geometric time reversal the electric field and the positions are left invariant, they do not depend on the temporal orientation, but the frame \(\eta^{\mu}\) changes
to \(-\eta^{\mu}\), so that the time derivative in the equations of motion (which depends on the frame) changes as \(\partial/\partial t\to-\partial/\partial t\). This makes the theory invariant under geometric time reversal. Also the Wheeler-Feynman theory with its particles-only ontology and non-relativistic quantum mechanics with \(\psi_{r}\) as the ontology are both invariant under active and geometric time reversal.
## 7 Conclusion
Prima facie, according to the Albert-Callender notion of time reversal invariance, theories like electrodynamics and quantum mechanics do not seem to be time reversal invariant, suggesting a temporal orientation of space-time. However, the conclusion also depends on the choice of ontology. We have argued that ontologies can be considered for electrodynamics and quantum mechanics so that they are time reversal invariant. As such, whether one adopts the notion of time reversal invariance of Albert and Callender or the standard one, the conclusion can be the same, namely that these theories do not suggest a temporal orientation of space-time.
We do not want to suggest that any of these ontologies are preferred; we merely wanted to point out that such ontologies do exist. Finding an ontology that respects the relativistic character of electrodynamics remains challenging, and here this was achieved only with the particle-only ontology of the Wheeler-Feynman theory.
## 8 Acknowledgments
It is a pleasure to thank Craig Callender, Bryan Roberts and Sylvia Wenmackers for useful comments and discussions, and the reviewers and editors for their helpful suggestions. Support is acknowledged from the Research Foundation Flanders (Grant No. G066918N).
|
2310.07241 | Surrogate modeling for stochastic crack growth processes in structural
health monitoring applications | Fatigue crack growth is one of the most common types of deterioration in
metal structures with significant implications on their reliability. Recent
advances in Structural Health Monitoring (SHM) have motivated the use of
structural response data to predict future crack growth under uncertainty, in
order to enable a transition towards predictive maintenance. Accurately
representing different sources of uncertainty in stochastic crack growth (SCG)
processes is a non-trivial task. The present work builds on previous research
on physics-based SCG modeling under both material and load-related uncertainty.
The aim here is to construct computationally efficient, probabilistic surrogate
models for SCG processes that successfully encode these different sources of
uncertainty. An approach inspired by latent variable modeling is employed that
utilizes Gaussian Process (GP) regression models to enable the surrogates to be
used to generate prior distributions for different Bayesian SHM tasks as the
application of interest. Implementation is carried out in a numerical setting
and model performance is assessed for two fundamental crack SHM problems;
namely crack length monitoring (damage quantification) and crack growth
monitoring (damage prognosis). | Nicholas E. Silionis, Konstantinos N. Anyfantis | 2023-10-11T07:13:16Z | http://arxiv.org/abs/2310.07241v1 | Surrogate modeling for stochastic crack growth processes in structural health monitoring applications
###### Abstract
Fatigue crack growth is one of the most common types of deterioration in metal structures with significant implications on their reliability. Recent advances in Structural Health Monitoring (SHM) have motivated the use of structural response data to predict future crack growth under uncertainty, in order to enable a transition towards predictive maintenance. Accurately representing different sources of uncertainty in stochastic crack growth (SCG) processes is a non-trivial task. The present work builds on previous research on physics-based SCG modeling under both material and load-related uncertainty. The aim here is to construct computationally efficient, probabilistic surrogate models for SCG processes that successfully encode these different sources of uncertainty. An approach inspired by latent variable modeling is employed that utilizes Gaussian Process (GP) regression models to enable the surrogates to be used to generate prior distributions for different Bayesian SHM tasks as the application of interest. Implementation is carried out in a numerical setting and model performance is assessed for two fundamental crack SHM problems; namely crack length monitoring (damage quantification) and crack growth monitoring (damage prognosis).
keywords: Stochastic Crack Growth, Surrogate modeling, Structural Health Monitoring, Gaussian Processes, Uncertainty Quantification
## 1 Introduction
A significant number of structures across such diverse fields as marine and offshore, aerospace, civil infrastructure etc., are currently operating at or close to the limit of their design life. As they are subject to different deterioration phenomena, it is inevitable that some operate at decreased levels of performance and consequently increased levels of risk with regard to failure. One of the most common types of deterioration, for metal structures in particular, is cracks that propagate under the influence of dynamic loading. In certain domains, such as ship structures, which are the authors' main focus, it is expected that fatigue cracks exist in structures currently in operation, especially aging ones. Current monitoring practices typically specify a series of on-site inspection events, based on either visual assessment or more specialized techniques like liquid penetrant tests [1]. The emergence of the field of Structural Health Monitoring (SHM) has shifted interest towards predictive or condition-based maintenance (CBM), where the goal is to use data from in-situ
sensors to continuously monitor deteriorating structures and enable safe lifetime extension as well as flexible maintenance planning through prognostic models [2; 3; 4; 5].
On account of their prevalence and importance to structural safety, fatigue cracks have been a focal point of SHM research [6] with works dealing with all different levels of the SHM hierarchy proposed by Rytter [7]; namely, damage detection [8; 9; 10], localization [11; 12; 13], quantification [14; 15; 16] and prognosis [17; 18; 19]. Naturally, the prognostic aspect of SHM is of particular interest when it comes to fatigue crack growth [20; 21], as its focus is to obtain probabilistic predictions of the evolution of structural deterioration. These can be in turn updated in the presence of data, and therefore offer more robust predictions on quantities of interest such as the remaining useful life (RUL) or the probability of failure. Using data to update prior knowledge on particular aspects of crack growth processes offers a natural means to quantify the uncertainty associated with them, which has long been recognized by the research community [22; 23] and is primarily associated with material-related properties and fatigue loading [19; 24; 25; 26; 27; 28].
The present work is concerned with the effect of uncertainty on crack growth processes that are described using crack growth models based on fracture mechanics (e.g., Paris-Erdogan [29] or NASGRO [30]). Typical approaches for uncertainty quantification (UQ) in crack growth models are based on sampling techniques belonging to the broader family of Monte Carlo (MC) methods [25; 26]. These rely on first assuming a specific probabilistic model over the different parameters that are considered as random, i.e., material, load-related or both, and subsequently obtaining realizations of the crack growth process by propagating samples drawn from the probability distributions of the parameters to the crack growth model. Since such models operate under the assumption of constant fatigue loading during crack propagation, these methods have to be modified in case the structure of interest is subject to operational conditions characterized by varying levels of stress amplitude and frequency. Such conditions are however commonly encountered by numerous structural systems today, including ocean going vessels and wind turbines among others [31; 32; 33].
In a previous work focusing on ship structures [28], the authors developed a numerical scheme that uses MC simulation to propagate uncertainty over the crack growth model parameters and simultaneously employs a time-discretization algorithm to account for load variability in each crack growth realization. Although the original goal was to generate RUL distributions and time-based reliability estimates, the outcomes of the numerical scheme effectively represent realizations of a stochastic crack growth process, as shown in Figure 1. These constitute a valuable resource for both diagnostic and prognostic fatigue crack monitoring systems that follow the model-based, and more specifically Bayesian paradigm, as they can be employed to construct physically consistent prior distributions.
The general goal of Bayesian methods for SHM is to use structural response measurements to obtain up-to-date estimates on the probability distribution over a set of structural parameters that are of monitoring interest, e.g., crack length at some point in time or the time-invariant parameters of a crack growth model. This is achieved by first assuming a prior distribution over those parameters and then using Bayes' rule to obtain a data-informed posterior distribution (see [34] for a more detailed description). Selecting an appropriate prior distribution can lead to more rapid convergence to the posterior distribution, and in some cases even to an analytical derivation [35]; more pertinently, if chosen judiciously it can ensure that updated quantities remain consistent with underlying problem physics. The crack growth trajectories demonstrated in Figure 1 correspond to realizations from a physically consistent crack growth prior process; they are not however in their initial form suitable to construct Bayesian priors as they require a computationally
costly, physics-based simulation to be obtained.
The objective of the present work is to propose an efficient, unified framework that utilizes Gaussian Process (GP) regression models to learn a parsimonious surrogate model of the prior process. Moreover, by exploiting principles of latent variable modeling, the proposed methodology is capable of constructing priors suited to different SHM tasks, such as crack length monitoring (quantification) and crack growth monitoring (prognosis). By virtue of the employed model, these priors are able to effectively encode uncertainty related both to time-invariant (material-related), and time-variant (load-related) parameters. To the authors' knowledge, this is the first time that such a unified, physically consistent approach to Bayesian prior construction for crack SHM is presented in the literature. Furthermore, through its use of probabilistic Machine Learning (ML) tools, the proposed surrogate demonstrates a practical and computationally efficient framework that can be applied within the general sphere of fatigue crack growth problems to account for the stochasticity that is inherent to them.
A brief, yet self-contained, description of the process employed to generate stochastic crack growth trajectories following the work of Makris et al. [28] is presented in Section 2. Section 3 introduces Gaussian Process regression, with a focus on sparse methods that can deal with large quantities of data. Priors constructed using the proposed model for different SHM tasks are presented in Section 4, while Section 5 offers concluding remarks.
Figure 1: Realizations of a stochastic crack growth process according to Makris et al. [28]
## 2 Modeling stochastic crack growth trajectories
The dataset employed throughout this work was obtained using a methodology proposed by the authors [28], where a spectral method was combined with a stochastic crack growth model to generate SCG process realizations, referred to hereafter as trajectories, that are representative of a typical marine structural element under realistic operational conditions. In this work, only a brief outline of this process will be provided and the primary focus will be placed on the characteristics of the trajectories themselves and how these informed the choice of the proposed surrogate modeling framework. For more detailed information, the interested reader is referred to the original work [28].
To generate the crack growth trajectories shown in Figure 1, it was initially assumed that a crack already exists on a particular structural component (double-bottom girder) of a containership, whose length at the time of detection was considered as a Gaussian random variable, i.e., \(\alpha_{0}\sim\mathcal{N}(\mu_{\alpha_{0}},\sigma_{\alpha_{0}}^{2})\). This is consistent with commonly accepted industry opinion that fatigue cracks exist on operating ships and that when detected, some uncertainty is expected in the initial measurement due to equipment-related noise which is commonly modeled using a Gaussian distribution [36]. To model the crack growth process, the Paris-Erdogan law [29] was employed, which in the time domain has the following general form:
\[\frac{d\alpha}{dt}=N_{\mathrm{avg}}C\left[\Delta K(\Delta S)\right]^{m} \tag{1}\]
which holds when the fatigue loading pair \(\{\Delta S,N_{\mathrm{avg}}\}\) is time-invariant. However, fatigue loading of ship hull structural components is primarily caused by ocean waves, which are stationary only over specific periods of time, known as sea states [37]. In order to use the Paris-Erdogan law, a discretization of the operational lifetime of the vessel was employed to obtain intervals that correspond to individual sea states. For each of these, a set of parameters describing its intensity, i.e., significant wave height \(H_{\mathrm{S}}\), and duration, i.e., zero up-crossing period \(T_{\mathrm{Z}}\), were sampled from a joint probability model based on oceanographic observations [38]. These were then transformed under a linear model for ship hydrodynamics to the corresponding fatigue loading pairs \(\{\Delta S,N_{\mathrm{avg}}\}\), thus allowing the model of Eq. (1) to be used within each temporal interval.
To account for material-related uncertainty, the time-invariant model parameters \(C,m\) were assumed to be random variables. According to commonly accepted practice in the literature (e.g., [19]), they were considered to follow the lognormal and normal distributions respectively; namely \(C\sim\ln\mathcal{N}(\mu_{C},\sigma_{C}^{2})\) and \(m\sim\mathcal{N}(\mu_{m},\sigma_{m}^{2})\). A closed form equation for the stress intensity factor \(\Delta K\) was employed as the structural component in question can be considered to operate, without loss of generality, as a plate. Details on the numerical values assigned to these quantities are provided in the original work [28] and are not included here in the interest of brevity.
To generate the trajectories depicted in Figure 1 a MC simulation was first applied over the model-related parameters and the initial crack length to generate \(N=10^{4}\) tuples, namely \(\left\{C^{(i)},m^{(i)},\alpha_{0}^{(i)}\right\}_{i=1}^{N}\), each of which corresponds to a specific trajectory. The time-discretization scheme was then employed for each one by assuming a uniformly distributed random sea-state duration, ranging from 5-7 hours. The discontinuity caused by this discretization is evident in the trajectories, which generally exhibit a non-smooth behavior. The duration of the crack propagation was restricted to 3 years after the initial crack was detected, in a choice motivated by the 5-year fixed inspection schedule followed by ships [39]. Furthermore, an upper limit to the crack length was set at \(\alpha_{\mathrm{cr}}=155\) mm, according to a failure criterion established based on a structural
reliability analysis (see Makris et al. [28]). As a result of the constraints imposed on the numerical scheme, individual trajectories ultimately do not have the same length, i.e., number of points. This is an interesting characteristic of the data, that prohibits the use of traditional batch regression techniques that are based on learning one-to-one mappings [40]; it can also be interpreted as a reflection of the underlying dynamics of the crack growth process.
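A minimal sketch of this sampling scheme is given below for illustration. It is not the original implementation of [28]: the sea-state loading model, the stress intensity factor (taken here as the textbook centre-cracked-plate form \(\Delta K=\Delta S\sqrt{\pi\alpha}\)) and all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sea_state():
    """Placeholder loading model: returns a stress range [MPa] and an average
    cycle rate [cycles/hour] for one stationary sea state (illustrative values)."""
    delta_S = rng.lognormal(mean=np.log(20.0), sigma=0.3)
    N_avg = rng.uniform(300.0, 500.0)
    return delta_S, N_avg

def grow_crack(C, m, a0, horizon_h=3 * 365 * 24, a_cr=155.0):
    """Integrate the Paris-Erdogan law, Eq. (1), sea state by sea state."""
    t, a = [0.0], [a0]
    while t[-1] < horizon_h and a[-1] < a_cr:
        dt = rng.uniform(5.0, 7.0)                          # sea-state duration [hours]
        delta_S, N_avg = sample_sea_state()
        delta_K = delta_S * np.sqrt(np.pi * a[-1] * 1e-3)   # crack length converted mm -> m
        a.append(a[-1] + N_avg * C * delta_K**m * dt * 1e3) # increment converted back to mm
        t.append(t[-1] + dt)
    return np.array(t), np.array(a)

# Monte Carlo over the material parameters and the initial crack length
N_traj = 100                                                # 10**4 in the original study
C_s = rng.lognormal(mean=np.log(5e-12), sigma=0.3, size=N_traj)   # illustrative
m_s = rng.normal(3.0, 0.1, size=N_traj)                            # illustrative
a0_s = rng.normal(20.0, 2.0, size=N_traj)                          # [mm], illustrative

trajectories = [grow_crack(C, m, a0) for C, m, a0 in zip(C_s, m_s, a0_s)]
```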
The dataset obtained from the initial physics-based simulation has a prohibitively large size, due to the sea state duration, which is itself dictated by physical considerations. Such a fine discretization offers no added value in terms of adequately capturing the underlying probabilistic structure of the crack growth process. Therefore, a subsampling scheme was implemented to reduce the dimensionality of the dataset, while retaining the probabilistic structure of the data. According to this, it was considered that for each trajectory, the crack length is available at bimonthly intervals. This allows for the essential characteristic of different trajectory length to be retained; it also ensures that the resultant crack length values correspond to the same points in time. The latter transforms the data to a format consistent with the theoretical principles underpinning stochastic processes, which allows estimating mean and covariance functions and broadens the scope of tools available for analysis.
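Continuing from the previous sketch, the bimonthly subsampling and the point-wise empirical statistics used for comparisons such as Figure 2 could be computed as follows; the grid spacing and the use of linear interpolation are illustrative choices.

```python
import numpy as np

BIMONTHLY_HOURS = 2 * 730.0   # roughly two months expressed in hours (assumption)

def subsample(t, a, dt_grid=BIMONTHLY_HOURS):
    """Keep the crack length at bimonthly instants; trajectories that hit the
    critical length early simply contribute fewer points."""
    grid = np.arange(0.0, t[-1] + dt_grid, dt_grid)
    return grid, np.interp(grid, t, a)

reduced = [subsample(t, a) for t, a in trajectories]   # `trajectories` from the sketch above

# Point-wise empirical mean and 95% interval over the trajectories still "alive"
n_max = max(len(a) for _, a in reduced)
mean, lo, hi = [], [], []
for k in range(n_max):
    vals = np.array([a[k] for _, a in reduced if len(a) > k])
    mean.append(vals.mean())
    q = np.percentile(vals, [2.5, 97.5])
    lo.append(q[0]); hi.append(q[1])
```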
In Figure 2, a comparative depiction of the mean functions and 95% credible intervals (C.I.) for the original and subsampled data is provided; evidently, the reduced dataset sufficiently captures the probabilistic characteristics of the crack growth process. Not only is the mean trend captured but so is the heteroscedastic, i.e., input-dependent, variance structure; the latter is another key characteristic of the data, and also poses a potential challenge in choosing the surrogate model form. Based on the preceding analysis and stated goal of this work, this form has to be such that it is unaffected by the crack growth trajectory length and can also model heteroscedastic behavior. In order to be used effectively to construct priors for different SHM tasks, it must also be able to generate conditional distributions over the crack length at different levels of conditioning variables; namely time instances, crack growth model parameters and the initial crack length. Finally, it has to be functional for large numbers of data, as even after subsampling the dataset employed in this work has considerable size.

Figure 2: Comparison between initial and reduced dataset after subsampling. The darker orange color of the 95 % C.I. is a result of the overlap between the orange and purple colors corresponding to each dataset.
Gaussian Process regression models [40] have been chosen as the surrogate here, as they satisfy all these requirements under certain conditions. They are by definition able to provide probabilistic predictions over outputs, i.e., crack lengths, and are able to accommodate potentially multi-dimensional inputs; thus they are able to provide the flexibility required for conditioning on different variables. Being framed on the assumption that all outputs belong to an underlying Gaussian Process, they operate in a point-wise fashion and thus are unaffected by different trajectory lengths. In their sparse formulation they can accommodate large datasets and also model heteroscedastic behavior [41]. They can also be trained effectively with the help of variational learning [42; 43; 44], while offering an inherent guarantee against overfitting [45]. What follows is a description of the basic theoretical underpinnings of GP regression, with a focus on sparse GPs and variational learning.
## 3 Gaussian Process regression
### Fundamental principles
We begin with the typical problem setting for supervised learning; namely let us consider that there exist \(N\) available training observations, arranged in a training set \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\), where \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) are \(d\)-dimensional inputs, while \(y_{i}\in\mathbb{R}\) are noisy observations of an unobserved or latent function \(f(\mathbf{x}_{i})\). If we consider the noise to be independent Gaussian with variance \(\sigma^{2}\), then for every noisy observation in the dataset:
\[y_{i}=f(\mathbf{x}_{i})+\epsilon_{i},\qquad\epsilon_{i}\sim\mathcal{N}(0, \sigma^{2}) \tag{2}\]
The goal of a GP is to essentially set a prior over functions \(f\) and then use Bayesian inference to obtain the posterior distribution in light of the observed data [40]. This prior can be expressed as:
\[f(\mathbf{x})\sim\mathcal{GP}(m_{f}(\mathbf{x};\theta),k_{f}(\mathbf{x}, \mathbf{x}^{\prime};\theta)) \tag{3}\]
and is fully defined by its mean and covariance functions, \(m_{f}(\cdot)\) and \(k_{f}(\cdot,\cdot)\) respectively, which in turn are controlled by a vector of hyperparameters \(\theta\), the dependency on which will be heretofore implied and removed from the notation. To obtain the posterior over \(f\), Bayes' rule is applied:
\[p(\mathbf{f}|\mathbf{y})=\frac{p(\mathbf{y}|\mathbf{f})p(\mathbf{f})}{\int p( \mathbf{y}|\mathbf{f})p(\mathbf{f})\,d\mathbf{f}} \tag{4}\]
where \(p(\mathbf{f})\) is the GP prior and \(p(\mathbf{y}|\mathbf{f})\) is the likelihood function, which for the observation model we have considered is also Gaussian. Furthermore, taking into account the stochastic independence between observations, we can write:
\[p(\mathbf{y}|\mathbf{f})=\prod_{i=1}^{N}\mathcal{N}(y_{i}\,|\,f(\mathbf{x}_{i}),\sigma^{2}),\qquad p(\mathbf{y})=\mathcal{N}(\mathbf{y}\,|\,\mathbf{m}_{f},\mathbf{K}_{f}+\sigma^{2}\mathbb{I}_{N}) \tag{5}\]
where \(\mathbf{m}_{f}\) and \(\mathbf{K}_{f}\) denote the mean vector and covariance matrix of the marginal distribution of the observations, with entries defined as:
\[\mathbf{m}_{f}[i]\triangleq m_{f}(\mathbf{x}_{i}) \tag{6}\]
\[\mathbf{K}_{f}[i,j]\triangleq k_{f}(\mathbf{x}_{i},\mathbf{x}_{j}) \tag{7}\]
By virtue of Eq. (3) & (5), the posterior distribution of Eq. (4) is also Gaussian, with tractable mean and covariance functions:
\[m_{\mathbf{y}}(\mathbf{x})=K_{\mathbf{x}n}(\sigma^{2}\mathbb{I}_{N}+K_{nn})^{- 1}\mathbf{y} \tag{8}\]
\[k_{\mathbf{y}}(\mathbf{x},\mathbf{x}^{\prime})=k_{f}(\mathbf{x},\mathbf{x}^{ \prime})-K_{\mathbf{x}n}(\sigma^{2}\mathbb{I}_{N}+K_{nn})^{-1}K_{n\mathbf{x}^{ \prime}} \tag{9}\]
where \(K_{nn}\) is the \(N\times N\) covariance matrix on the training inputs, \(K_{\mathbf{x}n}\) is an \(N\)-dimensional row vector of covariance function values between \(\mathbf{x}\) and the training inputs, \(K_{n\mathbf{x}^{\prime}}=K_{\mathbf{x}n}^{\top}\) and \(\mathbb{I}_{N}\) is the \(N\)-dimensional identity matrix. This form of tractable posterior also leads to a tractable posterior predictive distribution, which describes the probability of obtaining a prediction \(y_{*}\) at some unseen input \(\mathbf{x}_{*}\), and is given by:
\[p(y_{*}|\mathbf{y})=\int p(\mathbf{y}_{*}|\mathbf{f})p(\mathbf{f}|\mathbf{y}) \,d\mathbf{f}=\mathcal{N}(y_{*}|m_{\mathbf{y}}(\mathbf{x}_{*}),k_{f}(\mathbf{ x}_{*},\mathbf{x}_{*})+\sigma^{2}) \tag{10}\]
The posterior GP, and by extension the posterior predictive distribution, depend on the values of the hyperparameters of the mean and covariance functions and the observation noise standard deviation, namely \(\{\theta,\sigma^{2}\}\). Training a GP consists of estimating them so as to maximize the logarithm of the marginal likelihood, which in this case is also tractable and given by:
\[\log p(\mathbf{y})=\log\left[\mathcal{N}(\mathbf{y}|\mathbf{0},\sigma^{2} \mathbb{I}_{N}+K_{nn})\right] \tag{11}\]
Clearly, the expressions contained in Eq. (11) are unsuitable for large datasets, since they include the inversion of an \(N\times N\) matrix which scales as \(\mathcal{O}(N^{3})\). By definition, they are also incapable of modeling a heteroscedastic, i.e., input-dependent, variance structure due to the independent Gaussian assumption for the observation noise. The latter is desirable for the problem at hand, given the nature of the crack growth data.
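For illustration, the exact GP equations above can be implemented in a few lines of NumPy; the squared-exponential kernel and the hyperparameter values used here are placeholders rather than the modeling choices made later in this work.

```python
import numpy as np

def sq_exp_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance for one-dimensional inputs (placeholder choice)."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_fit_predict(X, y, X_star, noise_var=1e-2):
    """Posterior mean/variance, Eqs. (8)-(10), and log-marginal likelihood, Eq. (11)."""
    K = sq_exp_kernel(X, X)
    K_s = sq_exp_kernel(X_star, X)
    K_ss = sq_exp_kernel(X_star, X_star)
    A = K + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(A)                          # the O(N^3) operation discussed in the text
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s @ alpha                                 # Eq. (8)
    v = np.linalg.solve(L, K_s.T)
    cov = K_ss - v.T @ v                               # Eq. (9)
    var_y = np.diag(cov) + noise_var                   # Eq. (10)
    log_ml = (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
              - 0.5 * len(X) * np.log(2.0 * np.pi))    # Eq. (11)
    return mean, var_y, log_ml

# toy usage with synthetic data
X = np.linspace(0.0, 3.0, 25)
y = np.sin(2.0 * X) + 0.1 * np.random.default_rng(1).normal(size=25)
mu, var, lml = gp_fit_predict(X, y, np.linspace(0.0, 3.0, 100))
```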
### Variational learning for sparse Gaussian Process regression
In this work, inducing point methods will be employed, which are used to construct Sparse Gaussian Processes (SGPs) and are suitable for treating large datasets as well as modeling heteroscedastic variance, albeit to a certain extent [41]. Inducing point methods rely on introducing a set of inducing variables \(\mathbf{u}=\{f(\mathbf{z}_{i})\}_{i=1}^{M},M\ll N\), which are values of the latent function evaluated at the inducing points \(Z=\{\mathbf{z}_{i}\}_{i=1}^{M}\); the inducing points therefore belong to the same space as \(\mathbf{x}\). Since the inducing variables belong to the same space as \(\mathbf{f}\), \(p(\mathbf{f},\mathbf{u})\) is jointly Gaussian. Ultimately, the goal is to use the inducing variables to directly approximate the posterior GP mean and covariance functions of Eq. (8) & (9) at significantly lower computational cost. Constructing this approximation consists of selecting both the inducing points as well as the model hyperparameters so as to maximize the log-marginal likelihood \(p(\mathbf{y})\), at a reduced computational cost and while retaining the Gaussianity of the posterior [42].
Variational Inference (VI) will be employed for this task, following the example of Hensman et al. [43; 44]; the goal of VI is to approximate the true posterior using another distribution, known as the variational distribution, by framing the problem as one of optimization [46]. In this context, the parameters of the variational distribution act as design variables while the objective function is a measure of distance between the two distributions. The set of parameters that minimizes this distance yields a distribution that is very similar to the true posterior. Here, we shall denote the variational distribution as \(q(\mathbf{f})\); the variational objective is to minimize the Kullback-Leibler (KL) divergence \(\text{KL}\left(q(\mathbf{f})||p(\mathbf{f}|\mathbf{y})\right)\) between the two distributions, which is a well-known metric of similarity between probability density functions [47]. By expanding the KL divergence (see Blei et al. [46] for a more detailed proof), the following expression can be obtained:
\[\text{KL}\left(q(\mathbf{f})||p(\mathbf{f}|\mathbf{y})\right)=\text{KL}\left( q(\mathbf{f})||p(\mathbf{f})\right)+\log p(\mathbf{y})-\mathbb{E}_{q(\mathbf{ f})}\left[\log p(\mathbf{y}|\mathbf{f})\right] \tag{12}\]
where it is observed that the variational objective is connected to the evidence term that we seek to, but cannot effectively, maximize. However, noting that the KL divergence is by definition non-negative we obtain:
\[\log p(\mathbf{y})\geq\mathbb{E}_{q(\mathbf{f})}\left[\log p(\mathbf{y}| \mathbf{f})\right]-\text{KL}\left(q(\mathbf{f})||p(\mathbf{f})\right) \tag{13}\]
The term on the right-hand side provides a lower bound on the log-marginal likelihood and is known as the Evidence Lower Bound (ELBO). Contrary to the evidence term itself, the ELBO is a more convenient objective function as it is more amenable to gradient-based optimization. Initially, the inducing variables do not appear in the ELBO. However, recalling that they belong to the same space as function outputs, then \(q(\mathbf{f})\) can be written as \(q(\mathbf{f})=\int p(\mathbf{f}|\mathbf{u})q(\mathbf{u})\,d\mathbf{u}\). The typical assumption of a Gaussian variational distribution over the inducing variables is then made [44], i.e., \(q(\mathbf{u})=\mathcal{N}(\mathbf{m},\mathbf{S})\), where \(\mathbf{m}\) and \(\mathbf{S}\) denote the mean vector and covariance matrix respectively. It follows that the approximate posterior is available in functional form:
\[q(\mathbf{f})=\mathcal{N}\left(\mathbf{f}|\mathbf{A}\mathbf{m},\mathbf{K}_{nn }+\mathbf{A}\left(\mathbf{S}-\mathbf{K}_{mm}\right)\mathbf{A}^{\top}\right) \tag{14}\]
where \(\mathbf{A}=\mathbf{K}_{nm}\mathbf{K}_{mm}^{-1}\), with \(\mathbf{K}_{nm}\) the \(N\times M\) cross-covariance matrix between the training inputs and the inducing points, and \(\mathbf{K}_{mm}\) is the \(M\times M\) covariance matrix on the inducing points, which is now the only matrix subject to inversion. The last step in order to complete deriving the ELBO, which can then be maximized with respect to \(\{\mathbf{m},\mathbf{S}\}\), is to use the stochastic independence of the observations to factorize the likelihood function term, which yields:
\[\mathcal{L}_{ELBO}=\sum_{n=1}^{N}\mathbb{E}_{q(f_{n})}\left[\log p(y_{n}\mid f_{n})\right]-\text{KL}\left[q(\mathbf{u})\,||\,p(\mathbf{u})\right] \tag{15}\]
In this form, we can now use gradient based optimization to find the parameters of \(q(\mathbf{u})\) that maximize this bound on the log-marginal likelihood. Using the optimal parameters, predictions at some unseen test input \(\mathbf{x}_{*}\) and corresponding latent function value \(f_{*}\) can be made using the following predictive equation:
\[p(f_{*}|\mathbf{y})=\int p(f_{*}|\mathbf{u})q(\mathbf{u})\,d\mathbf{u} \tag{16}\]
This integral is also tractable and the mean and variance of \(f_{*}\) can be calculated, from which follows the calculation of the posterior predictive distribution \(p(y_{*}|\mathbf{y})\) as in Eq. (10).
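As an illustration of how cheap the approximate posterior is to evaluate, the sketch below computes the mean and covariance of \(q(\mathbf{f})\) in Eq. (14) at arbitrary inputs; replacing the first argument with a single test point gives the predictive of Eq. (16). The kernel and the variational parameters \((\mathbf{m},\mathbf{S})\) are assumed to be supplied externally, so the snippet is only a schematic of the equations, not the code used in this work.

```python
import numpy as np

def svgp_posterior(X, Z, m, S, kernel, jitter=1e-8):
    """Approximate posterior q(f) of Eq. (14) at inputs X (use a test point x_* for Eq. (16)).
    Only the M x M matrix K_mm is inverted, so the cost is O(N M^2) instead of O(N^3)."""
    K_nn = kernel(X, X)
    K_nm = kernel(X, Z)
    K_mm = kernel(Z, Z) + jitter * np.eye(len(Z))
    A = K_nm @ np.linalg.inv(K_mm)            # A = K_nm K_mm^{-1}
    mean = A @ m
    cov = K_nn + A @ (S - K_mm) @ A.T
    return mean, cov
```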
## 4 Surrogate modeling for stochastic crack growth
The proposed surrogate model will be implemented and showcased for prior distribution construction on two fundamental crack SHM tasks, i.e., crack length and crack growth monitoring. Training the model and assessing its performance will be discussed and results will be presented for the two tasks, along with an investigation on the effect of adding varying levels of knowledge to the model through different conditioning variables.
### Building the GP regression model
Building a GP regression model starts from specifying the characteristics of the GP prior (see Eq. (3)) in the form of its mean and covariance functions. In principle, the mean function can be any function of the inputs, but is typically set to zero because of the relative flexibility of GPs to model complex relationships [40], especially for data which have been transformed to a zero-mean space. Accordingly, a zero mean function has been selected here as well. The role of the covariance (or kernel) function is to control the level of similarity between pairs of input points, which is ultimately reflected in the covariance matrix, whose entries are defined as in Eq. (7). Several popular kernel choices exist for the covariance function; in this work we have employed a Matern 3/2 kernel, which is defined as follows:
\[k_{f}\left(\mathbf{x}_{i},\mathbf{x}_{j}\right)=\alpha^{2}\left(1+\frac{ \sqrt{3}|\mathbf{x}_{i}-\mathbf{x}_{j}|}{l}\right)\exp\left(-\frac{\sqrt{3}| \mathbf{x}_{i}-\mathbf{x}_{j}|}{l}\right) \tag{17}\]
where \(\theta=\left\{\alpha,l\right\}\) are the kernel function hyperparameters; the process variance \(\alpha\) controls variations around the mean while the length scale \(l\) represents the smoothness of the function [48]. In the conventional setting, training GPs with a zero mean function consists of finding the set of hyperparameters \(\theta\) that maximize the log-marginal likelihood. For sparse GPs, training includes simultaneously learning the inducing point locations as well. The ELBO, as described in the previous sections, offers a convenient objective function formulation that accommodates both sets of parameters, along with those related to the variational distribution. Moreover, VI allows for state-of-the-art stochastic gradient descent algorithms to be used as the engine of the training process.
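As a concrete reference, a direct NumPy transcription of the Matern 3/2 kernel of Eq. (17) for one-dimensional inputs could look as follows; it is written so that it can be passed to the earlier sketches as the `kernel` argument, and the default hyperparameter values are arbitrary placeholders.

```python
import numpy as np

def matern32(X1, X2, alpha=1.0, lengthscale=1.0):
    """Matern 3/2 covariance of Eq. (17) between two sets of scalar inputs."""
    r = np.abs(np.asarray(X1)[:, None] - np.asarray(X2)[None, :])   # pairwise |x_i - x_j|
    s = np.sqrt(3.0) * r / lengthscale
    return alpha**2 * (1.0 + s) * np.exp(-s)
```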
For all applications this work is concerned with, a mean-field Gaussian approximation was employed as the variational distribution [46], while the Adam optimizer [49] was used to train the model. The number of iterations for training as well as the learning rate were adjusted for each model to ensure optimal performance. A hold-out scheme was employed to construct the training set, as well as the test set used to assess model performance. According to this, an equal portion of the initial \(10^{4}\) crack growth trajectories was assigned to each set, which due to the unequal number of points in each curve led to slightly unbalanced training and test sets. Trajectories were sampled randomly so as to ensure that no bias occurs during selection.
The ELBO defined in Eq. (15) was used to monitor model performance over the training set. Subsequently, the trained model fit was assessed using two different metrics. The first was the normalized mean square error (NMSE), defined here as in Rogers et al. [50]:
\[NMSE=\frac{100}{N_{\text{test}}\sigma_{y}^{2}}\sqrt{\left(\mathbf{y}-\mathbf{ \hat{y}}\right)^{\top}\left(\mathbf{y}-\mathbf{\hat{y}}\right)} \tag{18}\]
where \(N_{\text{test}}\) refers to the number of samples in the test set, which are collected in the vector \(\mathbf{y}\) and have variance \(\sigma_{y}^{2}\); finally, the model predictions over the test set are denoted as \(\mathbf{\hat{y}}\). The
NMSE captures the predictive capacity of the model in terms of point predictions, therefore for probabilistic models, such as the one employed herein, it can be used to assess their accuracy in terms of the mean prediction. In the way it is defined here, a perfect fit returns a score of zero while a score of 100 refers to a model with the same predictive capacity as simply taking the mean over the data. Since uncertainty quantification is both a strength as well as one of the main motivators behind model selection in this work, it is desirable to employ a performance metric that is capable of providing a probabilistic outlook on the goodness-of-fit.
Such a metric can be found in the GP formulation itself, more specifically in the form of the joint likelihood function described in Eq. (5). This provides a direct means to assign a likelihood to each model prediction based on the commonly employed zero-mean Gaussian prediction error formulation [51]. For strictly computational purposes, this work employs the logarithm of the likelihood as the metric, as it conveniently transforms the product in the joint likelihood to a sum. Crucially, it must be noted that to ensure physical consistency is maintained when assessing the model, both metrics are evaluated on a trajectory-to-trajectory basis.
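For completeness, the two assessment metrics can be written compactly as below; the NMSE follows Eq. (18) verbatim, while the log-likelihood assumes independent Gaussian predictive densities at each point of a trajectory, which is a simplifying reading of the formulation rather than the exact evaluation code used here.

```python
import numpy as np

def nmse(y, y_hat):
    """Eq. (18): 0 is a perfect fit; 100 matches simply predicting the data mean."""
    e = np.asarray(y) - np.asarray(y_hat)
    return 100.0 / (len(y) * np.var(y)) * np.sqrt(e @ e)

def trajectory_log_likelihood(y, mean, var):
    """Log-likelihood of one trajectory under pointwise Gaussian predictions."""
    y, mean, var = map(np.asarray, (y, mean, var))
    return np.sum(-0.5 * np.log(2.0 * np.pi * var) - 0.5 * (y - mean) ** 2 / var)
```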
### Constructing priors for crack length monitoring
The surrogate model is first implemented for the task of constructing prior distributions for the problem of crack length monitoring. This corresponds to a typical damage quantification setting, which in the Bayesian context requires choosing a prior over the crack length at a given moment in time, i.e., \(p\left(\alpha(t)\right)\). This can be obtained as the output of the surrogate model by considering a training set \(\mathcal{D}_{\text{I}}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}=\left\{\left(t^{ (i)},a^{(i)}\right)\right\}_{i=1}^{N}\), where \(t^{(i)},a^{(i)}\in\mathbb{R}\). In this setting, all sources of uncertainty affecting the crack length, namely the initial crack length as well as material- and load-related variability, are treated as latent variables of the surrogate model. Training took place over 1000 iterations, requiring approximately 7 minutes on a workstation equipped with an NVIDIA(r) RTX A4000 GPU. Implementation took place using the Pyro probabilistic programming language written in Python [52].
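For orientation, a minimal sketch of how such a sparse variational GP might be set up with Pyro's GP module is shown below; the data placeholders, inducing-point choice and learning rate are illustrative assumptions and do not reproduce the actual configuration used in this work.

```python
import torch
import pyro
import pyro.contrib.gp as gp

# t: fatigue times, a: crack lengths; placeholders for the training split of D_I
t = torch.linspace(0.0, 3.0, 5000)
a = torch.rand(5000)                               # stand-in for the simulated crack lengths

kernel = gp.kernels.Matern32(input_dim=1)          # Matern 3/2 kernel of Eq. (17)
Xu = t[::100].clone()                              # illustrative set of M inducing points
model = gp.models.VariationalSparseGP(t, a, kernel, Xu=Xu,
                                       likelihood=gp.likelihoods.Gaussian())

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = pyro.infer.Trace_ELBO().differentiable_loss
losses = gp.util.train(model, optimizer, loss_fn, num_steps=1000)   # maximizes the ELBO

with torch.no_grad():
    mean, var = model(torch.tensor([1.5]), full_cov=False)          # prior over a(t) at t = 1.5 yr
```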
The predictive performance of the trained model is showcased in Figure 3, where the mean function is plotted over the 3-year crack growth period alongside the 95 % C.I. obtained using the covariance function. This is compared with the mean function estimated over the test data and the corresponding credible interval. The model is capable of precisely capturing the mean trend in the data, while also being largely successful in representing the heteroscedastic variance structure. However, it is noteworthy that the predicted variance underestimates the actual variance of the test data early in the crack propagation process. This indicates an inability of the model to successfully account for the uncertainty in the initial condition, i.e., the initial crack length \(\alpha_{0}\).
The observed variance depletion could pose a problem in a Bayesian setting, as insufficient variance in the prior could lead to poor convergence to the posterior. However, as the crack growth rate increases this phenomenon subsides, which indicates that the generated priors will be sufficient as deterioration becomes more extensive, and thus structural reliability is decreased. It should be noted that the NMSE and log-likelihood for this model will be reported in the next section in comparison to the model trained for crack growth monitoring, as it was felt that this was more contextually appropriate.
### Constructing priors for crack growth monitoring
Crack growth monitoring using Bayesian inference consists of using available measurements, either of the structural response or of the crack length itself, to obtain estimates of the posterior distribution over the crack growth model parameters, namely \(C,m\) in the Paris-Erdogan law (see Eq. (1)). The probabilistic nature of these parameters is well-attested in the literature [19; 26] and suggested priors were used in the authors' previous work [28] to generate the SCG trajectories shown in Figure 1. To apply Bayesian inference, a likelihood function is required which is capable of performing a transformation from the prior (parameter) space to the observable (measurement) space. Regardless of the measured quantity, a probabilistic model is required which returns a distribution over the crack length at some moment in time, conditioned on the parameters \(C,m\), i.e., \(p\left(\alpha(t)|C,m\right)\).
Although it provides realizations from this underlying distribution, the numerical scheme employed to generate crack growth trajectories does not provide a convenient model for prior construction. The proposed surrogate can achieve precisely that by being trained over a training set \(\mathcal{D}_{\text{II}}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}=\left\{\left( \left[t^{(i)}\;C^{(i)}\;m^{(i)}\right]^{\top},a_{i}\right)\right\}_{i=1}^{N}\), where now \(\mathbf{x}_{i}\in\mathbb{R}^{3}\) and \(\alpha_{i}\in\mathbb{R}\). Under this parametrization, model parameters are included in the set of observable variables while load-related ones and the initial crack length are passed onto the latent space. Again, training took place for 1000 iterations on the same workstation, requiring a similar amount of time.
Figure 3: Surrogate model predictive performance for crack length prior construction
The effect of conditioning on the crack growth model parameters is evident in Figure 4. There, the mean function and 95 % C.I. of the predictive crack growth process \(\alpha\left(t;C,m\right)\) are plotted for a specific \(C,m\) realization and compared against the actual crack growth trajectory as well as the mean and 95 % C.I. of \(\alpha(t)\) from Section 4.2. This particular trajectory is representative of the effect of the stochastic sequence of fatigue loads, as it exhibits non-smooth behavior, as well as that of outlying parameter realizations that lead to a rapidly increasing crack growth rate. As a result, the model that treats all sources of uncertainty as latent variables is significantly outperformed by that conditioned on \(C,m\). This is further reinforced by the fact that the log-likelihood of the actual trajectory with respect to the model of the \(\alpha\left(t;C,m\right)\) process is seven times greater than the one with respect to \(\alpha(t)\).
However, in absolute terms that likelihood is still negative, which is reflective of the fact that for significant portions of the crack growth the actual trajectory falls outside of the high probability density region predicted by the model. Nevertheless, it should be noted that the additional parametrization leads to marked uncertainty reduction, along with a more robust description of the heteroscedastic tendency of the data. This is especially evident from the fact that the 95 % C.I. of the GP predictive process for \(\alpha\left(t;C,m\right)\) extends to crack lengths at the limit of the critical threshold (\(\alpha_{\text{cr}}=155\) mm), which is not the case for \(\alpha(t)\).
The predictive capacity of the trained GP for \(\alpha\left(t;C,m\right)\) is presented more extensively in Figure 5. To produce it, four previously unseen crack growth trajectories were selected randomly from the test set. The predictive mean and 95 % C.I. were obtained from the trained surrogate for the corresponding \(C,m\) realizations and the results are plotted comparatively. The trajectories depicted on the left panels are typical of the mean behavior of the SCG process, as they exhibit relatively moderate crack growth rates and somewhat smoother shapes. The model in these cases is highly effective in capturing the mean trend while also providing a narrow high probability density region.
In the top-right panel, the actual trajectory begins from an outlying initial crack length and then exhibits a rapid increase in crack growth rate beyond approximately the 1.5 year mark. This is indicative of the effect of the stochastic time-variant loading, as more severe sea states, and therefore \(\{\Delta S,N_{\text{avg}}\}\) pairs, lead to an acceleration of crack growth. As a result, the model largely fails to provide a sufficient prediction for prior construction. Interestingly, while the case in the bottom-right panel also exhibits the mark of significant uncertainty in the loading sequence, the predictive performance is visibly improved. This is a consequence of the fact that the initial crack length for the actual trajectory is not an outlier, thus highlighting the effect of uncertainty in the initial crack length.
Figure 4: Comparison between prior models for crack length and crack growth monitoring with uncertain initial crack length
Motivated by this, we decided to investigate the predictive performance of the surrogate when the initial crack length is included in the conditioning variables. This decision also has a practical dimension, in that when monitoring crack growth an initial crack is expected to have been detected and its length measured, up to some confidence level. The training set now becomes \(\mathcal{D}_{\text{III}}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}=\left\{\left( \left[t^{(i)}\ C^{(i)}\ m^{(i)}\ \alpha_{0}^{(i)}\right]^{\top},a_{i}\right)\right\}_{i=1}^{N}\), with \(\mathbf{x}_{i}\in\mathbb{R}^{4}\) and \(\alpha_{i}\in\mathbb{R}\). This time, training was implemented over 2000 iterations, which required approximately 14 minutes using the same machine, along with a hold-out scheme for training and test set construction. The resultant GP predictive process for \(\alpha\left(t;C,m,\alpha_{0}\right)\) features only fatigue loading-related quantities as its latent variables.
The uncertainty reduction enabled by the inclusion of the initial crack length to the model inputs is clearly demonstrated in Figure 6, where the predicted 95 % C.I. is drastically narrower.
Figure 5: Predictive performance of prior model for crack growth monitoring for different \(C,m\) realizations
A fourfold increase is observed in the log-likelihood of the trajectory, along with an NMSE decrease from 0.2 to 0.1; the very low values signifying the overall success of the model in terms of capturing the mean trend. Furthermore, its capacity to capture heteroscedastic behavior has clearly been improved; this is further reinforced by the results presented in Figure 7.
There, despite the fact that the initial crack length for the actual trajectory is an outlier, the model is capable of capturing the crack growth very effectively, providing an almost pointwise accurate prediction over the first year. After that, a relative lag is observed alongside an increasingly less smooth behavior which indicates that less severe loading has caused a deceleration of the crack growth process. This is followed by a rapid increase in the crack growth rate indicating a reversal in the fatigue loading pattern. The trained model is proven capable of following this stochastic behavior by producing a highly heteroscedastic predictive process which, compared to the predictive process for \(\alpha\left(t;C,m\right)\), exhibits lower variance throughout.
Quantitatively, the log-likelihood of the crack growth trajectory is three times greater for the GP predictive process \(\alpha\left(t;C,m,\alpha_{0}\right)\), compared to \(\alpha\left(t;C,m\right)\), with respective NMSE values of 0.08 and 0.11. While relatively equivalent in terms of mean predictions, adding knowledge to the model about the initial conditions has a decidedly positive effect on reducing uncertainty, as is also made clear from Figure 8. The crack growth trajectories and corresponding model predictions shown therein were obtained for test set realizations of both the Paris-Erdogan parameters \(C,m\) and the initial crack length \(\alpha_{0}\).
The GP predictive processes for the top panels, which contain smooth trajectories largely unaffected by loading variability, exhibit almost point-wise consistency in the mean alongside very narrow credible intervals. For the bottom panels, where loading variability is more pronounced, especially on the left-hand side, the model still produces heteroscedastic processes that accurately capture the trends in the actual trajectories. As expected, the performance of the model predictive processes is unaffected by outlying initial crack lengths, such as the one on the top-left panel.
Figure 6: Comparison between prior models for crack length and crack growth monitoring including conditioning on the initial crack length \(\alpha_{0}\)
Figure 9 further showcases the capability of the model to generate prior distributions over the crack length at different moments in time and under different levels of conditioning variables. For \(t=1.5\) years, the trained predictive GP models are used to generate (Gaussian) prior distributions. For the models including conditioning variables, i.e., \(\alpha\left(t;C,m\right)\) and \(\alpha\left(t;C,m,\alpha_{0}\right)\), the same realizations of \(C,m,\alpha_{0}\) as in Figures 6 & 7 were employed. Here as well, the uncertainty reduction is evident, while the variance of the resulting distributions is seen to increase under lower levels of information, i.e., when more variables are treated as latent. This leads to wider priors which are less prone, in the Bayesian setting, to introduce bias during the inference process. Therefore, in addition to its ability to accurately represent problem physics, as demonstrated previously, the model is shown to exhibit statistical properties which prove it is suitable for the task of constructing physically consistent Bayesian priors.
Figure 7: Comparison between prior models for crack growth monitoring with and without conditioning on initial crack length \(\alpha_{0}\)
Figure 8: Predictive performance of prior model for crack growth monitoring for different \(C,m,\alpha_{0}\) realizations
Figure 9: Crack length priors under different levels of knowledge about the crack growth process at \(t=1.5\) years
## 5 Concluding Remarks
This work demonstrated the potential of using GP regression models as surrogates for stochastic crack growth processes in order to construct physically consistent prior distributions in Bayesian SHM problems. The proposed model is capable of accounting for different sources of uncertainty, either directly as input parameters or indirectly as latent variables, thus allowing it to be used under different levels of physical knowledge as well as for a hierarchy of different tasks. Implementation took place using an existing dataset of crack growth realizations for a typical ship structural component, which were obtained using the Paris-Erdogan law and taking into account both material and load-related uncertainty.
When used to construct prior distributions for crack length monitoring, where all uncertain quantities are modeled as latent variables, the surrogate proved adequate for most of the crack growth duration. However, in the initial phase of crack growth, where lower growth rates are observed, the model was found to underestimate the process variance, which could prove problematic in the Bayesian setting. Although generally capable of modeling heteroscedastic behavior due to the use of inducing point methods, the employed model is not strictly built to model heteroscedastic variance. This could provide an avenue for future research in view of improving model predictive performance for the crack length monitoring task.
The introduction of more knowledge on the physical parameters of the crack growth process, in the form of the initial crack length and/or the Paris-Erdogan law parameters, led to a reduction in the predictive uncertainty of the model, as well as an improvement of its capacity to model heteroscedastic variance. This observation provides incentive for further research into the impact of introducing physical knowledge into the model, this time not through the training data but through its structure. Such a grey-box modeling approach, where problem physics can be introduced either through the mean function shape or via constraints on the covariance function, offers the possibility of a model that can generalize well using smaller amounts of potentially lower quality, and thus cheaper to obtain, data.
Finally, it is important to state that this work constitutes the first step in a broader research effort that the authors are currently undertaking. The developed models are meant to act as parts of a hierarchical methodology that aims to tackle both tasks the models were demonstrated on, i.e., crack length and crack growth monitoring, in a simultaneous and interchangeable manner.
## Acknowledgements
The authors would like to gratefully acknowledge the contribution of Pavlos Makris, who was instrumental in producing the stochastic crack growth dataset employed throughout this work.
|
2305.03291 | The Design and Operation of Digital Platform under Sociotechnical Folk
Theories | We consider the problem of how a platform designer, owner, or operator can
improve the design and operation of a digital platform by leveraging a
computational cognitive model that represents users' folk theories about a
platform as a sociotechnical system. We do so in the context of Reddit, a
social media platform whose owners and administrators make extensive use of
shadowbanning, a non-transparent content moderation mechanism that filters a
user's posts and comments so that they cannot be seen by fellow community
members or the public. After demonstrating that the design and operation of
Reddit have led to an abundance of spurious suspicions of shadowbanning in case
the mechanism was not in fact invoked, we develop a computational cognitive
model of users' folk theories about the antecedents and consequences of
shadowbanning that predicts when users will attribute their on-platform
observations to a shadowban. The model is then used to evaluate the capacity of
interventions available to a platform designer, owner, and operator to reduce
the incidence of these false suspicions. We conclude by considering the
implications of this approach for the design and operation of digital platforms
at large. | Jordan W. Suchow, Lea Burton, Vahid Ashrafimoghari | 2023-05-05T05:47:37Z | http://arxiv.org/abs/2305.03291v1 | # The Design and Operation of Digital Platform under Sociotechnical Folk Theories
###### Abstract
We consider the problem of how a platform designer, owner, or operator can improve the design and operation of a digital platform by leveraging a computational cognitive model that represents users' folk theories about the platform as a sociotechnical system. We do so in the context of Reddit, a social-media platform whose owners and administrators make extensive use of shadowbanning, a non-transparent content moderation mechanism that filters a user's posts and comments so that they cannot be seen by fellow community members or the public. After demonstrating that the design and operation of Reddit have led to an abundance of spurious first-party suspicions of shadowbanning in cases where the mechanism was not in fact invoked, we develop a computational cognitive model of users' folk theories about the antecedents and consequences of shadowbanning that predicts when users will attribute their on-platform observations to a shadowban. The model is then used to evaluate the capacity of interventions available to a platform designer, owner, and operator to reduce the incidence of these false suspicions. We conclude by considering the implications of this approach for the design and operation of digital platforms at large.
Digital platforms, folk theories, causal graphical models, shadowbanning, computational cognition
## Introduction
A digital platform is a website or mobile application that enables its users to interact with each other or with platform operators (De Reurer, Sorensen, & Basole, 2018). The choices that designers, owners, and operators make with respect to the design and operation of a digital platform affect the user's acceptance and usage of the platform (Venkatesh, Morris, Davis & Davis, 2003). In some cases, these choices may lead the user to develop a folk theory about how the platform works (DeVito et al., 2018). A folk theory is a layperson's understanding of how something works. It is typically an oversimplification of reality, but it can nonetheless be useful for guiding behavior. For example, many people have a folk theory about how a disease is transmitted (Motta & Callaghan, 2020). This folk theory may be oversimplified or even incorrect, but it can nonetheless be useful for guiding hygiene behavior. In the context of digital platforms, folk theories about how the platform works can guide the user's behavior on the platform. For example, a user may develop a folk theory that the platform is designed to promote certain content over other content. This folk theory may lead the user to believe that the platform is biased against certain groups of people. If the user identifies with one of those groups, the user may avoid posting certain types of content on the platform, leading to an indirect chilling of speech that flows from the user's mental model of the platform's operation.
When designers, owners, or operators of a digital platform aim to correct users' misperceptions about a platform's operation, they do not have direct access to the users' folk theories about the platform. Rather,
they must rely on indirect evidence, such as user behavior on the platform or in certain cases the results of surveys or other instruments aimed at understanding the userbase. In some cases, this indirect evidence may be sufficient for the platform designer, owner, or operator to identify a user's folk theory and take steps to correct it. In other cases, however, the indirect evidence may be ambiguous, making it difficult to identify the user's folk theory with certainty. In these cases, a computational cognitive model of the user's folk theory can be used to disambiguate the indirect evidence and identify the user's folk theory with greater certainty. Once the user's folk theory has been identified, the platform designer, owner, or operator can use the folk theory to make predictions about how certain interventions will affect users' perceptions of the platform and thereby guide improvement to the platform.
Computational models of cognition provide a formal framework for modeling a user's folk theory of a sociotechnical system. Computational models of cognition have a long and rich history in psychology and cognitive science. These models have been used to study various aspects of human cognition, including perception, reasoning, decision making, and memory. In recent years, computational modeling has been increasingly used to study social phenomena. For example, computational models of social cognition have been used to study how people reason about the mental states of others, how they form and change their beliefs about the social world, and how they interact with others in social games.
In this paper, we study the problem of how a platform designer, owner, or operator can improve a digital platform by leveraging a computational cognitive model that represents users' folk theories about the platform's operation. We begin by reviewing mechanisms for content moderation on digital platforms in the context of process and outcome transparency. Next, we introduce the role that computational models of cognition can play in the design of digital platforms. We then demonstrate that the design and operation of Reddit have led to an abundance of spurious first-party suspicions of shadowbanning in cases where the mechanism was not invoked. We proceed to develop a computational cognitive model that represents users' folk theories about the antecedents and consequences of shadowbanning and predicts when users will attribute their on-platform observations to a shadowban. We then use the model to determine the interventions available to a platform designer, owner, and operator to reduce the incidence of these false suspicions and consider the implications of the approach for the design and operation of digital platforms at large.
## Shadowbanning and non-transparent content moderation
Online communities use content-moderation mechanisms to promote and enforce norms of discourse within a community and to mitigate harms that would undermine the community's purpose (Kraut and Resnick, 2012). These harms include, for example, the propagation of content that encourages and effectuates sexism, racism, radicalization, disinformation, fraud, and spam (Grimmelmann, 2015; Chandrasekharan et al., 2021). Content moderation involves screening, evaluating, categorizing, approving, promoting, removing, or hiding user-generated content based on predefined rules, guidelines, and policies (Grimmelmann, 2015). The various content-moderation mechanisms differ significantly in the transparency of their processes and outcomes (Sander, 2019; Cook et al., 2021).
Shadowbanning is a form of non-transparent content moderation where a moderator or platform owner secretly bans a user from participating in an online community or silences a user within it (Myers West, 2018; Cole, 2018). Under a shadowban, the platform is configured to filter the user's generated content (e.g., their posts and comments) so that, unbeknownst to the user, it cannot be seen by fellow community members or the public. Shadowbanning therefore produces near total non-transparency in the outcome of moderation. Shadowbanning is employed when platform owners believe that other content-moderation mechanisms with more transparent outcomes, such as suspensions or outright bans, will be evaded. For example, the user may exploit the anonymity offered by a social-media platform to create a new account and circumvent the suspension or ban (Grimmelmann, 2015). Shadowbans can have a transparent or non-transparent process, depending on whether moderators and platform owners disclose a policy that clearly articulates the conditions under which it is invoked.
## Bayesian models of cognition
Causal graphical models ("Bayes nets") are a formalism for describing the structure and strength of causal relations between events. A causal graphical model is defined by two components, a graph that describes
its structure and a set of probability tables that describe the contingencies between events. The graph defining the model's structure has nodes that are random variables, with each node representing an observable or latent variable. Directed edges between the nodes represent dependencies between the events. The structure of the causal graphical model is also sometimes referred to as a directed acyclic graph, or DAG. Each node is associated with a conditional probability table, which specifies the probability of each possible outcome of the event given the outcome of the events that it depends on.
Critically, a causal graphical model can be used by a researcher in one of two ways. First, it can be used as an instrument of social science to represent the researcher's model of the world. Second, it can be used as an instrument of cognitive science to represent the researcher's model of the user's folk theories about the world. The latter formulation becomes most useful when the user's model of the world diverges from that of the experimenter, in which case it can be used to represent how users' folk or "intuitive" theories about the world differ from reality.
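To make the formalism concrete, the following self-contained Python sketch encodes a deliberately simplified two-cause fragment of such a folk theory (a shadowban versus an unpopular post, both of which can depress engagement) and computes the posterior probability of a shadowban by enumeration; the structure and all probabilities are illustrative assumptions, not estimates from our data or the model of Figure 1.

```python
from itertools import product

# Illustrative folk-theory fragment: Shadowban -> LowEngagement <- UnpopularPost.
P_shadowban = {1: 0.02, 0: 0.98}          # hypothetical prior on being shadowbanned
P_unpopular = {1: 0.30, 0: 0.70}          # hypothetical prior on the post being unpopular
P_low_engagement = {                      # conditional probability table for the child node
    (1, 1): 0.99, (1, 0): 0.95, (0, 1): 0.60, (0, 0): 0.10,
}

def posterior_shadowban(observed_low_engagement=True):
    """P(shadowban | low engagement) by brute-force enumeration over the DAG."""
    joint = {}
    for s, u in product([0, 1], repeat=2):
        p_e = P_low_engagement[(s, u)] if observed_low_engagement else 1 - P_low_engagement[(s, u)]
        joint[(s, u)] = P_shadowban[s] * P_unpopular[u] * p_e
    evidence = sum(joint.values())
    return sum(p for (s, _), p in joint.items() if s == 1) / evidence

print(posterior_shadowban())   # ~0.07: well above the 2% base rate, yet far from certainty
```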
## False suspicions of shadowbanning on Reddit
We performed an empirical analysis of first-party suspicions of shadowbanning on Reddit, a social media platform organized into many diverse communities ("subreddits"). Reddit is a fitting platform for studying suspicions of shadowbanning because its administrators make extensive use of shadow content moderation, both at the user level (shadowbanning) and post level (shadow post removal). Critically, a little-known subreddit, /r/ShadowBan, enables users to directly determine whether they are shadowbanned. Though Reddit's shadowbanning mechanism operates at the level of the platform, not the subreddit, a subreddit's moderators can approve a post by an otherwise shadowbanned user so that it appears on the subreddit. The /r/ShadowBan subreddit is configured to automatically approve posts by shadowbanned users; a bot on the subreddit then replies to approved posts, informing the user whether they are shadowbanned and which of their recent posts have been removed. The present study therefore operationalizes the construct _first-party suspicions of shadowbanning_ as the act of posting on /r/ShadowBan. Certainly, this definition is both too broad (because a user can post on /r/ShadowBan for any reason) and too narrow (because a user may be unaware of the subreddit or disinclined to reveal their suspicion publicly, perhaps in fear of retaliation). Even so, the subreddit provides a unique view into a phenomenon that is otherwise invisible and largely confined to private thought.
We began our analysis by empirically analyzing public logs of Reddit posts and comments (Baumgartner et al., 2020) to measure the incidence of suspicions of shadowbanning in the decade following /r/ShadowBan's creation in 2011. Over that period, 52,539 unique users registered 94,173 suspicions of shadowbanning. Of these suspicions, 3,247 (3.4%) were by shadowbanned users, whereas 90,926 (96.6%) were false suspicions by members who were not shadowbanned. Suspicious users represent a diverse sampling of Reddit users, having collectively created 5,985,042 posts and 55,976,306 comments across 152,949 subreddits. The modal subreddit that suspicious users contribute to, with over 3 million posts and comments, is /r/AskReddit, one of the largest subreddits. However, the tail is long, with 894 subreddits having at least 10,000 posts or comments by suspicious users, 4,981 subreddits having at least 1,000, and 17,322 subreddits having at least 100.
Next, we surveyed 500 Reddit users who in 2021 suspected that they were shadowbanned. To conduct the survey, we sent direct messages to Reddit users shortly after they posted to the /r/ShadowBan community, requesting that they share the basis of their suspicion. All users who posted to the forum during the data-collection period were surveyed, except for those excluded according to several criteria that were applied to improve the response rate and minimize the burden of unwanted communication associated with the surveying method. First, we contacted users only if their account age was at least 6 months, which has the effect of excluding professional spammers and fraudsters, who tend to repeatedly create new accounts and respond with aggression to direct messages. Second, we contacted each user at most once and did not recontact users who checked for shadowbans multiple times. Third, we contacted only 10% of the users who were not otherwise excluded.
Our analysis of the reported bases of suspicions focused on distinguishing between process and outcome transparency (Grimmelmann, 2015) by examining perceptions of the antecedents and consequences of shadowbanning on the platform, respectively. We first report empirical results related to the _antecedents_ of shadowbanning: observable factors that users interpret as causes of a moderator enacting the
shadowban. The antecedent of shadowbanning most cited (by 5.5% of users) as a reason for suspicion was having written a post or comment that they believed was controversial or antagonistic. Some users (2.2%) cited the belief that moderators are strict and heavy-handed in their use of shadowbans. Users also cited tens of other less frequent antecedents, often referencing specific on-platform actions by the user, fellow community members, other users, or moderators that led to conflict. Next, we report empirical results related to the _consequences_ of shadowbanning: observable factors that users interpret as downstream effects of a moderator enacting the shadowban. The most frequently cited cause of suspicion related to the consequences of a shadowban, reported by 16% of surveyed users, was observing less engagement with their posts and comments than expected. Alternative consequence-based routes to suspicion included observing that a post was removed (10.6%), that a comment was removed (6.2%), that multiple posts did not appear in a particular subreddit (3.1%), that their account profile was not visible to a confederate or when using a private browser session (2.7%), an inability to comment (2.7%), an inability to chat (2.2%), and a long tail of observed failures in other actions taken on the platform. (Notably, only some of the cited consequences are possible effects of shadowbans on the platform; others are technical glitches that users wrongly attributed to a shadowban.)
## Bayesian model of suspicions of shadowbanning
### Figure 1
_A folk theory that guides users' first-party suspicions of shadowbanning._
Note. The figure shows a diagram representing the causal model that underlies a user's first-party suspicions of shadowbanning. Nodes (N1-N7) are events and directed edges (E1-E8) are dependencies between events. Solid nodes are unobservable and dotted nodes are observable by the user. Nodes filled in grey represent events that can be intervened upon by platform designers, owners, and moderators. The edge E8 is dashed because it introduces a cycle, violating an assumption of Bayesian networks. Please note that, although event descriptions are framed actively (e.g., "Platform employs..."), nodes are events with multiple possible outcomes, including the implied negative frame (e.g., "Platform does not employ...").
In the context of shadowbanning, we might ask which events a platform designer, owner, or operator might intervene upon to affect first-party suspicions of shadowbanning. We note, however, that such an intervention would be an indirect one in that, ultimately, what is being intervened upon is only an input into the cognitive processes that construct the intuitive model under consideration here. Intervening upon
the world may or may not cause a commensurate change in the user's intuitive model of the sociotechnical system. Indeed, articulation of intuitive theories is most useful as a practice when there is daylight between the ground truth of a platform's operation and the intuitive theories held by those participating in it. The intuitive theories of platform owners and operators, moderators, and users may diverge in complex ways that have ramifications for how users understand the system.
When a Bayesian network is interpreted as an intuitive theory describing the user's mental model of the world, it becomes possible for a platform owner to intervene in ways that would ordinarily be fruitless when attempting to intervene directly upon the world. In particular, the platform owner can intervene upon the user's priors over events and contingencies between events in a way that goes beyond simply setting the outcome of particular nodes. Further, the platform owner can intervene in ways that have no direct effect on the platform's operation but nonetheless have an indirect effect because of the way that they alter the inferences drawn by its users.
For an example of the alternative modes of intervention available under the intuitive-theory interpretation of a causal model, consider node N1 -- whether the platform employs the shadowbanning mechanism. Under one mode of intervention, the platform owner exerts their control over the platform through a design choice about the mechanisms of moderation that are available to moderators on the platform: they can design the platform in such a way that moderators have the shadowban mechanism available to them, or alternatively, they can design the platform in such a way that no such mechanism is available. To intervene upon the user's mental model of the world, in contrast, the platform owner can publicly reveal the presence of the mechanism or attest to its absence, thereby intervening upon the user's prior over whether the platform employs the mechanism while changing nothing about the platform itself.
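Continuing the toy example above, the snippet below treats the user's prior belief that the platform employs shadowbanning at all (node N1) as a free parameter and shows how lowering that prior, for instance through a public attestation, deflates the posterior suspicion triggered by low engagement; all numbers are again illustrative assumptions rather than estimates.

```python
def p_shadowban_given_low_engagement(prior_mechanism):
    """Posterior suspicion as a function of the prior that the mechanism exists (toy numbers)."""
    p_ban_if_mechanism = 0.05                 # P(user is shadowbanned | mechanism employed)
    p_low_eng_if_ban, p_low_eng_otherwise = 0.95, 0.20
    p_ban = prior_mechanism * p_ban_if_mechanism
    numerator = p_ban * p_low_eng_if_ban
    return numerator / (numerator + (1.0 - p_ban) * p_low_eng_otherwise)

# Attesting that the mechanism is absent (or rare) lowers the prior and, with it, the suspicion:
for prior in (0.9, 0.5, 0.1):
    print(prior, round(p_shadowban_given_low_engagement(prior), 3))
```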
|
2308.12360 | Turing patterns on a two-component isotropic growing system. Part 2:
Conditions based on a potential function for exponential growth/shrinkage | We propose conditions for the emergence of Turing patterns in a domain that
changes in size by homogeneous growth/shrinkage. These conditions to determine
the bifurcation are based on considering the geometric change of a potential
function whose evolution determines the stability of the trajectories of all
the Fourier modes of the perturbation. For this part of the work we consider
the situation where the homogeneous state of the system are constant
concentrations close to its stationary value, as occurs for exponential
growth/decrease. This proposal recovers the traditional Turing conditions for
two-component systems in a fixed domain and is corroborated against numerical
simulations of increasing/decreasing domains of the Brusselator reaction
system. The simulations carried out allowed us to understand some
characteristics of the pattern related to the evolution of its amplitude and
wavenumber and allow us to anticipate which features strictly depend on its
temporal evolution. | Aldo Ledesma-Durán | 2023-08-23T18:05:51Z | http://arxiv.org/abs/2308.12360v1 | # Turing patterns on a two-component isotropic growing system.
###### Abstract
We propose conditions for the emergence of Turing patterns in a domain that changes in size by homogeneous growth/shrinkage. These conditions for determining the bifurcation are based on considering the geometric changes of a potential function whose evolution determines the stability of the trajectories of all the Fourier modes of the perturbation. For this part of the work we consider the situation where the homogeneous state of the system consists of constant concentrations close to its stationary value, as occurs for exponential growth/shrinkage. This proposal recovers the traditional Turing conditions for two-component systems in a fixed domain and is corroborated against numerical simulations of increasing/decreasing domains of the Brusselator reaction system. The simulations carried out allowed us to understand some characteristics of the pattern related to the evolution of its amplitude and wavenumber, and to anticipate which features depend strictly on its temporal evolution.
## I Presentation
Turing patterns in reaction-diffusion systems where the domain changes in size present features different from those of a fixed domain, the most important being that the shape of the pattern at a specific time depends crucially on its past history [1; 2]. It is believed that this phenomenon, which can be understood as a type of hysteresis, is probably related to persistence, _i.e._, the ability of a dissipative structure to maintain its current wavenumber even when another state with a different wavenumber could be more stable, _i.e._, more resistant to sideband disturbances [3]. To test this possibility for any reaction-diffusion system or type of growth, a non-linear approximation to the solution of the reaction-diffusion-dilution (RDD) system near the Turing bifurcation is required. However, in the case of a domain that changes with time, this analysis is impractical since it has not been conclusively resolved, even at the linear level, how to find the Turing bifurcation in a growing domain. Some important approaches in this respect are those in [4; 5].
The strategy followed in this work to find the Turing bifurcation is to consider the changes in the structure of the phase plane in order to find a potential function for the Fourier-mode perturbations. From this potential function, it is expected that all trajectories descend toward a stable point in the absence of diffusion and that, when diffusion is turned on, the potential becomes an unstable saddle for some wavenumbers. We will show that observing the geometric changes in the structure of this potential function allows one to establish hypotheses about where Turing patterns can emerge. These hypotheses will be tested with specific numerical simulations of the Brusselator RDD system, using a finite-difference method in a one-dimensional reaction-diffusion system with different types of homogeneous growth/shrinkage.
### Summary of Part 1: Homogeneous state and perturbations
Consider a reaction-diffusion process in a domain that grows in size as \(l(t)\). If growth occurs homogeneously, the relationship between the real and computational domains can be written as \(x=x_{0}+l(t)\xi\) with \(\xi\in[0,1]\), where \(\xi\) is the fixed coordinate and \(x\) the actual coordinate. If \(\mathbf{c}\) represents the vector of concentrations, \(\mathbf{D}\) its diffusion matrix, and \(\mathbf{f}(\mathbf{c})\) the vector of chemical reactions, we prove in Part 1 of this series that the RDD equation describing the dynamics [6] obeys
\[\frac{\partial\mathbf{c}}{\partial t}+\frac{\dot{l}(t)}{l(t)}\mathbf{c}(\xi,t )=\frac{1}{l^{2}(t)}\mathbf{D}\frac{\partial^{2}\mathbf{c}}{\partial\xi^{2}}+ \mathbf{f}(\mathbf{c}). \tag{1}\]
To exemplify some differences in the occurrence of patterns in fixed and increasing domains, in this work we will consider in our simulations the Brusselator given by [7]
\[\mathbf{f}(\mathbf{c})=(A-Bc_{u}-c_{u}+c_{u}^{2}c_{v},Bc_{u}-c_{u}^{2}c_{v})^ {T}. \tag{2}\]
The homogeneous state \(\mathbf{c}_{s}(t)\), _i.e._, the part related to the zero Fourier mode of the solution of (1) with no spatial contribution, obeys
\[\frac{\partial\mathbf{c}_{s}}{\partial t}+\frac{l^{\prime}(t)}{l(t)}\mathbf{ c}_{s}=\mathbf{f}(\mathbf{c}_{s}), \tag{3}\]
and presents very discernible features in each case. For the non-equilibrium Brusselator system, the dilution term can induce the following scenarios: 1) the homogeneous solution at large times can differ from the fixed-point solution \(\mathbf{c}_{0}\), _i.e._, that where both reaction rates are null (for exponential growth/shrinkage); 2) the homogeneous state can oscillate around the fixed-point concentration (for sinusoidal variation); and 3) the homogeneous state can tend very slowly to the fixed-point concentration or diverge when the domain shrinks too much (for linear and quadratic shrinkage). In contrast, for systems whose fixed point is the origin, all types of growth lead the concentrations to the same final steady state defined by the fixed point at the origin.
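As a quick numerical illustration of the first scenario, the sketch below integrates the homogeneous-state equation (3) for the Brusselator (2) under exponential growth, for which \(l^{\prime}/l=r\) is constant; the parameter values are arbitrary and serve only to show that the long-time state is displaced from the fixed point \(\mathbf{c}_{0}=(A,B/A)\).

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B, r = 2.0, 3.0, 0.05                     # illustrative Brusselator parameters, growth rate r

def rhs(t, c):
    """Right-hand side of Eq. (3) with l'(t)/l(t) = r (exponential growth)."""
    u, v = c
    f = np.array([A - (B + 1.0) * u + u**2 * v, B * u - u**2 * v])
    return f - r * c                          # reaction terms minus the dilution term

c0 = np.array([A, B / A])                     # fixed point of the reaction terms alone
sol = solve_ivp(rhs, (0.0, 200.0), c0, rtol=1e-8)
print(sol.y[:, -1], c0)                       # the dilution shifts the long-time state away from c0
```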
Under appropriate approximations related to 1) slow domain variation, 2) safe distance from bifurcations and 3) smallness of non-linear terms, we prove with comparisons with numerical solutions that a good approximation for the homogeneous state in (3) is given by
\[\mathbf{c}_{s}(t)=\mathbf{c}_{0}+\frac{l(0)}{l(t)}\mathbf{P}e^{\mathbf{\Lambda}t}\mathbf{P}^{-1}[\mathbf{c}_{s}(0)-\mathbf{c}_{0}]-\mathbf{P}\frac{e^{\mathbf{\Lambda}t}}{l(t)}\left(\int\limits_{0}^{t}l^{\prime}(t^{\prime})e^{-\mathbf{\Lambda}t^{\prime}}dt^{\prime}\right)\mathbf{P}^{-1}\mathbf{c}_{0}. \tag{4}\]
In this equation, \(\mathbf{\Lambda}\) is the diagonal matrix of eigenvalues of the Jacobian \(\mathbf{J}=\frac{\partial\mathbf{f}}{\partial\mathbf{c}}(\mathbf{c}_{0})\), \(\mathbf{P}\) its modal matrix, and \(\mathbf{c}_{0}\) the constant fixed point. We show that this approximation correctly describes the main features of the homogeneous state for exponential, linear, quadratic and sinusoidal domain variations.
We also show that, for certain types of domain variation, the concentration changes are small and, therefore, a representative constant value \(\mathbf{C}_{0}\) can approximate the homogeneous state \(\mathbf{c}_{s}(t)\) on a time interval between \(t_{i}\) and \(t_{f}\). One form of this representative value is given by its direct temporal average:
\[\mathbf{C}_{0}=\mathbf{c}_{0}+\frac{1}{t_{f}-t_{i}}\left(\int_{t_{i}}^{t_{f}} \boldsymbol{\delta C}(t)\,dt\right)\mathbf{c}_{0}, \tag{5}\]
The deviation \(\boldsymbol{\delta C}(t)\) was calculated explicitly in Part 1 of this paper for exponential, linear, quadratic, and sinusoidal growth functions by averaging equation (4). We prove, through comparisons with the numerical solutions, that this equation approximates \(\mathbf{c}_{s}(t)\) with low error for slow growth/shrinkage rates, once a transient time has passed, and over increase/decrease intervals of up to ten times the size of the original domain.
This distinction between fixed point concentration \(\mathbf{c}_{0}\), time dependent homogeneous state \(\mathbf{c}_{s}(t)\) and representative and constant concentration \(\mathbf{C}_{0}\), is relevant for non-equilibrium systems like the Brusselator. For systems like the BVAM, in the steady state, these three concentrations represent the same point: the origin [8; 9].
The importance of this summary here lies in the effect that the homogeneous state has on the perturbations. In Part 1 we prove that the perturbations \(\boldsymbol{\zeta}\) of the system (1), to first order obey
\[\frac{\partial\boldsymbol{\zeta}}{\partial t}+\frac{\dot{l}(t)}{l(t)} \boldsymbol{\zeta}=\frac{1}{l^{2}(t)}\mathds{D}\frac{\partial^{2}\boldsymbol{ \zeta}}{\partial\xi^{2}}+\frac{\partial\mathbf{f}}{\partial\mathbf{c}}( \mathbf{c}_{s})\boldsymbol{\zeta}. \tag{6}\]
Therefore, the evaluation of the last term depends, in general, explicitly on the time-dependent homogeneous state. However, it is expected that, in time intervals where the homogeneous concentration changes little, this Jacobian can be approximated by the constant value \(\hat{\mathbf{J}}=\frac{\partial\mathbf{f}}{\partial\mathbf{c}}(\mathbf{C}_{0})\). This strategy simplifies the characterization of the disturbances and allows us to corroborate that their stability depends on three factors: 1) the change in concentration with respect to the fixed-point concentration; 2) the linear change in reaction rates induced by the dilution; and 3) the direct effect of dilution as measured by the local increase/decrease in volume. We show that an approximate criterion for the stability of perturbations is that the quantity
\[\lambda(t)=\text{Re}\{\hat{\mathbf{\Lambda}}_{i}\}-\frac{1}{t}\log\frac{l(t) }{l(0)}, \tag{7}\]
be negative for all the eigenvalues \(\hat{\mathbf{\Lambda}}_{i}\), indexed by \(i\), of the Jacobian matrix \(\hat{\mathbf{J}}\) evaluated at \(\mathbf{C}_{0}\).
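For exponential growth, \((1/t)\log[l(t)/l(0)]\) reduces to the constant rate \(r\), so the criterion of Eq. (7) can be checked directly from the Brusselator Jacobian; the snippet below does so for illustrative parameter values, with \(\mathbf{C}_{0}\) approximated by the fixed point for simplicity.

```python
import numpy as np

A, B, r = 2.0, 3.0, 0.05                       # illustrative parameters and growth rate
u0, v0 = A, B / A                              # stand-in for C0, close to the fixed point c0

# Jacobian of the Brusselator reaction terms evaluated at (u0, v0)
J_hat = np.array([[-(B + 1.0) + 2*u0*v0, u0**2],
                  [B - 2*u0*v0,          -u0**2]])
eigs = np.linalg.eigvals(J_hat)
print(np.max(eigs.real) - r < 0)               # True: criterion (7) met, perturbations decay
```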
In the next sections we will explain our methodology for finding the Turing bifurcation from a potential function in successive stages. First, in Section II, we present the idea of our potential-function approach by reproducing the well-known Turing conditions for a fixed domain obtained from the eigenvalue problem. Then, in Section III, we derive the conditions for an RDD system where the steady-state concentration is approximated by a constant \(\mathbf{C}_{0}\), as for exponential growth. In Section IV we test our hypotheses against extensive numerical solutions of the Brusselator RDD equations for this type of growth. Finally, in Section V, we provide our discussion and conclusions on the wavenumber selection problem.
## II Turing conditions on a fixed domain in terms of an energy function
In this section we first summarize the Turing conditions of a two-component reaction-diffusion system in a domain of fixed size \(L\), as obtained from the traditional linear eigenvalue problem. Then, we restate the consequences of these conditions in terms of a potential function, which allows us to pose the general problem of finding the Turing bifurcation as the result of structural changes of this function.
For a fixed domain, eq. (6) implies \(l^{\prime}(t)=0\), \(x=l(t)\xi=L\xi\) and \(\mathbf{c}_{s}\to\mathbf{c}_{0}\); therefore, the perturbations obey
\[\frac{\partial\boldsymbol{\zeta}}{\partial t}=\mathds{D}\frac{\partial^{2} \boldsymbol{\zeta}}{\partial x^{2}}+\mathbf{J}\boldsymbol{\zeta}. \tag{8}\]
Here \(\mathbf{J}\) is the Jacobian evaluated at the fixed point, where \(\mathbf{f}(\mathbf{c}_{0})=\mathbf{0}\). Periodic boundary conditions can be assumed for the variable \(\boldsymbol{\zeta}\) between \(0\) and \(L\), with small random noise around \(\mathbf{c}_{0}\) as the initial condition for the concentrations.
### Analysis in terms of eigenvalues
The solution of (8) consists of a linear combination of Fourier modes \(\boldsymbol{\zeta}_{k}(x,t)=e^{ikx+\lambda_{k}t}\mathbf{v}^{(k)}\), where \(k\) is the wavenumber, \(\lambda_{k}\) an eigenvalue, and \(\mathbf{v}^{(k)}\) the eigenvector of the matrix
\[\mathds{A}(k)=\mathds{J}-k^{2}\mathds{D}. \tag{9}\]
These eigenvalues are related to the dispersion relation
\[\lambda_{k}^{2}-\tau_{\mathds{A}}(k)\lambda_{k}+\Delta_{\mathds{A}}(k)=0, \tag{10}\]
where \(\tau_{\mathds{A}}\) and \(\Delta_{\mathds{A}}\) are the trace and determinant of \(\mathds{A}\), both depending explicitly on the wavenumber \(k\).
In this case, the conditions for Turing pattern formation are:
* The system should be stable in the absence of diffusion. This implies that the real part of the eigenvalues of \(\mathds{A}\) for the mode with \(k=0\) satisfies \(\mathrm{Re}\{\lambda_{0}\}<0\). From (10), this requires \[\tau_{\mathds{A}}(0)<0\text{ and }\Delta_{\mathds{A}}(0)>0.\] (11)
* The system should be unstable when diffusion is turned on for at least some wavenumber \(k_{m}\). Since the previous condition on the trace implies that \(\tau_{\mathds{A}}(k)<0\) for all \(k\), the only way to destabilize the system and obtain \(\mathrm{Re}\{\lambda(k_{m})\}\geq 0\) is through \[\Delta_{\mathds{A}}(k_{m})\leq 0\text{ for some }k_{m}>0.\] (12)
Let us assume that the matrices in (8) for a two-component system are
\[\mathds{J}=\left(\begin{array}{cc}j_{11}&j_{12}\\ j_{21}&j_{22}\end{array}\right)\text{ and }\mathds{D}=\left(\begin{array}{cc}d_{ u}&0\\ 0&d_{v}\end{array}\right). \tag{13}\]
In these terms, the two conditions in (11) lead to \(\tau_{\mathds{J}}<0\) and \(\Delta_{\mathds{J}}>0\), whereas condition (12) requires that the determinant, given by \(\Delta_{\mathds{A}}(k)=k^{4}\Delta_{\mathds{D}}-k^{2}\sigma_{\mathds{D}\mathds{J}}+\Delta_{\mathds{J}}\), have a minimum at some \(k_{m}\) where the function is negative. Here, \(\sigma_{\mathds{D}\mathds{J}}\equiv j_{11}d_{v}+j_{22}d_{u}\). The minimum of the determinant in the \(k\) coordinate occurs at
\[k_{m}=\sqrt{\frac{\sigma_{\mathds{D}\mathds{J}}}{2\Delta_{\mathds{D}}}}. \tag{14}\]
where it takes the value \(\Delta_{\mathds{A}}(k_{m})=\Delta_{\mathds{J}}-\frac{\sigma_{\mathds{D}\mathds{J}}^{2}}{4\Delta_{\mathds{D}}}\). Therefore, the conditions in (12) in terms of the original matrices are
\[\sigma_{\mathds{D}\mathds{J}}>0\text{ and }\sigma_{\mathds{D}\mathds{J}}^{2}-4 \Delta_{\mathds{D}}\Delta_{\mathds{J}}>0. \tag{15}\]
The width of wavenumbers where \(\lambda_{k}\) has positive real part, and where patterns can occur, is given by
\[|k^{2}-k_{m}^{2}|\leq\sqrt{k_{m}^{4}-\frac{\Delta_{\mathds{J}}}{\Delta_{ \mathds{D}}}}. \tag{16}\]
If one of the two diffusion coefficients in \(\Delta_{\mathds{D}}\) is used as a bifurcation parameter, then the Turing bifurcation occurs when \(\Delta_{\mathds{D}}^{b}=\sigma_{\mathds{D}\mathds{J}}^{2}/4\Delta_{\mathds{J}}\) at the wavenumber \(k_{b}=\sqrt{2\Delta_{\mathds{J}}/\sigma_{\mathds{D}\mathds{J}}}\). A detailed computation of these conditions is standard and given, for example, in Ref. [10].
### Analysis in terms of potential functions
In component form, the Fourier mode of the perturbation on (8) \(\boldsymbol{\zeta}_{k}=(u_{k},v_{k})^{T}\) becomes
\[u^{\prime}_{k}(t) =-k^{2}d_{u}u_{k}+j_{11}u_{k}+j_{12}v_{k},\] \[v^{\prime}_{k}(t) =-k^{2}d_{v}v_{k}+j_{21}u_{k}+j_{22}v_{k}.\]
After some calculations, this system can be written as a second order differential equation
\[u^{\prime\prime}_{k}(t)-\tau_{\text{A}}(k)u^{\prime}_{k}+\Delta_{\text{A}}(k)u_ {k}=0, \tag{17}\]
and exactly the same equation holds for \(v_{k}\). The characteristic equation leads to the dispersion relation in (10), and therefore to the same conditions (11) and (12).
Now let us understand the Turing conditions differently. If we multiply (17) by \(u^{\prime}_{k}\), the equation for the \(k\)-th Fourier mode is
\[\frac{dV_{k}}{dt}=\tau_{\text{A}}(k)u^{\prime 2}. \tag{18}\]
where the function \(V_{k}\) is given by
\[V_{k}(u,u^{\prime})\equiv\frac{u^{\prime 2}}{2}+\Delta_{\text{A}}(k)\frac{u^{2 }}{2}. \tag{19}\]
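Explicitly, multiplying (17) by \(u^{\prime}_{k}\) and noting that \(\Delta_{\text{A}}(k)\) does not depend on time for a fixed domain, the left-hand side collects into a total time derivative,

\[u^{\prime}_{k}u^{\prime\prime}_{k}+\Delta_{\text{A}}(k)u_{k}u^{\prime}_{k}=\tau_{\text{A}}(k)u^{\prime 2}_{k}\quad\Longrightarrow\quad\frac{d}{dt}\left[\frac{u^{\prime 2}_{k}}{2}+\Delta_{\text{A}}(k)\frac{u^{2}_{k}}{2}\right]=\tau_{\text{A}}(k)u^{\prime 2}_{k},\]

which is exactly the content of (18) and (19).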
When the conditions (11) for stability in absence of diffusion apply, this function for the mode \(k=0\) is an elliptic paraboloid with a minimum at the origin, always positive, and satisfying \(\frac{d}{dt}V_{0}\leq 0\). Therefore \(V_{0}\) satisfies the conditions for a Lyapunov function and the origin is asymptotically stable in the absence of diffusion.
Now, when diffusion is turned on and the conditions in (12) hold, then \(\frac{d}{dt}V_{k_{m}}\leq 0\), but now \(V_{k_{m}}\) is a saddle. Therefore, if \(\mathbf{u}_{k}=(u_{k},u^{\prime}_{k})\), as the relation
\[\frac{dV_{k}}{dt}=\frac{dV_{k}}{d\mathbf{u}_{k}}\cdot\frac{d\mathbf{u}_{k}}{dt} \tag{20}\]
holds for any wavenumber \(k\), the condition \(\frac{dV_{k_{m}}}{dt}\leq 0\) implies that there is at least one trajectory that diverges as time grows, making the origin \(\mathbf{u}_{k_{m}}=\mathbf{0}\) necessarily unstable for this wavenumber.
In conclusion, the well-known conditions for the occurrence of a Turing pattern in a fixed domain given by (11) and (12) imply that, as \(k\) increases, the family of functions \(V_{k}\) deforms from an elliptical paraboloid (for \(k=0\)) to a saddle (for wavenumbers around \(k_{m}\)), along which the trajectories related to that Fourier mode diverge; past this interval, the functions are again elliptic paraboloids where the trajectories associated with those wavenumbers are stable. In all cases, the dynamics of the system is such that the value of \(V_{k}(t)\) always decreases with time and, therefore, \(V_{k}\) can be interpreted as a kind of potential function for each Fourier mode labeled by \(k\). This form of potential is valid only in the neighborhood of the origin where the linear approximation of each Fourier mode holds, and the nonlinear terms are expected to produce saturation containing the growth of the amplitudes of the unstable modes, as occurs in a fixed domain. It would be physically insightful to see how these particular functions can be associated with a single potential function of the entire reaction-diffusion process that takes into account all the modes [11].
The hypothesis of this work for finding the Turing conditions for a growing domain is the following: there is a set of conditions (similar to (11) and (12)) for the emergence of unstable modes that we do not know, but whose effect on the potential functions is the same: to deform from an elliptic paraboloid to a saddle and again to a paraboloid as the wavenumber increases; therefore, some conditions for Turing pattern formation can be inferred from the behaviour of the potential functions. In the following section we provide a methodology to obtain the equivalent of these two conditions for a system that grows with time. Then we test by numerical simulations whether the proposed conditions predict Turing patterns.
## III Turing conditions for growing domain with constant homogeneous concentration
Let us consider first the case where \(\mathbf{c}_{s}\approx\mathbf{C}_{0}\) and therefore \(\frac{\partial\mathbf{f}(\mathbf{c}_{s})}{\partial\mathbf{c}}\approx\frac{ \partial\mathbf{f}(\mathbf{C}_{0})}{\partial\mathbf{c}}=\hat{\mathbf{J}}\). In this case, after taking the Fourier transform in the computational domain, for each wavenumber \(\kappa\), eq. (1) becomes
\[\frac{\partial\boldsymbol{\zeta}_{\kappa}}{\partial t}=\left[\hat{\mathbf{J}}- \left(\frac{\kappa}{l(t)}\right)^{2}\mathds{D}-\frac{\dot{l}(t)}{l(t)} \mathds{I}\right]\boldsymbol{\zeta}_{\kappa}, \tag{21}\]
In order to simplify notation, we can use \(k(t)\equiv\kappa/l(t)\) as the wavenumber in the actual domain and \(g(t)\equiv\dot{l}(t)/l(t)\). The matrix in brackets is therefore:
\[A(\kappa,t)=\hat{\mathbf{J}}-k^{2}(t)\mathds{D}-g(t)\mathds{I}. \tag{22}\]
### Potential function
If \(\boldsymbol{\zeta}_{\kappa}=(u_{\kappa},v_{\kappa})\), this system in component form is
\[u^{\prime}_{\kappa}(t) =-\left(\frac{\kappa}{l(t)}\right)^{2}d_{u}u_{\kappa}+j_{11}u_{\kappa}+j_{12}v_{\kappa}-g(t)u_{\kappa},\] \[v^{\prime}_{\kappa}(t) =-\left(\frac{\kappa}{l(t)}\right)^{2}d_{v}v_{\kappa}+j_{21}u_{\kappa}+j_{22}v_{\kappa}-g(t)v_{\kappa}.\]
The second-order equation for \(u_{\kappa}\) is
\[u^{\prime\prime}_{\kappa}(t)-\tau_{A}(\kappa,t)u^{\prime}_{\kappa}+\left[ \Delta_{A}(\kappa,t)+2d_{u}k(t)k^{\prime}(t)+g^{\prime}(t)\right]u_{\kappa}=0, \tag{23}\]
and a similar one holds for \(v_{\kappa}\), changing \(d_{u}\to d_{v}\). Multiplying by \(u^{\prime}_{\kappa}\) and rearranging, we get
\[\frac{d}{dt}\left\{\frac{u^{\prime 2}_{\kappa}}{2}+\left[\Delta_{A}(\kappa,t)+2d _{u}k(t)k^{\prime}(t)+g^{\prime}(t)\right]\frac{u^{2}_{\kappa}}{2}\right\}= \tau_{A}(\kappa,t)u^{\prime 2}_{\kappa}+\frac{u^{2}_{\kappa}}{2}\frac{d}{dt} \left[\Delta_{A}(\kappa,t)+2d_{u}k(t)k^{\prime}(t)+g^{\prime}(t)\right]. \tag{24}\]
From this equation it is clear that the potential function is now
\[V_{\kappa}=\frac{u^{\prime 2}_{\kappa}}{2}+\left[\Delta_{A}(\kappa,t)+2d_{u}k(t )k^{\prime}(t)+g^{\prime}(t)\right]\frac{u^{2}_{\kappa}}{2} \tag{25}\]
and its time rate is
\[\dot{V}_{\kappa}=\tau_{A}(\kappa,t)u^{\prime 2}_{\kappa}+\frac{u^{2}_{\kappa}}{ 2}\frac{d}{dt}\left[\Delta_{A}(\kappa,t)+2d_{u}k(t)k^{\prime}(t)+g^{\prime}(t )\right]. \tag{26}\]
The conditions for stability in absence of diffusion (\(\kappa=0\)) require \(V_{0}\geq 0\) and \(\dot{V}_{0}\leq 0\). This implies that
\[\Delta_{A}(0,t)+g^{\prime}(t)\geq 0\text{, }\tau_{A}(0,t)\leq 0\text{ and }\frac{d}{dt}\left[\Delta_{A}(0,t)+g^{\prime}(t)\right]\leq 0. \tag{27}\]
These conditions guarantee that \(V_{0}\) is an elliptical paraboloid centered at the origin, towards which all trajectories are directed. However, care must be taken since, unlike what happens in the fixed domain, the paraboloid defined by
\[V_{0}=\frac{u^{\prime 2}}{2}+[\Delta_{\mathcal{A}}(0,t)+g^{\prime}(t)]\frac{u^{2}}{2}, \tag{28}\]
is changing its width with time, and therefore (27) does not necessarily guarantee that all trajectories reach the origin. To ensure this, one would have to add the condition that the rate at which the width of the paraboloid increases is slower than the rate at which the mode decays to zero.
Instability with diffusion requires that for some \(\kappa_{m}\), \(V_{m}\geq 0\) and that \(V_{m}\) is a saddle, which requires
\[\Delta_{\mathcal{A}}(\kappa_{m},t)+2d_{u}k_{m}(t)k^{\prime}_{m}(t)+g^{\prime} (t)<0. \tag{29}\]
The condition for \(V_{m}\) to be a potential is that all the trajectories descend, \(\dot{V}_{m}\leq 0\); this requires the extra conditions
\[\tau_{A}(\kappa_{m},t)<0\text{ and }\frac{d}{dt}\left[\Delta_{A}(\kappa_{m},t)+ 2d_{u}k_{m}(t)k^{\prime}_{m}(t)+g^{\prime}(t)\right]<0. \tag{30}\]
This second set of conditions for instability is also interesting because it implies that the destabilization of a mode can be due to a change in the structure of the potential function \(V\) from a paraboloid to a saddle, as we have supposed in (29), but also due to a change in the sign of \(\dot{V}\) in (26). We will follow the first line since it replicates what occurs for a fixed domain, and leave the second possibility for future study.
In conclusion, due to these details, the set of conditions (27) and (29) can be seen for now as an approximate set of Turing conditions, neither necessary nor sufficient, but, as we shall see, they give a very good idea of the Turing region in parameter space and will be tested numerically in the following sections. We now focus on expressing the conditions in (29)-(30) in terms of the original RDD system.
### Turing conditions in terms of the original matrix
From (22), the trace and determinant of \(A\) are, respectively,
\[\tau_{\text{A}}(\kappa,t)=\tau_{\mathbbm{j}}-k^{2}(t)\tau_{\mathbbm{D}}-2g(t), \tag{31}\]
\[\Delta_{\mathcal{A}}(\kappa,t)=\Delta_{\mathbbm{D}}k^{4}(t)-k^{2}(t)\sigma_{ \mathbbm{D}\bar{\mathbbm{j}}}+\Delta_{\bar{\mathbbm{j}}}+[k^{2}(t)\tau_{ \mathbbm{D}}-\tau_{\bar{\mathbbm{j}}}]g(t)+g^{2}(t). \tag{32}\]
In these terms, the conditions for stability with \(\kappa=0\) in (27) require
\[\begin{array}{rl}\Delta_{\bar{\mathbbm{j}}}-\tau_{\bar{\mathbbm{j}}}g(t)+g^ {2}(t)+g^{\prime}(t)&\geq 0,\\ \tau_{\bar{\mathbbm{j}}}-2g(t)&\leq 0,\\ -g^{\prime}(t)[\tau_{\bar{\mathbbm{j}}}-2g(t)]+g^{\prime\prime}(t)&\leq 0. \end{array} \tag{33}\]
The condition (29) for instability requires the function
\[H_{u}(k)=k^{4}(t)\Delta_{\mathbbm{D}}+[\tau_{\mathbbm{D}}g(t)-\sigma_{ \mathbbm{D}\bar{\mathbbm{j}}}]k^{2}(t)+2d_{u}k(t)k^{\prime}(t)+\Delta_{\bar{ \mathbbm{j}}}-\tau_{\bar{\mathbbm{j}}}g(t)+g^{2}(t)+g^{\prime}(t), \tag{34}\]
to be negative. To find the minimizing wavenumber, we return this expression to the wavenumber \(\kappa\), differentiate with respect to this variable, and return the result to the actual wavenumber. The minimum of \(H_{u}\) occurs at
\[k_{u}(t)=\sqrt{\frac{(2d_{u}-\tau_{\mathbbm{D}})g(t)+\sigma_{\mathbbm{D}\bar{ \mathbbm{j}}}}{2\Delta_{\mathbbm{D}}}}. \tag{35}\]
Therefore, the existence of the minimum of \(H_{u}\) requires
\[(2d_{u}-\tau_{\mathbbm{D}})g(t)+\sigma_{\mathbbm{D}\bar{\mathbbm{j}}}>0. \tag{36}\]
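For reference, the intermediate step: since \(k(t)=\kappa/l(t)\), one has \(k^{\prime}(t)=-g(t)k(t)\), so \(2d_{u}k(t)k^{\prime}(t)=-2d_{u}g(t)k^{2}(t)\) and \(H_{u}\) becomes a quadratic in \(k^{2}\),

\[H_{u}=\Delta_{\mathbbm{D}}k^{4}-\left[(2d_{u}-\tau_{\mathbbm{D}})g(t)+\sigma_{\mathbbm{D}\bar{\mathbbm{j}}}\right]k^{2}+\Delta_{\bar{\mathbbm{j}}}-\tau_{\bar{\mathbbm{j}}}g(t)+g^{2}(t)+g^{\prime}(t),\qquad\frac{\partial H_{u}}{\partial(k^{2})}=0\;\Rightarrow\;k_{u}^{2}=\frac{(2d_{u}-\tau_{\mathbbm{D}})g(t)+\sigma_{\mathbbm{D}\bar{\mathbbm{j}}}}{2\Delta_{\mathbbm{D}}},\]

which reproduces (35); a positive coefficient inside the brackets is precisely the existence condition (36).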
Since the value of the function \(H_{u}\) must be negative at \(k_{u}\),
\[\sigma^{2}_{\rm D\bar{J}}-4\Delta_{\rm D}\Delta_{\bar{\bf j}}+2g(t)[(2d_{u}-\tau_{\rm D})\sigma_{\rm D\bar{\bf j}}+2\tau_{\bar{\bf j}}\Delta_{\rm D}]+g^{2}(t)[4d_{u}^{2}-4d_{u}\tau_{\rm D}-4\Delta_{\rm D}+\tau_{\rm D}^{2}]-4\Delta_{\rm D}g^{\prime}(t)>0. \tag{37}\]
The possible wavenumbers where instability occurs are those with
\[|k^{2}-k_{u}^{2}|\leq\sqrt{k_{u}^{4}-\frac{1}{\Delta_{\rm D}}[\Delta_{\bf J}+g ^{\prime}(t)-\tau_{\rm J}g(t)+g(t)^{2}]}, \tag{38}\]
and the bifurcation value is
\[\Delta_{\rm D}^{u,b}=\frac{[(2d_{u}-\tau_{\rm D})g(t)+\sigma_{\rm DJ}]^{2}}{4 \left[\Delta_{\bf J}+g^{\prime}(t)-\tau_{\rm J}g(t)+g(t)^{2}\right]} \tag{39}\]
Two aspects are worth mentioning in this deduction. The first is related to the dependence on the concentration \(u\) through the coefficient \(d_{u}\) in equations (34)-(39). Since the concentration \(v\) obeys an equation similar to (23) but with \(d_{v}\), the same reasoning applies after substituting \(d_{v}\). If one is only interested in the conditions for the occurrence of patterns, then it is enough to ask, for condition I4, for example, that \(\min_{i}\{(2d_{i}-\tau_{\rm D})g(t)+\sigma_{\rm DJ}\}>0\), and something similar for condition I5. Once the diffusion coefficient for which the minimum condition applies is selected, it is possible to predict whether the wavenumber is \(k_{u}\) or \(k_{v}\). This will be illustrated with the example of the Brusselator.
In Table 1 we summarize the conditions for the emergence of Turing patterns in fixed and growing domains. Notice that the right column recovers the well-known conditions for a fixed domain (center column) when the growth rate vanishes, \(g(t)\to 0\). For simplicity, we have added the labels \(S\), \(I\) and \(D\), denoting stability, instability and domain-growth conditions, respectively.
The second aspect is much more delicate and has to do with the explicit appearance of time in the equations. In the following Part of this work, we will reveal through numerical examples some observations made in this regard and the role of time as a parameter of the Turing bifurcation. However, as a starting point, exponential growth presents a simple situation: since \(g(t)\) is constant, many terms in the Turing conditions become zero and the Turing conditions can be studied independently of time. Therefore, it will be our first case study.
## IV The exponential case
Consider the case where growth/shrinkage is exponential, \(l(t)=l(0)e^{rt}\), so the growth rate \(g(t)=r\) is constant and all derivatives of \(g\) vanish. As we have explained in Section I.1, the homogeneous state for exponential growth (\(r>0\)) and shrinkage (\(r<0\)) quickly tends to a constant value. We showed in Part 1 of this paper that if \({\bf c}_{0}\) is the fixed point of the reactive system where \(f({\bf c}_{0})={\bf 0}\), then an approximation for the representative value in (5), if \(|r|\) is relatively small, is
\[{\bf C}_{0}=[{\rm I}+r({\rm J}-r{\rm I})^{-1}]{\bf c}_{0}. \tag{40}\]
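A brief sketch of where this approximation comes from (the full derivation is in Part 1): the homogeneous state satisfies \(\mathbf{f}(\hat{\mathbf{c}}_{0})=r\hat{\mathbf{c}}_{0}\); writing \(\hat{\mathbf{c}}_{0}=\mathbf{c}_{0}+\boldsymbol{\delta}\) with \(\boldsymbol{\delta}=O(r)\) and linearizing \(\mathbf{f}\) around \(\mathbf{c}_{0}\),

\[{\rm J}\boldsymbol{\delta}\approx r({\bf c}_{0}+\boldsymbol{\delta})\;\Rightarrow\;\boldsymbol{\delta}\approx r({\rm J}-r{\rm I})^{-1}{\bf c}_{0}\;\Rightarrow\;{\bf C}_{0}={\bf c}_{0}+r({\rm J}-r{\rm I})^{-1}{\bf c}_{0}=[{\rm I}+r({\rm J}-r{\rm I})^{-1}]{\bf c}_{0},\]

which, evaluated for the Brusselator, reproduces (45) below.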
\begin{table}
\begin{tabular}{|c|c|c|} \hline \# & Fixed Domain & Growing Domain \\ \hline \hline S1) & \(\Delta_{\rm J}>0\) & \(\Delta_{\rm J}-\tau_{\rm J}g(t)+g^{2}(t)+g^{\prime}(t)>0\) \\ \hline S2) & \(\tau_{\rm J}<0\) & \(\tau_{\rm J}-2g(t)<0\) \\ \hline D3) & - - - & \(g^{\prime}(t)[2g(t)-\tau_{\rm J}]+g^{\prime\prime}(t)\leq 0\) \\ \hline \hline
I4) & \(\sigma_{\rm DJ}>0\) & \((2d_{i}-\tau_{\rm D})g(t)+\sigma_{\rm DJ}>0\) \\ \hline
I5) & \(\sigma_{\rm DJ}^{2}-4\Delta_{\rm D}\Delta_{\rm J}\geq 0\) & \(\sigma_{\rm DJ}^{2}-4\Delta_{\rm D}\Delta_{\rm J}+2g(t)[(2d_{i}-\tau_{\rm D})\sigma_{\rm DJ}+2\tau_{\rm J}\Delta_{\rm D}]\) \\ & \(+g^{2}(t)[4d_{i}^{2}-4d_{i}\tau_{\rm D}-4\Delta_{\rm D}+\tau_{\rm D}^{2}]-4\Delta_{\rm D}g^{\prime}(t)\geq 0\) \\ \hline \hline \(k_{m}\) & \(\sqrt{\frac{\sigma_{\rm DJ}}{2\Delta_{\rm D}}}\) & \(\min_{i}\left\{\sqrt{\frac{(2d_{i}-\tau_{\rm D})g(t)+\sigma_{\rm DJ}}{2\Delta_{\rm D}}}\right\}\) \\ \hline \(\delta k^{2}\) & \(\sqrt{k_{m}^{4}-\frac{\Delta_{\rm J}}{\Delta_{\rm D}}}\) & \(\sqrt{k_{m}^{4}-\frac{[\Delta_{\rm J}+g^{\prime}(t)-\tau_{\rm J}g(t)+g(t)^{2}]}{\Delta_{\rm D}}}\) \\ \hline \(\Delta_{\rm D}^{b}\) & \(\frac{4k_{m}^{4}\Delta_{\rm D}^{b}}{\Delta_{\rm J}}\) & \(\frac{4k_{m}^{4}\Delta_{\rm D}^{b}}{(\Delta_{\rm J}+g^{\prime}(t)-\tau_{\rm J}g(t)+2g(t)^{2})}\) \\ \hline \end{tabular}
\end{table}
Table 1: Turing conditions for a two-component system with isotropic growth and a constant homogeneous state. The center column summarizes the well-known Turing conditions for a fixed domain from Section II. The right column presents the equivalent conditions for a growing domain. \(\tau\) and \(\Delta\) refer to the trace and determinant of the matrix in the subscript, as do the subscripts of \(\sigma\); these can be the Jacobian \({\bf J}\) at the fixed point, the diagonal diffusion matrix \({\rm D}\), or the Jacobian at the representative concentration \(\bar{\bf J}\), all constant matrices.
The importance of the non-linear term of \(\mathbf{f}(\mathbf{c})\) in finding the homogeneous state of (3) lies in the fact that the dilution term can induce new equilibrium points for large values of \(|r|\). To see this, notice that if \(g(t)=r\) in eq. (5), the homogeneous state can be studied as
\[\frac{\partial\mathbf{c}_{s}}{\partial t}=\hat{\mathbf{f}}(\mathbf{c}_{s}), \tag{41}\]
where \(\hat{\mathbf{f}}(\mathbf{c}_{s})\equiv\mathbf{f}(\mathbf{c}_{s})-r\mathbf{c}_{s}\) and, therefore, it is enough to study the fixed points of the modified reaction \(\hat{\mathbf{f}}(\mathbf{c}_{s})\). Let us call these points \(\hat{\mathbf{c}}_{0}\), where \(\hat{\mathbf{f}}(\hat{\mathbf{c}}_{0})=\mathbf{0}\)[12]. In other words, eq. (41) shows that the dilution term for exponential growth has the same effect as adding two linear reactions with the same rate constant \(r\), one for each concentration. As we will illustrate for our study cases, this can change the number of fixed points and also their stability.
### Brusselator example
#### iii.1.1 Fixed point concentration
The Brusselator is given by (2). The fixed point of the reaction itself is at \(\mathbf{c}_{0}=(A,B/A)^{T}\), and the Jacobian and diffusion matrix of the fixed-domain problem in (13) (see Ref. [3]) are
\[\mathbb{J}=\left(\begin{array}{cc}-1+B&A^{2}\\ -B&-A^{2}\end{array}\right)\text{ and }\mathbb{D}=\left(\begin{array}{cc} \sigma&0\\ 0&1\end{array}\right). \tag{42}\]
When the domain grows and dilution is included, from (41), the equation describing the homogeneous state is
\[\frac{\partial c_{u}}{\partial t} =A-Bc_{u}-c_{u}+c_{u}^{2}c_{v}-rc_{u}, \tag{43}\] \[\frac{\partial c_{v}}{\partial t} =Bc_{u}-c_{u}^{2}c_{v}-rc_{v}. \tag{44}\]
Given the non-linearity of the equations, the system can have from one to five equilibrium points depending on the values of the parameters. In Fig. 1, we illustrate the real-valued fixed point \(\hat{\mathbf{c}}_{0}\) for some combinations of parameters.
We will study the situation when the fixed point changes slowly as a function of the growth parameter \(r\). In particular, for values of \(|r|\) close to zero, the linear approximation of the representative concentration at (40) is:
\[\mathbf{C}_{0}=\left(\frac{A\left(A^{2}-2Br+r\right)}{A^{2}(r+1)+r(-B+r+1)}, \frac{AB(2r+1)}{A^{2}(r+1)+r(-B+r+1)}\right)^{T} \tag{45}\]
This value is plotted only for the first two columns in Fig. 1 (black lines) and compared with the numerical results, showing good agreement between \(r=-0.15\) and \(0.15\). Notice that by increasing the value of \(B\), the system goes from a single fixed point (\(B\) up to \(6\)) to three (\(B=15\)). We will focus only on parameter values that result in a single real-valued fixed-point concentration. Using numerical approximations up to second order in \(r\), near \(r\approx 0\), the approximation of the fixed concentration of the system in (43) (in blue and red in Fig. 1) is
Figure 1: Real-valued fixed-point concentrations. In solid color we show the values obtained by solving numerically \(\hat{\mathbf{f}}(\hat{\mathbf{c}}_{0})=\mathbf{0}\). In dot-dashed lines we plot the approximation in (45). The Brusselator values in (43) used were \(A=1\) and \(B=\{1.7,2.4,6,15\}\), respectively.
\[\hat{\mathbf{c}}_{0}\approx\left(A+\frac{r\left(-A^{2}-B\right)}{A}+\frac{r^{2} \left(A^{4}-B^{2}+B\right)}{A^{3}},\frac{B}{A}+\frac{B(-1+A^{2}+B)r}{A^{3}}+ \frac{B(1-4B+2B^{2}+A^{2}(-3+2B))r^{2}}{A^{5}}\right)^{T} \tag{46}\]
which matches (45) to first order but approximates better for larger values of \(|r|\). In what follows, we use the fixed point obtained numerically directly from solving \(\hat{\mathbf{f}}(\hat{\mathbf{c}}_{0})=\mathbf{0}\) for our graphical results on Turing patterns and keep the approximation in (46) for analytic expressions.
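As an illustration of this numerical step, a minimal Python sketch (not the actual code used for the paper) that solves \(\hat{\mathbf{f}}(\hat{\mathbf{c}}_{0})=\mathbf{0}\) for the Brusselator of (43)-(44), seeded at the reaction-only fixed point \(\mathbf{c}_{0}=(A,B/A)\), could look as follows; the parameter values are only examples.

```python
import numpy as np
from scipy.optimize import fsolve

def f_hat(c, A, B, r):
    """Modified Brusselator reaction f_hat(c) = f(c) - r*c, eqs. (43)-(44)."""
    u, v = c
    return [A - (B + 1.0) * u + u**2 * v - r * u,
            B * u - u**2 * v - r * v]

def fixed_point(A, B, r):
    # Seeding at c0 = (A, B/A) follows the branch continuously connected to
    # the reaction-only fixed point, i.e., the one studied in the text.
    root, _, ier, _ = fsolve(f_hat, x0=[A, B / A], args=(A, B, r), full_output=True)
    return root if ier == 1 else None

for r in (-0.15, -0.05, 0.0, 0.05, 0.15):
    print(r, fixed_point(A=1.0, B=2.4, r=r))
```

For \(r=0\) the routine returns \((A,B/A)\), and for small \(|r|\) its output can be compared directly with the approximations (45) and (46).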
#### iii.1.2 Turing space
Using the approximation to the fixed point in (46), the Turing conditions of Table 1 for the Brusselator, valid for low values of \(|r|\), are given in Table 2. The full expressions for these conditions using, in turn, the full expressions for \(\hat{\mathbf{c}}_{0}\) are too long to write on one page, but they are the ones used to build the Turing space in Figure 2.a. Setting \(A=1\) and \(\sigma=0.1\), we consider the variation of the parameters \((r,B)\) to study the effect of growth and of the distance to the bifurcation, respectively. Furthermore, to distinguish the different orders of approximation, in Fig. 2.a we also construct the Turing space using the complete expressions for \(\hat{\mathbf{c}}_{0}\) (solid boundary region), the approximation \(\mathbf{C}_{0}\) (dot-dashed boundary), and the fixed point of the pure reaction \(\mathbf{c}_{0}\) (dashed boundary).
To test these predictions, in Fig. 2.b we present the results of numerical simulations, performed in Comsol Multiphysics, of the RDD system (1) with the Brusselator kinetics (2) under exponential growth. The simulations are performed on a fixed computational domain with 100 equidistant vertices, with a simulation time \(t_{max}\) calculated as the time it takes for the system to grow/shrink ten times the original size, depending on whether it is growing or shrinking, respectively, _i.e._, \(t_{max}=|(1/r)\log(10)|\). The initial domain size is calculated using as reference the bifurcation wavenumber in a fixed domain (\(r=0\)), \(k_{c}=\sqrt{A/\sqrt{\sigma}}\), and the expression \(l(0)=2n\pi/k_{c}\), with \(n\) equal to 3 or 19 for \(r>0\) and \(r<0\), respectively. We have used periodic boundary conditions and random disturbances of 10% of the value of the initial concentration \(\hat{\mathbf{c}}_{0}\).
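The simulations reported here were run in Comsol Multiphysics; purely as an illustration of the setup just described, a minimal method-of-lines sketch in Python of the Brusselator RDD system in the computational coordinate \(\xi=x/l(t)\) (diffusion rescaled by \(1/l^{2}(t)\) plus the dilution term \(-g(t)\mathbf{c}\), consistent with the perturbation equation (21)) might look like this. The parameter values, the solver, and the use of \(\mathbf{c}_{0}\) instead of \(\hat{\mathbf{c}}_{0}\) for the initial condition are simplifications, not the paper's exact configuration.

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B, sigma, r = 1.0, 3.0, 0.1, 0.05         # illustrative values (A = 1, sigma = 0.1 as in the text)
n = 100                                       # equidistant vertices of the computational domain
kc = np.sqrt(A / np.sqrt(sigma))              # fixed-domain bifurcation wavenumber
l0 = 2 * 3 * np.pi / kc                       # l(0) = 2*n_waves*pi/kc, with n_waves = 3 for r > 0
t_max = abs(np.log(10.0) / r)                 # time to grow/shrink by a factor of ten
dxi = 1.0 / n                                 # periodic grid on the unit computational domain

def lap(c):
    # periodic second derivative in xi via centered finite differences
    return (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dxi**2

def rhs(t, y):
    u, v = y[:n], y[n:]
    l = l0 * np.exp(r * t)                    # exponential growth, g(t) = r
    fu = A - (B + 1.0) * u + u**2 * v         # Brusselator kinetics
    fv = B * u - u**2 * v
    du = sigma * lap(u) / l**2 + fu - r * u   # rescaled diffusion + reaction - dilution
    dv = lap(v) / l**2 + fv - r * v
    return np.concatenate([du, dv])

rng = np.random.default_rng(0)
u0 = A * (1.0 + 0.1 * (rng.random(n) - 0.5))          # 10% noise around the fixed point
v0 = (B / A) * (1.0 + 0.1 * (rng.random(n) - 0.5))
sol = solve_ivp(rhs, (0.0, t_max), np.concatenate([u0, v0]), method="LSODA", rtol=1e-6)
```

The amplitude and dominant wavenumber used to classify the solutions below can then be extracted from the Fourier spectrum of the stored profiles at each output time.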
In Figure 2, the symbols H, T, and M represent homogeneous, Turing, and mixed-mode solutions, respectively. Homogeneous refers to the solution in which the initial spatial disturbance disappears and the system returns to fixed-point concentration over the entire domain. Turing solutions are those in which the system has a non-zero amplitude wavenumber, and spatial oscillations occur around the constant fixed-point concentrations. Mixed-mode patterns are solutions that are not properly a Turing pattern but have the same quality of being solutions with a very distinctive wavenumber with non-zero amplitude; the difference with Turing patterns is that the spatial pattern oscillates around not a fixed concentration, but around a limit cycle derived from proximity of the Hopf bifurcation. These solutions have already been found for fixed domains in [13; 14; 15], and reported for growing domains in [5].
As can be seen in Fig. 2.b, the region predicted by our scheme summarized in Table 1 provides a good approximation of the Turing space found in the simulations. This region presents two asymmetries: the first concerns the Turing bifurcation line, which is constituted by the two parabolas that intersect at \(r=0\), and it is different on each side due to the duplication of conditions for the different coefficients of diffusion; and the second concerning the Hopf bifurcation line, which is the upper parabola slightly tilted to the left. For the Brusselator, this combination makes the Turing space wider for shrinking than for growing, as predicted by our scheme and confirmed by numerical simulations.
Regarding the differences between our prediction and the numerical simulations, especially regarding the borders of the Turing region with homogeneous solutions, it should be noted that these are minimal for low values of \(|r|\lesssim 0.1\)
\begin{table}
\begin{tabular}{|c|c|} \hline \# & Approximate expressions for the Turing conditions in Table 1 \\ \hline \hline S1) & \(A^{2}+r\left(-A^{2}-3B+1\right)+r^{2}\left(-\frac{(B-4)B}{A^{2}}+A^{2}+1 \right)>0\) \\ \hline S2) & \(\left(B-1-A^{2}\right)+\frac{2\left(A^{2}-1\right)\left(A^{2}+B\right)r}{A^{2} }+r^{2}\left(\frac{2B(1-2B)}{A^{4}}+\frac{(B-6)B}{A^{2}}-3A^{2}-2B\right)<0\) \\ \hline I4a) & \(\left(B-1-A^{2}\sigma\right)+r\left(2\sigma\left(A^{2}+B\right)-\frac{2B}{A}- \sigma+1\right)+\frac{r^{2}\left(-3A^{4}\sigma+B^{2}\left(A^{2}\sigma-4\right)-2 B\left(A^{4}\sigma+A^{2}\sigma+2\right)-1\right)}{A^{4}}\geq 0\) \\ \hline I4b) & \(\left(B-1-A^{2}\sigma\right)+r\left(2\sigma\left(A^{2}+B\right)-\frac{2B}{A}- \sigma+1\right)+\frac{r^{2}\left(-3A^{4}\sigma+B^{2}\left(A^{2}\sigma-4\right)- 2B\left(A^{4}\sigma+A^{2}\sigma+2\right)-1\right)}{A^{4}}\geq 0\) \\ \hline I5a) & \(\left[\left(A^{2}\sigma-B+1\right)^{2}-4A^{2}\sigma\right]-\frac{2r\left(2A^{ 4}\sigma^{2}+A^{4}\left(2B+1\right)(\sigma-1)\sigma+A^{2}\left(-2B^{2}\sigma- 7B\sigma+B\sigma-1\right)+2\left(B-1\right)B\right)}{A^{2}}\geq 0\) \\ \hline I5b) & \(\left[\left(A^{2}\sigma-B+1\right)^{2}-4A^{2}\sigma\right]-\frac{2r\left(2A^{ 4}\sigma^{2}+A^{4}\left(2B-1\right)(\sigma-1)\sigma+A^{4}\left(-2B^{2}\sigma- B\left(5\sigma+1\right)+\sigma+1\right)+2\left(B-1\right)B\right)}{A^{2}}\geq 0\) \\ \hline \end{tabular}
\end{table}
Table 2: Approximate Turing conditions for the Brusselator under exponential growth, valid near \(|r|\approx 0\). For exponential growth, condition D3 of Table 1 is trivially fulfilled since the derivatives of \(g(t)\) are null.
and arise for high values of \(|r|\), which could be due to the criteria used to distinguish between both types of solutions. To make the distinction, we proposed that a solution is homogeneous if its time-averaged amplitude is less than 0.01. This means that for more negative values of \(r\) (left side), the T symbols at the bottom may be overestimated because the simulation time is too short; the domain is reduced very quickly and our criterion may not be sufficient to establish whether the initial disturbance has been homogenized or not. As for the horn-shaped region on the right-hand side (more positive values of \(r\)), the transition between the Homogeneous, Turing, and Mixed-mode solutions occurs very quickly as the parameter \(B\) increases, and also in a less differentiated way, so it is difficult to set strict limits with the type of simulation considered here. Therefore, the simulations carried out, rather than giving a strict characterization, allow us to confirm the essential characteristics of the Turing space in terms of its location, growth/shrinkage asymmetry, and range of occurrence between both bifurcations.
To clarify the distinction between the three types of solutions, in Fig. 3.a we draw the spatiotemporal maps (not to scale, to save space) of the concentration profile of Homogeneous (red), Turing (green ) and Mixed mode solutions (blue). In these maps, the differences between the solutions are not necessarily appreciable. To distinguish quantitatively the differences between those solutions, several factors were measured such as 1) the amplitude of the spatial pattern (the amplitude of the most predominant Fourier mode), 2) the way the predominant Fourier mode increased/decreased with growth/shrinkage, 3) the evolution of the wavenumber in the actual domain (not in the computational domain), and 4) the evolution in the phase plane of both zero-order Fourier mode concentrations. This study was performed for all the simulations, and the characteristic results of each type of solution (H, T or M) are exemplified in Fig. 3.b. The homogeneous solutions have low amplitude, greater persistence of the predominant Fourier mode, and a constant steady state. The Turing pattern shares this last characteristic, but has a medium amplitude and a monotonic increase/decrease of the predominant Fourier mode. The mixed mode shares this characteristic with the Turing pattern but has a larger amplitude and oscillates around a limit cycle.
#### iii.1.3 Amplitude and wavenumber of Turing patterns
The three types of solutions presented so far are distinguished by their amplitude and wavenumber and have been previously exemplified in some cases. To have the complete picture, in Figures 4.a and 4.b we present the results of measuring the average amplitude and the wavenumber of all numerical solutions performed. As can be seen in Fig. 4.a, the average amplitudes grow from the Turing bifurcation (bottom of the region with the points illustrated in green) and continue to increase towards the region of mixed solutions. This agrees with what is known for Turing patterns in fixed domains, where the amplitude increases with the square root of the distance to the bifurcation [3]. The aspect that this model does not predict correctly is related to the wavenumber which, as we illustrate in Fig. 4.c, is expected to increase as one moves away from the Turing bifurcation (as occurs in a fixed domain). As observed in the numerical results presented in Fig. 4.b, the wavenumber depends more on the growth parameter \(r\) than on the distance to the bifurcation (parabolic curve formed by the lower numerical points in green).
To carry out Fig. 4 we have taken into account that the instability condition is different depending on the sign
Figure 2: Turing space of the Brusselator for exponential growth. a) Prediction of our model using as fixed point concentration: \(\mathbf{\hat{c}}_{0}\) (solid boundary), \(\mathbf{C}_{0}\) (dotdashed boundary) and \(\mathbf{c}_{0}\) (dashed boundary). b) Homogeneous (H), Turing (T) and Mixed mode solutions (MM) as derived from numerical simulations.
of \(r\); if \(r>0\), the condition that destabilizes the system is I4 applied to the lowest diffusion coefficient, that is, the one related to \(d_{u}\); for \(r<0\), the bifurcation condition is the one related to \(d_{v}\). From here we make the hypothesis that the wavenumber, expressed for example in (35), must be \(k_{u}\) or \(k_{v}\) for \(r\) positive or negative, respectively. Our model correctly predicts that for exponential growth the wavenumber in the actual domain remains, on average, essentially constant in time and, as confirmed by Fig. 4, decreases as \(r\) increases; however, it does not correctly predict its value.
The reason for the discrepancy may be due to the fact that the wave number at a given moment crucially depends on its past history [16; 17]. This would mean that, for example, for \(r>0\), at any time \(t_{1}\), the pattern has a number of waves \(N\); due to persistence _i.e_, the ability of a system to preserve its wavenumber, the number of spatial waves will remain the same until it is outside the range of Eckhaus stability [3]; at that moment, one more spatial wave
Figure 4: Left and Center. Temporal averaged amplitude and wavenumber (in the actual domain) of the numerical solutions. The Turing solutions are highlighted in green. Right. Predicted wavenumber according to \(k=k_{u}(r>0)\) or \(k=k_{v}(r<0)\) given by our model.
Figure 3: Different types of solutions. a) Spatio-temporal maps (not to scale) of the Homogeneous (red), Turing (green) and Mixed-mode (blue) solutions for different values of \((r,B)\). b) Example of amplitude, preponderant mode, actual wavenumber and phase space of the zero-order Fourier mode for the three highlighted boxes on the left, using the same three colors to distinguish the cases.
(or perhaps several, depending on how fast the growth is) will enter to bring the system back within the range of unstable \(k\) values; unless the growth is very fast, we can assume that one more spatial wave will enter and the system will have \(N+1\) waves, which will allow the solution to re-enter the stable \(k\) range; in a growing domain, this will generally make wavenumbers in the low-\(k\) region more likely. The opposite would happen on the left side (\(r<0\)), where the wavenumber tends to values greater than the one predicted in Fig. 4.c since, in general, the solution tends to gradually eliminate the waves it already had, making wavenumbers at the top of the range more likely.
To illustrate this wavenumber selection process, in Fig. 5 we plot the wavenumber in the actual domain of the numerical solution (solid black lines), the expected wavenumber of the pattern (dashed line), according to \(k=k_{u}\) (\(r>0\)) or \(k=k_{v}\) (\(r<0\)), and the range of unstable wavenumbers \((k_{min},k_{max})=(\sqrt{k^{2}-\delta k^{2}},\sqrt{k^{2}+\delta k^{2}})\) (between the orange dotted lines) given by our model. For low values of \(|r|\), the prediction of the range of wavenumbers is accurate and, as follows from the growth/shrinkage asymmetry, for \(r>0\) the wavenumbers lie at the bottom of the range, while the opposite occurs for \(r<0\). However, as can be seen by comparing the two cases of shrinkage, the wavenumber-range criterion fails for higher values of \(|r|\), as the wavenumber of the numerical pattern lies outside the predicted range, probably because the inertial effects of pattern persistence are stronger for more abrupt changes in the domain. This memory phenomenon will be studied in more detail in another Part of the work.
## V Discussion and conclusions
In this work we have proposed a new way of analyzing the Turing destabilization for growing domains in two-component systems. To do this, we have first considered the system of RDD equations for disturbances mapped to a fixed domain and have written them as a pair of second-order equations. We have rewritten each equation in such a way that it resembles the evolution of a system with a potential function. Hypothesizing that such a function would predict a destabilization of the same nature as occurs in a fixed domain, we have generalized Turing's ideas to a growing domain by studying the deformation of such a function from a paraboloid to a saddle. We show that this strategy recovers the well-known Turing conditions for fixed domains and, in this Part of the work, we have exemplified it for exponential growth, where the homogeneous state does not change considerably with time.
To demonstrate this, we use numerical simulations of the Brusselator and observe that the conditions predicted by our model allow us to give a good estimate of Turing space. These simulations show us that near the Turing region there are homogeneous solutions that show negligible amplitude and, at least numerically, tend to keep the same spatial mode. In addition, in the vicinity of the Hopf bifurcation, very stable spatial patterns appear with greater amplitude than Turing patterns, which differ from them only in that the homogeneous state is a stable limit cycle. The features of this type of solutions will be studied in another part of this work.
In this article, hypotheses are presented to give robust conditions for the formation of Turing patterns based on arguments about a possible potential function; a detailed comparison with previous approximations is left for future work. However, our scheme allows us to understand the pattern-formation process in a more general context related to the energy and entropy of dissipative structures. In this direction, we have measured the Fourier spectrum of the solutions as a function of time and shown that the type of solution found (homogeneous, Turing or mixed) presents characteristic signatures in observables such as the amplitude of the pattern and its average wavenumber. In the first case, we show that the Turing region is bound to the amplitude and that, therefore, the potential function found can represent an approximation to the energy of the system near the origin, where the nonlinear terms would complete a kind of potential landscape [11].
Our scheme correctly predicts that the wavenumber in the actual domain expressed by the system under exponential growth is maintained, with oscillations, around a constant value. Also, for a growing/shrinking domain, the expressed wavenumber is generally smaller/greater than predicted, respectively, probably due to the fact that Turing
Figure 5: Measured (solid black) and predicted (dot-dashed red) wavenumbers, and the range of possible wavenumbers (between the orange dotted lines), for two cases of shrinkage and one of growth. The growth rates \(r\) are given in the insets.
patterns have some tendency to retain the wavenumber they had at the previous time, _i.e._, a type of memory [18; 19]. This issue of pattern persistence is related to the non-linearity of the system and makes it difficult to determine the wavenumber the pattern will have based solely on a linear approximation and on a criterion that is, so to speak, static, that is, one that does not depend on the initial or previous conditions of the system.
Despite these difficulties, which arise mainly for large values of \(|r|\), our scheme allows us to give closed analytic conditions for pattern formation for low values of \(|r|\), which, needless to say, are the most important from a biological perspective, since in this case we can specify the fixed-point concentrations and the Turing conditions, including the expected wavenumber and its range. In this sense, our approach presents a complete picture of this important process in reaction-diffusion systems in both growing and shrinking domains, the latter case being little studied in the literature.
Finally, it is worth mentioning that the detailed determination of the boundaries of the Turing region, especially for high values of \(|r|\), requires much more detailed numerical simulations than those used in this work. This is because, for shrinkage, the lower bound of the region that distinguishes homogeneous solutions from Turing patterns leads to a very small characteristic time, so temporal refinement is necessary to make the average amplitude more representative. In the same way, for the upper edge, longer simulation times are required to distinguish whether the zero mode will actually tend to a limit cycle or decay to a constant asymptotic state. For the case of growth, since more spatial oscillations appear, considering longer times will require more spatial refinement to capture the evolution of the patterns. However, the precision used in this work is sufficient to conclude that our approximation captures very adequately the main features of the Turing space in growing domains.
|
2302.11632 | Rename Chains: An Exploratory Study on the Occurrence and
Characteristics of Identifiers Undergoing Multiple Renamings | Identifier names play a significant role in program comprehension activities,
with high-quality names improving developer productivity and system quality. To
correct poor-quality names, developers rename identifiers to reflect their
intended purpose better. However, renames do not always result in high-quality,
long-lasting names; in many cases, developers perform multiple rename
operations on the same identifier throughout the system's lifetime. In this
paper, we report on a large-scale empirical study that examines the occurrence
of identifiers undergoing multiple renames (i.e., rename chains). Our findings
show the presence of rename chains in almost every project, with methods
typically having more rename chains than other identifier types. Furthermore,
it is usually the same developer responsible for creating all renames within a
chain, with most names maintaining the same grammatical structure.
Understanding rename chains can help us provide stronger advice, and targeted
research, on how to craft high-quality, long-lasting identifiers. | Anthony Peruma, Christian D. Newman | 2023-02-22T20:17:55Z | http://arxiv.org/abs/2302.11632v2 | Rename Chains: An Exploratory Study on the Occurrence and Characteristics of Identifiers Undergoing Multiple Renamings
###### Abstract
Identifier names play a significant role in program comprehension activities, with high-quality names improving developer productivity and system quality. To correct poor-quality names, developers rename identifiers to reflect their intended purpose better. However, renames do not always result in high-quality, long-lasting names; in many cases, developers perform multiple rename operations on the same identifier throughout the system's lifetime. In this paper, we report on a large-scale empirical study that examines the occurrence of identifiers undergoing multiple renames (i.e., rename chains). Our findings show the presence of rename chains in almost every project, with methods typically having more rename chains than other identifier types. Furthermore, it is usually the same developer responsible for creating all renames within a chain, with most names maintaining the same grammatical structure. Understanding rename chains can help us provide stronger advice, and targeted research, on how to craft high-quality, long-lasting identifiers.
## I Introduction
Be it bug fixing or updating features, program comprehension is an essential part of any software maintenance activity [1]. Program comprehension is the act of developers reading the code to understand its behavior in order to know where to make updates to the source code [2]. Therefore, to ensure both developer productivity and system quality, it is essential for developers to craft identifiers with meaningful names. In other words, the name should accurately reflect its intended behavior.
Research shows that identifier names account for almost 70% of the characters in a software system's codebase [3], with well-constructed names improving comprehension activities by an estimated 19% [4]. Unfortunately, there are significant problems with many identifiers, and no generalizable methods to measure identifier quality. This is likely part of the reason renaming is one of the most frequent types of rework (i.e., refactoring) developers perform on their code base, contributing to around 40% of the rework developers perform throughout the lifetime of the system [5, 6, 7].
While Rename Refactoring is the approach developers take to correct poor-quality names, there is no guarantee that the resulting new name is of high quality, with developers sometimes performing multiple rename operations to the same identifier. For instance, let us compare the code snippets in Listing 1 and Listing 2, both of which show multiple renamings of a method's name. In Listing 1, the developer renames the method sendPacket2\(\rightarrow\)sendPacket3\(\rightarrow\)syncedSendPacket. The original and first iteration of the name contain digits, and this type of naming is known as a _Distinguisher_[8]. Developers utilize such names to prevent name collisions at compile time when multiple identifiers with the same name are in the class/file. The final version of the name, syncedSendPacket, is no longer a _Distinguisher_ and is more descriptive than the original. In contrast, it can be argued that the end result of the method rename in Listing 2 does not produce a high-quality name as println is a copy of the statement inside the method and provides no additional information.
Even though prior work on identifier naming examines the lexical semantic updates developers make to a name when performing the rename operation [5, 6, 9, 10], they fall short of investigating how each identifier evolves throughout the system's entire lifetime. In other words, they do not examine if the individual rename operations are related to each other. Likewise, studies that propose rename opportunities in the code do not consider the historical evolution of the identifiers [11, 12, 13].
```
- public void sendPacket2(Packet9Respawn packet) {
+ public void sendPacket3(Packet9Respawn packet) {

- public void sendPacket3(Packet9Respawn packet) {
+ public void syncedSendPacket(Packet9Respawn packet) {
      activeChunks.clear();
      super.sendPacket(packet);
  }
}
```
Listing 1: An example of a method name undergoing multiple renames to make it more descriptive of its purpose ([14]\(\rightarrow\)[15]).
### _Goal & Research Questions_
The goal of this study is to explore the evolution of identifier names by _constructing and studying the characteristics of a chain of renames for identifiers (i.e., a rename chain)_. Through the findings from our study, we aim to understand the multiple rename refactoring operations developers perform on an identifier that can feed into tools and techniques to better support developers with crafting and maintaining identifiers in their code. Therefore, we propose and answer the following research questions (RQs):
**RQ1: To what extent do identifiers undergo multiple rename refactoring operations?** This RQ reports on the volume and types of identifiers that undergo multiple renames during their lifetime and how frequently they occur in projects. Knowing the popularity of rename chains in a project's evolution will direct us to further research in this area.
**RQ2: How frequently do renames occur within a rename chain, and who is responsible for their creation?** From this RQ, we gain insight into the developers performing the renames in the chain and how frequently developers perform the rename operations within the chains. By considering the developers responsible for creating rename chains, rename recommendation techniques can improve their accuracy and usability.
**RQ3: How do the semantics of an identifier's name evolve in a rename chain?** This RQ analyzes the lexical-semantic properties of the renames by comparing the part-of-speech tags of the first and last names in the chain. Findings and heuristics from this RQ can be incorporated into automated identifier name appraisal and recommendation tools and techniques.
**RQ4: To what extent can commit log messages help contextualize the occurrence of rename chains?** Using the generated rename chains in our dataset, this RQ examines how effectively commit messages can identify the specific causes for developers to create rename chains.
### _Contribution_
The main contributions from this work are as follows:
* Our results represent a significant step toward understanding how an identifier's name evolves through the project's lifetime. Through our discussion, we pave the way for subsequent research to enhance our knowledge of high-quality identifier naming, especially in automated identifier name appraisal and recommendation tools.
* We make our dataset of rename chains, including specific characteristics of the renames, publicly available.
## II Related Work
This section discusses the work related to identifier renaming. Broadly, these studies fall into two categories: empirical studies that examine the semantic characteristics of names and studies that propose rename recommendation techniques/models.
### _Empirical Studies_
In a developer survey, Arnaoudova et al. [9] report that developers perform renaming as part of their implementation workflow and admit that renaming is not straightforward. Furthermore, the authors propose a taxonomy to classify the types of semantic updates a name undergoes when renamed.
An empirical examination by Peruma et al. [6] of the semantic updates developers make to a name shows that developers frequently make simple renames by adding or removing a single term in a name. Further, the authors also show that developers frequently narrow the meaning of the name. The authors also highlight specific grammar patterns developers utilize when crafting unit test method names [10] and produce a taxonomy of digits occurring in an identifier's name [8]. Additionally, as a means of contextualizing the renames developers perform, Peruma et al. [5] show relationships between the data type and the plurality of the name. Specifically, the name changes from singular to plural when the data type changes from a non-collection to a collection type. The authors also show that specific identifier renamings tend to co-occur with other types of refactoring operations. Further, the authors also show that novice developers tend to perform more renames than other types of refactoring operations [18].
### _Rename Recommendations_
Allamanis et al. [11] introduce a model called NATURALIZE that uses statistical natural language processing to mine and learn the style (i.e., coding norms) of a codebase and offers renaming recommendations. To standardize names used in related contexts, NATURALIZE learns syntactic restrictions, or sub-grammars, on identifier names like camelcase or underscore. The authors also recommend a neural probabilistic language model to automatically suggest descriptive, idiomatic method and class names [12]. An n-gram based approach for assessing the comprehensibility of method names and recommending intelligible method names is introduced by Suzuki et al. [13]. The authors' solution involves gathering and learning method names from Java systems. The authors employ the n-gram model to provide recommendations to the developer and a threshold to assess the comprehensibility score of a method's name as part of their analysis process. Deep learning methods are used by Liu et al. [19] to spot incorrect method names. Their methodology retrieves in-depth representations of method bodies and names. The model is trained by the authors using numerous techniques from actual projects. The name recommendation method compares the overlap between the set of method names whose bodies are
close in the method body vector space and the closeness of method names in the method name vector space.
## III Experiment Design
In this section, we provide details about the methodology for our study. Figure 1 shows a high-level overview of our experiment, which we describe in detail below. Furthermore, the dataset we utilize/generate in this study is available on our project website for replication and extension purposes [20].
### _Source Dataset_
In this study, we utilize an existing dataset of mined refactoring operations made available by Peruma et al. [10] from their research on test method name renaming and reused by the authors in another study on identifier names [8]. The authors of the dataset utilized RefactoringMiner [21] to mine the refactoring operations of 800 well-engineered open-source Java systems. RefactoringMiner is a state-of-the-art tool that iterates through a project's commit log mining refactoring operations. The dataset contains mined rename refactoring operations for classes, attributes, methods, parameters, and variables.
### _Rename Chain Construction_
Our manual analysis of the source dataset shows the presence of auto-generated code related to projects utilizing Antlr. Since such code could skew our findings, we first exclude such source files from our analysis; our dataset contains the query we used to perform the exclusion. After performing this exclusion, we start the work of constructing the rename chain for each identifier type, using custom scripts. Our approach involves the use of the fully qualified name of the identifier to perform name comparisons to form links in the chain. The general approach involves first obtaining the refactorings for each identifier type sorted by the author commit date for each project. Next, for each type of identifier rename refactoring in the project, we search for instances where the new name in the refactoring is the old name in a subsequent rename operation. If such a match exists, it forms a link in the chain. This process continues recursively. Our replication package contains the code utilized to create the chains for each identifier type.
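As an illustration of this procedure, a minimal sketch of the chain-construction step could look as follows; the record layout and field names are hypothetical, and the actual scripts are in the replication package.

```python
from collections import namedtuple

# Hypothetical record layout; the real dataset schema may differ.
Rename = namedtuple("Rename", "commit_date old_fqn new_fqn")

def build_chains(renames):
    """Link renames of one identifier type into chains: the new name of a
    rename becomes the old name of a later rename (sorted by commit date)."""
    open_chains = {}  # current fully qualified name -> list of names so far
    for rn in sorted(renames, key=lambda x: x.commit_date):
        chain = open_chains.pop(rn.old_fqn, [rn.old_fqn])
        chain.append(rn.new_fqn)
        open_chains[rn.new_fqn] = chain
    # a rename chain requires two or more renames, i.e., at least three names
    return [c for c in open_chains.values() if len(c) >= 3]
```

Applied to the renames in Listing 1, this sketch would return the three-element chain sendPacket2, sendPacket3, syncedSendPacket.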
### _Part-of-Speech Tagging_
To understand the semantic change to an identifier's name, we utilize a specialized identifier name part-of-speech tagger made available by Newman et al. [22]. This is a state-of-the-art tagger that outperforms other taggers, including the Stanford tagger [23], for identifier names. The tagger utilizes a subset of the Penn Treebank tagset [24] and includes nouns, verbs, noun modifiers, determiners, etc.; details are available at [25]. Using this tagger, we generate the part-of-speech tags for each term in an identifier's name for the original and last name in the rename chain.
### _Topic Modeling_
To contextualize the presence of rename chains, we perform a topic modeling analysis utilizing the latent Dirichlet allocation (LDA) algorithm [26]. Before performing the topic modeling analysis, we perform a set of text preprocessing tasks on the commit messages; we remove non-alphabet characters, such as punctuation, set the text to lowercase, lemmatize words, and finally, remove standard English stopwords. To arrive at the optimal number of topics, we iteratively extracted topics from two to ten in increments of one, where each topic execution cycle had 100 passes and 200 iterations. We manually examined the word frequencies present in each topic cycle to determine the optimal number of topics.
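The preprocessing and LDA configuration described above can be sketched as follows; gensim and NLTK are shown purely for illustration, as the paper does not prescribe a specific library, and the loop over two to ten topics is run externally.

```python
import re
from gensim import corpora, models
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

LEMMATIZER = WordNetLemmatizer()
STOPWORDS = set(stopwords.words("english"))   # requires the NLTK corpora to be downloaded

def preprocess(message):
    text = re.sub(r"[^A-Za-z ]", " ", message).lower()   # keep alphabet characters only
    return [LEMMATIZER.lemmatize(w) for w in text.split() if w not in STOPWORDS]

def fit_lda(commit_messages, num_topics):
    docs = [preprocess(m) for m in commit_messages]
    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]
    return models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary,
                           passes=100, iterations=200)

# candidate topic counts from two to ten, inspected manually
# for k in range(2, 11):
#     print(fit_lda(messages, k).print_topics())
```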
### _Research Question Analysis_
To answer our research questions, we follow a mixed methods approach, where we supplement our quantitative findings with qualitative examples from our dataset. This technique helps us understand our results through contextualization. We employ custom scripts and database queries to answer our research questions and elaborate on our approach when addressing each research question in Section IV.
## IV Experiment Results
In this section, we report on the findings of our experiments. Since our work is based on the rename refactoring operations performed on identifiers, we first present an overview of renames in our dataset. In total, our dataset contains 285,786 rename refactorings spread across the five identifier types. Table I shows the volume of renames by identifier type, most of which were method renames (29.50%).
Moving on, we focus our analysis on rename chains by answering our RQs. The first RQ examines the volume of rename chains present in the dataset, while the second and third RQs investigate specific characteristics of these rename chains. Due to space constraints, specific tables in the RQs show only the most frequently occurring instances; the complete set is available on our project website [20].
### _Rq1: To what extent do identifiers undergo multiple rename refactoring operations?_
In this RQ, we quantitatively analyze the rename chains in our dataset. In total, we mined 285,786 rename refactoring operations. We then analyzed this raw data to construct rename chains. A chain is a combination of rename operations applied
to a single identifier. We construct a chain if two or more rename operations are applied to an identifier. In total, we detected 17,404 rename chains spread across all identifier types. In contrast, our dataset contains 247,567 identifiers that underwent only a single rename operation and, hence do not form a chain. A granular examination shows that, out of all identifier types in our dataset, methods (approx. 30.73%) are most likely to have rename chains, followed by variables (approx. 23.47%), classes (16.85%), parameters (approx. 16.81%), and attributes (approx. 12.14%). Table II provides an overview of the mined rename chains.
Moving on, we focus on the number of rename operations that form a rename chain. Overall, an identifier rename chain contains a median of 2 and an average of 9 rename instances. On a granular level, we observe that classes, attributes, methods, and parameters have a median of two renames in their chains, while variables have three rename instances. Table III shows a statistical summary of the number of rename instances for each identifier type in their rename chain.
Finally, while our dataset contains 798 projects having rename refactorings, 668 (or 83.71%) of these projects contain rename chains. Looking at the volume of rename chains within these projects, we observe that projects have a median of nine and a mean of 26.05 identifiers undergoing multiple renames.
**Summary for RQ1.** Though rename operations are prevalent in the implementation and maintenance of software systems, most identifiers typically undergo a single rename throughout their lifetime. However, rename chains are present in most systems. Method names typically undergo multiple renamings and typically contain around two renames in their rename chain. Variables, on the other hand, undergo around three renamings.
### _Rq2: How frequently do renames occur within a rename chain, and who is responsible for their creation?_
In the prior RQ, we show the occurrence of rename chains in the evolution of the code base of a software system. Moving on, this RQ examines the renames occurring within these chains. More specifically, we investigate the interval duration between the renames in the chains and the developers performing these renames. The findings from this RQ help us better understand the characteristics of rename chains.
**Interval Analysis**
This analysis examines the interval (i.e., time duration) between renames in chains having two or more rename instances. An overall examination of the median number of days between renames shows that the renames occur two days apart. Next, in a more granular examination, we observe that attributes have a median of 25 days, followed by classes having
19 days, methods with 14 days, parameters with seven days, and variables having two days between renames in the chain.

Fig. 1: Overview of our experiment design.
Our subsequent examination looks at the interval between the first and last rename in the chain. Parameters have the lowest interval with a median of 17 days between the first and last rename. In contrast, variables have the longest interval, with 357 days between the first and last rename in the chain. Finally, classes, attributes, and methods have an interval of 32, 35, and 22 days, respectively.
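A minimal sketch of how these intervals can be derived from a chain's chronologically ordered rename dates is given below; the dates are illustrative.

```python
from datetime import date
from statistics import median

# Illustrative chain: chronologically ordered rename dates of a single identifier.
chain_dates = [date(2020, 1, 3), date(2020, 1, 5), date(2020, 1, 24)]

# Intervals (in days) between consecutive renames within the chain.
consecutive = [(b - a).days for a, b in zip(chain_dates, chain_dates[1:])]

# Interval between the first and the last rename of the chain.
first_to_last = (chain_dates[-1] - chain_dates[0]).days

print(median(consecutive), first_to_last)  # 10.5 and 21 for the dates above
```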
**Developer Analysis**
In this analysis, we investigate who performs the renames involved in the rename chain. To this extent, we utilize the email address associated with the commit containing the rename (i.e., git author email). Prior studies have used the email address to determine unique developers, including those that examine identifier renaming [5]. Our analysis is on chains having two or more rename instances.
First, the same developer performs all the renames in the majority of rename chains (i.e., 10,799 or 62.05% of instances). Next, focusing on the chains having multiple developers, we observe that 760 or 11.51% of instances have a different developer performing the first and last rename in the chain. Furthermore, these multi-developer chains have a median and average of approximately two unique developers performing the renames in the chain. At a more granular level, attribute chains have the most developers involved in the rename process, with a median of four developers, followed by variables with a median of three. Class and method chains have a median of two developers performing the renames in their respective chains.
**Summary for RQ2.** Rename chains are typically constructed with rename refactoring operations that occur days apart, with variables typically having the shortest duration (approx. two days) and attributes the longest. Furthermore, rename chains are usually constructed by the same developer. Finally, multi-developer chains usually involve two developers, with the construction of attribute chains involving more developers than other identifier rename chains.
### _Rq3: How do the semantics of an identifier's name evolve in a rename chain?_
This RQ continues our analysis of the evolution of rename chains by examining the lexical-semantic structure of the identifier names in the chain. Our analysis examines the part-of-speech tags instead of the semantics of the actual words, since the tags are more constrained and leave less room for misinterpretation. Furthermore, prior work has shown that developers utilize specific grammatical patterns when crafting identifier names [27]. To this extent, we utilize a specialized ensemble tagger for identifier names ([22]) to generate the part-of-speech tags for the words in an identifier's name.
Since a chain can be composed of a varying number of renames, comparing and analyzing each and every rename within the chain is not feasible. Hence, we limit our analysis to the first and last rename in the chain. In other words, we compare the name of the identifier before the first rename and the name of the identifier after the final rename. Our analysis shows that 7,266 (41.75%) of the rename chains have the same part-of-speech pattern for the original and final name.
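The comparison can be illustrated with the following toy sketch; the tag lookup is a crude stand-in for the ensemble tagger of [22], not its actual interface.

```python
import re

# Toy tag lookup standing in for the ensemble part-of-speech tagger of [22].
TOY_TAGS = {"test": "NM", "servlet": "N", "the": "DT", "group": "NM", "name": "N"}

def split_identifier(name):
    """Split a camelCase identifier into lower-case words."""
    return [w.lower() for w in re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", name)]

def pos_pattern(name):
    return "--".join(TOY_TAGS.get(w, "N") for w in split_identifier(name))

# First and last names of a chain, e.g. TestServlet -> TheTestServlet -> TestServlet.
original, final = "TestServlet", "TestServlet"
print(pos_pattern(original), pos_pattern(final), pos_pattern(original) == pos_pattern(final))
# prints: NM--N NM--N True
```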
Next, we examine the common part-of-speech patterns utilized for the original and final names for each identifier type. Shown in Table IV are the top three widely used part-of-speech tags for the original and final name for each identifier type. From this table, we observe that the majority of the commonly used part-of-speech tags for both names are the same. For example, in the class rename chain TestServlet\(\rightarrow\)TheTestServlet\(\rightarrow\)TestServlet ([28]\(\rightarrow\)[29]), the part-of-speech pattern starts with NM--N, then changes to DT--NM--N when the developer prepends the determiner “The” to the name, before finally reverting the name structure to NM--N. The complete set of part-of-speech patterns is available in our shared dataset.
Furthermore, it is encouraging to note that developers utilize standard naming structures when crafting names for identifiers [27]. From Table IV, we can see that classes, attributes, parameters, and variables begin with either a noun/noun-plural (N/NPL) or noun modifier (NM), while methods start with a verb (V). Additionally, we also observe instances where developers correct poorly structured names. For example, in the attribute rename chain setToValue\(\rightarrow\)groupName\(\rightarrow\)groupNameTextArea ([30]\(\rightarrow\)[31]), the original name starts with a verb (i.e., “set”), which is generally incorrect for an attribute. However, within the chain, the developer changes the name to start with a noun modifier.
A high-level examination of the words making up the name in the identifiers shows that there are 7,584 instances where the original and final names contain an equal number of words. Further, our dataset contains 3,901 chains with identical original and final names. Additionally, we encounter 38 rename chains where the only difference between the names is a change in the case (e.g., experimentEngine\(\rightarrow\)junkEngine\(\rightarrow\)ExperimEngine[32]\(\rightarrow\)[33]) and seven chains where the difference is a removal/addition of punctuation(s) (e.g., _locator\(\rightarrow\)loader\(\rightarrow\)locator[34]\(\rightarrow\)[35]).
**Summary for RQ3.** There are numerous instances where even though the words in an identifier's name change, the grammatical structure of the initial and last name in the chain remains the same. Furthermore, developers frequently follow well-established identifier naming structures when crafting names.
### _Rq4: To what extent can commit log messages help contextualize the occurrence of rename chains?_
While the prior RQs examine the occurrence and characteristics of rename chains, we need to understand why developers create these chains. Since surveying all the developers responsible for creating chains in our dataset is not feasible, this RQ performs an automated analysis of the commit log messages associated with commits that form rename chains. We analyze the commit message from the second rename onwards for each rename chain (i.e., two or more renames) as the second rename
indicates the start of the rename chain. In our analysis, we perform a topic modeling analysis utilizing the LDA algorithm, as described in Section III. The results of our LDA analysis yield three distinct topics associated with these messages - Code Cleanup, Refactoring, and Bug Fix/Testing.
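A minimal, self-contained sketch of such a topic modeling step is shown below; the commit messages are placeholders, and the preprocessing is far simpler than the pipeline described in Section III.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder commit messages taken from the second and later renames of chains.
messages = [
    "rename fields to follow naming convention and cleanup whitespace",
    "major refactor moved content manager classes to a separate module",
    "fixed bug with transitive dependencies and added a test for it",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(messages)             # document-term matrix

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_words = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {topic_idx}: {', '.join(top_words)}")
```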
The Code Cleanup topic includes words such as 'renaming', 'naming', 'convention', 'cleanup', and 'whitespace', where the renames in the chain are due to the developer improving code style quality by adhering to standards, which includes following naming standards. For example, in the chain myFilenameFilter\(\rightarrow\)libFilenameFilter\(\rightarrow\)LibFilenameFilter ([36]\(\rightarrow\)[37]), the renaming of libFilenameFilter\(\rightarrow\)LibFilenameFilter is associated with the message "_Lots of fuses using Checkstyle - Fixed some names to follow conventions..._".
The Refactoring topic includes words such as 'refactor', 'revert', 'updated', 'changed', 'removed', and 'add'. These commits are associated with developers updating the code related to the behavior and design of the system. For example, the commit message of the last rename in the chain KenyaEmrConfigurator\(\rightarrow\)KenyaEmrModelConfigurator ([38]\(\rightarrow\)[39]) is "_Major refactor to start process of eventually moving content manager classes into separate module. For now they are moved to a different subpackage but remain in the KenyaEMR module until all dependencies on KenyaEMR are removed_".
Finally, the Bug Fix/Testing topic is associated with the words 'fix', 'bug', 'test', and 'testcase'. In these instances, the renames are part of either a bug fix that developers perform or of unit testing. However, we notice that the messages are usually not very descriptive. For example, the last message in the chain result\(\rightarrow\)dependencies\(\rightarrow\)calc ([40]\(\rightarrow\)[41]) is "_fixed bug with searching for transitive dependencies + added test for it_".
The topics yielded from our analysis are at a high level. While they show the actions causing the rename, gaining further insight into why the developer utilized a specific word, or how the name relates to the action or code, is challenging due to the nature of commit messages.
**Summary for RQ4.** A topic modeling analysis on the rename chain commit messages shows the renames are related to Code Cleanup, Refactoring, and Bug Fix/Testing. However, these topics are at a high level due to the nature of commit messages.
## V Threats To Validity
Even though the projects are limited to Java systems and might not necessarily generalize to systems written in other languages, these systems follow software engineering best practices and have been utilized in similar research on identifier names. Likewise, our methodology utilizes specific tools, such as RefactoringMiner and the part-of-speech tagger, which pose a risk because they may not be entirely accurate. These tools, however, are well-known and state-of-the-art in their respective domains and are employed in similar work. Even though our construction of rename chains is limited to identifiers renamed within the same class, our results still yield a large number of chains. In RQ2, we utilize the commit author email to identify individual developers. While this can introduce threats to the study, manually verifying our dataset's large volume of emails is not feasible. Furthermore, as mentioned in RQ2, emails have been used to identify developers in prior work.
## VI Discussion & Conclusion
Interpreting identifier names forms the backbone of any code comprehension task. However, with developers free to craft names using words of their choosing, they introduce the threat of having names that do not accurately reflect their behavior (i.e., names of poor quality), which hinders the maintenance of the system. To correct such poor-quality names, developers rename them, which can continue throughout the system's lifetime. In this study, our analysis of multiple renames applied to a single identifier (i.e., rename chain) shows that most projects exhibit this phenomenon, with chains typically containing two renames. Furthermore, we report on characteristics such as the interval between renames, developers responsible for chain construction, and grammatical changes. While our findings extend the knowledge in identifier naming, there are avenues for further research, including expanding on our RQ4 analysis to study the motivation and contextualization for the occurrence
of rename chains. Below, we discuss how the findings from our RQs support the community through a series of takeaways.
**Takeaway 1 - _Reliance on part-of-speech patterns when crafting and evaluating names._** From RQ3, we observe that part-of-speech tags are an efficient means of studying the semantic updates a name undergoes when renamed. This finding shows that academia and practitioners should not focus only on the words in a name but also consider the grammatical structure of the name when crafting and evaluating identifier names. Additionally, this also presents the research/vendor community with an opportunity to construct rename recommendation tools that incorporate the name's grammatical structure in addition to the existing features they utilize.
**Takeaway 2 - _Improvements to name recommendations and appraisal techniques._** In addition to incorporating the grammatical structure, identifier name recommendation and appraisal techniques should also consider the historical evolution of an identifier's name in the evaluation process. Current techniques usually consider the styling and features present in the version of the code base under analysis. By examining the historical evolution of the name, the likelihood of overreliance on outliers is greatly reduced.
**Takeaway 3 - _Emphasis on the importance of using high-quality names._** Academia should instill in students the importance of having high-quality names in the source code. For example, our dataset shows the use of abbreviations and acronyms in forming identifier names. Such tokens are known to impede code comprehension [4, 42]. Specifically, the initial versions of the attribute rename chain: TEMP_TUNNI_ID\(\rightarrow\)TUNNI_IDENTIFIER have a generic word, 'TEMP', and an abbreviation, 'ID', which are corrected in the final version of the name. Finally, in addition to using static analysis tools to detect poor programming practices, such as code and test smells [43, 44], there should also be a focus on using tools that evaluate the quality of names, such as linguistic anti-patterns [45, 46].
**Takeaway 4 - _Challenges with the automated contextualization of rename chains._** Even though our attempts, in RQ 4, at contextualizing the occurrence of rename chains using the messages in the commit log yielded topics, these topics are at a high level. They are insufficient in helping us understand how the changed words in the identifier's name are related to the code or developer activity/task. This shows the need for more specialized natural language processing techniques and also the analysis of other software engineering artifacts.
### _Future Work_
Our future work in this area includes a human subject study. In this proposed study, we will work with developers of varying experience and skills to validate our empirical findings and expand our knowledge on understanding the rationale for the presence of rename chains in projects. Further, we plan to discover additional heuristics we can incorporate into appraising and recommending high-quality identifier names.
|
2301.07500 | Multi-objective Software Architecture Refactoring driven by Quality
Attributes | Architecture optimization is the process of automatically generating design
options, typically to enhance software's quantifiable quality attributes, such
as performance and reliability. Multi-objective optimization approaches have
been used in this situation to assist the designer in selecting appropriate
trade-offs between a number of non-functional features. Through automated
refactoring, design alternatives can be produced in this process, and assessed
using non-functional models.
These optimization tasks are hard and time- and resource-intensive,
which frequently hampers their use in software engineering procedures.
In this paper, we present our optimization framework, in which we examined the
performance of various genetic algorithms. We also exercised our framework on
two case studies of varying size, complexity, and domain, which served as
our test subjects. | Daniele Di Pompeo, Michele Tucci | 2023-01-18T13:17:16Z | http://arxiv.org/abs/2301.07500v1 | # Multi-objective Software Architecture Refactoring driven by Quality Attributes
###### Abstract
Architecture optimization is the process of automatically generating design options, typically to enhance software's quantifiable quality attributes, such as performance and reliability. Multi-objective optimization approaches have been used in this situation to assist the designer in selecting appropriate trade-offs between a number of non-functional features. Through automated refactoring, design alternatives can be produced in this process, and assessed using non-functional models.
These optimization tasks are hard and time- and resource-intensive, which frequently hampers their use in software engineering procedures.
In this paper, we present our optimization framework, in which we examined the performance of various genetic algorithms. We also exercised our framework on two case studies of varying size, complexity, and domain, which served as our test subjects.
refactoring, multi-objective optimization, software architecture, performance
## I Introduction
Different factors, such as the addition of new requirements, the adaptation to new execution contexts, or the deterioration of non-functional features, can lead to software refactoring. Identifying the best refactoring operations is challenging because there is a wide range of potential solutions and no automated assistance is currently available.
In this situation, search-based approaches have been widely used [1, 2, 3, 4, 5].
Multi-objective optimization approaches, which are search-based, have lately been used to solve model refactoring optimization issues [6, 7]. Searching among design alternatives (for example, through architectural tactics) is a typical feature of multi-objective optimization methodologies used to solve model-based software restructuring challenges [8, 7].
In this study, we describe a many-objective evolutionary framework that automatically searches and applies sequences of refactoring actions leading to the optimization of four objectives: i) performance variation (analyzed through Layered Queueing Networks [9]), ii) reliability (analyzed through a closed-form model [10]), iii) number of performance antipatterns (automatically detected [11]), and iv) architectural distance [12].
In particular, our framework automatically applies refactoring actions to the initial architecture, and we analyze the contribution of the architectural distance to the generation of Pareto frontiers [13]. Furthermore, we study the impact of performance antipatterns on the quality of refactoring solutions. Since it has been shown that removing performance antipatterns leads to systems that show better performance than the ones affected by them [11], we aim at studying if this result persists in the context of many-objective optimization, where performance improvement is not the only objective.
Our approach applies to UML augmented by the MARTE [14] and DAM [15] profiles, which allow embedding performance and reliability properties. However, UML does not provide native support for performance analysis, thus we introduce a model-to-model transformation that generates Layered Queueing Networks (LQN) from annotated UML artifacts. The solution of LQN models feeds the performance variation objective.
Here, we consider refactoring actions that are designed to improve performance in most cases [16, 17]. Since such actions may also have an impact on other non-functional properties, we introduce the reliability among the optimization objectives to study whether satisfactory levels of performance and reliability can be kept at the same time. In order to quantify the reliability objective, we adopt an existing model for component-based software systems [10] that can be generated from UML.
We also minimize the distance between the initial architecture and the ones resulting from applying refactoring actions. Indeed, without an objective that minimizes such distance, the proposed solutions could be impractical because they could require to completely disassemble and re-assemble the initial architecture.
In a recent work [18], we extended the approach in [12, 6], by investigating architecture optimization, thus widening the scope of eligible models. We analyze the sensitivity of the search process to configuration variations. We refine the cost model of refactoring actions and we investigate how it contributes to the generation of Pareto frontiers.
The experimentation lasted several hours and generated thousands of model alternatives. Generally, multi-objective optimization is beneficial when the solution space is so large that an exhaustive search is impractical. Hence, because it must explore such a large solution space, multi-objective optimization requires considerable time and resources.
Finally, to encourage reproducibility, we publicly share the
implementation of the approach 1, as well as the data gathered during the experimentation 2.
Footnote 1: [https://github.com/SEALABQualityGroup/EASIER](https://github.com/SEALABQualityGroup/EASIER)
Footnote 2: [https://github.com/SEALABQualityGroup/2022-ist-replication-package](https://github.com/SEALABQualityGroup/2022-ist-replication-package)
## II Related Work
In the past ten years, studies on software architecture multi-objective optimization have optimized various quality attributes (such as reliability and energy) [19, 20, 21, 22, 6], with various degrees of freedom in the modification of architectures (such as service selection [23]).
Recent research analyzes the capacity of two distinct multi-objective optimization algorithms to enhance non-functional features inside a particular architecture notation (i.e., Palladio Component Model) [24, 25, 7]. The authors use architectural approaches to find the best solutions, which primarily include changing system parameters (such as hardware settings or operation requirements). On the other hand, in this work, we employ refactoring techniques that alter the basic architecture structure while keeping the original behavior. The architecture notation is another difference; rather than using a unique Domain Specific Language, we use UML with the intention of experimenting with a standard notation.
Menasce et al. have provided a framework for architectural design and quality optimization [26]. This framework makes use of architectural patterns (such as load balancing and fault tolerance) to help the search process. The approach has two drawbacks: performance indices are computed using equation-based analytical models, which may be too simple to capture architectural details and resource contention; and the architecture must be designed in a tool-specific notation rather than in a standard modeling language (as we do in this paper).
A method for modeling and analyzing AADL architectures has been given by Aleti et al. [27]. A tool that may be used to optimize various quality attributes while adjusting architecture deployment and component redundancy has also been introduced. Our framework, instead, makes use of UML and takes into account more intricate refactoring procedures as well as various goal attributes for the fitness function. In addition, we look into the role of performance antipatterns in the context of optimizing many-objective architecture refactoring.
## III Approach
The process that we describe in this research is illustrated in Figure 1.
An _Initial Architecture_ and a list of refactoring actions are supplied to the process. The _Create Combined Population_ step takes the _Initial Architecture_ and the _Refactoring Actions_ and applies mating operations (_i.e.,_ selection, mutation, and crossover) to create _Architecture Alternatives_. The refactoring actions are randomly and automatically applied by the mating operations, producing alternatives that are functionally equivalent to the initial architecture.
Each architecture alternative then undergoes the _Evaluation_ step. The alternatives are subsequently sorted (_Sorting_ step) based on the following four objectives: _perfQ, reliability, #changes, and performance antipatterns_. Throughout the process, these objectives are evaluated and taken into consideration to select the best candidates.
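The sorting relies on Pareto dominance over these four objectives. A minimal sketch of this comparison is shown below; the objective values are illustrative, and all objectives are expressed so that lower values are better (maximized quantities such as perfQ and reliability are negated).

```python
def dominates(a, b):
    """a dominates b if it is at least as good in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Illustrative objective vectors per alternative: (-perfQ, -reliability, #changes, #antipatterns);
# perfQ and reliability are negated so that every objective is minimized.
alternatives = {
    "alt1": (-0.30, -0.92, 3, 1),
    "alt2": (-0.25, -0.95, 2, 0),
    "alt3": (-0.10, -0.90, 5, 2),
}

# First Pareto front: alternatives that no other candidate dominates.
front = [name for name, f in alternatives.items()
         if not any(dominates(g, f) for other, g in alternatives.items() if other != name)]
print(front)  # ['alt1', 'alt2']
```

In the actual framework, the objective values come from the LQN solution, the reliability model, the architectural distance, and the antipattern detection described above.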
Recently, we investigated how performance antipatterns affect the effectiveness of refactoring methods [18]. We aim to investigate whether this phenomenon also holds in the context of multi-objective optimization, where performance improvement is not the only goal, given that it has been demonstrated that removing performance antipatterns results in systems that show better performance than those affected by them [28, 29, 11].
Furthermore, we looked into the effect of imposing a time budget on the search performed by an evolutionary algorithm [30]. The purpose of setting such a time constraint is to determine the extent to which, in a model-based multi-objective refactoring optimization scenario, the imposition of a time-based search budget can degrade the quality of the resultant Pareto fronts. We are also curious about how various algorithms respond to different search budgets. In order to test this, we chose two case studies and ran the optimization with search budgets of _15, 30_, and _60_ minutes.
Currently, our framework supports three genetic algorithms: _NSGA-II_ [31], _SPEA2_ [32], and _PESA2_ [33]. We selected these algorithms because of their different search policies. Thus, our results cover evolutionary algorithms with different characteristics.
## IV Conclusion and Future Work
We have developed a framework for multi-objective architecture optimization that takes into account quality attributes. In the context of architecture optimization, we concentrated our investigation on the potential effects of evolutionary algorithms on the quality of optimal refactoring solutions.
We learned some interesting things from our experimentation concerning the effectiveness of the created solutions and the use of performance antipatterns as an algorithmic
objective. In this regard, we demonstrated that we may achieve superior solutions in terms of performance and reliability by incorporating the detection of performance antipatterns into the optimization process. Making sure that our strategy did not decrease the reliability of the basic architecture was another crucial component of our investigation. Our tests revealed that, in most instances, we were able to boost the reliability of alternatives in comparison to the original architecture.

Fig. 1: Our multi-objective evolutionary approach
Future research will examine how settings (experiment and algorithm setups) affect the effectiveness of Pareto frontiers. We will examine the effects of denser populations, for instance, on calculation time and the accuracy of computed Pareto frontiers. Our research focuses on the impact of predicting the baseline refactoring factor using a more complex cost model, such as COCOMO-II [34], on the combination of refactoring activities. We are also interested in the influence that changes play. We want to expand the portfolio of refactoring activities, for instance by adding fault tolerance refactoring actions [35], and a fruitful inquiry will focus on the length of the sequence of refactoring actions, which is presently fixed to four refactoring actions. We will incorporate additional evolutionary algorithms into our approach to examine the role that various optimization methods play in the architecture refactoring.
## Acknowledgment
Daniele Di Pompeo is supported by the Centre of EXcellence on Connected, Geo-Localized and Cybersecure Vehicle (EX-Emerge), funded by the Italian Government under CIPE resolution n. 70/2017 (Aug. 7, 2017).
Michele Tucci is supported by the OP RDE project No. CZ.02.2.69/0.0/0.0/18_053/0016976 "International mobility of research, technical and administrative staff at the Charles University".
|
2303.09354 | The NCI Imaging Data Commons as a platform for reproducible research in
computational pathology | Background and Objectives: Reproducibility is a major challenge in developing
machine learning (ML)-based solutions in computational pathology (CompPath).
The NCI Imaging Data Commons (IDC) provides >120 cancer image collections
according to the FAIR principles and is designed to be used with cloud ML
services. Here, we explore its potential to facilitate reproducibility in
CompPath research.
Methods: Using the IDC, we implemented two experiments in which a
representative ML-based method for classifying lung tumor tissue was trained
and/or evaluated on different datasets. To assess reproducibility, the
experiments were run multiple times with separate but identically configured
instances of common ML services.
Results: The AUC values of different runs of the same experiment were
generally consistent. However, we observed small variations in AUC values of up
to 0.045, indicating a practical limit to reproducibility.
Conclusions: We conclude that the IDC facilitates approaching the
reproducibility limit of CompPath research (i) by enabling researchers to reuse
exactly the same datasets and (ii) by integrating with cloud ML services so
that experiments can be run in identically configured computing environments. | Daniela P. Schacherer, Markus D. Herrmann, David A. Clunie, Henning Höfener, William Clifford, William J. R. Longabaugh, Steve Pieper, Ron Kikinis, Andrey Fedorov, André Homeyer | 2023-03-16T14:32:50Z | http://arxiv.org/abs/2303.09354v3 | # The NCI Imaging Data Commons as a platform for reproducible research in computational pathology
###### Abstract
**Background and Objectives**: Reproducibility is a major challenge in developing machine learning (ML)-based solutions in computational pathology (CompPath). The NCI Imaging Data Commons (IDC) provides \(>\)120 cancer image collections according to the FAIR principles and is designed to be used with cloud ML services. Here, we explore its potential to facilitate reproducibility in CompPath research.
**Methods**: Using the IDC, we implemented two experiments in which a representative ML-based method for classifying lung tumor tissue was trained and/or evaluated on different datasets. To assess reproducibility, the experiments were run multiple times with separate but identically configured instances of common ML services.
**Results**: The AUC values of different runs of the same experiment were generally consistent. However, we observed small variations in AUC values of up to 0.045, indicating a practical limit to reproducibility.
**Conclusions**: We conclude that the IDC facilitates approaching the reproducibility limit of CompPath research (i) by enabling researchers to reuse exactly the same datasets and (ii) by integrating with cloud ML services so that experiments can be run in identically configured computing environments.
**Keywords**: reproducibility, computational pathology, FAIR, cloud computing, machine learning, artificial intelligence
## 1 Introduction
Computational pathology (CompPath) is a new discipline that investigates the use of computational methods for the interpretation of heterogeneous data in clinical and anatomical pathology to improve health care in pathology practice. A major focus area of CompPath is the computerized analysis of digital tissue images [(1)]. These images show thin sections of surgical specimens or biopsies that are stained to highlight relevant tissue structures. To cope with the high level of complexity and variability of tissue images, virtually all state-of-the-art methods use sophisticated machine learning (ML) algorithms such as Convolutional Neural Networks (CNN) [(2)].
Because CompPath is applicable in a wide variety of use cases, there has been an explosion of research on ML-based tissue analysis methods [(3; 4)]. Many methods are intended to assist pathologists in routine diagnostic tasks such as the recognition of tissue patterns for disease classification [(5; 6; 7; 8; 9)]. Beyond that, CompPath methods have also shown promise for deriving novel biomarkers from tissue patterns that can predict outcome, genetic mutations, or therapy response [(3)].
### Reproducibility challenges
In recent years, it has become increasingly clear that reproducing the results of published ML studies is challenging [(10; 11; 12; 13)]. Reproducibility is commonly defined as the ability to obtain "consistent results using the same input data, computational steps, methods, and conditions of analysis" [(14)]. Difficulties related to reproducibility prevent other researchers from verifying and reusing published results and are a critical barrier to translating solutions into clinical practice [(15)]. In most cases, reproducibility problems seem to stem not from a lack of scientific rigor, but from challenges to convey all details and set-up of complex ML methods [(12; 15; 16)]. In the following, we provide an overview of the main challenges related to ML reproducibility and the existing approaches to address them.
The first challenge is the specification of the analysis method itself. ML algorithms have many variables, such as the network architecture, hyperparameters, and performance metrics [(17; 18; 16)]. ML workflows usually consist of multiple processing steps, e.g., data selection, pre-processing, training, evaluation [(18)]. Small variations in these implementation details can have significant effects on performance. To make all these details transparent, it is crucial to publish the underlying source code [(15)]. Workflows should be automated as much as possible to avoid errors when performing steps manually. Jupyter notebooks have emerged as the de facto standard to implement and communicate ML workflows [(19)]. By combining software code, intermediate results and explanatory texts into "computational narratives" [(20)] that can be interactively run and validated, notebooks make it easier for researchers to reproduce and understand the work of others [(19)].
The second challenge to reproducibility is the specification and setup of the computing environment. ML workflows require significant computational resources including, e.g., graphics or tensor processing units (GPUs or TPUs). In addition, they often have many dependencies on specific software versions. Minor variations in the computing environment can significantly affect the results [(13)]. Setting up a consistent computational environment can be very expensive and time consuming. This challenge can be partially solved by embedding ML workflows in virtual machines or software containers like Docker [(21)]. Both include all required software dependencies so that ML workflows can be shared and run without additional installation effort. Cloud ML services, like Google Vertex AI, Amazon Sage-Maker, or Microsoft Azure Machine Learning, provide an even more comprehensive solution. By offering pre-configured computing environments for ML research in combination with the required high-performance hardware, such services can further reduce the setup effort and enable the reproduction of computationally intensive ML workflows even if one does not own the required hardware. They also typically provide web-based graphical user interfaces through which Jupyter notebooks can be run and shared directly in the cloud, making it easy for others to reproduce, verify, and reuse ML workflows [(21)].
The third challenge related to ML reproducibility is the specification of data and its accessibility. The performance of ML methods depends heavily on the composition of their training, validation and test sets [(13; 22)]. For current ML studies, it is rarely possible to reproduce this composition exactly as studies are commonly based on specific, hand-curated datasets which are only roughly described rather than explicitly defined [(17; 23)]. Also, the datasets are often not made publicly available [(15)], or the criteria/identifiers used to select subsets from publicly available datasets are missing. Stakeholders from academia and industry have defined the Findability, Accessibility, Interoperability, and Reusability (FAIR) principles [(24)], a set of requirements to facilitate discovery and reuse of data. FAIR data provision is now considered a “must” to make ML studies reproducible and the FAIR principles are adopted by more and more public data infrastructure initiatives and scientific journals [25].
Reproducing CompPath studies is particularly challenging. To reveal fine cellular details, tissue sections are imaged at microscopic resolution, resulting in gigapixel whole-slide images (WSI) [26]. Due to the complexity and variability of tissue images [27], it takes many\(-\)often thousands\(-\)of example WSI to develop and test reliable ML models. Processing and managing such large amounts of data requires extensive computing power, storage resources, and network bandwidth. Reproduction of CompPath studies is further complicated by the large number of proprietary and incompatible WSI file formats that often impede data access and make it difficult to combine heterogeneous data from different studies or sites. The Digital Imaging and Communications in Medicine (DICOM) standard [28] is an internationally accepted standard for storage and communication of medical images. It is universally used in radiology and other medical disciplines, and has great potential to become the uniform standard for pathology images as well [29]. However, until now, there have been few pathology data collections provided in DICOM format.
### NCI Imaging Data Commons
The National Cancer Institute (NCI) Imaging Data Commons (IDC) is a new cloud-based repository within the US national Cancer Research Data Commons (CRDC) [30]. A central goal of the IDC is to improve the reproducibility of data-driven cancer imaging research. For this purpose, the IDC provides large public cancer image collections according to the FAIR principles.
Besides pathology images (brightfield and fluorescence) and their metadata, the IDC includes radiology images (e.g., CT, MR, and PET) together with associated image analysis results, image annotations, and clinical data providing context about the images. At the time of writing this article, the IDC contained 128 data collections with more than 63,000 cases and more than 38,000 WSI from different projects and sites. The collections cover common tumor types, including carcinomas of the breast, colon, kidney, lung, and prostate, as well as rarer cancers such as sarcomas or lymphomas. Most of the WSI collections originate from The Cancer Genome Atlas (TCGA) [31] and Clinical Proteomic Tumor Analysis Consortium (CPTAC) [32] projects and were curated by The Cancer Imaging Archive (TCIA) [33]. These collections are commonly used in the development of CompPath methods [34; 35; 7; 36].
The IDC implements the FAIR principles as follows:
Interoperability: While the original WSIs were provided in proprietary, vendor-specific formats, the IDC harmonized the data and converted them into the open, standard DICOM format [29]. DICOM defines data models and services for storage and communication of medical image data and metadata, as well as attributes for different real-world entities (e.g., patient, study) and controlled terminologies for their values. In DICOM, a WSI corresponds to a "series" of DICOM image objects that represent the digital slide at different resolutions. Image metadata are stored as attributes directly within the DICOM objects.
Accessibility: The IDC is implemented on the Google Cloud Platform (GCP), enabling cohort selection and analysis directly in the cloud. Since IDC data are provided as part of the Google Public Datasets Program, it can be freely accessed from cloud or local computing environments. In the IDC, DICOM objects are stored as individual DICOM files in Google Cloud Storage (GCS) buckets and can be retrieved using open, free, and universally implementable tools.
Findability: Each DICOM file in the IDC has a persistent universally unique identifier (UUID) [37]. DICOM files in storage buckets are referenced through GCS URLs, consisting of the bucket URL and the UUID of the file. Images in the IDC are described with rich metadata, including patient (e.g., age, sex), disease (e.g., subtype, stage), study (e.g., therapy, outcome), and imaging-related data (e.g., specimen handling, scanning). All DICOM and non-DICOM metadata are indexed in a BigQuery database [38] that can be queried programmatically using standard Structured Query Language (SQL) statements (see section "IDC data access"), allowing for an exact and persistent definition of cohorts for subsequent analysis.
Reusability: All image collections are associated with detailed provenance information but stripped of patient-identifiable information. Most collections are released under data usage licenses that allow unrestricted use in research studies.
### Objective
This paper explores how the IDC and cloud ML services can be used in combination for CompPath studies and how this can facilitate reproducibility. This paper is also intended as an introduction to how the IDC can be used for reproducible CompPath research. Therefore, important aspects such as data access are described in more detail in the Methods section.
## 2 Methods
### Overview
We implemented two CompPath experiments using data collections from the IDC and common ML services (Figure 1). Since the computing environments provided by cloud ML services are all virtualized, two identically configured instances may run different host hardware and software (e.g., system software versions, compiler settings) [13]. To investigate if and how this affects reproducibility, both experiments were executed multiple times, each in a new instance of the respective ML service.
The experiments are based on a basic CompPath analysis method that addresses a use case representative of common CompPath tasks [5; 6; 7; 8; 9]: the automatic classification of entire WSI of hematoxylin and eosin (H&E)-stained lung tissue sections into either non-neoplastic (normal), lung adenocarcinoma (LUAD), or lung squamous cell carcinoma (LSCC/LUSC).
Experiment 1 replays the entire development process of the method, including model training and validation. Experiment 2 performs inference with a trained model on independent data. The model trained in Experiment 1 was used as the basis for Experiment 2. The two experiments were conducted with different collections in the IDC: TCGA-LUAD/LUSC [39; 40] and CPTAC-LUAD/LSCC [41; 42], respectively. While both the TCGA and the CPTAC collections cover H&E-stained lung tissue sections of the three classes considered (Figure 2), they were created by different clinical institutions using different slide preparation techniques.
### Implementation
Both experiments were implemented as standalone Jupyter notebooks that are available open source [43]. To enable reproducibility, care was taken to make operations deterministic, e.g., by seeding pseudo-random operations, fixing initial weights for network training, and by iterating over unordered container types in a defined order. Utility functionality was designed as generic classes and functions that can be reused for similar use cases.
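The following sketch illustrates the kind of seeding and ordering measures used; the exact calls in the published notebooks may differ.

```python
import os
import random
import numpy as np
import tensorflow as tf

SEED = 42

# Seed every source of pseudo-randomness used by the workflow.
os.environ["PYTHONHASHSEED"] = str(SEED)
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)  # also fixes the initial weights drawn during model construction

# Iterate over unordered containers in a defined order, e.g. when listing cached tiles.
tile_paths = {"slide1/tile_b.png", "slide1/tile_a.png", "slide1/tile_c.png"}
for path in sorted(tile_paths):
    pass  # process tiles in a reproducible order
```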
As the analysis method itself is not the focus of this paper, we adopted the algorithmic steps and evaluation design of a lung tumor classification method described in a widely cited study by Coudray et al. [7]. The method was chosen because it is representative of common CompPath tasks and easy to understand. Our implementation processed images at a lower resolution, which is significantly less computationally expensive.
In our analysis workflow, a WSI was subdivided into non-overlapping rectangular tiles, each measuring 256 × 256 pixels at a resolution of 1 µm/px. Tiles containing less than 50% tissue, as determined by pixel value statistics, were discarded. Each tile was assigned class probabilities by performing multi-class classification using an InceptionV3 CNN [44]. The per-tile results were finally aggregated to a single classification of the entire slide. The workflow is visualized in Figure 3 and a detailed description is provided in the respective notebooks.
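A simplified sketch of the tiling and aggregation logic is given below; the brightness threshold used to approximate the 50% tissue criterion and the averaging of tile probabilities are assumptions, as the text does not prescribe a specific choice of pixel value statistics or aggregation rule.

```python
import numpy as np

TILE_SIZE = 256          # pixels, at approximately 1 µm/px
MIN_TISSUE_FRACTION = 0.5

def tissue_fraction(tile):
    """Fraction of non-background pixels; background is assumed to be bright (close to white)."""
    gray = tile.mean(axis=-1)
    return float((gray < 220).mean())

def iter_tiles(slide):
    """Yield non-overlapping tiles with enough tissue from an RGB slide array of shape (H, W, 3)."""
    h, w = slide.shape[:2]
    for y in range(0, h - TILE_SIZE + 1, TILE_SIZE):
        for x in range(0, w - TILE_SIZE + 1, TILE_SIZE):
            tile = slide[y:y + TILE_SIZE, x:x + TILE_SIZE]
            if tissue_fraction(tile) >= MIN_TISSUE_FRACTION:
                yield tile

def classify_slide(slide, model):
    """Aggregate per-tile class probabilities into one slide-level result (here: their mean)."""
    probs = [model.predict(tile[np.newaxis] / 255.0, verbose=0)[0] for tile in iter_tiles(slide)]
    return np.mean(probs, axis=0)
```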
In Experiment 1, the considered slides were divided into training, validation, and test sets with proportions of 70%, 15%, and 15%, respectively. To keep the sets independent and avoid overoptimistic performance estimates [45], we ensured that slides from a given patient were assigned to only one set, which resulted in 705, 151 and 153 patients per subset. The data collections used did not contain annotations of tumor regions, but only one reference class value per WSI. Following the procedure used by Coudray et al., all tiles were considered to belong to the reference class of their respective slide. Training was performed using a categorical cross-entropy loss between the true class labels and the predicted class probabilities, and the RMSProp optimizer with minimal adjustments to the default hyperparameter values [46]. The epoch with the highest area under the receiver operating characteristic (ROC) curve (AUC) on the validation set was chosen for the final model.
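A sketch of the corresponding model and training configuration is shown below; the categorical cross-entropy loss and RMSProp optimizer follow the description above, while the checkpointing callback used to retain the epoch with the highest validation AUC is an assumption.

```python
import tensorflow as tf

# InceptionV3 backbone with a three-class softmax head (normal, LUAD, LSCC).
base = tf.keras.applications.InceptionV3(include_top=False, weights=None,
                                         input_shape=(256, 256, 3), pooling="avg")
outputs = tf.keras.layers.Dense(3, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.RMSprop(),          # close to default hyperparameters
              loss="categorical_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Keep the weights of the epoch with the highest AUC on the validation set.
checkpoint = tf.keras.callbacks.ModelCheckpoint("best_model.h5", monitor="val_auc",
                                                mode="max", save_best_only=True)
# model.fit(train_ds, validation_data=val_ds, epochs=20, callbacks=[checkpoint])
```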
### IDC data access
For most CompPath studies, one of the first steps is to select relevant slides using appropriate metadata. In the original data collections, parts of the metadata were stored in the image files and other parts in separate files of different formats (e.g., CSV, JSON files). In order to select relevant slides, the image and metadata first had to be downloaded in their entirety and then the metadata had to be processed using custom tools. With the IDC, data selection can be done by filtering a rich set of DICOM attributes with standard BigQuery SQL statements (Figure 4). The results are tables in which rows represent DICOM files and columns represent selected metadata attributes. As this facilitates the accurate and reproducible definition of the data subsets used in the analysis, these statements are described in more detail below.
An SQL query for selecting WSI in the IDC generally consists of at least a SELECT, a FROM and a WHERE clause. The SELECT clause specifies the metadata attributes to be returned. The IDC provides a wealth of metadata attributes, including image-, patient-, disease-, and study-level properties. The attribute "gcs_url" is usually selected because it stores the GCS URL needed to access the DICOM file. The FROM clause refers to a central table "dicom_all" which summarizes all DICOM attributes of all DICOM files. This table can be joined with other tables containing additional project-specific metadata. Crucial to reproducibility is that all
IDC data are versioned: Each new release of the IDC is represented as a new BigQuery dataset, keeping the metadata for the previous release and the corresponding DICOM files accessible even if they are modified in the new release. The version to use is specified via the dataset specifier in fully qualified table names. All experiments in this manuscript were conducted against IDC data version 11, i.e., the BigQuery table "bigquery-public-data.idc_v11.dicom_all". The WHERE clause defines which DICOM files are returned by imposing constraints for certain metadata attributes. To guarantee reproducibility, it is essential to not use SQL statements that are non-deterministic (e.g., those that utilize ANY_VALUE) and conclude the statement with an ORDER BY clause, which ensures that results are returned in a sorted order.
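As a concrete illustration, the following sketch shows how such a versioned, deterministic query can be issued from a notebook with the BigQuery Python client; the project ID is a placeholder, the selected attributes mirror the generic example in Figure 4, and the exact collection identifiers should be verified against the IDC documentation.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # placeholder project ID

query = """
SELECT gcs_url, PatientID, collection_id
FROM `bigquery-public-data.idc_v11.dicom_all`
WHERE Modality = 'SM'
  AND collection_id IN ('tcga_luad', 'tcga_lusc')
ORDER BY gcs_url
"""

slides = client.query(query).to_dataframe()
print(len(slides), "DICOM files selected")
```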
The two experiments considered in this paper also begin with the execution of a BigQuery SQL statement to select appropriate slides and required metadata from the IDC. A detailed description of the statements is given in the respective notebooks. Experiment 1 queries specific H&E-stained tissue slides from the TCGA-LUAD/LUSC collections, resulting in 2163 slides (591 normal, 819 LUAD, 753 LSCC). Experiment 2 uses a very similar statement to query the slides from the CPTAC-LUAD/LSCC collections, resulting in 2086 slides (743 normal, 681 LUAD, 662 LSCC).
Once their GCS URLs are known, the selected DICOM files in the IDC can be accessed efficiently using the open source tool “gsutil” [47] or any other tool that supports the Simple Storage Service (S3) API. During training in Experiment 1, image tiles of different WSI had to be accessed repeatedly in random order. To speed up this process, all considered slides were preprocessed and the resulting tiles were extracted from the DICOM files and cached as individual PNG files on disk before training. In contrast, simply applying the ML method in Experiment 2 required only a single pass over the tiles of each WSI in sequential order. Therefore, it was feasible to access the respective DICOM files and iterate over individual tiles at the time they were needed for the application of the ML method.
### Cloud ML services
The two experiments were conducted with two different cloud ML services of the GCP--Vertex AI and Google Colaboratory. Both services offer virtual machines (VMs) preconfigured with common ML libraries and a JupyterLab-like interface that allows editing and running notebooks from the browser. They are both backed with extensive computing resources including state-of-the-art GPUs or TPUs. The costs of both services scale with the type and duration of use for the utilized compute and storage resources. To use any of them with the IDC, a custom Google Cloud project must be in place for secure authentication and billing, if applicable.
Figure 1: Overview of the workflows of both experiments and their interactions with the IDC.
Since training an ML model is much more computationally intensive than performing inference, we conducted Experiment 1 with Vertex AI and Experiment 2 with Google Colaboratory. Vertex AI can be attached to efficient disks for storage of large amounts of input and output data, making it more suitable for memory-intensive and long-running experiments. Colaboratory, on the other hand, offers several less expensive payment plans, with limitations in the provided computing resources and guaranteed continuous usage times. Colaboratory can even be used completely free of charge, with a significantly limited guaranteed GPU usage time (12 hours at the time of writing). This makes Colaboratory better suited for smaller experiments or exploratory research.
### Evaluation
Experiment 1 was performed using a common Vertex AI VM configuration (8 vCPU, 30 GB memory, NVIDIA T4 GPU, Tensorflow Enterprise 2.8 distribution). Experiment 2 was performed with Colaboratory runtimes (2-8 vCPU, 12-30 GB memory). When using Google Colaboratory for Experiment 2, we were able to choose between different GPU types, including NVIDIA T4 and NVIDIA P100 GPUs. Since it has been suggested that the particular type of GPU can affect results [48], all runs of Experiment 2 were repeated on both GPUs, respectively. Runs with NVIDIA T4 were performed with the free version of Colaboratory, while runs with NVIDIA P100 were performed in combination with a paid GCE Marketplace VM, which was necessary for guaranteed
use of this GPU.

Figure 2: Example tiles of the three classes considered from the TCGA and CPTAC datasets. The width of each tile is 256 μm. The black boxes marked with arrows in the whole slide images on top show the boundaries of the upper left tiles of the TCGA data set.
For each run of an experiment, classification accuracy was assessed in terms of class-specific, one vs. rest AUC values based on the slide-level results. In addition, 95% confidence intervals of the AUC values were computed by 1000-fold bootstrapping over the slide-level results.
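A sketch of this evaluation is shown below, using randomly generated stand-in labels and predictions in place of the actual slide-level results.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_slides, n_classes = 300, 3
y_true = rng.integers(0, n_classes, size=n_slides)          # stand-in reference labels
y_prob = rng.dirichlet(np.ones(n_classes), size=n_slides)   # stand-in predicted probabilities

# Class-specific, one-vs-rest AUC for a single class (e.g., class index 1).
auc = roc_auc_score((y_true == 1).astype(int), y_prob[:, 1])

# 95% confidence interval via 1000-fold bootstrapping over the slide-level results.
bootstrap_aucs = []
for _ in range(1000):
    idx = rng.integers(0, n_slides, size=n_slides)
    bootstrap_aucs.append(roc_auc_score((y_true[idx] == 1).astype(int), y_prob[idx, 1]))
ci_low, ci_high = np.percentile(bootstrap_aucs, [2.5, 97.5])
print(f"AUC {auc:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")
```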
To speed up Experiment 2, only a random subset of 300 of the selected slides (100 normal, 100 LUAD, 100 LSCC) was considered in the analysis, which was approximately the size of the test set in Experiment 1.
## 3 Results
The evaluation results of both experiments are summarized in Table 1. It became apparent that none of the experiments was perfectly reproducible and there were notable deviations in the results of repeated runs. In Experiment 1, AUC values differed by up to 0.045 between runs. In Experiment 2, there were also minimal deviations in the AUC values of the different runs, but none of these were greater than 0.001. These deviations occurred regardless of whether the runs were executed on the same GPU type or not.
The classification accuracy of the method trained in Experiment 1 appears satisfactory when evaluated on the TCGA test set and comparable to the results of a similar study based on the same TCGA collections [7]. When applied to the CPTAC test set in Experiment 2, the same model performed substantially worse (Figure 5).
Experiment 1 took an order of magnitude longer to complete (mean runtime of 1 d 18 h \(\pm\)1 h) than Experiment 2 (mean runtime of 1 h 54 min \(\pm\)23 min with
NVIDIA T4 and mean runtime of 1 h 28 min \(\pm\)8 min with NVIDIA P100). The ML service usage charges for Experiment 1 were approximately US$ 32 per run. With the free version of Colaboratory, Experiment 2 was performed at no cost, while runs with the GCE Marketplace VM cost approximately US$ 2 per run.

Figure 4: Generic example of a BigQuery SQL statement for compiling slide metadata. The result set is limited to slide microscopy images, as indicated by the value “SM” of the DICOM attribute “Modality”, from the collections “TCGA-LUAD” and “TCGA-LUSC”.

Figure 3: Illustration of the CompPath analysis method. Slides were subdivided into non-overlapping rectangular tiles discarding those with more background than tissue. Each tile was assigned class probabilities using a neural network performing multi-class classification. Slide-based class values were determined by aggregating the tile-based results.
## 4 Discussion
The aim of this study was to investigate how CompPath studies can be made reproducible through the use of cloud-based computing environments and the IDC as the source of input data. Although the same code was run with the same data using the same ML services and care was taken that operations were deterministic (see section "Implementation"), we observed small deviations in the results of repeated runs. We did not investigate whether the deviations originate from differences in the hardware and software used by the hosts of the virtual computing environments, or whether they are due to randomness resulting from parallel processing [13]. The greater variability in the results of Experiment 1 can possibly be explained by its higher computational complexity. Although the observed deviations appear negligible for many applications, they represent a practical upper limit for reproducibility. Such issues are likely to occur in any computing environment. As outlined below, we argue that the IDC can help to approach this reproducibility limit.
We chose Jupyter notebooks and cloud ML services to address the first two reproducibility challenges mentioned in the Introduction: specifying the analysis method and setting up the computing environment. With the IDC, we were able to tackle the third reproducibility challenge with respect to the special requirements of CompPath: specifying and accessing the data.
By providing imaging data collections according to the FAIR principles, the IDC facilitates precise definition of the datasets used in the analysis and ensures that the exact same data can be reused in follow-up studies. Since metadata on acquisition and processing can be included as DICOM attributes alongside the pixel data, the risk of data confusion can be greatly reduced. The IDC also facilitated the use of cloud ML services because it makes terabytes of WSI data efficiently accessible by on-demand compute resources. We consider our experiments to be representative of common CompPath applications. Therefore, the IDC should be similarly usable for other CompPath studies.
The results of Experiment 2 also reveal the transferability of the model trained in Experiment 1 to independent data. Although the majority of slides were correctly classified, AUC values were significantly lower, indicating that the model is only transferable to a limited extent and additional training is needed. Since all IDC data collections (both the image pixel data and the associated metadata) are harmonized into a standardized DICOM representation, testing transferability to a different dataset required only minor adjustments to our BigQuery SQL statement. In the same way, the IDC makes it straightforward to use multiple datasets in one experiment or to transfer an experimental design to other applications.
### Limitations
Using cloud ML services comes with certain trade-offs. Conducting computationally intensive experiments requires setting up a payment account and paying a fee based on the type and duration of the computing resources used. Furthermore, although the ML services are widely used and likely to be supported for at least the next few years, there is no guarantee that they will be available in the long term or that they will continue to support the specific configuration of the computing environment used (e.g., software version, libraries). Those who do not want to make these compromises can also access IDC data collections without using ML services, both in the cloud and on-premises. Even if this means losing the previously mentioned advantages with regard to the first two reproducibility challenges, the IDC can still help to specify the data used in a clear and reproducible manner.
Independent of the implementation, a major obstacle to the reproducibility of CompPath methods remains their high computational cost. A full training run often takes several days, making reproduction by other scientists tedious. Performing model inference is generally faster and less resource intensive when compared to model training. Therefore, Experiment 2 runs well even with the free version of Google Colaboratory, enabling others to reproduce it without spending money. The notebook also provides a demo mode, which completes in a few minutes, so anyone can easily experiment with applying the inference workflow to arbitrary images from IDC.
At the moment, the IDC exclusively hosts public data collections. New data must undergo rigorous curation to de-identify the images (done by TCIA or the data submitter) and to harmonize them into a standard representation (done by the IDC), which can require significant effort. Therefore, only data collections that are of general relevance and high quality are included in the IDC. As a result, the data in the IDC were usually acquired for purposes other than a particular CompPath application and cannot be guaranteed to be representative and free of bias [49].
Compiling truly representative CompPath datasets is very challenging [45]. Nevertheless, the data collections in the IDC can provide a reasonable basis for exploring and prototyping CompPath methods.
### Outlook
The IDC is under continuous development and its technical basis is constantly being refined, e.g., to support new data types or to facilitate data selection and access. Currently, DICOM files in the IDC can only be accessed as a whole from their respective storage buckets. This introduces unnecessary overhead when only certain regions of a slide need to be processed, and it may make it necessary to temporarily cache slides to efficiently access multiple image regions (see section "IDC data access"). Future work should therefore aim to provide efficient random access to individual regions within a WSI. For maximum portability, such access should ideally be possible via standard DICOM network protocols such as DICOMweb [29; 50].
The IDC is continuously being expanded to support even more diverse CompPath applications. For instance, images collected by the Human Tumor Atlas Network (HTAN) that provide rich, multispectral information on subcellular processes [51] have recently been added. The IDC is integrated with other components of the CRDC, such as the Genomic Data Commons [52] or the Proteomic Data Commons [53]. This opens up many more potential CompPath applications involving tissue images and different types of molecular cancer data [54].
### Conclusion
We demonstrated how the IDC can facilitate the reproducibility of CompPath studies. Implementing future studies in a similar way can help other researchers and peer reviewers to understand, validate and advance the analysis approach.
## 5 Author Contributions
DPS and AH conceived and carried out the study. AH and AF supervised the project. AF, MDH, DAC, HH, WC, WJRL, SP and RK supported the study in different ways, e.g., by providing data, supporting set-up of the computing infrastructure, interpretation of the results and giving general advice. AH and DPS drafted the manuscript. All authors critically revised the manuscript and expressed their consent to the final version.
## 6 Declaration of Competing Interest
The authors declare no conflicts of interest.
## 7 Acknowledgements
The authors thank Lars Ole Schwen for advice on deterministic implementations of machine learning algorithms and Tim-Rasmus Kiehl for advice on tissue morphology.
The results published here are in whole or part based upon data generated by the TCGA Research Network
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline & & & \multicolumn{2}{c}{**normal**} & \multicolumn{2}{c}{**LUAD**} & \multicolumn{2}{c}{**LSCC**} \\ \cline{4-9}
**Experiment** & **ML Service (GPU)** & **Run** & **AUC** & **CI** & **AUC** & **CI** & **AUC** & **CI** \\ \hline Experiment 1 & Vertex AI (T4) & 1 & 0.994 & [0.987, 0.999] & 0.942 & [0.914, 0.968] & 0.937 & [0.904, 0.964] \\ & & 2 & 0.981 & [0.964, 0.994] & 0.898 & [0.860, 0.937] & 0.914 & [0.875, 0.946] \\ & & 3 & 0.992 & [0.983, 0.999] & 0.939 & [0.909, 0.964] & 0.918 & [0.881, 0.949] \\ & & 4 & 0.994 & [0.986, 0.999] & 0.928 & [0.895, 0.958] & 0.910 & [0.865, 0.947] \\ & & 5 & 0.989 & [0.979, 0.997] & 0.930 & [0.895, 0.959] & 0.892 & [0.838, 0.934] \\ \hline Experiment 2 & Colaboratory (T4) & 1 & 0.811 & [0.746, 0.871] & 0.698 & [0.633, 0.759] & 0.850 & [0.802, 0.899] \\ & & 2 & 0.811 & [0.746, 0.871] & 0.698 & [0.633, 0.759] & 0.850 & [0.802, 0.899] \\ & & 3 & 0.811 & [0.747, 0.870] & 0.698 & [0.636, 0.758] & 0.851 & [0.800, 0.896] \\ & & 4 & 0.811 & [0.748, 0.869] & 0.698 & [0.632, 0.758] & 0.851 & [0.802, 0.896] \\ & & 5 & 0.811 & [0.748, 0.872] & 0.698 & [0.627, 0.759] & 0.851 & [0.799, 0.896] \\ & & 1 & 0.811 & [0.746, 0.874] & 0.698 & [0.630, 0.758] & 0.851 & [0.802, 0.896] \\ & & 2 & 0.811 & [0.747, 0.873] & 0.698 & [0.627, 0.760] & 0.850 & [0.802, 0.897] \\ & & 3 & 0.811 & [0.747, 0.873] & 0.698 & [0.627, 0.760] & 0.850 & [0.802, 0.897] \\ & & 4 & 0.811 & [0.747, 0.873] & 0.698 & [0.627, 0.760] & 0.850 & [0.802, 0.897] \\ & & 5 & 0.811 & [0.747, 0.873] & 0.698 & [0.627, 0.760] & 0.850 & [0.802, 0.897] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Class-specific, slide-based AUC values and 95% confidence intervals (CI) obtained through multiple runs of both experiments.
and the National Cancer Institute Clinical Proteomic Tumor Analysis Consortium (CPTAC).
This project has been funded in whole or in part with Federal funds from the National Cancer Institute, National Institutes of Health, under Task Order No. HHSN26110071 under Contract No. HHSN261201500003l.
|
2303.16274 | Accelerated wind farm yaw and layout optimisation with multi-fidelity
deep transfer learning wake models | Wind farm modelling has been an area of rapidly increasing interest with
numerous analytical as well as computational-based approaches developed to
extend the margins of wind farm efficiency and maximise power production. In
this work, we present the novel ML framework WakeNet, which can reproduce
generalised 2D turbine wake velocity fields at hub-height over a wide range of
yaw angles, wind speeds and turbulence intensities (TIs), with a mean accuracy
of 99.8% compared to the solution calculated using the state-of-the-art wind
farm modelling software FLORIS. As the generation of sufficient high-fidelity
data for network training purposes can be cost-prohibitive, the utility of
multi-fidelity transfer learning has also been investigated. Specifically, a
network pre-trained on the low-fidelity Gaussian wake model is fine-tuned in
order to obtain accurate wake results for the mid-fidelity Curl wake model. The
robustness and overall performance of WakeNet on various wake steering control
and layout optimisation scenarios has been validated through power-gain
heatmaps, obtaining at least 90% of the power gained through optimisation
performed with FLORIS directly. We also demonstrate that when utilising the
Curl model, WakeNet is able to provide similar power gains to FLORIS, two
orders of magnitude faster (e.g. 10 minutes vs 36 hours per optimisation case).
The wake evaluation time of wakeNet when trained on a high-fidelity CFD dataset
is expected to be similar, thus further increasing computational time gains.
These promising results show that generalised wake modelling with ML tools can
be accurate enough to contribute towards active yaw and layout optimisation,
while producing realistic optimised configurations at a fraction of the
computational cost, hence making it feasible to perform real-time active yaw
control as well as robust optimisation under uncertainty. | Sokratis Anagnostopoulos, Jens Bauer, Mariana C. A. Clare, Matthew D. Piggott | 2023-03-28T19:36:40Z | http://arxiv.org/abs/2303.16274v1 | Accelerated wind farm yaw and layout optimisation with multi-fidelity deep transfer learning wake models
###### Abstract
Wind farm modelling has been an area of rapidly increasing interest with numerous analytical as well as computational-based approaches developed to extend the margins of wind farm efficiency and maximise power production. In this work, we present the novel ML framework wakeNet, which can reproduce generalised 2D turbine wake velocity fields at hub-height over a wide range of yaw angles, wind speeds and turbulence intensities (TIs), with a mean accuracy of 99.8% compared to the solution calculated using the state-of-the-art wind farm modelling software FLORIS. As the generation of sufficient high-fidelity data for network training purposes can be cost-prohibitive, the utility of multi-fidelity transfer learning has also been investigated. Specifically, a network pre-trained on the low-fidelity Gaussian wake model is fine-tuned in order to obtain accurate wake results for the mid-fidelity Curl wake model. The robustness and overall performance of wakeNet on various wake steering control and layout optimisation scenarios has been validated through power-gain heatmaps, obtaining at least 90% of the power gained through optimisation performed with FLORIS directly. We also demonstrate that when utilising the Curl model, wakeNet is able to provide similar power gains to FLORIS, two orders of magnitude faster (e.g. 10 minutes vs 36 hours per optimisation case). The wake evaluation time of wakeNet when trained on a high-fidelity CFD dataset is expected to be similar, thus further increasing computational time gains. These promising results show that generalised wake modelling with ML tools can be accurate enough to contribute towards active yaw and layout optimisation, while producing realistic optimised configurations at a fraction of the computational cost, hence making it feasible to perform real-time active yaw control as well as robust optimisation under uncertainty.
_Keywords_ Wake modelling, Deep Learning, Transfer Learning, Multi-fidelity, Wind Farm Optimisation
## 1 Introduction
Renewable energy sources currently account for around 26% of global energy production [1]. However, in order to reach Net Zero targets by 2050, the globally installed capacity for renewable energy generation must grow rapidly. In particular, the International Energy Agency (IEA) estimate that 390 GW of wind power must be added to the global energy grid every year until 2030, in order to reach Net Zero targets [2]. For context, in the record-breaking year of 2020, less than 100GW of wind power was added to the global grid. To meet these wind energy targets, we therefore not only need to build new wind farms, but also make existing farms more efficient. Wakes from upstream wind turbines can significantly decrease power production at downstream turbines due to reduced wind speeds and higher turbulent intensities. Over an entire wind farm, this effect can amount to a potential reduction of around 20% in the annual energy production [3], resulting in substantial economic losses [4].
In recent years, there have been extensive efforts to model turbine wakes in order to optimise wind farm configurations and improve the efficiency of existing wind farms through wake steering. Wake steering refers to the process of misaligning the upstream turbines to the direction of the wind (by changing the yaw angle) in order to steer the turbulent wake away from the downstream turbines. This has been shown to lead to overall wind farm power gains of up to 7% [5]. A variety of different analytical models exist for efficient wake modelling including the so-called Larsen [6], Jensen [7], Curl [8] and Gaussian [9] models, which have been incorporated into tools such as the FLORIS (FLOw Redirection and Induction in Steady-state) software package [10]. FLORIS computes
steady-state wakes in wind farms using a highly simplified physical description of the flow physics and is widely used in wake steering and wind farm optimisation studies (e.g. 11, 12, 13). Wind power outputs from analytical models have been shown to compare relatively well with high fidelity numerical models such as Reynolds-averaged Navier-Stokes (RANS) (14) and Large Eddy Simulations (LES) (9) (minimum mean absolute error of 10-16%), which are multiple orders of magnitude slower to execute. Although analytic models are not able to capture a detailed representation of the turbine velocity deficit, practitioners often compromise on accuracy to achieve more practical computational costs. This is particularly true in layout optimisation where the model needs to be executed a large number of times, and further still in the case of wake steering where the best results are achieved if this is performed in real time.
During the last two decades, the exponential advancement of CPU, as well as GPU architectures, has opened up the possibility of using neural networks to predict wakes, instead of the more traditional analytical or numerical based models. This is part of a more significant trend in using machine learning techniques to optimise and control wind generation in wind farms (15). A suitable neural network trained on high fidelity data has the potential to provide reliable results within seconds, predicting flow properties of a wind farm that would otherwise require orders of magnitude higher computational times. In our work, we apply both fully connected neural networks (FCNNs) and convolutional neural networks (CNNs) to model turbine wakes. FCNNs have already been successfully used for wake representation by training on a large dataset of RANS simulation outputs to produce 3D wake profiles for wind turbines in a single row with varying wind speed and turbulence intensity (TI) (16), by training on wakes from the analytical Jensen model to optimise wind farm layouts (17) and for wind farm optimisation based on generalised outputs with variable yaw, wind speed and turbulence (18). However, because wake representation can also be viewed as an image generation problem, in certain cases CNNs might be more appropriate due to their ability to accurately reproduce spatial features (19). CNNs have successfully been used to model flow fields around airfoils (20), as well as to generate power response surface maps (21), where the CNN is trained on FLORIS outputs generated using the Gaussian analytical model. However, even though making predictions with neural networks is more efficient than traditional models, they still require large amounts of training data. For example, entire response surface maps have been generated in order to capture the power production for a given layout across all inflow conditions, requiring over 10 million training samples (21). Thus when generating appropriate training datasets, there is still a trade-off between cost and accuracy. More recent work has also investigated the use of multi-fidelity wake models to reduce the computational time of turbine wake evaluation (22, 23).
In this work, we seek to address the compromise between computational expense and high-fidelity accuracy by using a novel multi-fidelity approach with transfer learning (TL) where a new network is trained more efficiently using prior knowledge from a previous network (24). This multi-fidelity transfer learning approach has been successfully used for solving PDEs and modelling flow past a wing (25, 26), but to the best of our knowledge this is the first time such an approach has been applied to the problem of wind energy optimisation. Specifically, we train our neural network first on wind field outputs generated with the simple Gaussian analytical model within FLORIS, and then use transfer learning with the more complex and more computationally expensive Curl model also implemented within FLORIS. In this way, we need to perform fewer simulations of the Curl model for the generation of training data and thus save computational time. In order to maximise the power generated by a wind farm, we must also estimate the power output from this wind field. A further novelty of this work is that instead of using an analytical or empirical approximation, we train a second neural network to predict the power and local TI for each turbine. Our efficient neural network framework can rapidly evaluate many different yaw angles and turbine location choices, and thus readily perform accurate optimisation in order to find the best possible power output.
The aim of this work is thus to construct a neural network framework with multi-fidelity functionality for turbine wake modelling, capable of accurate active yaw control and layout optimisation for considerably lower computational costs than traditional models. The remainder of this work is structured as follows: initially we describe the wake models used for synthesising the training datasets and then we define the machine learning methods used in the wake modelling. We then present results which validate the adopted methodology and demonstrate promising computational gains which can contribute towards more efficient, generalised wake optimisation.
## 2 Generating Training data
In this work, we use the wind plant simulation and optimisation software FLORIS (27) to generate the wake fields used to train our machine learning framework. Within FLORIS, there are multiple models that can be used to simulate wakes and in this work we generate training data using both the Gaussian analytical model (9) and the Curl model (8).
### Gaussian analytical model
The Gaussian analytical model is based on the principle of mass and momentum conservation and assumes that the wake deficit is normally distributed [9]. The normalised wake deficit is given by
\[\frac{\Delta u(x,y,z)}{u_{0}}=\left(1-\sqrt{1-\frac{C_{t}}{8K}}\right)\exp\left(- \frac{1}{2K}\left(\left(\frac{z-z_{h}}{d_{0}}\right)^{2}+\left(\frac{y}{d_{0}} \right)^{2}\right)\right), \tag{1}\]
where \(\Delta u(x,y,z)\) is the wind speed wake deficit, \(u_{0}\) the free-stream velocity, \(C_{t}\) the thrust coefficient, \(z_{h}\) the hub height, \(d_{0}\) the rotor diameter and
\[K=\left(k^{*}\frac{x}{d_{0}}+\epsilon\right)^{2}, \tag{2}\]
where \(k^{*}\) defines the growth of the wake determined from experimental and LES data, and \(\epsilon\) is the mass flow deficit rate at the rotor.
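To make Eqs. (1)–(2) concrete, the following minimal sketch evaluates the Gaussian wake deficit at a given location. The values chosen for the wake-growth rate `k_star`, the parameter `epsilon`, the thrust coefficient and the turbine dimensions are illustrative assumptions of this sketch, not the settings used by FLORIS or in the dataset described below.

```python
import numpy as np

def gaussian_wake_deficit(x, y, z, u0, ct, d0, z_h, k_star=0.04, epsilon=0.2):
    """Velocity deficit Delta_u of Eq. (1), with K from Eq. (2).

    x, y, z : downstream, spanwise and vertical coordinates (m)
    u0      : free-stream wind speed (m/s)
    ct      : turbine thrust coefficient
    d0      : rotor diameter (m)
    z_h     : hub height (m)
    """
    K = (k_star * x / d0 + epsilon) ** 2                       # Eq. (2)
    radial = ((z - z_h) / d0) ** 2 + (y / d0) ** 2
    normalised = (1.0 - np.sqrt(1.0 - ct / (8.0 * K))) * np.exp(-radial / (2.0 * K))
    return normalised * u0                                      # absolute deficit (m/s)

# Example: deficit on the wake centreline, five rotor diameters downstream,
# for NREL 5-MW-like dimensions (d0 = 126 m, hub height 90 m).
d0, z_h = 126.0, 90.0
du = gaussian_wake_deficit(x=5 * d0, y=0.0, z=z_h, u0=8.0, ct=0.8, d0=d0, z_h=z_h)
print(f"wake velocity at 5D: {8.0 - du:.2f} m/s")
```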
### Curl model
The Curl model is also included as part of FLORIS and is more complex and computationally expensive than the Gaussian analytical model. This model is derived by first considering the Reynolds-averaged Navier-Stokes (RANS) equation in the streamwise direction,
\[u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}+w\frac{\partial u }{\partial z}=-\frac{1}{\rho}\frac{\partial p}{\partial x}+\nu_{\rm eff}\left( \frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}+ \frac{\partial^{2}u}{\partial z^{2}}\right), \tag{3}\]
where all variables are time-averaged and \(u\) is the streamwise velocity, \(v\) the time-averaged spanwise velocity, \(w\) the wall-normal velocity, \(p\) the pressure, \(\rho\) the fluid density and \(\nu_{\rm eff}\) the effective viscosity. The velocities are then deconstructed into base components and perturbations, i.e. \(u=U+u^{\prime}\), where (\(\cdot\))\({}^{\prime}\) denotes the perturbation around the base solution component denoted by a capital letter. Linearising leads to
\[U\frac{\partial u^{\prime}}{\partial x}+V\frac{\partial(U+u^{\prime})}{ \partial y}+W\frac{\partial(U+u^{\prime})}{\partial z}=-\frac{1}{\rho}\frac{ \partial p}{\partial x}+\nu_{\rm eff}\left(\frac{\partial^{2}u^{\prime}}{ \partial x^{2}}+\frac{\partial^{2}(U+u^{\prime})}{\partial y^{2}}+\frac{ \partial^{2}(U+u^{\prime})}{\partial z^{2}}\right), \tag{4}\]
and making various simplifications including assuming the pressure gradient is zero (see [8] for more details), we obtain the final simplified equation,
\[U\frac{\partial u^{\prime}}{\partial x}+V\frac{\partial u^{\prime}}{\partial y }+W\frac{\partial(U+u^{\prime})}{\partial z}=\nu_{\rm eff}\left(\frac{ \partial^{2}u^{\prime}}{\partial x^{2}}+\frac{\partial^{2}u^{\prime}}{ \partial y^{2}}+\frac{\partial^{2}u^{\prime}}{\partial z^{2}}\right). \tag{5}\]
The latter is the equation solved by the Curl model to determine the wake deficit.
### Dataset
The training process of the core wake prediction Machine Learning module is based on a dataset of 2000 wakes generated using the Gaussian model and 2000 wakes generated using the Curl model. The range of initial wind speed, TI and yaw angle are [3, 15] m/s, [0.01, 0.2] and [-35, 35] degrees, respectively. The range of initial wind speeds is based on the power curve of the NREL 5-MW wind turbine where the lower operational limit is 3 m/s and little further power gains from yaw optimisation can be made for wind speeds larger than 12 m/s [28]. The TI limits are based on measured values [29]. Examples of the wakes generated using this method are shown in Figure 1. All 2000 wakes in the dataset were used for model training and 200 distinct wakes were used as a validation set.
Figure 1: Indicative Gaussian wakes from the 2000 wake dataset with their corresponding input vector (wind speed, TI, yaw angle).
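A dataset of this kind can be assembled with a simple sampling loop such as the sketch below. The sampling ranges follow the ones stated above; the `simulate_wake` function is a hypothetical placeholder for the call into the FLORIS Gaussian or Curl model and returns a dummy field here so the sketch runs end to end.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_samples, resolution = 2000, 200

def simulate_wake(wind_speed, ti, yaw):
    """Hypothetical placeholder for a FLORIS (Gaussian or Curl) wake evaluation
    returning a (resolution x resolution) hub-height velocity field."""
    return np.zeros((resolution, resolution))

inputs, fields = [], []
for _ in range(n_samples):
    ws = rng.uniform(3.0, 15.0)       # free-stream wind speed (m/s)
    ti = rng.uniform(0.01, 0.2)       # turbulence intensity
    yaw = rng.uniform(-35.0, 35.0)    # yaw angle (degrees)
    inputs.append([ws, ti, yaw])
    fields.append(simulate_wake(ws, ti, yaw))

np.savez("gaussian_wakes.npz", x=np.asarray(inputs), y=np.asarray(fields))
```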
## 3 Neural Network methods
In this work, we introduce a novel machine learning framework called _WakeNet_, which expands on our previous model [18] and aims at simulating multi-fidelity wake effects for yaw and layout optimisation. _WakeNet_ is composed of three core neural network modules: a Deep Decoder Network (DDN) and a Convolutional Neural Network (CNN), which are trained to reproduce the downstream wake profile for a given turbine, and additional Fully Connected Neural Networks (FCNNs), which are used for the prediction of the local turbulence intensity (TI) and the generated power. The code is available at [https://github.com/soanagno/wakenet](https://github.com/soanagno/wakenet).
### Wake profiles
#### 3.1.1 Deep Decoder Network (DDN)
The first component in WakeNet is the fully-connected DDN. Its architecture is shown in Figure 2 and consists of at least two hidden layers of 200 neurons each, depending on the wake model, and outputs an \(m\) by \(n\) grid of pixels that represent the downstream velocity domain of the wake, while the input of the network is a vector of three wake parameters, namely the free-stream wind speed, the TI and the yaw angle of the turbine which have been normalised using mean/standard deviation normalisations. Additionally, a batch-normalisation layer has been applied after each hidden layer, because re-normalising the training batch in that manner significantly increases the performance of the trained network [30]. For the activation functions, we use \(\tanh\) for the first two layers and a linear activation function for the output.
The DDN is trained on a 2D horizontal slice from the 3D FLORIS wake outputs (the methods used to generate these outputs are described in Section 2). The advantage of this is that it makes it significantly faster to train the model. Moreover given that the training data comes from FLORIS which implicitly considers the effect of the sea/land surface on the atmospheric boundary layer, this means that, despite being a 2D slice, our output layer contains information about the velocity boundary layer of the site.
We note further that the default resolution of FLORIS is a \(200\times 200\) grid, and thus we adopt this resolution for the output of the DDN. However, for higher resolution demands, the user can specify the desired resolution before training the model which will also produce an analytical wake dataset of that same resolution.
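A minimal PyTorch sketch of such a decoder is given below. It reflects the description above (two hidden layers of 200 neurons, batch normalisation after each hidden layer, tanh activations, and a linear output reshaped to the 200×200 velocity grid), but any additional hidden layers, input normalisation and training details are omitted, so it should be read as an illustration rather than the exact WakeNet implementation.

```python
import torch
import torch.nn as nn

class DeepDecoderNetwork(nn.Module):
    """Maps (wind speed, TI, yaw) to a 2D hub-height velocity field."""

    def __init__(self, n_inputs=3, hidden=200, resolution=200):
        super().__init__()
        self.resolution = resolution
        self.net = nn.Sequential(
            nn.Linear(n_inputs, hidden), nn.Tanh(), nn.BatchNorm1d(hidden),
            nn.Linear(hidden, hidden), nn.Tanh(), nn.BatchNorm1d(hidden),
            nn.Linear(hidden, resolution * resolution),  # linear output layer
        )

    def forward(self, x):
        # x: (batch, 3) of normalised wake parameters (speed, TI, yaw)
        out = self.net(x)
        return out.view(-1, self.resolution, self.resolution)

model = DeepDecoderNetwork()
params = torch.randn(8, 3)        # a batch of normalised input vectors
wakes = model(params)             # (8, 200, 200) velocity fields
```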
#### 3.1.2 Convolutional Neural Network (CNN)
In addition to the DDN, the WakeNet framework also incorporates a CNN architecture, which can also be deployed to generate the 2D wake velocity field. The CNN consists of a combination of fully connected and deconvolutional layers that reconstructs the flow field around a single wind turbine (Figure 3). The first layer is a fully connected layer, for which the input is the wind speed, TI and yaw angle of the turbine at hub height. The output of this layer is then reshaped into a \(3\times 3\) array and passed to the deconvolutional layers. There are six deconvolutional layers, each consisting of ConvTranspose2d with leaky ReLU activation [31] and followed by batch normalisation. The output of the CNN model is a two-dimensional array with \(200\times 200\) pixels, which is a reconstruction of the flow field with the same dimensions as the DDN output. The number of pixels is set by the network architecture and training data, and thus can be easily changed. We note that the adopted CNN architecture was chosen using a hyperparameter search, where different layer configurations were tested.
Figure 2: Deep Decoder Network (DDN) architecture of the WakeNet module. The latent vector of wake parameters is decoded to the downstream velocity domain.
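The CNN variant described in Section 3.1.2 can be sketched as follows. The block structure (a fully connected layer reshaped to a 3×3 feature map, followed by six ConvTranspose2d / leaky-ReLU / batch-norm blocks) follows the text above, but the channel counts are illustrative assumptions, and a final interpolation step is added here to reach exactly 200×200 pixels, which is a simplification of this sketch rather than part of the original architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def up_block(in_ch, out_ch):
    # One deconvolution block: ConvTranspose2d -> leaky ReLU -> batch norm
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.LeakyReLU(0.2),
        nn.BatchNorm2d(out_ch),
    )

class WakeCNN(nn.Module):
    """Decodes (wind speed, TI, yaw) into a 2D wake field via deconvolutions."""

    def __init__(self, n_inputs=3, resolution=200):
        super().__init__()
        self.resolution = resolution
        chans = [128, 128, 64, 32, 16, 8, 4]          # illustrative channel counts
        self.fc = nn.Linear(n_inputs, chans[0] * 3 * 3)
        self.deconv = nn.Sequential(
            *[up_block(chans[i], chans[i + 1]) for i in range(6)],
            nn.Conv2d(chans[-1], 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        h = self.fc(x).view(-1, 128, 3, 3)            # reshape to a 3x3 feature map
        h = self.deconv(h)                            # 3x3 -> 192x192 after 6 blocks
        h = F.interpolate(h, size=(self.resolution, self.resolution))
        return h.squeeze(1)                           # (batch, 200, 200)

field = WakeCNN()(torch.randn(4, 3))
```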
#### 3.1.3 Wake superposition
The neural networks above provide a method to predict wakes. Most 'analytical' wake models include an independent approach for the superposition of individual wakes in order to model an array of multiple wind turbines, as well as the interactions between them [32, 33]. For this study, a superposition algorithm based on the sum of squares (SOS) model is deployed for the combination of multiple individual wakes produced. The superposition is modified with an approximate method of calculating a uniform velocity at the hub of each turbine. The final domain represents a collage of the individual wakes that comprise the examined wind farm. The SOS model has the following form:
\[u_{i}=\left(1-\sqrt{\sum_{j=0}^{n}\left(1-\frac{u_{i,j}}{u_{\text{hub},j}}\right)^{2}}\right)u_{\infty}, \tag{6}\]
where \(u_{i}\) is the wind speed of the combined wakes at turbine \(i\), \(u_{i,j}\) is the wind speed at turbine \(i\) as influenced by the wake generated by turbine \(j\), \(u_{\text{hub},j}\) is the hub wind speed at turbine \(j\), \(n\) is the number of wakes impacting the location, and \(u_{\infty}\) is the wind velocity at the wind park inlet.
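In code, the superposition of Eq. (6) amounts to an elementwise sum of squared deficits over the individual wake fields, as in the sketch below. How the approximate uniform hub velocities are obtained is not reproduced here; they are simply passed in, and normalising each wake field by the hub speed of the turbine that generated it is an assumption of this sketch.

```python
import numpy as np

def sos_superposition(wake_fields, hub_speeds, u_inf):
    """Combine single-turbine wake fields with the sum-of-squares model, Eq. (6).

    wake_fields : list of 2D arrays; wake_fields[j] is the velocity field of the
                  wake generated by turbine j, evaluated on the farm grid
    hub_speeds  : hub velocity used to normalise each individual wake field
    u_inf       : free-stream velocity at the wind park inlet
    """
    deficit_sq = np.zeros_like(wake_fields[0])
    for u_j, u_hub in zip(wake_fields, hub_speeds):
        deficit_sq += (1.0 - u_j / u_hub) ** 2
    return (1.0 - np.sqrt(deficit_sq)) * u_inf
```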
### TI and Power prediction networks
For the final component of the WakeNet framework, we predict the power and TI from the flow fields generated in Section 3.1. In the optimisation model, FLORIS, the power generated by a wind turbine is calculated from the three-dimensional flow field. However, the DDN and CNN outlined above generate a two-dimensional horizontal slice of the wind field, meaning that the power generated by the three-dimensional turbine needs to be predicted using two-dimensional data. To solve this problem, an FCNN is trained to predict the power output of a wind turbine from the wind speed along a horizontal line upstream of the turbine (Figure 4). The TI of the flow field varies within the wind farm due to the turbulent wakes. Therefore, a second FCNN is used to predict the local TI at the turbines. As a result, through these FCNNs, it is possible to infer three-dimensional power and TI data from the two-dimensional flow field. The inputs to the TI and power predictor FCNNs are the wind speeds along a horizontal line that is 50 metres upstream of the turbine, the inflow TI and the turbine yaw angle. The wind speed line is parallel to the projected turbine rotor area and stretches along the whole diameter of the blades. The two FCNNs were trained on wind speeds produced by WakeNet and the corresponding FLORIS power output or local TI, respectively. The training data are generated from four example wind farms consisting of six turbines. The power and local TI are calculated from the three-dimensional flow field using FLORIS, while the extracted flow data is from a two-dimensional slice.
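The predictor input can be assembled from a 2D field as sketched below. Only the general recipe (wind speeds along a line one rotor diameter wide, 50 m upstream of the turbine, concatenated with inflow TI and yaw) follows the description above; the grid spacing, rotor diameter and number of sampled points are illustrative assumptions.

```python
import numpy as np

def predictor_input(field, x_t, y_t, ti, yaw, d0=126.0, dx=5.0, n_points=21):
    """Build the FCNN input for one turbine from a 2D hub-height field.

    field    : (ny, nx) velocity field, indexed as field[y_index, x_index]
    x_t, y_t : turbine position in metres (grid origin assumed at 0, spacing dx)
    """
    ix = int(round((x_t - 50.0) / dx))                    # line 50 m upstream
    ys = np.linspace(y_t - d0 / 2, y_t + d0 / 2, n_points)
    iys = np.clip(np.round(ys / dx).astype(int), 0, field.shape[0] - 1)
    speeds = field[iys, np.clip(ix, 0, field.shape[1] - 1)]
    return np.concatenate([speeds, [ti, yaw]])
```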
Figure 4: TI and power predictor (FCNN) architecture.
Figure 3: Convolutional Neural Network (CNN) architecture.
### Training
For all neural networks used in the WakeNet framework, we take the standard approach of training by minimising the mean squared error (MSE) loss with the Adam optimiser [30]. Training is performed on an Nvidia GeForce RTX 3060 and requires anywhere between 10 and 30 minutes, depending on the wake dataset size.
When training a neural network, there are a number of parameters that can be tuned to optimise training. We first consider the size of the mini-batch, which splits the training dataset into small batches used for each back-propagation step of the optimiser. Figure 5 shows that a mini-batch size equal to 1/4 of the full dataset performs better than training on the full dataset at once, achieving higher training and validation accuracy within fewer epochs (500 vs 2000 epochs, respectively).
Another parameter that can be optimised is the learning rate of the optimiser. Smaller learning rates may require more epochs to train, but larger learning rates may result in the solution being 'missed'. The optimal learning rate for both the DDN and CNN is 0.01. The optimal learning rate for the power predictor network is 0.003 for 150 epochs and for the TI predictor network is 0.0065 for 80 epochs. For both TI and power prediction networks, a learning rate scheduler was used to reduce the learning rate by a factor of 0.8, if the loss on the validation dataset did not decrease after 15 steps. This helped to further reduce the error on validation and test sets.
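In PyTorch, the optimiser, scheduler and mini-batching described above can be set up as sketched below, here reusing the `DeepDecoderNetwork` from the earlier sketch. The random tensors only stand in for the actual 2000-wake dataset, and the scheduler settings (factor 0.8, patience 15) are the ones quoted above for the TI/power predictor networks; validation handling is simplified.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

model = DeepDecoderNetwork()                      # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.8, patience=15)
criterion = torch.nn.MSELoss()

# Dummy stand-in for the 2000-wake dataset; mini-batches of 1/4 of the data.
data = TensorDataset(torch.randn(2000, 3), torch.randn(2000, 200, 200))
loader = DataLoader(data, batch_size=500, shuffle=True)

for epoch in range(500):
    model.train()
    for params, fields in loader:
        optimizer.zero_grad()
        loss = criterion(model(params), fields)
        loss.backward()
        optimizer.step()
    scheduler.step(loss.item())                   # in practice: the validation loss
```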
### Multi-fidelity transfer learning
The ultimate goal of this work is to create a framework that can be deployed on any wind farm site to produce rapid and accurate optimised predictions for the yaw setting of the turbines. The higher the fidelity of the wake model used in the final training, the more accurate the prediction. However, producing a large wake dataset (at least 2000 wakes) of high-fidelity CFD results for any turbine type is not a viable option, as that would require immense computational resources. In this work we demonstrate the capabilities of multi-fidelity transfer learning between the Curl model (playing the role of our higher fidelity model) and the Gaussian wake model (a computationally cheaper model).
Transfer learning focuses on leveraging the knowledge gained during training on one problem and applying it to a similar problem. One application of this technique is to fine-tune a pre-trained neural network in order to produce accurate results when the available dataset is limited. The main focus of this work is to transfer the knowledge from a low-fidelity, computationally cheap wake model, such as the Gaussian model, to the higher-fidelity Curl model, as shown in Figure 6. To perform the transfer learning, WakeNet is first trained on a dataset of 2000 wakes produced using the Gaussian model. The first two layers of the DDN are then 'frozen', meaning their weights are unchanged by the back-propagation algorithm during training, whilst the last layer is trained based on the new information contained in the Curl dataset.
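The freezing step can be expressed in a few lines of PyTorch, as sketched below for the DDN defined in the earlier sketch; the checkpoint filename is a hypothetical placeholder and fine-tuning on the Curl dataset then proceeds exactly as in the training sketch above.

```python
import torch

model = DeepDecoderNetwork()
# Hypothetical checkpoint of the Gaussian-pretrained weights.
model.load_state_dict(torch.load("gaussian_pretrained.pt"))

# Freeze the two pre-trained hidden blocks (Linear + Tanh + BatchNorm1d);
# only the output layer remains trainable on the small Curl dataset.
for layer in list(model.net.children())[:6]:
    for p in layer.parameters():
        p.requires_grad = False
# Note: BatchNorm running statistics still update in train mode; call .eval()
# on the frozen layers if they should be kept fixed as well.

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=0.01)
```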
Figure 5: Training loss/accuracy curves with and without mini-batching.
## 4 Results
To demonstrate the capabilities of WakeNet, several indicative cases are examined both for single and multiple wakes. The metric of mean absolute error (%) between FLORIS and each neural network submodule is used to assess the accuracy in evaluating the wake profile.
### Wake field comparison
Figures 7 and 8 show indicative single-turbine and multi-turbine wind farm cases, respectively. In each figure, the left column shows the evaluations produced after training on the 2000-wake Gaussian dataset, while the right column shows the evaluations after training on the 2000-wake Curl dataset. The absolute relative error (%) between the analytical and neural results is shown in Figures 7 and 8, and is given by:
\[\mathrm{Error}_{\%}=\left|\frac{u_{\mathrm{WakeNet}}-u_{\mathrm{FLORIS}}}{u_{ \mathrm{stream}}}\right|\times 100. \tag{7}\]
For the single wake case (Fig. 7), WakeNet is able to reproduce the exact solution from FLORIS for both wake models (Gaussian and Curl), with an average error of less than 2% for the Gaussian model and less than 1% for the Curl model, while the velocity profiles across three \(y\)-transects along the horizontal cross-sections of the physical domain agree with the exact transects.
The multiple wake case (Fig. 8) selected tests the performance of WakeNet on a dense wind farm configuration with high turbine yaw settings. As expected, the absolute error increases (up to 10%) in certain regions of the wake velocity domain (Fig. 8c), since for this case the superposition method also affects the results. However, the mean absolute error is less than 5%, and the \(y\)-transects for both the Gaussian and Curl models in Figure 8d agree well with only a few exceptions, mainly in regions where multiple wakes are superimposed.
### TI and Power networks results
The power generated by every turbine and the local TI at the turbine hub is calculated using the FCNNs as described in section 3.2. Figure 9(a) shows the power curve for a single turbine generated by FLORIS and the predicted power by the FCNN. The mean error of the power FCNN is 1.17% demonstrating its ability to accurately predict the power output. Figure 9(b) compares the power and local TI predictions of the FCNNs with FLORIS for the wind parks shown in Figure 8. The FCNN is able to accurately predict the power generated by the individual turbines, even when the turbine lies within a wake of an upstream turbine. The average power percentage error on the test set is 2.8%, and the local TI prediction of the FCNN has a mean TI prediction error of 7.6% on the test data. For 200 randomly generated velocity and TI inflow conditions, the average percentage error for the power and TI prediction of the wind park shown in Figure 8 is 3.9% and 8.5%, respectively.
Figure 8: Example multiple wake comparison for Gaussian (left) and Curl (right) models.
The concept of correcting for the TI value fluctuation downstream is illustrated in Figure 10, where four sequential turbines are separated by a very short distance of 2.5 wind turbine diameters. The neural network model without the TI prediction network gives a systematic error in the velocity deficit after each turbine, which leads to a 30% error within the last wake evaluation (Fig. 10(a)), while in the corrected case the TI network is able to maintain the error at a constant level under 3%.
Finally, the correlation of the power-gain contours produced by placing two turbines 5D apart in the downstream direction and varying their yaw angles is shown in Figure 11. Before introducing the TI and power prediction networks, the WakeNet contour does not match with the exact solution produced by FLORIS (not shown for brevity), but with the corrections of the TI and power network predictions, both the maximum power value and angular phase space position match, namely at \(\sim\)3.3 MW with 15 and 0 degrees of yaw (front and back turbines, respectively).
Figure 10: Wind farm wake prediction using Gaussian DDN of four sequential turbines with no TI network (a) and with the TI network enabled (b).
Figure 9: WakeNet Power and TI predictions to Gaussian FLORIS comparison; (a) turbine power curve; (b) wind farm predictions against turbine number (ordered by ascending x location, if turbines have the same x location, they are ordered by ascending y location).
### Multi-fidelity transfer learning wake results
The advantage of using a multi-fidelity transfer learning approach is shown in Figure 12, where the model accuracy obtained with different Curl training dataset sizes is compared between training on the Curl data alone and the transfer learning approach. Each curve in Figure 12 represents the minimum loss or the maximum accuracy achieved during training while varying the Curl dataset size from 20 to 100 wakes. The pretrained model achieves an accuracy of 99.5% for the CNN and 99.8% for the DDN when using only 100 wakes. An important observed characteristic of the training was that most trained models with accuracies lower than 99% have either noisy wind fields or inaccurate free-stream velocities, both of which would render the resulting wakes unsuitable for use in wind farm optimisations. For a genetic optimisation process involving the production of thousands of candidate turbine wakes, an accuracy of at least 99% was required in order for the solutions to converge within a reasonable amount of time. If noise is present, an optimisation process might not converge or might take significantly more iterations than the FLORIS optimiser to obtain an optimal solution for the yaw settings or the turbine placement. A final note is that generation of a Curl dataset of 2000 wakes requires over 7 hours to compute on a Ryzen 9 5900HK processor. Since only 100 wakes are required to achieve this level of accuracy, using transfer learning reduces that computational cost by a factor of 20. A full CFD dataset of at least 2000 wakes would require multiple days to produce, thus an order of magnitude reduction in computational time would significantly improve the ability to produce machine learning optimisation frameworks based on even higher-fidelity wake results.
Figure 13 qualitatively shows the advantage of transfer learning, where WakeNet is trained on a reduced Curl dataset of 100 wakes; the improvement in accuracy on a single-turbine wake domain is clear. The WakeNet transfer learning model is able to capture a significantly improved field, both for
Figure 11: Total power produced by two turbines 5D apart in the downstream direction, while their yaw varies; predictions of the neural network trained on Gaussian data (a) and FLORIS using Gaussian model (b).
Figure 12: Validation loss and validation accuracy plots of DDN training using the Curl model with (a) and without (b) transfer learning.
the velocity decrease within the turbine wake and the free stream domain. Moreover, the WakeNet model without transfer learning is not accurate enough to produce meaningful results for wind farm total power evaluation, meaning it is inappropriate for optimisation scenarios.
### Computational time scaling
The performance of WakeNet is further tested in terms of the computational time scaling for the superposition of up to 24 turbines. Fig. 14(a) shows the log-log time comparisons between FLORIS and WakeNet for the Gaussian model and Fig. 14(b) shows the same for the Curl model. As expected, for the Gaussian model, both WakeNet and FLORIS scale similarly, and WakeNet offers no computational time gains as both FLORIS and WakeNet can compute/evaluate a 24-turbine wind farm in under one second. However, for the Curl model, WakeNet is able to evaluate the wake field more than two orders of magnitude faster (0.5 s vs 40 s for the 24-turbine case). As before, the rate at which the computational time increases per superimposed turbine is the same for both WakeNet and FLORIS. We note that the computational times include the time taken to calculate the total wind farm power. However, to do this, WakeNet also produces the 2D velocity field array, which can easily be visualised with no significant time cost. FLORIS does not do this, and if the FLORIS computational time included the generation of this 2D velocity field array, the computational time gains would be significantly higher, even for the Gaussian wake case.
A similar performance is achieved by the CNN module and therefore, for simplicity, the DDN module is selected for the remaining results presented in this study.
### Optimisation
The yaw angle and layout optimisation scenarios considered in this study are performed on two different wind farm layouts, one 6-turbine layout (case A) and one dense 15-turbine layout (case B), as shown in Figure 15. This work uses SciPy's SLSQP optimiser [34] to optimise the total power generated by the wind farm. The exact
Figure 14: Logarithmic computational time scaling for the simulation of up to 24 turbines using Floris and WakeNet for the Gaussian (a) and Curl (b) models.
Figure 13: Indicative wake comparison using a limited Curl dataset of 100 wakes before (left) and after TL (right).
power output produced for the initial configurations, the optimal yaw settings and the optimal turbine positions in the domain are always calculated by FLORIS so that the performance between WakeNet and FLORIS can be compared.
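The yaw optimisation can be set up with SciPy as sketched below. The `farm_power` function is a hypothetical placeholder standing in for an evaluation with WakeNet (superposed wake fields plus the power-predictor FCNN) or with FLORIS for the reference solution; the bounds follow the yaw range used for training, and the dummy surrogate is only there so the sketch runs.

```python
import numpy as np
from scipy.optimize import minimize

def farm_power(yaw_angles):
    """Hypothetical placeholder: total farm power (MW) for the given yaw angles,
    evaluated with WakeNet or FLORIS; a dummy surrogate is used here."""
    return float(np.sum(np.cos(np.radians(yaw_angles))))

n_turbines = 6
x0 = np.zeros(n_turbines)                          # start from zero yaw
result = minimize(
    lambda yaw: -farm_power(yaw),                  # SLSQP minimises, so negate power
    x0,
    method="SLSQP",
    bounds=[(-35.0, 35.0)] * n_turbines,           # yaw range used in training
    options={"maxiter": 100},
)
optimal_yaw = result.x
```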
#### 4.5.1 Gaussian-based optimisation
In order to assess the capability of the neural network to obtain the optimal power gain, Figure 16 shows the corresponding total farm power heatmaps for the Gaussian-based optimisation. The heatmaps can be used to compare FLORIS optimised results (Exact) with those from WakeNet after both have been used to perform yaw optimisation for both farms A and B, as well as layout optimisation on farm A, under a range of TI and wind speed values. As evident from these heatmaps, the region with the highest potential power gain is around the bottom right corner, which is defined by low TI and high inlet speeds.
For the two yaw cases, WakeNet optimisation finds optimal yaw settings which yield at least 90% of the power gain achieved by the FLORIS optimiser. For the layout optimisation case, the overall performance of WakeNet is similar to FLORIS, with some noise visible in the power gain region produced by both optimisers. Furthermore, the average computational time cost for each optimisation heatmap is of the same order of magnitude for both FLORIS and WakeNet optimisers (Table 1), as expected based on the time scaling of Figure 14.
#### 4.5.2 Curl-based optimisation
The Curl-based WakeNet is used for producing similar optimisation heatmaps for layout A. Due to the high computational demands of the Curl model, the Curl-based optimisation was performed within the range of high potential power gain regions that were previously identified by the Gaussian-based optimisation heatmaps. Figure 17 shows that the optimisation produced by the exact FLORIS solutions correlates very highly with both WakeNet (trained on 2000 Curl wakes) and WakeNet TL (trained on 100 Curl wakes), where both neural network models provide optimised yaw angles which yield more than 90% of the potential power of the exact solution. The WakeNet model trained on 100 wakes without TL, however, produces less than 30% of that power gain, which is expected based on the lower training accuracy and the qualitative results of section 4.3.
Nonetheless, the significance of these results lies in the corresponding computational time heatmaps and their time averaged values in Table 2. While FLORIS requires 25 minutes on average for each optimisation (pixel) of the presented heatmaps, WakeNet requires 35 seconds on average.
Moreover, two indicative layout optimisations are performed for both layouts A and B of Figure 15, with a fixed wind speed of 11 m/s and a TI of 0.05 (Table 3). The optimal layouts of the FLORIS and WakeNet optimisers are shown in Figure 18. Even though the optimal solution is not unique (hence the different converged layouts between the two optimisers), the adopted strategy is similar: the turbines are spread in the downstream and vertical directions to minimise blockage losses. Based on the optimisation results of Table 3, the WakeNet optimiser captures 46.35% out of 47.1% and 60.83% out of 61.28% of the potential power gain obtained by FLORIS for layouts A and B, respectively. To provide these power gains for layout A (Fig. 18(a)), WakeNet requires less than a minute compared to the two hours required by FLORIS. For layout B, the time required by the FLORIS optimiser is 36 hours compared to 13 minutes required by the WakeNet optimiser. Hence the total computational time gains are higher than two orders of magnitude, which is consistent with the computational gains in the time-scaling study of section 4.4. Note that the layout optimisation time grows exponentially as the number of turbines increases due to the increasing combinations of wake interactions.
In summary, we believe that the computational time gains presented for the Curl-based optimisation scenarios constitute an important stepping stone towards real-time yaw optimisation and more complex layout optimisation, e.g. optimisation under uncertainty, using high-fidelity wake models.
\begin{table}
\begin{tabular}{|c|c|} \hline Model & Case A: avg. time (s) \\ \hline FLORIS & 1500 \\ \hline WakeNet 100 & 38 \\ \hline WakeNet 100-TL & 32 \\ \hline WakeNet & 41 \\ \hline \end{tabular}
\end{table}
Table 2: Curl average yaw optimisation timings.
Figure 17: Yaw optimisation heatmaps. A comparison between the power gain and the computational cost between the exact solution and WakeNet with Curl datasets of 2000 wakes, 100 wakes with TL and 100 wakes without TL.
## 5 Conclusions
The novel ML framework WakeNet can reproduce generalised 2D turbine wake fields at hub height over a wide range of yaw settings, wind speeds and turbulence intensities, with a mean accuracy of 99.8% compared to the downstream velocity domain produced by either the Gaussian or Curl models of FLORIS. Two FCNNs are deployed to approximate the complex 3D quantities of local TI and generated power from the 2D flow predictions of WakeNet. These additional networks are capable of predicting the local TI and power values with a mean accuracy of 98%. All of the ML modules making up WakeNet are validated through multiple superposition examples.
The computational cost scaling of WakeNet as the number of superimposed turbines increases is of the same order of magnitude as FLORIS with the Gaussian wake model. However, when WakeNet is trained on the more sophisticated Curl wake model, the computational time gains over FLORIS with the Curl wake model reach two orders of magnitude. The wake evaluation time of WakeNet when trained on ever higher-fidelity CFD datasets is expected to be similar, thus further increasing the computational time gains. This will be the topic of future research. Furthermore, the robustness and overall performance of WakeNet on various yaw and layout optimisation scenarios across a range of wind speeds and TIs has been validated through power-gain heatmaps. When trained on the Gaussian model, WakeNet is able to reproduce similar optimal configurations (obtaining at least 90% of the power gained by FLORIS optimisation), at a similar order of computational cost, as expected. However, when considering the Curl model, WakeNet provides similar power gains at least two orders of magnitude faster than FLORIS on average, which is also indicated by the computational-cost scaling results.
Finally, multi-fidelity transfer learning has been deployed for fine-tuning the network weights that have been pre-trained on the low-fidelity wake model (Gaussian dataset of 2000 wakes) to obtain accurate wake results for a high-fidelity wake model using a limited dataset (Curl dataset of 100 wakes). The trained transfer learning network has been deployed for indicative yaw and layout optimisation, where it finds the optimal configurations two orders of magnitude faster than FLORIS on average, accelerating optimisations that took 1-2 hours down to a few seconds.
These promising results show that generalised wake modelling with machine learning tools can be accurate
Figure 18: Curl-based optimised layout produced by FLORIS and WakeNet optimisers for 6-turbine (a) and 15-turbine wind farms (b). The dashed line indicates the spatial constraint.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Case/Model & A/FLORIS & A/WakeNet & B/FLORIS & B/WakeNet \\ \hline Initial Power (MW) & 4.06 & 4.06 & 42.2 & 42.2 \\ \hline Power Gain (\%) & 47.1 & 46.35 & 61.28 & 60.83 \\ \hline Comp. Cost (min) & 121 & 0.5 & 2160 & 13 \\ \hline \end{tabular}
\end{table}
Table 3: Curl layout optimiser information.
enough to contribute towards active yaw and complex layout optimisation applications. Furthermore, the neural network models can be trained on high-fidelity wake models to produce more realistic optimised configurations at a fraction of the computational time, rendering real-time applications in active yaw optimisation possible. Multi-fidelity transfer learning techniques can be useful in producing similar flow and optimisation results using limited high-fidelity datasets, when the solution times would not allow for the creation of a large dataset (e.g. 2000 wakes produced by LES solvers is a computationally extremely expensive task). The proposed methodology could enable maximising wind farm power gains at a minimal installation/operation cost. Planned future implementations include the addition of extra network inputs predictors such as veer and wind direction; the introduction of higher-fidelity TL steps using CFD datasets to further improve realistic wake approximations; testing of our framework on real wind-farm scenarios; parallel computing of forward evaluations for faster optimisation.
|
2305.14935 | Modeling Appropriate Language in Argumentation | Online discussion moderators must make ad-hoc decisions about whether the
contributions of discussion participants are appropriate or should be removed
to maintain civility. Existing research on offensive language and the resulting
tools cover only one aspect among many involved in such decisions. The question
of what is considered appropriate in a controversial discussion has not yet
been systematically addressed. In this paper, we operationalize appropriate
language in argumentation for the first time. In particular, we model
appropriateness through the absence of flaws, grounded in research on argument
quality assessment, especially in aspects from rhetoric. From these, we derive
a new taxonomy of 14 dimensions that determine inappropriate language in online
discussions. Building on three argument quality corpora, we then create a
corpus of 2191 arguments annotated for the 14 dimensions. Empirical analyses
support that the taxonomy covers the concept of appropriateness
comprehensively, showing several plausible correlations with argument quality
dimensions. Moreover, results of baseline approaches to assessing
appropriateness suggest that all dimensions can be modeled computationally on
the corpus. | Timon Ziegenbein, Shahbaz Syed, Felix Lange, Martin Potthast, Henning Wachsmuth | 2023-05-24T09:17:05Z | http://arxiv.org/abs/2305.14935v1 | # Modeling Appropriate Language in Argumentation
###### Abstract
Online discussion moderators must make ad-hoc decisions about whether the contributions of discussion participants are _appropriate_ or should be removed to maintain civility. Existing research on offensive language and the resulting tools cover only one aspect among many involved in such decisions. The question of what is considered appropriate in a controversial discussion has not yet been systematically addressed. In this paper, we operationalize appropriate language in argumentation for the first time. In particular, we model appropriateness through the absence of flaws, grounded in research on argument quality assessment, especially in aspects from rhetoric. From these, we derive a new taxonomy of 14 dimensions that determine inappropriate language in online discussions. Building on three argument quality corpora, we then create a corpus of 2191 arguments annotated for the 14 dimensions. Empirical analyses support that the taxonomy covers the concept of appropriateness comprehensively, showing several plausible correlations with argument quality dimensions. Moreover, results of baseline approaches to assessing appropriateness suggest that all dimensions can be modeled computationally on the corpus.
## 1 Introduction
People have varying degrees of sensitivity to controversial issues and may show different emotional responses depending on the issue and the opponents' arguments (Walton, 2010). This often makes it hard to maintain a constructive discussion. In competitive debates, a moderator ensures that participants argue _appropriately_. Debating culture, dating back to the 18th century, demands appropriate behavior, such as staying on topic and avoiding overly emotional language (Andrew, 1996). Accordingly, Wachsmuth et al. (2017) define arguments to be appropriate if they support credibility and emotions and match the issue.
Similarly, in many online forums, moderators ensure a certain level of civility in the discussions. What arguments are considered civil may differ from community to community. The task of discussion moderation thus requires ad-hoc decisions about the appropriateness of any contributed argument, calling out the inappropriate ones--a challenging task to master. Moreover, the amount of moderation required on the web necessitates automation of this task, as the resources for manual moderation are usually insufficient.
Figure 1 shows two exemplary arguments, assessed by human annotators. The inappropriate
Figure 1: Two arguments from the corpus introduced in this paper, one appropriate and one inappropriate. The used colors match the taxonomy concepts we present in Section 3: toxic intensity (dark red), unclear meaning (orange), and missing openness (light purple).
argument appeals excessively to emotions, is not easily understandable, and shows little interest in the opinion of others. Note that the last sentence of the argument is also a personal attack, a special case of inappropriate emotional language. Hence, multiple inappropriateness aspects can occur at the same time. The appropriate argument, on the other hand, does not contain any of these issues.
Most previous work on automatic content moderation has focused on detecting offensive content (Schmidt and Wiegand, 2017; Poletto et al., 2021). However, to create a climate in which controversial issues can be discussed constructively, combating only offensive content is not enough, since there are also many other forms of inappropriate arguments (Habernal et al., 2018). While the notion of appropriateness is treated in argumentation theory as an important subdimension of argument quality (see Section 2), there has been no systematic study of appropriateness, let alone a clear definition or operationalization. These shortcomings hinder the development of automatic moderation tools.
In this paper, we present a taxonomy of 14 inappropriateness dimensions, systematically derived from rhetoric (Burkett, 2011) and argument quality theory (Wachsmuth et al., 2017), along with a corpus annotated for the dimensions. Matching elements of the concept of reasonableness by van Eemeren (2015), we argue appropriateness to be a minimal quality property that is necessary for any argument to consider it valuable in a debate.
We motivate the 14 dimensions empirically in Section 3 by analyzing interactions of low appropriateness with other quality issues of arguments, and we further refine the dimensions on this basis. To operationalize the taxonomy, we create a new corpus of 2191 arguments from debates, question-answering forums, and reviews (Section 4). The arguments are compiled from three existing argument quality corpora (Habernal and Gurevych, 2016; Wachsmuth et al., 2017; Ng et al., 2020), such that they cover both a variety of topics and selected topics in depth. All arguments are manually labeled for the dimensions in a human annotation study.
Given the new corpus, we analyze correlations between the 14 dimensions and the argument quality dimensions in the source corpora in Section 5. Several plausible correlations support that our taxonomy successfully aligns with the theoretical and practical quality aspects modeled in previous work. To gain insights into how well the proposed dimensions can be predicted automatically, we also evaluate first baseline approaches to the computational assessment of appropriateness (Section 6). The results do not fully compete with the average human performance. However, they show large improvements over basic baselines on all dimensions while suggesting that a semantic understanding of arguments is required for the task.
Altogether, this paper's main contributions are:1
Footnote 1: The corpus and experiment code can be found under: [https://github.com/webis-de/ACL-23](https://github.com/webis-de/ACL-23)
* A theory-based taxonomy that specifies inappropriate language in online discussions
* A corpus with 2191 arguments from three different genres, manually annotated for the 14 taxonomy dimensions
* Empirical insights into the relation of appropriateness to previously studied quality dimensions and into its computational predictability
## 2 Related Work
The notion of appropriateness has been explored in several sub-disciplines of linguistics. In communicative competence research, Hymes et al. (1972) considered the knowledge about cultural norms as a requirement to produce appropriate speech, which is a central part of acquiring communicative competence. Defining sociolinguistics, Ranney (1992) linked appropriateness to the notion of politeness that is required in various social settings. Later, Schneider (2012) argued that appropriateness is a more salient notion than politeness as it explicitly accounts for the context. Some of these cultural speech properties were identified as linguistic etiquette by Jdetawy and Hamzah (2020), including correct, accurate, logical, and pure language.
Regarding the discussion of controversial issues, debating culture has required participants since its origins to stay on topic and to avoid offensive and overly emotional formulations (Andrew, 1996). Likewise, Blair (1988) differentiates between good and bad bias in argumentation, where the latter exhibits close-mindedness, distortion of the conversation, or an imbalance of pro and con arguments. Similarly, Walton (1999) introduced the concept of dialectical bias, explicitly addressing the context in which an argument is judged to be appropriate. This perspective on argumentation is also described by Burkett (2011) as "[...] making appropriate choices in light of situation and audience."
As a sub-dimension of argument quality, appropriateness was first studied in NLP by Wachsmuth et al. (2017), a significant inspiration for our work. The authors derived appropriateness as one of the rhetorical argument quality dimensions based on the work of Aristotle (2007). While several of the quality dimensions they proposed were addressed explicitly in previous work, the appropriateness dimension has not been systematically assessed until now. Wachsmuth et al. (2017) only provided a relatively shallow definition of appropriateness that requires a simultaneous assessment of three properties, namely the creation of _credibility_ and _emotions_ as well as _proportionality_ to the issue. In contrast, we model these properties individually (in addition to several other dimensions) to better understand what exactly impacts appropriateness.
Computationally, only Wachsmuth and Werner (2020) tried to predict appropriateness alongside all the other quality dimensions of Wachsmuth et al. (2017). However, their models relied on a rather small sample of 304 arguments. In comparison, our corpus consists of 2191 arguments spanning three argumentative genres, providing deeper insights into the appropriateness of an argument. Related to this notion is the convincingness of arguments studied by Habernal and Gurevych (2016a, 2016b) which correlates with appropriateness Wachsmuth et al. (2017), as well as the effectiveness of arguments Ng et al. (2020); Lauscher et al. (2020).
In the context of appropriateness, Walton (2010) explored the notion of emotional fallacies in reasoning, some of which were later assessed computationally Habernal et al. (2017); Alhindi et al. (2022); Jin et al. (2022); Goffredo et al. (2022). Although we consider some of these fallacies in our work, we also consider other dimensions and exclude some irrelevant to appropriateness (i.e., logical fallacies) because of their more technical nature.
## 3 Modeling Appropriateness
This section explains how we established the relevant dimensions of appropriateness by systematically analyzing research on argument quality.
### Appropriateness and Argument Quality
To learn what makes an argument _(in)appropriate_, we analyzed the interaction of appropriateness with other quality dimensions in the 304 arguments of Wachsmuth et al. (2017). We selected the dimensions that correlated most with appropriateness according to Pearson's \(r\). These include the four sub-dimensions of rhetorical effectiveness (besides appropriateness), namely, _credibility_ (.49), _emotional appeal_ (.30), _clarity_ (.45), and _arrangement_ (.48), as well as _local acceptability_ (.54) (sub-dimension of logical cogency) and _global acceptability_ (.59) (sub-dimension of dialectical reasonableness). We then counted the number of arguments with the lowest quality rating for both appropriateness and the other dimensions as we expected the most notable differences in those instances.
Figure 2 illustrates the absolute cooccurrence of flawed arguments for the selected dimensions. Uniquely, appropriateness flaws always occur with at least one other flawed rhetorical dimension in all 43 cases, and low acceptability in nearly all cases.
Consequently, we manually analyzed arguments by contrasting pairs of arguments with and without low appropriateness to find patterns that describe what drives the low appropriateness levels within these dimensions. For example, to model the overlap of appropriateness with credibility, we compared the 29 arguments with only low credibility in Figure 2 (a) to the 39 (\(=2+1+6+14+7+9\)) arguments with low appropriateness and credibility.
Figure 2: Venn diagrams showing the absolute counts of low-quality arguments in the corpus of Wachsmuth et al. (2017) in terms of appropriateness and other dimensions: (a) The sub-dimensions of rhetorical effectiveness. (b) Local acceptability and global acceptability.
Concretely, we compared them incrementally, starting from arguments that do not have low values in any quality dimension except appropriateness and credibility, proceeding to those with exactly one other low value, and so forth until we reach the 14 arguments that have low values in all dimensions.
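The co-occurrence counts underlying Figure 2 can be reproduced mechanically from the per-dimension quality scores. The following minimal sketch assumes a table with one row per argument and one column per quality dimension on the 1-3 scale of Wachsmuth et al. (2017b), with 1 the lowest rating; the file and column names are placeholders, not the actual corpus format.

```python
import pandas as pd

# Hypothetical input: one row per argument, 1-3 quality ratings per dimension.
df = pd.read_csv("dagstuhl_15512_argquality.csv")  # assumed file name

dimensions = ["credibility", "emotional_appeal", "clarity",
              "arrangement", "local_acceptability", "global_acceptability"]

low_app = df["appropriateness"] == 1  # lowest possible rating
for dim in dimensions:
    low_dim = df[dim] == 1
    print(f"{dim}: low together with appropriateness = {(low_app & low_dim).sum()}, "
          f"low {dim} alone = {(~low_app & low_dim).sum()}")
```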
### Defining Inappropriateness
The findings from our analysis led to four core inappropriateness dimensions in our taxonomy: We deem an argument _inappropriate_ (in light of its discussion context) if it is _missing commitment_ of its author to the discussion, uses _toxic emotions_, is _missing intelligibility_, or seems inappropriate for _other reasons_. We detail each in the following:
**Toxic Emotions.** We model _toxic emotions_ based on the emotional fallacies identified by Walton (2010): ad populum, ad misericordiam, ad baculum, and ad hominem. We merged these four into a single sub-dimension called _emotional deception_ based on the results of a pilot annotation study (Section 4). Additionally, we define a sub-dimension _excessive intensity_ to address overly intense emotions. In particular, our analysis revealed the presence of a subset of propaganda techniques, including loaded language, flag-waving, repetition, exaggeration, and minimization Da San Martino et al. (2020).
**Missing Commitment.** This dimension resembles the _credibility_ dimension of Wachsmuth et al. (2017), but it differs in that we do not mandate arguments to come from or include a trusted source. Rather, the arguments should demonstrate the participant's general interest in participating in the debate. To formalize this concept, we drew on the five rules for "A Good Dialogue" (Walton, 1999) to create two sub-dimensions of commitment, _missing seriousness_ and _missing openness_, by examining the extent to which they apply to the arguments identified in the overlap analysis.
**Missing Intelligibility.** The core dimension _missing intelligibility_ results from the overlap analysis of the _clarity_ and _arrangement_ dimensions of Wachsmuth et al. (2017). We found that the main point of an argument was partly unclear either due to (un)intentional vagueness or overly (un)complex language, which we refer to in our taxonomy as the sub-dimension _unclear meaning_. Also, derailing a discussion to another issue is a common problem (represented by the sub-dimension _missing relevance_). Finally, in some cases the individual claims and premises were intelligible but not their connection. We refer to this as _confusing reasoning_.
**Other Reasons.** This dimension accounts for reasons that do not fit into the other core dimensions. As part of this, we observed that some arguments have a _detrimental orthography_, limiting intelligibility in some cases (spelling or grammatical errors) or increasing emotions in others (capital letters, repeated exclamation points). We leave any other case of inappropriateness as _reason unclassified_.
Figure 3 depicts the final taxonomy of all 14 dimensions we propose. We hierarchically decompose _inappropriateness_ into the four core dimensions and those further into the nine discussed sub-dimensions to obtain a nuanced understanding of inappropriateness. The argument-centric focus of our taxonomy allows annotators to quickly formulate reasons for inappropriateness in the form "\(a\) is inappropriate because of \(\sigma\)", where \(a\) is an argument and \(\sigma\) a specific sub-dimension from the taxonomy. We define each dimension below.
### A Hierarchical Taxonomy
Since _appropriateness_ itself is already discussed in the literature, we refrain from redefining it here. Instead, we build on Wachsmuth et al. (2017) who state that an argument "has an appropriate style if the used language supports the creation of credibility and emotions as well as if it is proportional to the issue." Their annotation guidelines further suggest that "the choice of words and the grammatical complexity should [...] appear suitable for the topic discussed within the given setting [...], matching the way credibility and emotions are created [...]".

Figure 3: Proposed taxonomy of inappropriate language in argumentation, with 14 dimensions and sub-dimensions. The colors are aligned with the argument quality dimensions used to derive them (Figure 2).
While our goal is to model appropriate language in argumentation, we decided to define when an argument is _not_ appropriate (as indicated above) to maintain freedom of speech as much as possible. Therefore, we define the four core dimensions and their sub-dimensions from Figure 3 in a "reverse" way, clarifying what is considered _in_appropriate:
**Toxic Emotions (TE).** An argument has toxic emotions if the emotions appealed to are deceptive or their intensities do not provide room for critical evaluation of the issue by the reader.
* _Excessive Intensity (EI)_. The emotions appealed to by an argument are unnecessarily strong for the discussed issue.
* _Emotional Deception (ED)_. The emotions appealed to are used as deceptive tricks to win, derail, or end the discussion.
**Missing Commitment (MC).** An argument is missing commitment if the issue is not taken seriously or openness to others' arguments is absent.
* _Missing Seriousness (MS)_. The argument is either trolling others by suggesting (explicitly or implicitly) that the issue is not worthy of being discussed or does not contribute meaningfully to the discussion.
* _Missing Openness (MO)_. The argument displays an unwillingness to consider arguments with opposing viewpoints and does not assess the arguments on their merits but simply rejects them out of hand.
**Missing Intelligibility (MI).** An argument is not intelligible if its meaning is unclear or irrelevant to the issue or if its reasoning is not understandable.
* _Unclear Meaning (UM)_. The argument's content is vague, ambiguous, or implicit, such that it remains unclear what is being said about the issue (it could also be an unrelated issue).
* _Missing Relevance (MR)_. The argument does not discuss the issue, but derails the discussion implicitly towards a related issue or shifts completely towards a different issue.
* _Confusing Reasoning (CR)_. The argument's components (claims and premises) seem not to be connected logically.
**Other Reasons (OR).** An argument is inappropriate if it contains severe orthographic errors or for reasons not covered by any other dimension.
* _Detrimental Orthography (DO)_. The argument has serious spelling and/or grammatical errors, negatively affecting its readability.
* _Reason Unclassified (RU)_. There are any other reasons than those above for why the argument should be considered inappropriate.
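For annotation tooling and later evaluation scripts, the hierarchy above can be captured in a small mapping. The sketch below is one possible encoding of Figure 3 (abbreviations follow the definitions above); the gating function mirrors the idea that sub-dimensions are only asked for once the corresponding core dimension has been marked.

```python
# Hierarchical taxonomy of Figure 3: core dimensions and their sub-dimensions.
TAXONOMY = {
    "TE": ("Toxic Emotions", ["EI", "ED"]),
    "MC": ("Missing Commitment", ["MS", "MO"]),
    "MI": ("Missing Intelligibility", ["UM", "MR", "CR"]),
    "OR": ("Other Reasons", ["DO", "RU"]),
}
SUB_DIMENSIONS = {
    "EI": "Excessive Intensity",   "ED": "Emotional Deception",
    "MS": "Missing Seriousness",   "MO": "Missing Openness",
    "UM": "Unclear Meaning",       "MR": "Missing Relevance",
    "CR": "Confusing Reasoning",   "DO": "Detrimental Orthography",
    "RU": "Reason Unclassified",
}

def askable_sub_dimensions(core_labels: dict) -> list:
    """Sub-dimensions shown to an annotator, given which core dimensions were marked."""
    return [sub for core, (_, subs) in TAXONOMY.items()
            if core_labels.get(core) for sub in subs]

# Example: an annotator marked Toxic Emotions and Missing Intelligibility as given.
print(askable_sub_dimensions({"TE": True, "MI": True}))
```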
## 4 The Appropriateness Corpus
This section details the data acquisition and annotation process of our _Appropriateness Corpus_ and provides statistics of the collected annotations. Statistics of our corpus split by argument source are found in Appendix F.
### Data Acquisition
Studying the applicability of our taxonomy requires a set of arguments that is both diverse and sufficiently large. We rely on manually labeled examples of reasonable quality to ensure that our corpus only contains argumentative texts. In particular, we collected all 2191 arguments on 1154 unique issues from existing corpora (Habernal and Gurevych, 2016; Wachsmuth et al., 2017; Ng et al., 2020).2 All corpora are used in research on argument quality assessment (Habernal and Gurevych, 2016; Wachsmuth and Werner, 2020; Lauscher et al., 2020) and contain annotations that we identified as related to appropriateness:
Footnote 2: The arguments from Wachsmuth et al. (2017b) are a subset of those from Habernal and Gurevych (2016b) with additional annotations. We include each argument once only.
* The Dagstuhl-15512 ArgQuality corpus (Wachsmuth et al., 2017b) covers appropriateness and its most correlated dimensions.
* The UKPConvArg2 (Habernal and Gurevych, 2016a) corpus has reason labels for why argument \(a\) is more convincing than argument \(b\).
* The GAQCorpus (Ng et al., 2020) covers four argument quality dimensions, including effectiveness, the "parent" of appropriateness.
We carefully selected the source corpora such that about \(50\%\) of the arguments belong to only 16
issues while the rest covers the remaining 1138 issues, making our corpus valuable both vertically (issues with many arguments allow deeper analyses) and horizontally (large number of issues promotes generalizability). The arguments have an average length of 4.8 sentences. The corpus includes arguments of three genres, 1590 from debate portals, 500 from question answering forums, and 101 reviews.
### Annotation Process
We designed a task-specific annotation interface that leverages the hierarchical structure of the taxonomy in Figure 3. Specifically, annotators needed to label sub-dimensions only if they had marked the respective core dimension as given for an argument. Following Wachsmuth et al. (2017), we used an ordinal scale for the _inappropriateness_ dimension described as (1) fully inappropriate, (2) partially (in)appropriate, and (3) fully appropriate.
Likewise, a binary yes/no scale was used for all the other dimensions, where _yes_ means inappropriateness in terms of the respective dimension. Annotators were required to select a reason (core dimension) from the taxonomy only for partially or fully inappropriate arguments. We provided a coherent and self-descriptive interface (see Appendix D) to reduce the cognitive load on the annotators. The annotators also had the opportunity to provide their own reasons for the _reason unclassified_ dimension.
We conducted two rounds of annotation to find qualified annotators. In the first round, eight native English speakers hired on _Upwork_ and two authors of this paper (5 female, 5 male in total) each annotated 100 arguments, randomly sampled from our corpus. Based on the results and feedback on the annotation interface and the guidelines, we refined our taxonomy, most notably reducing the number of dimensions from 18 to 14. For the second round, we selected the three Upwork annotators with the highest expert correlations (2 female, 1 male). We paid $13 per hour for annotating all 2191 arguments, as we did in the first round. To mitigate the cognitive overload entailed by prolonged reading, we divided the annotation into 14 batches of roughly 150 arguments each and limited the number of batches to be annotated per day to one.
### Corpus Statistics and Agreement
To combine the annotators' labels in our corpus, we first use MACE (Hovy et al., 2013) in order to consider the annotators' reliability. We then compute Krippendorff's \(\alpha\) between the MACE labels and those obtained with either of three combination strategies: _Liberal_ considers an argument appropriate if at least one annotator marked it as such. _Majority_ considers the label for which at least two annotators agree. _Conservative_, finally, considers an argument inappropriate if at least one annotator marked it as such. Table 2 shows that the MACE labels correlate best with the conservative labels in all cases. Consequently, to obtain the final corpus annotations, we combined the three labels of each argument following the conservative strategy. This strategy also seems most consistent with the current belief system in many societies around the world, that is, to accommodate minorities in language.
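A minimal sketch of the three combination strategies, assuming each argument carries three binary annotator labels (1 = marked inappropriate); the MACE-based reliability weighting itself is omitted here.

```python
import numpy as np

def combine(labels: np.ndarray, strategy: str) -> np.ndarray:
    """labels: (n_arguments, 3) binary matrix, 1 = marked inappropriate."""
    votes = labels.sum(axis=1)
    if strategy == "liberal":        # appropriate if at least one annotator says so
        return (votes == 3).astype(int)   # inappropriate only if all three agree
    if strategy == "majority":       # label on which at least two annotators agree
        return (votes >= 2).astype(int)
    if strategy == "conservative":   # inappropriate if at least one annotator says so
        return (votes >= 1).astype(int)
    raise ValueError(strategy)

# Example: three annotators, four arguments
labels = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 0], [1, 1, 1]])
for s in ("liberal", "majority", "conservative"):
    print(s, combine(labels, s))
```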
Table 1: Overview of the corpus annotations: (a) counts of inappropriate (Yes) vs. appropriate (No) labels per dimension, (b) full agreement and Krippendorff's \(\alpha\), and (c) Kendall's \(\tau\) correlations between all 14 dimensions. For example, _Inappropriateness_: 1182 Yes, 1009 No, 60% full agreement, \(\alpha=.45\); _Toxic Emotions_: 594 Yes, 1597 No, 77% full agreement, \(\alpha=.36\).
Table 1(a) presents the corpus distribution of the annotations aggregated conservatively. For readability, we binarized the overall inappropriateness in the table, considering both fully and partially inappropriate arguments as inappropriate. 1182 arguments were considered at least partially (in)appropriate (540 of them fully inappropriate).
Among the reasons given, _missing intelligibility_ is the most frequent core dimension (774 arguments) and _missing openness_ the most frequent sub-dimension (658), matching the intuition that a missing openness to others' opinions is a key problem in online discussions. The least frequent core dimension is _other reasons_ (108), and the least frequent sub-dimension _reason unclassified_ (32). That is, our annotators rarely saw additional reasons, indicating the completeness of our taxonomy.
Table 1(b) shows inter-annotator agreement. For _inappropriateness_, the annotators had full agreement in 60% of all cases, suggesting that stricter settings than our conservative strategy can also be applied without limiting the number of annotations too much. The Krippendorff's \(\alpha\) agreement is limited but reasonable given the subjectiveness of the task. It ranges from \(.11\) to \(.51\) among the dimensions (not considering _reason unclassified_), with \(.45\) for overall _inappropriateness_. These values are similar to those of Wachsmuth et al. (2017b).
## 5 Analysis
Building on existing corpora on theoretical and practical argument quality, we now report the correlations between our proposed dimensions and the quality dimensions of Wachsmuth et al. (2017b) and Habernal and Gurevych (2016a). Correlations with Ng et al. (2020) are found in Appendix E (only one dimension is directly related to appropriateness).
### Relations between Corpus Dimensions
Table 1(c) presents the Kendall's \(\tau\) correlations between all inappropriateness dimensions. Among the core dimensions, we find _missing intelligibility_ to be most (\(.62\)) and _other reasons_ to be least (\(.21\)) correlated with _inappropriateness (In)_. In case of the sub-dimensions, _missing openness_ is most (\(.47\)) and _reason unclassified_ least (\(.10\)) correlated with it.
The sub-dimensions are mostly correlated with their direct parent, with values between \(.41\) and \(.88\), which is expected due to our annotation study setup. However, there are clear differences between sub-dimensions of the same parent; for example, _excessive intensity_ and _emotional deception_ are highly correlated with _toxic emotions_ (\(.66\) and \(.78\)) but have low correlation with each other (\(.22\)). Cross-dimensional correlations among the core- and sub-dimensions are highest between _toxic emotions_ and _missing intelligibility_ (\(.35\)) and _excessive intensity_ and _missing openness_ (\(.28\)) respectively. This suggests that overly intense emotions sometimes signify a rejection of others' opinions and vice versa.
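The entries of Table 1(c) can be recomputed directly from the binary corpus annotations, e.g. with scipy; the file and column names below are placeholders for the released corpus format.

```python
import pandas as pd
from scipy.stats import kendalltau

df = pd.read_csv("appropriateness_corpus.csv")  # assumed file; one binary column per dimension
dims = ["In", "TE", "EI", "ED", "MC", "MS", "MO",
        "MI", "UM", "MR", "CR", "OR", "DO", "RU"]

tau = pd.DataFrame(index=dims, columns=dims, dtype=float)
for a in dims:
    for b in dims:
        tau.loc[a, b], _ = kendalltau(df[a], df[b])  # (statistic, p-value)
print(tau.round(2))
```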
### Relation to Theory of Argument Quality
Table 3 shows the Kendall's \(\tau\) correlations between our dimensions and the theoretical quality dimensions of Wachsmuth et al. (2017b). We observe the highest correlation for the two _(in)appropriateness_ dimensions (\(.41\)), showing that our annotation guideline indeed captures the intended information for the annotated arguments. Furthermore, seven of our dimensions correlate most strongly with _appropriateness_ in the Dagstuhl-15512 ArgQuality corpus, and all 14 dimensions have the highest correlation with one of the seven argument quality dimensions that we used to derive the taxonomy.
The values of _reason unclassified (RU)_ are low (between \(.02\) and \(.14\)), speaking for the completeness of our taxonomy. However, its most correlated quality dimension is _cogency_, possibly indicating a minor logical component of appropriateness.
### Relation to Practice of Argument Quality
Table 4 shows the correlations between our dimensions and the convincingness comparison reasons of Habernal and Gurevych (2016a).
| Dimension | Liberal | Majority | Conservative |
|---|---|---|---|
| **Inappropriateness** | 0.16 | 0.54 | **0.95** |
| **Toxic Emotions** | 0.14 | 0.45 | **1.00** |
| Excessive Intensity | -0.08 | 0.30 | **1.00** |
| Emotional Deception | 0.05 | 0.41 | **1.00** |
| **Missing Commitment** | -0.03 | 0.30 | **1.00** |
| Missing Seriousness | 0.27 | 0.54 | **1.00** |
| Missing Openness | -0.12 | 0.22 | **0.96** |
| **Missing Intelligibility** | -0.03 | 0.41 | **1.00** |
| Unclear Meaning | -0.07 | 0.19 | **1.00** |
| Missing Relevance | -0.04 | 0.22 | **1.00** |
| Confusing Reasoning | -0.04 | 0.19 | **0.95** |
| **Other Reasons** | 0.08 | 0.31 | **1.00** |
| Detrimental Orthography | 0.13 | 0.42 | **1.00** |
| Reason Unclassified | -0.01 | -0.01 | **1.00** |

Table 2: Krippendorff’s \(\alpha\) agreement between MACE labels and the manual labels obtained by each evaluated combination strategy (liberal, majority, conservative).
We see that attacking/abusive behavior is most correlated with our _inappropriateness_ (In, \(.86\)), _missing commitment_ (MC, \(.70\)) and _toxic emotions_ (TE, \(.70\)) dimensions. _Missing seriousness (MS)_ and _missing intelligibility (MI)_ are most correlated with humor/sarcasm (\(.69\)) and not addressing (derailing) the topic (\(.75\)), respectively. _Confusing reasoning (CR)_ is most correlated with an argument being hard to follow (\(.36\)), and _unclear meaning (UM)_ with insufficient reasoning (\(.57\)).
We find that _detrimental orthography (DO)_ renders an argument unclear and difficult to follow (\(.47\)). Finally, the _reason unclassified (RU)_ dimension is most correlated with making a reader think about an argument. Manual inspection of the reasons for these annotations reveals that annotators chose _reason unclassified_ if they were unsure which of the other dimensions they should assign.
## 6 Experiments
The corpus from Section 4 is meant to enable the computational treatment of inappropriate language in argumentation. As an initial endeavor, this section reports baselines for classifying all 14 dimensions in the taxonomy from Section 3.
### Experimental Setup
In line with Table 1, we treat all annotations as binary labels. We performed five repetitions of 5-fold cross-validation (25 folds in total) and ensured a similar distribution of the labels in each fold. For each fold, we used 70% for training, 10% for selecting the best-performing approach in terms of the mean macro-F\({}_{1}\) score, and 20% for testing.
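A sketch of this evaluation protocol, assuming a feature matrix X and a binary label matrix y; the 70/10/20 proportions are obtained by carving a validation part out of each training fold. Seeds and the label-stratification step are assumptions and are omitted here for brevity.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def folds(X, y, repetitions=5, n_splits=5, seed=0):
    """Yield (train, validation, test) index arrays for repeated cross-validation."""
    rng = np.random.RandomState(seed)
    for rep in range(repetitions):
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=rng.randint(10**6))
        for train_idx, test_idx in kf.split(X):          # 80% / 20% split per fold
            tr_idx, val_idx = train_test_split(          # 70% / 10% of the total data
                train_idx, test_size=0.125, random_state=rep)
            yield tr_idx, val_idx, test_idx
```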
**Models.** For classification, we employed the recent model _DeBERTaV3-large_ (He et al., 2021), with an argument prepended by the discussion issue as input. Besides, we tested two "ablations": _DeBERTaV3-w/o-issue_ receives only the argument to gain insight into how effective it is to provide the issue as context. _DeBERTaV3-shuffle_ receives the argument and the issue with all words shuffled, to analyze the impact of proper syntactic and semantic formulations. We trained our models to predict all 14 dimensions via a multi-label prediction loss, accounting for data imbalance by assigning weights to all dimensions (more details in Appendix A).
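A minimal fine-tuning sketch with the Hugging Face transformers library: the issue is passed as the first text segment and the argument as the second, and imbalance is addressed with per-dimension positive weights in a binary cross-entropy loss. The checkpoint name, weighting scheme, and hyperparameters are assumptions rather than the authors' exact configuration.

```python
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_DIMS = 14
tok = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-large",
    num_labels=NUM_DIMS,
    problem_type="multi_label_classification",
)

def encode(issue: str, argument: str):
    # Issue as first segment, argument as second (drop the issue for the w/o-issue ablation).
    return tok(issue, argument, truncation=True, max_length=512, return_tensors="pt")

# Per-dimension positive weights, e.g. (#negatives / #positives) from the training fold.
pos_weight = torch.ones(NUM_DIMS)          # placeholder values
loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

def training_step(batch_inputs, batch_labels):
    logits = model(**batch_inputs).logits   # shape: (batch, 14)
    return loss_fn(logits, batch_labels.float())
```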
**Lower and Upper Bounds.** To quantify the impact of learning, we compare against a _random baseline_ that chooses a label pseudo-randomly and a _majority baseline_ that takes the majority label for each dimension. As an upper bound, we measure _human performance_ in terms of the average of each human annotator in isolation on the dataset.
### Results
Table 5 presents the mean F\({}_{1}\)-score for all 14 inappropriateness dimensions averaged over all folds. _DeBERTaV3-large_ performs best in terms of macro F\({}_{1}\)-score (.69), significantly beating both _DeBERTaV3-w/o-issue_ (.68) and _DeBERTaV3-shuffle_ (.65) in a Wilcoxon signed-rank test (\(p<.05\)). The gain over _DeBERTaV3-w/o-issue_ is small though, suggesting that the context of a discussion (here, the issue) may be of limited importance for predicting inappropriateness. Plausible reasons are that (1) most arguments are (in)appropriate regardless of their context, or (2) the context of the argument is explicitly or implicitly contained within most arguments. _DeBERTaV3-large_ clearly outperforms the random and majority baselines on all dimensions, and it achieves about 92% of human performance in terms of macro F\({}_{1}\) (.75). These results suggest that predicting appropriateness can be automated, while encouraging further improvements.
## 7 Conclusion
Online discussions of controversial topics mostly turn out fruitful only when the participants argue _appropriately_, a dimension of argumentative language that has received no systematic investigation so far. Therefore, we have presented a taxonomy of 14 dimensions to model inappropriate language in argumentation, derived from rhetoric and argumentation theory. To enable computational research on appropriateness, we compiled a corpus of 2191 arguments from three genres, carefully annotated for all dimensions.
Our extensive corpus analyses confirm correlations with both theoretical and practical dimensions of argument quality from the literature. The taxonomy covers inappropriateness comprehensively according to human annotators. While a DeBERTa-based baseline already comes rather close to human performance in classifying inappropriate language, our corpus allows for developing more sophisticated models in future work that may serve automatic (or semi-automatic) content moderation.
To make content moderation successful and accepted, we think that providing clear reasons supporting the moderation is important, so the participants can better frame their arguments in online discussions. The defined taxonomy dimensions lay out what such reasons may look like.
| Approach | In | TE | EI | ED | MC | MS | MO | MI | UM | MR | CR | OR | DO | RU | Macro |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Random baseline | .49 | .47 | .45 | .45 | .49 | .39 | .47 | .48 | .45 | .47 | .39 | .37 | .37 | .34 | .43 |
| Majority baseline | .32 | .42 | .45 | .45 | .40 | .48 | .41 | .39 | .44 | .43 | .48 | .49 | .49 | .50 | .44 |
| DeBERTaV3-large | **.75** | **.74** | **.69** | **.70** | **.75** | **.73** | **.72** | **.72** | **.69** | .68 | **.62** | **.65** | **.67** | **.52** | **.69**†‡ |
| DeBERTaV3-w/o-issue | **.75** | .73 | .68 | **.70** | **.75** | **.73** | .71 | **.72** | .68 | **.69** | .61 | .63 | .66 | .51 | .68‡ |
| DeBERTaV3-shuffle | .72 | .69 | .64 | .64 | .71 | .65 | .68 | .70 | .66 | .65 | .57 | .59 | .57 | .50 | .64 |
| Human performance | .78 | .79 | .73 | .77 | .73 | .82 | .70 | .76 | .73 | .72 | .74 | .78 | .80 | .70 | .75 |

Table 5: Evaluation of appropriateness classification: F\({}_{1}\)-score of each approach in 5-times repeated 5-fold cross validation on all 14 proposed dimensions. The best value in each column is marked bold. We marked significant macro F\({}_{1}\)-score gains over _DeBERTaV3-w/o-issue_ (†) and _DeBERTaV3-shuffle_ (‡) at \(p<.05\).
## 8 Acknowledgments
This project has been partially funded by the German Research Foundation (DFG) within the project OASiS, project number 455913891, as part of the Priority Program "Robust Argumentation Machines (RATIO)" (SPP-1999). We would like to thank the participants of our study and the anonymous reviewers for the feedback and their time.
## 9 Limitations
Aside from the still-improvable performance of the classification models we evaluated, our work is limited in two ways: the nature of what is considered appropriate as well as the difficulties that arise during corpus creation in NLP in general.
We point to the subjectivity in the perception of appropriateness, which is also reflected in the inter-annotator agreement discussed above. Many sociocultural factors can influence this perception within cultures, such as age, gender, education, or ethnicity. We sought to account at least for gender by including both male and female annotators for all arguments. However, we encourage further studies that focus on other factors, as we expect appropriateness to be seen differently, primarily across cultures with varying styles of debates. Since our corpus contains only arguments written in English and is annotated by native English speakers, it may also be insufficient to generalize across languages.
Moreover, appropriateness perception is likely subject to change over time. Although we collected arguments from different years, we see long-term limitations to our corpus. In general, it also depends on the expectations of the discussion participants, which are to some extent predetermined by the context (e.g., a sales pitch vs. a discussion with friends). In that regard, the context of our corpus is solely that of discussing controversial issues with strangers on the web. Finally, the size of the corpus we propose may limit the generalizability of approaches that build on it; this should be investigated further in future work.
## 10 Ethical Considerations
The corpus and the computational baselines presented in this paper target a sensitive issue: what is considered appropriate to say in a discussion. We suggest differentiating between freedom of speech, hate speech, and inappropriate speech. We believe the notion of inappropriate speech extends that of hate speech; addressing it leads to a less free but healthier climate of exchange. While freedom of speech is limited by hate speech laws in many countries, no such legal limits exist for inappropriate speech. Consequently, automating the detection of inappropriateness and dealing with it in the same way hate speech is addressed (often by removal) may be perceived as hurting individuals' freedom of speech and, thus, must be handled with great care.
However, we see no strong immediate ethical concerns regarding the computational methods specific to our work, as they only detect inappropriateness and do not recommend any actions. We stress, though, that they are not meant yet for real-life applications. Apart from the outlined limitations, we also do not see notable ethical concerns regarding our taxonomy, as we derived it systematically from existing literature and always encouraged our annotators to add their own reasons.
Finally, we aimed to ensure fair payment. As discussed in the paper, our annotators were paid about $13 per hour, which exceeds the minimum wage in most US states and also conforms to the standards in the regions of our host institutions.
|
2305.00968 | The monadic theory of order | We deal with the monadic (second-order) theory of order. We prove all known
results in a unified way, show a general way of reduction, prove more results
and show the limitation on extending them. We prove (CH) that the monadic
theory of the real order is undecidable. Our methods are model-theoretic, and
we do not use automaton theory.
This is a slightly corrected version of a very old work. | Saharon Shelah | 2023-05-01T17:55:37Z | http://arxiv.org/abs/2305.00968v1 | # The monadic theory of order
###### Abstract.
We deal with the monadic (second-order) theory of order. We prove all known results in a unified way, show a general way of reduction, prove more results and show the limitation on extending them. We prove (CH) that the monadic theory of the real order is undecidable. Our methods are model-theoretic, and we do not use automaton theory.
This is a slightly corrected version of a very old work.
Version 2023-05-01. See [https://shelah.logic.at/papers/42/](https://shelah.logic.at/papers/42/) for possible updates.
_Key words and phrases._ monadic order, decidability.
First typed: September 1975. Research supported by the United States-Israel Binational Science Foundation. Publication sh:42.
continued in this direction, in [4], showing that also the monadic theory (i.e., quantification is possible over arbitrary sets) of \(\omega\) is decidable; and in [10] he showed the decidability of the weak monadic theory of ordinals. In [1, p. 96]he proved the decidability of the monadic theory of countable ordinals. Rabin [11] proved a very strong and difficult result, implying the decidability of the monadic theory of countable orders. Buchi [1] showed the decidability of the monadic theory of \(\omega_{1}\) and of \(\{\alpha:\alpha<\omega_{2}\}\).
Meanwhile Lauchli [12], using methods of Ehrenfeucht [1] and Fraisse [13] and continuing works of Galvin (unpublished) and Lauchli and Leonard [14], proved the decidability of the weak monadic theory of order. He did not use automaton theory. Pinus [15] strengthened, somewhat, those results. Our results have been announced in [11], [12]
By our notation Lauchli used \(Th^{n}_{\bar{k}}\) only for \(\bar{k}=\langle 1,1,1,\ldots\rangle\) (changed for the quantification over finite sets).
Remark: We are not interested here in results without the axiom of choice. See Siefkes [16] which shows that the result on \(\omega\) is provable in ZF. This holds also for \(\alpha<\omega^{*}\). Litman [10] pointed out some mistakes in [1, 6] (theorems without AC); proved connected results, and showed in ZF that \(\omega_{1}\) is always characterizable by a sentence.
In Section 7 we prove (CH) the undecidability of the monadic theory of the real order and of the class of orders, and related problems. It can be read independently, and has a discussion on those problems. Gurevich finds that our proof works also for the lattice of subsets of a Cantor discontinuum, with the closure operation, and similar spaces. Hence Grzegorczyk's [12] question is answered (under CH)1.
Footnote 1: Gurevich meanwhile has proved more and has a paper in preparation.
Our work continues [12], but for well ordering we use ideas of Buchi and Rabin. We reduce here the decision problem of the monadic theories of some (classes of) orders [e.g., well orderings; the orders which do not embed \(\omega_{1}\) nor \(\omega_{1}^{*}\)] to problems more combinatorial in nature. So we get a direct proof for the decidability of countable orders (answering a question of Buchi [1, p.35]). Our proof works for a wider class, thus showing that the countable orders cannot be characterized in monadic theory, thus answering a question of Rabin [11] (p.12). Moreover, there are uncountable orders which have the same monadic theory as the rationals (e.g., dense Specker order; see [13] for their existence; and also some uncountable subsets of the reals). We also show that the monadic theory of \(\{\alpha:\alpha<\lambda^{+}\}\) is recursive in that of \(\lambda\), generalizing results of Buchi for \(\omega\) and \(\omega_{1}\). Unfortunately, even the monadic theory of \(\omega_{2}\) contains a statement independent of ZFC. For a set \(A\) of ordinals, let \(F(A)=\{\alpha:\alpha\) is a limit ordinal of cofinality \(>\omega,\alpha<\sup A\), and \(\alpha\cap A\) is a stationary subset of \(\alpha\}\).
Now Jensen [1] proved the following:
**Theorem 0.1**.: \((V=L)\)_. A regular cardinal \(\kappa\) is weakly compact if and only if for every stationary \(A\subseteqq\kappa\), such that \((\forall\alpha\in A)[\mathrm{cf}(\alpha)=\omega],F(A)\neq\varnothing\)._
_As the second part is expressible in the monadic theory of order, the Hanf number of the monadic theory of order is high. Clearly also the monadic theory of the ordinals depends on an axiom of large cardinals._
_Now, Baumgartner_[1]_ _shows that if ZFC+ (there is a weakly compact cardinal) is consistent, then it is consistent with ZFC that_
* _for any stationary_ \(A\subseteqq\omega_{2}\)_, if_ \((\forall\alpha\in A)[\mathrm{cf}(\alpha)=\omega]\)_, then_ \(F(A)\neq\varnothing\) _(and in fact is stationary)._ _So ZFC does not determine the monadic theory of_ \(\omega_{2}\)_. This partially answers_ [1] _(pp.34-43; p.38, problem 2)._ _We can still hope that the number of possible such theories is small, and each decidable, but this seems unlikely. We can also hope to find the sentences true in every model of ZFC. A more hopeful project is to find a decision procedure assuming_ \(V=L\)_. We show that for this it suffices to prove only the following fact. Let_ \(D_{\omega_{2}}\) _be the filter of closed unbounded subsets of_ \(\omega_{2}\)_. (Magidor disproves (**) in_ \(V=L\)_, but it may still be consistent with ZFC.)_
* _if_ \(A\subseteqq\{\alpha<\omega_{2}:\mathrm{cf}(\alpha)=\omega\},F(A)=B\cup C,A\) _is stationary,_ then _there are_ \(A_{1},A_{2}\)_, such that_ \(A=A_{1}\cup A_{2},A_{1}\cap A_{2}=\varnothing,A_{1},A_{2}\) _are stationary and_ \(F(A_{1})=B(\mathrm{mod}D_{\omega_{2}}),F(A_{2})=C(\mathrm{mod}D_{\omega_{2}})\)_._ _We prove, in fact, more: that the monadic theory of_ \(\omega_{2}\) _and the first order theory of_ \(\langle\underline{P}(\omega_{2})/D_{\omega_{2}},\cap,\cup,F\rangle\) _are recursive one in the other._
_Conjecture 0.2_.: \((V=L)\). The monadic theory of \(\omega_{2}\) (and even \(\omega_{n}\)) is decidable.
_Conjecture 0.3_.: \((V=L+\) there is no weakly compact cardinal). The monadic theory of well orders is decidable.
Lauchli and Leonard [12] define a family \(\underline{M}\) of orders as follows: It is the closure of \(\{1\}\) by
1. \(M+N\),
2. \(M\cdot\omega\) and \(M\cdot\omega^{*}\),
3. \(\sum_{i<n}^{*}M_{i}\) which is \(\sum_{a\in Q}M_{a}\) and \(\{a\in Q:M_{a}=M_{i}\}\) is a dense subset of the rationals, and each \(M_{a}\in\{M_{i}:i<n\}\).
(See Rosenstein [11] and Rubin [12] for generalization.)
Lauchli [13] proved that every sentence from the weak monadic language of order has a countable model if and only if it has a model in \(\underline{M}\). Easy checking of Section 4 shows this holds also for the monadic language. On the other hand, looking at the definition of \(\underline{M}\), we can easily see that for every \(M\in\underline{M}\) there is a monadic sentence \(\psi\) such that \(M\models\psi\), and \(\|N\|\leqq\aleph_{0},N\models\psi\) imply \(N\cong M\).
In this way we have a direct characterization of \(\underline{M}\).
**Theorem 0.4**.: \(M\in\underline{M}\) _if and only if \(M\) is countable and satisfies some monadic sentence which is \((\leqq\aleph_{0})\)-categorical._
_Also for other classes whose decidability we prove, we can find subclasses analogous to \(\underline{M}\). This theorem raises the following question:_
_Conjecture 0.5_.: For every \(N\in\underline{M}\) there is a monadic sentence \(\psi\) such that \(M\models\psi\) implies that \(M\) and \(N\) have the same monadic theory. (It suffices to prove this for the rational order.)
Related questions are:
_Conjecture 0.6_.: There is a monadic sentence \(\psi\) such that \(R\models\psi\) and \(M\models\psi\) imply that \(M\) and \(R\) have the same monadic theory.2
Footnote 2: Confirmed by Gurevich
_Conjecture 0.7_.: There is an order \(M\) which has the same monadic theory as \(R\), but is not isomorphic to \(R\).3
Footnote 3: Refuted by Gurevich
_Conjecture 0.8_.: There are orders with the same monadic theories, whose completions do not have the same monadic theories.4
Footnote 4: Confirmed by Gurevich
The characterization of \(\underline{M}\) gives us also
_Conclusion 0.9_.: The question whether a sentence in the first-order (or even monadic) theory of order is \((\leqq\aleph_{0})\)-categorical (or \(\aleph_{0}\)-categorical) is decidable.
A natural question is whether the monadic theory of \(\mathfrak{M}\) is more "complex" than that of the ordinals (the orders in \(\mathfrak{M}\) are countable unions of scattered types; see Laver [11, SS3], which includes results of Galvin). To answer this, we have the
**Definition 0.10**.: For a model \(M\) with relations only, let \(M^{\sharp}\) be the following model:
1. its universe is the set of finite sequences of elements of \(M\);
2. its relations are 1. \(<\), where \(\bar{a}<\bar{b}\) means \(\bar{a}\) is an initial segment of \(\bar{b}\), 2. for each \(n\)-place predicate \(R\) from the language of \(M\), \(R^{M^{\sharp}}=\{\langle\langle a_{1},\ldots,a_{m-1},b^{1}\rangle,\langle a_{1},\ldots,a_{m-1},b^{2}\rangle,\ldots,\langle a_{1},\ldots,a_{m-1},b^{n}\rangle\rangle\ :\ a_{i},b^{i}\) are elements of \(M\), and \(M\models R[b^{1},\ldots,b^{n}]\}\).
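To make the construction concrete, the following sketch builds a finite approximation of \(M^{\sharp}\) for a small model with one binary relation; since the full \(M^{\sharp}\) is infinite, only sequences up to a fixed length are enumerated, and the bound is an arbitrary choice for illustration.

```python
from itertools import product

def sharp(universe, R, max_len=3):
    """Finite approximation of M# for M = (universe, R), R a set of pairs.
    Returns the sequences, the initial-segment relation, and the lifted relation R#."""
    seqs = [()]
    for n in range(1, max_len + 1):
        seqs += list(product(universe, repeat=n))
    initial_segment = {(s, t) for s in seqs for t in seqs
                       if len(s) < len(t) and t[:len(s)] == s}
    # R# relates sequences sharing the same initial segment a_1,...,a_{m-1}
    # whose last entries b^1, b^2 stand in R.
    R_sharp = {(s, t) for s in seqs for t in seqs
               if s and t and s[:-1] == t[:-1] and (s[-1], t[-1]) in R}
    return seqs, initial_segment, R_sharp

# Example: M is the linear order on {0, 1, 2}
seqs, lt, R_sharp = sharp([0, 1, 2],
                          {(a, b) for a in range(3) for b in range(3) if a < b})
```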
The author suggested a generalization of Rabin's automaton from [12], proved the easy parts: the lemmas on union and intersection, and solved the emptiness problem. Then J.Stup elaborated those proofs, and proved the complementation lemma. Thus a generalization of the theorem and proof of [12] gives
**Theorem 0.11**.: _The monadic theory of \(M^{\sharp}\) is recursive in the monadic theory of \(M\)._
_Thus, using [11, SS3] notation, we get, e.g.,_
Conclusion 0.12_.: The monadic theory of \(\{M:M\in\mathfrak{M},\|M\|\leqq\lambda\}\) is recursive in the monadic theory of \(\lambda\).
Because by Section 2 the monadic theory of \(\sigma_{\lambda^{+},\lambda^{+}}\) is recursive in the monadic theory of \(\lambda\), by 0.6 the monadic theory of \(\eta_{\lambda^{+},\lambda^{+}}\) is recursive in the monadic theory of \(\lambda\), and so we finish, as by [10, 3.2(iv),3.4]\(\eta_{\lambda^{+},\lambda^{+}}\) is a universal member of \(\{M\in\mathfrak{M}:\|M\|\leqq\lambda\}\).
Also useful are the following (Le Tourneau [14] proved parts (1),(2) at least):5
Footnote 5: Le Tourneau only claimed the result. Lately also Routenberg and Vinner proved this theorem.
**Theorem 0.13**.: _Let \(L\) be a language with one one-place function symbol, equality and one-place predicates._
1. _The monadic theory of_ \(L\) _is decidable._
2. _If a monadic sentence_ \(\psi\) _of_ \(L\) _has a model, it has a model of cardinality_ \(\leqq\aleph_{0}\)_._
3. _In (2) we can find_ \(n=n(\psi)<\aleph_{0}\) _and a model_ \(M\) _such that_ \(|\{b\in|M|:f(b)=a\}|\leqq n\) _for any_ \(a\in|M|\)_._
_This is because, if \(M_{\lambda}\) is the model whose universe is \(\lambda\), and whose language contains equality only, in \(M_{\lambda}^{\sharp}\) we can interpret a universal \(L\)-model (see Rabin [15]). This implies (1). Note that all \(M_{\lambda}\) (\(\lambda\) an infinite cardinal) have the same monadic theory. This proves (2). For (3) note that if \(M_{\aleph_{0}}\models\psi\), then for all big enough \(n,M_{n}\models\psi\)._
Remark (1): Rabin [15] proved the decidability of the theory of countable Boolean algebras, in first-order logic expanded by quantification over ideals. By the Stone representation theorem, each countable Boolean algebra can be represented as the Boolean algebra generated by the intervals of a countable order. By the method of Section 3 we can prove that the theory of countable linear orders in monadic logic expanded by quantification over such ideals is decidable, thus reproving Rabin's result. (The only point is that the methods of Section 2 apply.)
_Conjecture 0.14_.: The monadic theory of orders of cardinality \(\leqq\aleph_{1}\) is decidable when \(\aleph_{1}<2^{\aleph_{0}}\).
_Conjecture 0.15_.: The theory of Boolean algebras of cardinality \(<\lambda\) or in first-order logic expanded by allowing quantification over ideals is decidable when \(\lambda\leqq 2^{\aleph_{0}}(\lambda=\aleph_{2}\leqq 2^{\aleph_{0}})\).
Remark: We can prove Conclusion 0.7 by amalgamating the methods of Sections 4, 5, and 6.
## 1. Ramsey theorem for additive coloring
A _coloring_ of a set \(I\) is a function \(f\) from the set of unordered pairs of distinct elements of \(I\), into a finite set \(T\) of colors. We write \(f(x,y)\) instead of \(f(\{x,y\})\), assuming usually that \(x<y\). The coloring \(f\) is additive if for \(x_{i}<y_{i}<z_{i}\in I\) (\(i=1,2\)).
\[f(x_{1},y_{1})=f(x_{2},y_{2});f(y_{1},z_{1})=f(y_{2},z_{2})\]
imply \(f(x_{1},z_{1})=f(x_{2},z_{2})\). In this case a (partial) operation \(+\) is defined on \(T\), such that for \(x<y<z\in I,f(x,z)=f(x,y)+f(y,z)\). A set \(J\subseteq I\) is homogeneous (for \(f\)) if there is a \(t_{0}\in T\) such that for every \(x<y\in J,f(x,y)=t_{0}\).
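A concrete family of additive colorings: fix a finite monoid \(T\) and a weight \(w(z)\in T\) for each point, and color a pair \(x<y\) by the sum of the weights over \((x,y]\). The sketch below instantiates this on \(\{0,\ldots,N\}\) with \(T=\mathbb{Z}/3\) and checks additivity on all triples; the particular \(T\) and weights are only an illustration.

```python
from itertools import combinations
import random

N, k = 20, 3
w = [random.randrange(k) for _ in range(N + 1)]      # weight of each point, in Z/k

def f(x, y):
    """Color of the pair x < y: sum of weights over (x, y], taken mod k."""
    assert x < y
    return sum(w[x + 1:y + 1]) % k

# Additivity: f(x, z) = f(x, y) + f(y, z) for all x < y < z.
assert all(f(x, z) == (f(x, y) + f(y, z)) % k
           for x, y, z in combinations(range(N + 1), 3))
```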
Ramsey's theorem [10] states, in particular, that if we color an infinite set with a finite set of colors, then there is an infinite homogeneous subset. This theorem has many generalization and applications. It was used in [1] for a coloring which was, in fact, additive. Using an idea of Rabin, Buchi [1, 12, p.58] offered an alternative proof (using, in fact, additivity) and in [1, p.111] straightforwardly generalized it to \(\omega_{1}\) (the result for \(\omega_{1}\) is not true for coloring in general). We give the natural extension to arbitrary ordinals (which is immediate, and included for completeness) and a parallel theorem for dense orders.
**Theorem 1.1**.: _If \(\delta\) is a limit ordinal, \(f\) an additive coloring of \(\delta\) (by a set \(T\) of \(n\) colors), then there is an unbounded homogeneous subset \(J\) of \(\delta\)._
Remarks:
1. If the cofinality of \(\delta\) is \(\geqq\omega_{1}\) we can assume that if \(a,b<c^{\prime},f(a,c^{\prime})=f(b,c^{\prime})\), then \(a,b<c\in J\) implies \(f(a,c)=f(b,c)\).
2. Instead of \(|T|<\aleph_{0}\), we need assume only \(|T|<\operatorname{cf}(\delta)\).
_Conclusion 1.2_.: Under the condition of 1.1, there are a closed unbounded subset \(J\) of \(\delta\), and \(J_{k},J^{\ell},1\leqq k,\ell\leqq|T|\) and \(t_{k}^{\ell}\in T\) such that \(J=\cup_{k}J_{k}=\cup_{\ell}J^{\ell}\), the \(J_{k}\)'s are disjoint, the \(J^{\ell}\)'s are disjoint, and if \(a<b\in J,a\in J_{k},b\in J^{\ell}\) then \(f(a,b)=t_{k}^{\ell}\).
**Theorem 1.3**.: _If \(f\) is an additive coloring of a dense set \(I\), by a finite set \(T\) of \(n\) colors, then there is an interval of \(I\) which has a dense homogeneous subset._
_Conclusion 1.4_.: Under the hypothesis of 1.3, there is an interval \((a,b)\) of \(I\), and \((a,b)=\cup_{k=1}^{|T|}J_{k}=\cup_{\ell=1}^{|T|}J^{\ell}\) and colors \(t_{k}^{\ell}\in T\) such that for \(x<y,x\in J_{k},y\in J^{\ell},f(x,y)=t_{k}^{\ell}\).
Remark: We can choose the \(J_{0},J_{k},J^{\ell}\)'s so that they are definable by first-order formulas with parameters in the structure \((\delta,<,f)\) (or \((I,<,f)\)).
Proof of Theorem 1.1: Define: For \(x,y\in\delta,x\sim y\) if there is a \(z\) such that \(x,y<z<\delta\), and \(f(x,z)=f(y,z)\); clearly this implies by the additivity
of \(f\) that for any \(z^{\prime},z<z^{\prime}<\delta,f(x,z^{\prime})=f(y,z^{\prime})\). It is easy to verify that \(\sim\) is an equivalence relation with \(\leqq|T|\) equivalence classes. So there is at least one equivalence class \(I\), which is an unbounded subset of \(\delta\). Let \(x_{0}\) be the first element of \(I\). Let, for \(t\in T,I_{t}=\{y:x_{0}\neq y\in I,f(x_{0},y)=t\}\). Clearly \(I-\{x_{0}\}=\cup_{t\in T}I_{t}\), hence for some \(s,I_{s}\) is an unbounded subset of \(\delta\). Let \(\langle a_{i}:i<\operatorname{cf}(\delta)\rangle\) be an increasing unbounded sequence of elements of \(\delta\). Define by induction on \(i\) elements \(y_{i}\in I\). If for all \(j<i(i<\operatorname{cf}(\delta),y_{j}\) have been defined, let \(y_{i}<\delta\) be such that \(y_{i}>y_{j},y_{i}>a_{j},y_{i}>x_{0}\) and \(f(x_{0},y_{i})=f(y_{j},y_{i})\) for any \(j<i\), and \(y_{i}\in I_{s}\). Now \(J=\{y_{i}:i<\operatorname{cf}(\delta)\}\) is the desired set. Clearly it is unbounded. If \(y_{j}<y_{i}\) (hence \(j<i\)) then
\[f(y_{j},y_{i})=f(x_{0},y_{i})=s.\]
So \(J\) is homogeneous.
Proof of Conclusion 1.2: If the cofinality of \(\delta\) is \(\aleph_{0}\), then the \(J\) from 1.1 is also closed (trivially). So assume \(\operatorname{cf}(\delta)>\aleph_{0}\), let \(T=\{t_{1},\ldots,t_{n}\}\), and let \(J\), \(y_{j}\) be as defined in the proof of 1.1; and let \(J^{*}\) be the closure of \(\{y_{j+1}:j<\operatorname{cf}(\delta)\}\). Then \(J^{*}=\{y^{j}:j<\operatorname{cf}(\delta)\}\) is increasing, continuous, and \(y^{j+1}=y_{j+1}\). Let \(J^{\prime}=\{y^{j}:j\text{ is a limit ordinal}\}\),
\(J_{k}=\{y^{j}:j\text{ is a limit ordinal},\,f(y^{j},y^{j+1})=t_{k}\}\),
\(J^{\ell}=\{y^{j}:j\text{ is a limit ordinal},\,(\forall i<j)(\exists\alpha)(i< \alpha<j\wedge f(y^{\alpha+1},y^{j})=t_{\ell})\)
but this does not hold for any \(\ell^{\prime}<\ell\}\).
Now clearly \(J^{\prime}=\cup_{k}J_{k}=\cup_{\ell}J^{\ell}\), and if \(x\in J_{k},z\in J^{\ell},x<z\) then \(x=y^{i},z=y^{j},i<j,i,j\) are limit ordinals and there is an \(\alpha,i<\alpha<j\), such that \(f(y^{\alpha+1},y^{j})=t_{\ell}\). Hence
\[f(x,z)=f(y^{i},y^{j})=f(y^{i},y^{i+1})+f(y^{i+1},y^{\alpha+1})+f(y^{\alpha+1},y ^{j})\]
\[=t_{k}+f(y_{i+1},y_{\alpha+1})+t_{\ell}=t_{k}+s+t_{\ell}\stackrel{{ \text{def}}}{{=}}t_{k}^{\ell}.\]
Clearly all the demands are satisfied.
Proof of Theorem 1.3: Remember that \(J\subseteqq I\) is dense in an interval \((a,b)\) if for every \(x,y\in I,a<x<y<b\), there is a \(z\in J\) such that \(x<z<y\). It is easy to see that if \(J\subseteqq I\) is dense in an interval \((a,b)\) and \(J=\cup_{k=1}^{m}J_{k}\) (\(m>1\)) then there are \(k\) and \(a^{\prime},b^{\prime}\) such that \(a<a^{\prime}<b^{\prime}<b,1\leqq k\leqq m\) and \(J_{k}\) is dense in \((a^{\prime},b^{\prime})\).
Define for any \(a\in I,J\subseteqq I\)
\[F(a,J)=\{t:t\in T,(\forall x>a)(\exists y\in J)(a<y<x\wedge f(a,y)=t)\}.\]
Notice, that since \(T\) is finite, for any \(a\in I\), and any \(J\subseteqq I\) there is a \(b,a<b\in I\) such that:
\(t\in F(a,J)\) if and only if there is a \(y\in J,a<y<b,f(a,y)=t\).
We define by induction on \(m\leqq n2^{n}+2\) intervals \((a_{m},b_{m})\), sets \(J_{m}\) dense in \((a_{m},b_{m})\), and (for \(m>0\)) sets \(D_{m}\subseteqq T\).
For \(m=0\), let \((a_{0},b_{0})\) be any interval of \(I\), and \(J_{0}=\{x\in I:a_{0}<x<b_{0}\}\). Suppose \((a_{m},b_{m}),J_{m}\) are defined. For any \(D\subseteqq T\) let \(J_{m}(D)=\{a\in J_{m}:F(a,J_{m})=D\}\). Clearly \(J_{m}=\cup_{D\subseteqq T}J_{m}(D)\) and as there are only finitely many possible \(D\)'s (\(\leqq 2^{n}\)), there is an interval \((a_{m+1},b_{m+1})\) and \(D_{m+1}\subseteqq T\) such that \(J_{m}(D_{m+1})\) is dense in \((a_{m+1},b_{m+1})\), and \(a_{m}<a_{m+1}<b_{m+1}<b_{m}\). Let \(J_{m+1}=(a_{m+1},b_{m+1})\cap J_{m}(D_{m+1})\). Clearly \(J_{m}\supseteqq J_{m+1}\), and \(m>k\) implies \(J_{k}\supseteqq J_{m}\), and \((a_{m},b_{m})\) is a subinterval of \((a_{k},b_{k})\).
As there are only \(\leqq 2^{n}\) possible \(D_{m}\), there are a \(D\subseteqq T\) and \(0\leqq m_{0}<\ldots<m_{n}\leqq n2^{n}+1\) such that \(D_{m_{i}+1}=D\). Define, for \(0\leqq k\leqq n,a^{k}=a_{m_{k}},b^{k}=b_{m_{k}},J^{k}=J_{m_{k}}\).6
Footnote 6: In fact \(D_{m}\supseteqq D_{m+1}\), hence we can replace \(n2^{n}+2\) by \(n^{2}+2\).
It is easy to check that if \(0\leqq k<\ell\leqq n,x\in J^{\ell}\) then \(x\in J_{m_{\ell}}\subseteqq J_{m_{k+1}}\), hence \(F(x,J^{k})=F(x,J_{m_{k}})=D_{m_{k+1}}=D\). It is clear that \(J^{0}\supseteqq J^{1}\supseteqq\ldots\supseteqq J^{n}\).
Choose \(x_{0}\in J^{n}\). Then there is \(x_{1}\), \(x_{0}<x_{1}<b^{n}\), such that \(x_{0}<y<x_{1},y\in J^{0}\) implies \(f(x_{0},y)\in F(x_{0},J^{0})=D\). Hence \(t\in D\) if and only if there is \(y\in J^{n-1},x_{0}<y<x_{1},f(x_{0},y)=t\), if and only if there is \(y\in J_{0},x_{0}<y<x_{1},f(x_{0},y)=t\). Clearly
\[J^{n}\cap(x_{0},x_{1})=\cup_{t\in T}\{y:y\in J^{n},x_{0}<y<x_{1},f(x_{0},y)=t\}.\]
Hence there are \(a,b,t_{0}\) such that \(x_{0}<a<b<x_{1}\) and
\[J^{*}=\{y:y\in J^{n},a<y<b,f(x_{0},y)=t_{0}\}\]
is dense in \((a,b)\). Clearly \(t_{0}\in D\).
It is easy to check that for \(t,s\in D,t+s\) is defined and \(\in D\), so for \(t\in D,m\geqq 1\) defined \(mt\in T\), by induction on \(m:1t=t,(m+1)t=mt+t\). As \(T\) has \(n\) elements, \(1t_{0},2t_{0},\ldots,(n+1)t_{0}\) cannot be pairwise distinct. So there are \(i,j,1\leqq i<(i+j)\leqq n+1\) such that \(it_{0}=(i+j)t_{0}\). Define
\[J=\{y:a<y<b,f(x_{0},y)=jt_{0},y\in J^{n-j+1}\}.\]
We shall show that \(J\) is the desired set.
1. \(J\) is dense in \((a,b)\). Suppose \(a<a^{\prime}<b^{\prime}<b\), and we shall find \(z\in J,a^{\prime}<z<b^{\prime}\). As \(J^{*}\) is dense in \((a,b)\) there is \(z^{n}\in J^{*}\subseteqq J^{n},a^{\prime}<z^{n}<b^{\prime}\). We define by downward induction \(z^{k}\) for \(n-j+1\leqq k\leqq n\) such that \(z^{k}\in J^{k},a^{\prime}<z^{k}<b^{\prime}\). For \(k=n,z^{k}\) is defined. Suppose \(z^{k+1}\) is defined, then as \(z^{k+1}\in J^{k+1}\) it follows that \(F(z^{k+1},J^{k})=D\). As \(t_{0}\in D\) there is \(z^{k}\in J^{k}\), such that \(z^{k+1}<z^{k}<b^{\prime}\) and \(f(z^{k+1},z^{k})=t_{0}\). Clearly \[x_{0}<z^{n}<z^{n-1}<\ldots<z^{n-j+1},\] \[f(x_{0},z^{n})=t_{0},\quad f(z^{i+1},z^{i})=t_{0}.\] Hence \(f(x_{0},z^{n-j+1})=t_{0}+\ldots+t_{0}=jt_{0}\), so \(z^{n-j+1}\in J,a^{\prime}<z^{n-j+1}<b^{\prime}\).
2. \(J\) is homogeneous. Suppose \(a<y<z<b,y,z\in J\). Then \(y\in J^{n-j+1}\). Now define by downward induction \(y^{k}\in J^{k}\) for \(0\leqq k\leqq i,y\leqq y^{k}<z\). Let \(y^{i}=y(y^{i}\in J^{i}\) because \(y^{i}=y\in J^{n-j+1}\), and as \(i+j\leqq n+1,i\leqq n-j+1\) hence \(J^{n-j+1}\subseteq J^{i}).\) If \(y^{k+1}\) is defined then \(F(y^{k+1},J^{k})=D\), hence there are \(y^{k}\in J^{k},y^{k+1}<y^{k}<z\) such that \(f(y^{k+1},y^{k})=t_{0}\). It follows that \(x_{0}<y=y^{i}<y^{i-1}<\ldots<y^{0}<z\) and \[f(y^{k},y^{k-1})=t_{0}.\] Hence \[f(y,y^{0})=f(y^{i},y^{0})=it_{0}.\] So \[f(y,z)=f(y,y^{0})+f(y^{0},z)=it_{0}+f(y^{0},z)\] \[=(i+j)t_{0}+f(y^{0},z)=jt_{0}+it_{0}+f(y^{0},z)\] \[=f(x_{0},y)+f(y,y^{0})+f(y^{0},z)=f(x_{0},z)=jt_{0}.\] This proves the homogeneity of \(J\).
Proof of Conclusion 1.4: Let \((a,b),J\) and \(t_{0}\) be as in the proof of 1.3. Let \(T=\{t_{1},\ldots,t_{n}\}\). Let
\[J_{k}=\{y:y\in(a,b),t_{k}\in F(y,J),t_{1},\ldots,t_{k-1}\notin F(y,J)\},\]
\[J^{\ell}=\{y:y\in(a,b),t_{\ell}\in F^{\prime}(y,J),t_{1},\ldots,t_{\ell-1} \notin F^{\prime}(y,J)\}\]
where \(F^{\prime}\) is defined just as \(F\) is, but for the reversed order.
Clearly \((a,b)=\cup_{k}J_{k}=\cup_{\ell}J^{\ell}\). Suppose \(x<y,x\in J_{k},y\in J^{\ell}\). Then we can find \(x^{\prime},y^{\prime}\in J\), \(x<x^{\prime}<y^{\prime}<y\), such that \(f(x,x^{\prime})=t_{k},f(y^{\prime},y)=t_{\ell}\). Hence
\[f(x,y)=f(x,x^{\prime})+f(x^{\prime},y^{\prime})+f(y^{\prime},y)=t_{k}+t_{0}+t _{\ell}\stackrel{{\rm def}}{{=}}t_{k}^{\ell}.\]
## 2. The monadic theory of generalized sums
Feferman and Vaught [20] proved that the first order theory of sum, product, and even generalized products of models depends only on the first-order theories of the models. Their theorem has generalizations to even more general products (see Olmann) and to suitable infinitary languages (\(L_{\alpha}\), see Malitz [16]).
On the other hand, it is well-known that for second order theory this is false even for sum (as there is a sentence true in the sum of two models if and only if they are isomorphic, for fixed finite language, of course). Also for monadic (second-order) theory this is false for products of models (there is a sentence true in a direct product of two models of the theory of linear order if and only if the orders are isomorphic). We notice here that the monadic theory of generalized sum depends only on the monadic theories of the summands and notice also generalization of known refinement (see
Fraisse [11]). We can prove them using a natural generalization of Ehrenfeucht games (see [1]). Lauchli [12] uses some particular cases of those theorems for the weak monadic theory. As there is no new point in the proofs, we skip them. We should notice only that a subset of a sum of models is the union of subsets of the summands. The results of [10] can be applied directly by replacing \(M\) by \((|M|\cup\underline{P}(M),M,\in)\).
_Notation 2.1_.: \(L\) will be a first-order language with a finite number of symbols, \(L^{M}\) the corresponding monadic language, and \(L(M)\) the first-order language corresponding to the model \(M\); the universe of \(M\) is \(|M|\). Let \(x,y,z\) be individual variables; \(X,Y,Z\) set variables; \(a,b,c\) elements; \(P,Q\) sets; \(\underline{P}(M)=\{P:P\subseteqq|M|\}\). Bar denotes a finite sequence, e.g., \(\bar{a}\); \(\ell(\bar{a})\) is its length, \(\bar{a}=\langle\ldots,a_{i},\ldots\rangle_{i<\ell(\bar{a})}\), and \(\bar{a}(i)=a_{i}\). We write \(\bar{a}\in A\) instead of \(a_{i}\in A\) (for every \(i<\ell(\bar{a})\)) and \(\bar{a}\in M\) instead of \(\bar{a}\in|M|\). \(K\) is a class of \(L(K)\)-models (\(L(K)=L(M)\) for any \(M\in K\)). Let
\[K^{m}=\{(M,\bar{P}):\bar{P}\in\underline{P}(M)^{m}\},K^{\infty}=\cup_{m<\omega }K^{m}.\]
Let \(k,\ell,m,n,p,q,r\) denote natural numbers.
**Definition 2.2**.: For any \(L\)-model \(M,\bar{P}\in\underline{P}(M),\bar{a}\in|M|,\Phi\) a finite set of formulas \(\varphi(X_{1},\ldots,x_{1},\ldots)\in L\), a natural number \(n\), and a sequence of natural numbers \(\bar{k}\) of length \(\geqq n\), define
\[t=th_{\bar{k}}^{n}((M,\bar{P},\bar{a}),\Phi)\]
by induction on \(n\):
For \(n=0\):
\[t=\{\varphi(X_{\ell_{1}},\ldots,x_{j_{1}},\ldots):\varphi(X_{1},\ldots,x_{1}, \ldots)\in\Phi,M\models\varphi[P_{\ell_{1}},\ldots,a_{j_{1}},\ldots]\}.\]
For \(n=m+1\):
\[t=\{th_{\bar{k}}^{m}((M,\bar{P},\bar{a}{}^{\frown}\bar{b}),\Phi):\bar{b}\in|M|^{\bar{k}(m)}\}.\]
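To fix ideas, here is a small instance of this definition (an illustrative sketch only: we take \(L=\{<\}\), \(\Phi\) the atomic formulas, \(\bar{a}\) the empty sequence, \(\bar{k}(0)=2\), and we list only the nontrivial instances that hold):
\[\begin{array}{ll}M=(\{0,1\},<),\quad\bar{P}=\langle\{0\}\rangle:&\\ th^{1}_{\bar{k}}((M,\bar{P}),\Phi)&=\{th^{0}_{\bar{k}}((M,\bar{P},\bar{b}),\Phi):\bar{b}\in|M|^{2}\}\\ &=\bigl\{\{x_{1}=x_{2},x_{1}\in X_{1},x_{2}\in X_{1}\},\ \{x_{1}<x_{2},x_{1}\in X_{1}\},\\ &\qquad\{x_{2}<x_{1},x_{2}\in X_{1}\},\ \{x_{1}=x_{2}\}\bigr\},\end{array}\]
the four members coming from \(\bar{b}=\langle 0,0\rangle,\langle 0,1\rangle,\langle 1,0\rangle,\langle 1,1\rangle\) respectively.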
**Definition 2.3**.: For any \(L\)-model \(M,\bar{P}\in\underline{P}(M)\), a finite set \(\Phi\) of formulas \(\varphi(X_{1},\ldots,x_{1}\ldots)\in L,n,\bar{k}\) of length \(\geqq n+1\), define \(T=Th_{\bar{k}}^{n}((M,\bar{P}),\Phi)\) by induction on \(n\):
For \(n=0\):
\[T=th_{\bar{k}}^{1}((M,\bar{P}),\Phi).\]
For \(n=m+1\):
\[T=\{Th_{\bar{k}}^{m}((M,\bar{P}{}^{\frown}\bar{Q}),\Phi):\bar{Q}\in\underline{P}(M)^{\bar{k}(m+1)}\}.\]
Remarks:
1. If \(\Phi\) is the set of atomic formulas we shall omit it and write \(Th_{\bar{k}}^{n}(M,\bar{P})\).
2. We always assume \(\bar{k}(i)\geqq 1\) for any \(i<\ell(\bar{k})\), and \(\bar{k}(0)\geqq m_{R}\) if \(R\in L(M)\) is \(m_{R}\)-place.
3. If we write \(\bar{k}(i)\) for \(i\geqq\ell(\bar{k})\), then we mean \(1\), and when we omit \(\bar{k}\) we mean \(\langle\max\{m_{R}:R\in L(M)\},1,\ldots\rangle\).
4. We could have mixed Definition 2.2, and 2.3, and obtained a similar theorem which would be more refined.
**Lemma 2.4**.:
1. _For every formula_ \(\psi(\bar{X})\in L^{M}(M)\) _there is an_ \(n\) _such that from_ \(Th^{n}_{\bar{k}}(M,\bar{P})\) _we can find effectively whether_ \(M\models\psi[\bar{P}]\)_._
2. _For every_ \(L,\bar{k},n,\Phi\subseteqq L\)_, and_ \(m\) _there is a set_ \(\Psi=\{\psi_{\ell}(\bar{X}):\ell<\ell_{0}(<\omega),\ell(\bar{X})=m\}(\psi_{\ell}\in L^{M})\) _such that for any_ \(L\)_-models_ \(M,N\) _and_ \(\bar{P}\in\underline{P}(M)^{m},\bar{Q}\in\underline{P}(N)^{m}\) _the following hold:_ 1. \(Th^{n}_{\bar{k}}((N,\bar{Q}),\Phi)\) _can be computed from_ \(\{\ell<\ell_{0}:N\models\psi_{\ell}[\bar{Q}]\}\)_._ 2. \(Th^{n}_{\bar{k}}((N,\bar{Q}),\Phi)=Th^{n}_{\bar{k}}((M,\bar{P}),\Phi)\) _if and only if for any_ \(\ell<\ell_{0},M\models\psi_{\ell}[\bar{P}]\Leftrightarrow N\models\psi_{\ell}[\bar{Q}]\)_._
Proof: Immediate. In (A) it suffices to take for \(n\) the quantifier depth of \(\psi\).
**Lemma 2.5**.:
1. _For given_ \(L,n,m,\bar{k}\)_, each_ \(Th^{n}_{\bar{k}}(M,\bar{P})\) _is hereditarily finite, and we can compute the set of formally possible_ \(Th^{n}_{\bar{k}}(M,\bar{P}),\ell(\bar{P})=m,M\) _an_ \(L\)_-model. The same holds for_ \(\Phi\)_._
2. _If_ \(\bar{\ell}(0)\geqq\bar{k}(0),1=p_{0}<p_{1}<p_{2}<\ldots<p_{n}\leqq m\) _and for_ \(1\leqq i\leqq n,\;\bar{k}(i)\leqq\sum_{p_{i-1}\leqq j\leqq p_{i}}\bar{\ell}(j)\)_, then from_ \(Th^{m}_{\bar{\ell}}((M,\bar{P}),\Phi)\) _we can effectively compute_ \(Th^{n}_{\bar{k}}((M,\bar{P}),\Phi)\)_._
3. _For every_ \(n,\bar{k},\bar{\ell}\) _we can compute_ \(m\) _such that from_ \(Th^{m}_{\bar{\ell}}((M,\bar{P}),\Phi)\) _we can effectively compute_ \(Th^{n}_{\bar{k}}((M,\bar{P}),\Phi)\)_._
4. _Suppose in Definition_ 2.3 _we make the following changes: We restrict ourselves to partition_ \(\bar{P}\)_, and let_ \(\bar{Q}\) _be a partition refining_ \(\bar{P}\)_, which divides each_ \(P_{i}\) _to_ \(2^{\bar{k}(m)}\) _parts. What we get we call_ \(pTh^{n}_{\bar{k}}((M,\bar{P}),\Phi)\)_. Then from_ \(pTh^{n}_{\bar{k}}((M,\bar{P}),\Phi)\) _we can effectively compute_ \(Th^{n}_{\bar{k}}((M,\bar{P}),\Phi)\)_, and vice versa._
5. _Let_ \(K,n,\Phi\) _be given. If for every_ \(\bar{k}\) _there is an_ \(\bar{\ell}\) _such that for every_ \(m,M,N\in K^{m}\)_,_ \[Th^{n}_{\bar{\ell}}(M,\Phi)=Th^{n}_{\bar{\ell}}(N,\Phi)\Rightarrow Th^{n+1}_{\bar{k}}(M,\Phi)=Th^{n+1}_{\bar{k}}(N,\Phi)\] _then for every_ \(m,\bar{k}\) _there is an_ \(\bar{\ell}\) _such that for any_ \(n^{\prime},M,N\in K^{m}\)__ \[Th^{n}_{\bar{\ell}}(M,\Phi)=Th^{n}_{\bar{\ell}}(N,\Phi)\Rightarrow Th^{n^{\prime}}_{\bar{k}}(N,\Phi)=Th^{n^{\prime}}_{\bar{k}}(M,\Phi).\]
Remark: This is parallel to elimination of quantifiers.
(F) In (E), if in the hypothesis \(\bar{\ell}\) can be found effectively from \(\bar{k}\) then in the conclusion, \(\bar{\ell}\) can be found effectively from \(m,\bar{k}\). If in addition \(\{Th^{n}_{\bar{k}}(M,\Phi):M\in K^{m}\}\) is recursive in \(\bar{k},m\) then \(\{Th^{p}_{\bar{k}}(M,\Phi):M\in K\}\) is recursive in \(p,\bar{k}\).
Proof: Immediate.
The following generalizes the ordered sum of ordered sets (which will be our main interest) to the notion of a generalized sum of models. (Parts (1),(2),(3) of the definition are technical preliminaries.)
**Definition 2.6**.: Let \(L_{1},L_{2},L_{3}\) be first-order languages, \(M_{i}\) an \(L_{1}\)-model (for \(i\in|N|),N\) an \(L_{2}\)-model, and we shall define the \(L_{3}\)-model \(M=\sum_{i\in|N|}^{\sigma}M_{i}\) (the generalized sum of the \(M_{i}\)'s relative to \(\sigma\)).7
Footnote 7: We assume, of course, that the \(|M_{i}|\)’s are pairwise disjoint.
1. An \(n\)-condition \(\tau\) is a triple \(\langle E,\Phi,\Psi\rangle\) where: 1. \(E\) is an equivalence relation on \(\{0,1,\ldots,n-1\}\). 2. \(\Phi\) is a finite set of formulas of the form \(\varphi(x_{j_{1}},\ldots,x_{j_{k}})\) where \(j_{1},\ldots,j_{k}\) are \(E\)-equivalent and \(<n\); and \(\varphi\in L_{1}\). 3. \(\Psi\) is a finite set of formulas of the form \(\psi(x_{j_{1}},\ldots,x_{j_{k}})\) where \(j_{1},\ldots,j_{k}<n,\psi\in L_{2}\).
2. If \(a_{0},\ldots,a_{n-1}\in\bigcup_{i\in|N|}M_{i},\tau=\langle E,\Phi,\Psi\rangle\) is an \(n\)-condition, \(a_{\ell}\in M_{i(\ell)}\), then we say \(\langle a_{0},\ldots,a_{n-1}\rangle\) satisfies \(\tau\) if: 1. \(i(\ell)=i(m)\Leftrightarrow\ell Em\); 2. \(\varphi(x_{j_{1}},\ldots,x_{j_{k}})\in\Phi\Rightarrow M_{i(j_{1})}\models\varphi[a_{j_{1}},\ldots,a_{j_{k}}]\); 3. \(\psi(x_{j_{1}},\ldots,x_{j_{k}})\in\Psi\Rightarrow N\models\psi[i(j_{1}),\ldots,i(j_{k})]\).
3. The rule \(\sigma\) is \(\langle L_{1},L_{2},L_{3},\sigma^{*}\rangle\) where \(\sigma^{*}\) is a function whose domain is the set of predicates of \(L_{3}\); if \(R\) is an \(n\)-place predicate in \(L_{3}\), then \(\sigma^{*}(R)\) is a finite set of \(n\)-conditions.
4. \(M=\sum_{i\in|N|}^{\sigma}M_{i}\) is an \(L_{3}\)-model, whose universe is \(\cup_{i\in|N|}|M_{i}|\), and for every predicate \(R\in L_{3}\), \(R^{M}=\{\langle a_{0},\ldots,a_{n-1}\rangle:\langle a_{0},\ldots,a_{n-1}\rangle\) satisfies some \(\tau\in\sigma^{*}(R)\}\). Let \(\Phi(\sigma)\;(\Psi(\sigma))\) be the set of all formulas \(\varphi_{j}\in L_{1}(\sigma)\;(\psi_{p}\in L_{2}(\sigma))\) appearing in the \(\sigma^{*}(R)\)'s, \(R\in L_{3}(\sigma)\), and the equality.
Remarks:
1. We use the convention that \(\sum_{i\in N}^{\sigma}(M_{i},\bar{P}^{i})=(\sum_{i\in N}^{\sigma}M_{i},\cup_{i\in N}\bar{P}^{i})\) where for \(\bar{P}^{i}=\langle P_{1}^{i},\ldots,P_{m}^{i}\rangle,\bigcup_{i}\bar{P}^{i}=\langle\bigcup_{i}P_{1}^{i},\ldots,\bigcup_{i}P_{m}^{i}\rangle\).
2. We could have defined the sum more generally, by allowing the universe and the equality to be defined just as the other relations.
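For example (an illustrative sketch), the ordered sum of ordered sets — the case we shall mainly use — is obtained from Definition 2.6 by taking \(L_{1}=L_{2}=L_{3}=\{<\}\) and letting \(\sigma^{*}(<)\) consist of the two \(2\)-conditions
\[\tau_{1}=\langle E_{1},\{x_{0}<x_{1}\},\varnothing\rangle,\qquad\tau_{2}=\langle E_{2},\varnothing,\{x_{0}<x_{1}\}\rangle,\]
where \(E_{1}\) makes \(0,1\) equivalent and \(E_{2}\) does not. Thus in \(\sum_{i\in|N|}^{\sigma}M_{i}\) we have \(a<b\) if and only if \(a,b\) lie in the same \(M_{i}\) and \(a<b\) there (\(\tau_{1}\)), or \(a\in M_{i},b\in M_{j}\) with \(i<j\) in \(N\) (\(\tau_{2}\)).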
**Lemma 2.7**.: _For any \(\sigma,n,m,\bar{k}\), if for \(\ell=1,2\), \(\bar{P}_{i}^{\ell}\in\underline{P}(M_{i}^{\ell})^{m}\) and for every \(i\in N\),_
\[Th_{\bar{k}}^{n}((M_{i}^{1},\bar{P}_{i}^{1}),\Phi(\sigma))=Th_{\bar{k}}^{n}((M_{i}^{2},\bar{P}_{i}^{2}),\Phi(\sigma)),\]
_then_
\[Th_{\bar{k}}^{n}(\sum_{i\in N}^{\sigma}(M_{i}^{1},\bar{P}_{i}^{1}))=Th_{\bar{k}}^{n}(\sum_{i\in N}^{\sigma}(M_{i}^{2},\bar{P}_{i}^{2})).\]
**Theorem 2.8**.: _For any \(\sigma,n,m,\bar{k}\) we can find an \(\bar{r}\) such that: if \(M=\sum_{i\in N}^{\sigma}M_{i},t_{i}=Th_{\bar{k}}^{n}((M_{i},\bar{P}_{i}),\Phi(\sigma))\), and \(Q_{t}=\{i\in N:t_{i}=t\},\ell(\bar{P}_{i})=m\), then from \(Th_{\bar{r}}^{n}((N,\ldots,Q_{t},\ldots),\Psi(\sigma))\) we can effectively compute \(Th_{\bar{k}}^{n}(M,\bigcup_{i}\bar{P}_{i})\) (which is uniquely determined)._
**Definition 2.9**.:
1. For a class \(K\) of models \[Th_{\bar{k}}^{n}(K,\Phi)=\{Th_{\bar{k}}^{n}(M,\Phi):M\in K\}.\]
2. The monadic theory of \(K\) is the set of monadic sentences true in every model in \(K\).
3. For any \(\bar{\sigma},K_{1},K_{2}\), let \(C\ell^{\bar{\sigma}}(K_{1},K_{2})\) be the minimal class \(K\) such that 1. \(K_{1}\subseteqq K\), 2. if \(j<\ell(\bar{\sigma}),M_{i}\in K,N\in K_{2}\) then \(\sum_{i\in|N|}^{\bar{\sigma}(j)}M_{i}\in K\).
_Conclusion 2.10_.: Suppose \(\bar{\sigma},n,\bar{k},m\) are given, \(L_{1}(\sigma_{i})=L_{3}(\sigma_{i})=L,L_{2}(\sigma_{i})=L_{2}\); \(L,L_{2}\) are finite and each \(\Phi(\sigma_{i}),\Psi(\sigma_{i})\) is a set of atomic formulas. There is an \(\bar{r}\) such that for every \(K_{1},K_{2}\), from \(Th^{n}_{\bar{r}}(K_{2}^{\bar{r}(n+1)}),Th^{n}_{\bar{k}}(K_{1}^{m})\) we can effectively compute \(Th^{n}_{\bar{k}}(K^{m})\) where \(K=C\ell^{\bar{\sigma}}(K_{1},K_{2})\) (remember \(K_{1}^{m}=\{(M,\bar{P}):M\in K_{1},\bar{P}\in\underline{P}(M)^{m}\}\)); here \(K_{1}\) should be a class of \(L\)-models, \(K_{2}\) a class of \(L_{2}\)-models.
Proof: For every \(j<\ell(\bar{\sigma})\) let \(\bar{r}^{j}\) relate to \(\bar{\sigma}(j),n,\bar{k},m\) just as \(\bar{r}\) relates to \(\sigma,n,\bar{k},m\) in Theorem 2.8. Now choose an \(\bar{r}\) such that for every \(\ell\leqq n,\bar{r}(\ell)\geqq\bar{r}^{j}(\ell)\).
Let \(T\) be the set of formally possible \(Th^{n}_{\bar{k}}(M,\bar{P})\), for \(M\) an \(L\)-model, \(\ell(\bar{P})=m\), and we can define \(\bar{r}(n+1)=|T|\). Let \(T=\{t(0),\ldots,t(p-1)\}\) (so \(p=|T|=\bar{r}(n+1)\)).
Clearly, by the definition of \(\bar{r}^{j}\), and by (a trivial case of) 2.3(B), if \(M=\sum_{i\in N}^{\bar{\sigma}(j)}M_{i},t_{i}=Th^{n}_{\bar{k}}(M_{i},\bar{P}_{i}),Q_{\ell}=\{i\in N:t_{i}=t(\ell)\},\ell(\bar{P}_{i})=m\), then from \(t=Th^{n}_{\bar{r}}(N,\ldots,Q_{\ell},\ldots)_{\ell<p}\) we can effectively compute \(Th^{n}_{\bar{k}}(M,\bigcup_{i}\bar{P}_{i})\), and denote it by \(g(t)\).
Now define by induction on \(q\) sets \(T_{q}\subseteqq T\).
Let \(T_{0}=Th^{n}_{\bar{k}}(K_{1}^{m})\), and if \(T_{q}\) is defined let \(T_{q+1}\) be the union of \(T_{q}\) with the set of \(t\in T\) satisfying the following condition:
* There is a \(t^{*}\in Th^{n}_{\bar{r}}(K_{2}^{\bar{r}(n+1)})\) such that \(t=g(t^{*})\), and if \(t^{*}\) implies that \(Q_{\ell}\) is not empty, then \(t(\ell)\in T_{q}\).
Remark: Clearly if \(t^{*}=Th^{n}_{\bar{r}}(N,\ldots,Q_{\ell},\ldots)\) then from \(t^{*}\) we can compute \(Th^{0}_{\bar{r}}(N,\ldots,Q_{\ell},\ldots)\) and hence know whether \(Q_{\ell}\neq\varnothing\).
Clearly \(T_{0}\subseteqq T_{1}\subseteqq T_{2}\subseteqq\ldots\subseteqq T\) so, as \(|T|=p\), for some \(q\leqq p,T_{q}=T_{q+1}\).
Now let
\[K_{*}=\{M\in K:\ \mbox{for every}\ \bar{P}\in\underline{P}(|M|)^{m},\ Th^{n}_{\bar{k}}(M,\bar{P})\in T_{q}\}.\]
Clearly \(Th^{n}_{\bar{k}}(K^{m}_{*})\subseteqq T_{q}\), and we can effectively find \(T_{q}\). Now if \(N\in K_{2},M_{i}\in K_{*}\) for \(i\in N\), and \(M=\sum_{i\in N}^{\bar{\sigma}(j)}M_{i}\), then for any \(\bar{P}\in\underline{P}(|M|)^{m},Th^{n}_{\bar{k}}(M,\bar{P})\in T_{q+1}=T_{q}\) by the definition of \(T_{q+1}\), and \(M\in K\) by the definition of \(K\), hence \(M\in K_{*}\). As clearly \(K_{1}\subseteqq K_{*}\subseteqq K\), by the definition of \(K=C\ell^{\bar{\sigma}}(K_{1},K_{2})\) necessarily \(K_{*}=K\). So it suffices to prove that \(Th^{n}_{\bar{k}}(K_{*}^{m})\supseteqq T_{\ell}\). (Take \(\ell=q\).) This is done by induction on \(\ell\).
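Schematically, the computation just described is
\[T_{0}=Th^{n}_{\bar{k}}(K_{1}^{m}),\qquad T_{q+1}=T_{q}\cup\bigl\{g(t^{*}):t^{*}\in Th^{n}_{\bar{r}}(K_{2}^{\bar{r}(n+1)}),\ t^{*}\ \text{implies}\ Q_{\ell}\neq\varnothing\Rightarrow t(\ell)\in T_{q}\bigr\},\]
and \(Th^{n}_{\bar{k}}(K^{m})\) is the first \(T_{q}\) with \(T_{q}=T_{q+1}\).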
**Lemma 2.11**.: _If \(M\) is a finite model, then for any \(\Phi,n,\bar{k}\) we can effectively compute \(Th^{n}_{\bar{k}}(M,\Phi)\) from \(M\)._
_Remark 2.12_.: Naturally we can ask whether we can add to the monadic quantifier, or replace it by, other quantifiers, without essentially changing the conclusions of this section. It is easily seen that, e.g., the following quantifiers are suitable:
1. \((\exists^{f}X)\) -there is a finite set \(X\)
2. \((\exists^{\lambda}X)\) -there is a set \(X,|X|<\lambda\) (\(\lambda\) a regular cardinal); when dealing with ordered sums of linear orders, also
3. \((\exists^{wo}X)\) -there is a well-ordered set \(X\),
4. \((\exists_{\lambda}X)\) -there is a set \(X\) with no increasing nor decreasing sequence in it of length \(\lambda\) (\(\lambda\) a regular cardinal).
If we add some of those quantifiers, we should, in the definition of \(Th^{0}_{n}((M,\bar{P}),\Phi)\) state which Boolean combinations of the \(P_{\ell}\)'s are in the range of which quantifiers. If we e.g., replace the monadic quantifier by \((\exists^{\lambda}X)\), we should restrict the \(P\)'s to sets of cardinality \(<\lambda\).
Another possible generalization is to generalized products. Let \(M=\prod_{i\in N}^{\sigma}M_{i}\) (where \(L(M_{i})=L_{1}(\sigma),L(N)=L_{2}(\sigma),L(M)=L_{3}(\sigma)\)) means: \(|M|=\prod_{i\in N}|M_{i}|\), and if \(f_{1},\ldots,f_{n}\in M,M\models R[f_{1},\ldots,f_{n}]\) if and only if \(N\models\psi_{R}[\ldots,P_{\ell},\ldots]\) where
\[P_{\ell}=\{i\in N:M_{i}\models\varphi_{\ell}^{R}[f_{1}(i),\ldots,f_{n}(i)]\}\]
(and \(\varphi_{\ell}^{R}\) is a first-order formula from \(L_{1}(\sigma)\), \(\psi_{R}\) a monadic formula from \(L_{2}(\sigma)\)). Then, of course, we use \(Th^{n}_{\bar{k}}(N,\underline{P}),th^{n}_{\bar{k}}(M,\bar{a})\). All our theorems generalize easily, but still no application was found.
If not specified otherwise, we restrict ourselves to the class \(K_{\text{ord}}\) of models of the theory of order (sometimes with one-place relations which will be denoted, e.g., \((M,\bar{P})\)). \(\sigma=\sigma_{\text{ord}}\) is the ordered sum of ordered sets and is omitted. Therefore \(\Psi(\sigma)\) and \(\Phi(\sigma)\) are the set of atomic formulas. For the sum of two orders we write \(M_{1}+M_{2}\). The ordinals, the reals \(R\), and the rationals \(Q\) have their natural orders. If \(M=\sum_{i\in|N|}M_{i}\) we write \(Th^{n}_{\bar{k}}(M,\bar{P})=\sum_{i\in|N|}Th^{n}_{\bar{k}}(M_{i},\bar{P}_{i})\) where \(\bar{P}=\bigcup_{i}\bar{P}_{i}\). Let \(T(n,m,\bar{k})\) be the set of formally possible \(Th^{n}_{\bar{k}}(M,\bar{P}),M\) an order, \(\ell(\bar{P})=m\).
**Corollary 2.13**.: _For any \(n,m,\bar{k}\) there is \(\bar{r}=\bar{r}(n,m,\bar{k})\) such that if \(P_{t}=\{i\in N:t_{i}=t\}\) for \(t\in T(n,m,\bar{k})\) then \(\sum_{i\in N}t_{i}\) can be effectively computed from \(Th^{n}_{\bar{r}}(N,\ldots,P_{t},\ldots)\)._
## 3. Simple application for decidability
Using Section 2 we shall prove here some theorems, most of them known. We prove the decidability of the theories of the finite orders and the countable ordinals [1], and show that from the monadic theory of \(\lambda\) we can compute effectively the monadic theory of \(K=\{\alpha:\alpha<\lambda^{+}\}\) (this was shown for \(\lambda=\omega,\lambda=\omega_{1}\) in [1]). We do not try to prove the results on definability and elimination of quantifiers. For finite orders this can be done and the method becomes similar to that of automaton theory. For \(\omega,\{\alpha:\alpha<\omega_{1}\},\omega_{1}\) this
can be done by using the previous cases (e.g., for \(\omega\) using the result on the finite orders). We can prove the decidability of the weak monadic theory (with \(\exists^{f}\) only) of the \(n\)-successors theory by the method of this section (Doner [1] proved it). It would be very interesting if we could have proved in this way that the monadic theory of the \(2\)-successor theory is decidable (Rabin [10] proved it).
In order to use Section 1 we should note
**Lemma 3.1**.: _For any \(n,\bar{k}\) and \((N,\bar{P})\), the coloring \(f_{\bar{k}}^{n}\) on \(N\) is additive where_
\[f_{\bar{k}}^{n}(a,b)=Th_{\bar{k}}^{n}((N,\bar{P})\!\upharpoonright\![a,b)),\]
_where \((N,\bar{P})\!\upharpoonright\![a,b)\) is a submodel of \((N,\bar{P})\) with the universe \([a,b)=\{x\in N:a\leqq x<b\}\)._
Proof: By lemma 2.7.
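Spelled out, additivity means here that for \(a<b<c\) in \(N\),
\[f_{\bar{k}}^{n}(a,c)=Th_{\bar{k}}^{n}((N,\bar{P})\!\upharpoonright\![a,c))=Th_{\bar{k}}^{n}((N,\bar{P})\!\upharpoonright\![a,b))+Th_{\bar{k}}^{n}((N,\bar{P})\!\upharpoonright\![b,c))=f_{\bar{k}}^{n}(a,b)+f_{\bar{k}}^{n}(b,c),\]
since \([a,c)\) is the ordered sum of \([a,b)\) and \([b,c)\).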
Let us list some immediate claims.
**Lemma 3.2**.:
1. _If for any_ \(n,\bar{k}\) _we can compute effectively_ \(Th_{\bar{k}}^{n}(K)\)_, then the monadic theory of_ \(K\) _is decidable; and vice-versa._
2. _If the monadic theory of_ \(K\) _is decidable then so is the monadic theory of_ \(K^{\prime}\) _where_ \(K^{\prime}\) _is the class of:_ 1. _submodels of_ \(K\)_,_ 2. _initial segments of orders from_ \(K\)_,_ 3. _orders which we get by adding (deleting) first (last) elements from orders of_ \(K\)_,_ 4. _converses of orders from_ \(K\)_,_ 5. \((M,\bar{P}),M\in K,\bar{P}\in\underline{P}(M)^{m}\)_._
Proof: Immediate.
**Theorem 3.3**.: _The monadic theory of the class \(K_{\rm fin}\) of finite orders is decidable._
Proof: Let \(K_{n}\) be the class of orders of cardinality \(n\); up to isomorphism \(K_{n}\) has only one element, \(n\). Hence by Lemma 2.11 we can compute \(Th_{\bar{k}}^{n}(K_{i})\). Hence by Conclusion 2.10, for every \(n,\bar{k}\) we can compute \(Th_{\bar{k}}^{n}(K)\) where \(K=C\ell(K_{1},K_{2})\). But clearly \(K\) is the class of finite orders. So by 3.2(A) we finish.
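For example (writing \(n\) for the \(n\)-element order),
\[2\cong\sum_{i\in 2}M_{i}\ (M_{0}\cong M_{1}\cong 1),\qquad n+1\cong\sum_{i\in 2}M_{i}\ (M_{0}\cong n,\ M_{1}\cong 1),\]
so by induction every finite order belongs to \(C\ell(K_{1},K_{2})\).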
**Theorem 3.4**.: _The monadic theory of \(\omega\) is decidable._
Proof: We shall compute \(\{Th_{\bar{k}}^{n}(\omega,\bar{P}):\bar{P}\in\underline{P}(\omega)^{m}\}\) by induction on \(n\), for every \(\bar{k},m\) simultaneously.
For \(n=0\) it is easy.
Suppose we have done it for \(n-1\) and we shall do it for \(n,m,\bar{k}\). By the induction hypothesis we can compute \(Th_{\ell}^{n}(\omega)\) for every \(\bar{\ell}\), in particular for
\(\bar{r}=\bar{r}(n,m,\bar{k})\) (see 2.13). Now for any \(M=(\omega,P_{1},\ldots,P_{m})\), by 1.1 we can find an \(f_{\bar{k}}^{n}\)-homogeneous set \(\{a_{i}:i<\omega\}(a_{i}<a_{i+1})\). So letting
\[t=Th_{\bar{k}}^{n}((\omega,\bar{P})\!\upharpoonright\![0,a_{0})),\ s=Th_{\bar{k}}^{n}((\omega,\bar{P})\!\upharpoonright\![a_{i},a_{j}))\ \mbox{for}\ i<j;\]
we have
\(Th_{\bar{k}}^{n}(\omega,\bar{P})=Th_{\bar{k}}^{n}((\omega,\bar{P})\!\upharpoonright\![0,a_{0}))+\sum_{i<\omega}Th_{\bar{k}}^{n}((\omega,\bar{P})\!\upharpoonright\![a_{i},a_{i+1}))=t+\sum_{i<\omega}s\).
As \(Th_{\bar{r}}^{n}(\omega)\) is known, by 2.13, we can compute \(Th_{\bar{k}}^{n}(M,\bar{P})\) from \(s,t\). Now for any \(t,s\in Th_{\bar{k}}^{n}(K_{\rm fin}^{m}),s\neq Th_{\bar{k}}^{n}(0,\bar{P}),\bar{P}\in\underline{P}(\varnothing)^{m}\), there is an \((\omega,\bar{P})\) such that \(Th_{\bar{k}}^{n}(\omega,\bar{P})=t+\sum_{i<\omega}s\).
As we know \(Th_{\bar{k}}^{n}(K_{\rm fin}^{m})\) by 3.3, and can easily find whether \(s\in Th_{\bar{k}}^{n}(K_{\rm fin}^{m})\setminus Th_{\bar{k}}^{n}(\{0\})\), we finish.
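In other words, the proof yields the characterization
\[\{Th_{\bar{k}}^{n}(\omega,\bar{P}):\bar{P}\in\underline{P}(\omega)^{m}\}=\{t+\sum_{i<\omega}s:t,s\in Th_{\bar{k}}^{n}(K_{\rm fin}^{m}),\ s\neq Th_{\bar{k}}^{n}(0,\bar{P})\},\]
and the right-hand side is computable by Theorem 3.3.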
**Theorem 3.5**.:
* _From the monadic theory of_ \(\lambda\) _(_\(\lambda\) _a cardinal) we can compute effectively the monadic theory of_ \(K=\{\alpha:\alpha<\lambda^{+}\}\)_._
* _Moreover every monadic sentence which has a model \(\alpha<\lambda^{+}\), has a model \(\beta<\lambda^{\omega}\)._
* 1. _For every_ \(\alpha<\lambda^{+}\) _there is a_ \(\beta<\lambda^{\omega+1}+\lambda^{\omega}\) _which has the same monadic theory_
* _if_ \(\mu\leqq\lambda\) _and for every regular_ \(\chi\leqq\lambda\) _there is a_ \(\chi^{\prime}\leqq\mu\) _such that_ \(\chi,\chi^{\prime}\) _have the same monadic theory, then we can choose_ \(\beta<\lambda^{\omega}\mu+\lambda^{\omega}\)_._8__ Footnote 8: In fact, \(\beta<M^{\omega+1}+M^{\omega}\).
* _If we could always find_ \(\chi^{\prime}<\mu\) _then_ \(\beta<\lambda^{\omega}\mu\)_, and if_ \(\lambda=\omega,\beta<\lambda^{\omega}+\lambda^{\omega}\)_._9__ Footnote 9: In the first case \(\beta<M\).
* _Also, for every_ \(\alpha<\lambda^{+}\)_, there are_ \(n<\omega,\lambda_{1},\ldots,\lambda_{n}\leqq\lambda\)_, such that the monadic theory of_ \(\alpha\) _is recursive in the monadic theories of_ \(\lambda_{1},\ldots,\lambda_{n}\)_, and_ \(\lambda_{i}\) _is a regular cardinal._
* _In general, the bounds in (B),(C) cannot be improved._
Remark: Buchi [1] already proved (B),(C) for \(\lambda=\omega\) and (B) for \(\lambda=\omega_{1}\).
Proof:
1. Define \(K_{1}=K_{2}=\{\alpha:\alpha\leqq\lambda\}\); by 3.2(A)(i) and 3.2(B) we can compute \(Th_{\bar{k}}^{n}(K_{i})\) for every \(n,\bar{k}\) and \(i=1,2\) (from the monadic theory of \(\lambda\), of course). Hence by 2.10 we can compute \(Th_{\bar{k}}^{n}(K^{\prime})\) for every \(n,\bar{k}\) where \(K^{\prime}=C\ell(K_{1},K_{2})\). Clearly every member of \(K^{\prime}\) is well-ordered and has cardinality \(\leqq\lambda\). So up to isomorphism \(K^{\prime}\subseteqq K\). We should prove now only that equality holds. If not, let \(\alpha\) be the first ordinal not in \(K^{\prime}\), \(\alpha<\lambda^{+}\). If \(\alpha\) is a successor ordinal, \(\alpha-1\in K^{\prime};\ 1,2\in K^{\prime}\) hence \(\alpha=(\alpha-1)+1\in K^{\prime}\), a contradiction. If \(\alpha\) is a limit ordinal, its cofinality is \(\leqq\lambda\). Let \(\alpha=\sum_{i<i_{0}}\alpha_{i},i_{0}\leqq\lambda,\alpha_{i}<\alpha\); then \(i_{0},\alpha_{i}\in K^{\prime}\) so \(\alpha\in K^{\prime}\), a contradiction.
2. Let us first show that (*) for every \(n,\bar{k}\) there is \(q=q(n,\bar{k})<\omega\) such that if \(\alpha,\beta<\lambda^{+},\operatorname{cf}(\alpha)=\operatorname{cf}(\beta)\), and \(\alpha,\beta\) are divisible by \(\lambda^{q}\), then \(Th^{n}_{\bar{k}}(\alpha)=Th^{n}_{\bar{k}}(\beta)\). For \(n=0\) it is immediate, and we prove it for \(n\). By the pigeonhole principle there are \(1<\ell<p\leqq 2|T(n,0,\bar{k})|+1\) such that \(Th^{n}_{\bar{k}}(\lambda^{\ell})=Th^{n}_{\bar{k}}(\lambda^{p})\). Clearly, \[\lambda^{\ell+2}=\sum_{i<\lambda}(\lambda^{\ell+1}+\lambda^{\ell}).\] Hence \[\begin{array}{ll}Th^{n}_{\bar{k}}(\lambda^{\ell+2})&=Th^{n}_{\bar{k}}[\sum_{i<\lambda}(\lambda^{\ell+1}+\lambda^{\ell})]=\sum_{i<\lambda}Th^{n}_{\bar{k}}(\lambda^{\ell+1}+\lambda^{\ell})\\ &=\sum_{i<\lambda}[Th^{n}_{\bar{k}}(\lambda^{\ell+1})+Th^{n}_{\bar{k}}(\lambda^{\ell})]=\sum_{i<\lambda}[Th^{n}_{\bar{k}}(\lambda^{\ell+1})+Th^{n}_{\bar{k}}(\lambda^{p})]\\ &=\sum_{i<\lambda}Th^{n}_{\bar{k}}(\lambda^{p})=\sum_{i<\lambda}Th^{n}_{\bar{k}}(\lambda^{\ell})=Th^{n}_{\bar{k}}(\sum_{i<\lambda}\lambda^{\ell})=Th^{n}_{\bar{k}}(\lambda^{\ell+1}).\end{array}\] Hence we prove by induction on \(m\), \(\ell<m<\omega\), that \(Th^{n}_{\bar{k}}(\lambda^{m})=Th^{n}_{\bar{k}}(\lambda^{\ell+1})\); choose \(q=q(n,\bar{k})=\ell+1\). Let \(\alpha,\beta<\lambda^{+}\) be divisible by \(\lambda^{q}\) and have the same cofinality, and we shall prove \(Th^{n}_{\bar{k}}(\alpha)=Th^{n}_{\bar{k}}(\beta)\). Clearly it suffices to prove \(Th^{n}_{\bar{k}}(\alpha)=Th^{n}_{\bar{k}}(\lambda^{q}\mu)\) where \(\mu=\operatorname{cf}(\alpha)\). Let us prove it by induction on \(\alpha\), and let \(\alpha=\lambda^{q}\gamma\). If \(\gamma=\gamma_{1}+1\), then for \(\gamma_{1}=0\) it is trivial, and for \(\gamma_{1}>0\) \[\begin{array}{ll}Th^{n}_{\bar{k}}(\alpha)&=Th^{n}_{\bar{k}}(\lambda^{q}\gamma_{1}+\lambda^{q})=Th^{n}_{\bar{k}}(\lambda^{q}\gamma_{1})+Th^{n}_{\bar{k}}(\lambda^{q})\\ &=Th^{n}_{\bar{k}}[\lambda^{q}\cdot\operatorname{cf}(\lambda^{q}\gamma_{1})]+Th^{n}_{\bar{k}}(\lambda^{q+2})\\ &=Th^{n}_{\bar{k}}[\lambda^{q}\cdot\operatorname{cf}(\lambda^{q}\gamma_{1})+\lambda^{q+2}]=Th^{n}_{\bar{k}}(\lambda^{q+2})=Th^{n}_{\bar{k}}(\lambda^{q}\cdot\lambda)\\ &=Th^{n}_{\bar{k}}[\lambda^{q}\cdot\operatorname{cf}(\alpha)].\end{array}\] If \(\gamma\) is a limit ordinal, \(\gamma=\sum_{i<\operatorname{cf}(\gamma)}\gamma_{i}\), each \(\gamma_{i}<\gamma\) a successor, \[\begin{array}{ll}Th^{n}_{\bar{k}}(\alpha)&=Th^{n}_{\bar{k}}[\lambda^{q}(\sum_{i<\operatorname{cf}(\gamma)}\gamma_{i})]=Th^{n}_{\bar{k}}(\sum_{i<\operatorname{cf}(\gamma)}\lambda^{q}\gamma_{i})\\ &=\sum_{i<\operatorname{cf}(\gamma)}Th^{n}_{\bar{k}}(\lambda^{q}\gamma_{i})=\sum_{i<\operatorname{cf}(\gamma)}Th^{n}_{\bar{k}}[\lambda^{q}\cdot\operatorname{cf}(\lambda^{q}\gamma_{i})]\\ &=\sum_{i<\operatorname{cf}(\gamma)}Th^{n}_{\bar{k}}(\lambda^{q+1})=\sum_{i<\operatorname{cf}(\gamma)}Th^{n}_{\bar{k}}(\lambda^{q})\\ &=Th^{n}_{\bar{k}}[\lambda^{q}\cdot\operatorname{cf}(\gamma)].\end{array}\] So we have proved (*). Let us prove (B). Let \(\alpha<\lambda^{+}\) be a model of a sentence \(\psi\). Choose by 2.4(A) \(n,\bar{k}\) such that from \(Th^{n}_{\bar{k}}(\beta)\) we know whether \(\beta\models\psi\), let \(q=q(n,\bar{k})\), and let \(\alpha=\lambda^{q}\beta+\gamma,\gamma<\lambda^{q}\). Then \(Th^{n}_{\bar{k}}(\alpha)=Th^{n}_{\bar{k}}[\lambda^{q}\cdot\operatorname{cf}(\lambda^{q}\beta)+\gamma]\), and \(\lambda^{q}\cdot\operatorname{cf}(\lambda^{q}\beta)+\gamma<\lambda^{q+2}\).
3. Divide \(\alpha\) by \(\lambda^{\omega}\) so \(\alpha=\lambda^{\omega}\alpha_{1}+\alpha_{2},\alpha_{2}<\lambda^{\omega}\). Let \(\alpha_{1}^{\prime}\) be \(1\) if \(\alpha_{1}\) is a successor, and \(\operatorname{cf}(\alpha_{1})\) otherwise. Then \(\lambda^{\omega}\alpha_{1},\lambda^{\omega}\alpha_{1}^{\prime}\) are divisible by \(\lambda^{q(n,\bar{k})}\) for every \(n,\bar{k}\) and have equal cofinality. So by the proof of (B), for every \(n,\bar{k}\), \(Th^{n}_{\bar{k}}(\lambda^{\omega}\alpha_{1})=Th^{n}_{\bar{k}}(\lambda^{\omega}\alpha_{1}^{\prime})\). Hence \(\lambda^{\omega}\alpha_{1}+\alpha_{2}\) and \(\lambda^{\omega}\alpha_{1}^{\prime}+\alpha_{2}\) have the same monadic theory, and \(\lambda^{\omega}\alpha_{1}^{\prime}+\alpha_{2}<\lambda^{\omega}\lambda+\lambda^{\omega}=\lambda^{\omega+1}+\lambda^{\omega}\). This proves (C)(i). If \(\chi^{\prime}\leqq\mu\) has the same monadic theory as \(\alpha_{1}^{\prime}\) then \(\lambda^{\omega}\alpha_{1}+\alpha_{2},\lambda^{\omega}\alpha_{1}^{\prime}+\alpha_{2}\) and \(\lambda^{\omega}\chi^{\prime}+\alpha_{2}\) (which is \(<\lambda^{\omega}\mu+\lambda^{\omega}\)) have the same monadic theories. If \(\chi^{\prime}<\mu\) clearly \(\lambda^{\omega}\chi^{\prime}+\alpha_{2}<\lambda^{\omega}\mu\). If \(\lambda=\omega\) then \(\operatorname{cf}(\lambda^{\omega}\alpha_{1})=\omega\) in any case, hence \(\alpha=\omega^{\omega}\alpha_{1}+\alpha_{2}\) and \(\omega^{\omega}+\alpha_{2}\ (<\omega^{\omega}+\omega^{\omega})\) has the same monadic theory. Every \(\alpha<\lambda^{+}\) we can uniquely represent as \[\alpha=\lambda^{\omega}\alpha^{\prime}+\lambda^{n}\alpha_{n}+\ldots+\lambda^{1}\alpha_{1}+\alpha_{0};\quad\alpha_{i}<\lambda.\] The monadic theory of \(\alpha\) is recursive in the monadic theories of \(\lambda,\operatorname{cf}(\lambda^{\omega}\alpha^{\prime}),\alpha_{n},\ldots,\alpha_{0}\). So we can prove inductively (C)(iv).
4. Suppose \(\lambda>\omega,\lambda\) is regular, and there is a sentence \(\psi\) such that \(\alpha\models\psi\) if and only if \(\alpha=\lambda\). Then there are sentences \(\psi_{n}\) such that \(\alpha\models\psi_{n}\) if and only if \(\alpha=\lambda^{n}\), sentences \(\varphi_{n}\) such that \(\alpha\models\varphi_{n}\) if and only if \(\alpha\) is divisible by \(\lambda^{n}\), and a sentence \(\varphi\) such that \(\alpha\models\varphi\) if and only if \(\operatorname{cf}(\alpha)=\lambda\). Then \(\lambda^{\omega+1}\) is a model of \(\{\varphi,\varphi_{n}:n<\omega\}\). If \(\alpha\) is also a model of \(\{\varphi,\varphi_{n}:n<\omega\}\) then \(\lambda^{n}\) divides \(\alpha\) for every \(n\), hence \(\lambda^{\omega}\) divides \(\alpha\), so \(\alpha=\lambda^{\omega}\beta\). If \(\beta\) is a successor, \(\operatorname{cf}(\alpha)=\omega\); but \(\alpha\models\varphi\), so \(\beta\) is a limit, hence \(\operatorname{cf}(\alpha)=\operatorname{cf}(\beta)\), so \(\operatorname{cf}(\beta)=\lambda\), so \(\beta\geqq\lambda\), hence \(\alpha\geqq\lambda^{\omega}\cdot\lambda=\lambda^{\omega+1}\). Similarly \(\lambda^{\omega+1}+\lambda^{n}\) is the smallest model of its monadic theory.
**Lemma 3.6**.:
1. _In_ 3.5_(A) it suffices to know the monadic theory of_ \(\{\mu:\mu\text{ a regular cardinal}\leqq\lambda\}\)_. So if_ \(\lambda\) _is singular it suffices to know the monadic theory of_ \(\{\alpha:\alpha<\lambda\}\)_._
2. _For every sentence_ \(\psi\)_,_ 1. _there is a sentence_ \(\varphi\) _(all in the monadic theory of order) such that_ \(\alpha\models\varphi\) _if and only if_ \(\alpha\) _is a limit and_ \(\operatorname{cf}(\alpha)\models\psi\)_,_ 2. _there is a sentence characterizing the first ordinal which satisfies_ \(\psi\)_, and_ 3. _for every_ \(n<\omega\) _there is_ \(\varphi_{n}\) _such that_ \(\alpha\models\varphi_{n}\) _if and only if_ \(\alpha\) _is the_ \(n^{\mathrm{th}}\) _regular cardinal satisfying_ \(\psi\)_._
3. _There are monadic sentences_ \(\varphi_{n}\) _such that_ \(\alpha\models\varphi_{n}\) _if and only if_ \(\alpha=\omega_{n}\)_. If_ \(V=L\) _there are monadic sentences_ \(\varphi_{n}^{1}\) _such that_ \(\alpha\models\varphi_{n}^{1}\) _if and only if_ \(\alpha\) _is the_ \(n^{\mathrm{th}}\) _weakly compact cardinal._
Proof:
1. Immediate by 3.5(C)(iv).
2. Let \(\varphi\) say that there is no last element, and for any unbounded \(P\) there is an unbounded \(Q\subseteqq P\) which satisfies \(\psi\) (if \(\operatorname{cf}(\alpha)\models\psi\) we can choose \(Q\) as a set of order-type \(\operatorname{cf}(\alpha)\), so \(\alpha\models\varphi\); if \(\operatorname{cf}(\alpha)\models\neg\psi\), let \(P\) be a subset of \(\alpha\) of order-type \(\operatorname{cf}(\alpha)\); hence any unbounded \(Q\subseteqq P\) has order-type \(\operatorname{cf}(\alpha)\), so \(\alpha\models\neg\varphi\)).
3. Immediate.
3. We use (1) and (2) to define \(\varphi_{n}\) inductively. Let \(\varphi_{0}\) say that \(\alpha\) is the first ordinal whose cofinality satisfies \(\psi\). Let \(\varphi_{n+1}\) say that \(\alpha\) is the first ordinal whose cofinality satisfies \(\psi\wedge\neg\varphi_{0}\wedge\ldots\wedge\neg\varphi_{n}\).
4. For \(\varphi_{n}\) use (B)(3) for \(\psi\) saying that \(\alpha\) is an infinite ordinal. For \(\varphi_{n}^{1}\) use (B)(3) and Theorem 0.1 (of Jensen).
## 4. The monadic theory of well-orderings
If \(a\in(M,\bar{P})\) let
\[th(a,\bar{P})=\{x\in X_{i}:a\in P_{i}\}\cup\{x\notin X_{i}:a\notin P_{i}\}\]
(so it is a set of formulas).
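For instance, if \(\bar{P}=\langle P_{1},P_{2}\rangle\) and \(a\in P_{1}\backslash P_{2}\), then
\[th(a,\bar{P})=\{x\in X_{1},\ x\notin X_{2}\}.\]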
Let \(D_{\alpha}\) denote the filter generated by the closed unbounded subsets of \(\alpha\), for \(\operatorname{cf}(\alpha)>\omega\).
**Lemma 4.1**.: _If the cofinality of \(\alpha>\omega\), then for every \(\bar{P}\in\underline{P}(\alpha)^{m}\) there is a closed unbounded subset \(J\) of \(\alpha\) such that: for each \(\beta<\alpha\), all the models_
\[\{(\alpha,\bar{P})\!\upharpoonright\![\beta,\gamma):\gamma\in J,\operatorname {cf}(\gamma)=\omega,\gamma>\beta\}\]
_have the same monadic theory._
Remark: Buchi [1, 6.1,p.110] proved Lemma 4.1 for \(\alpha=\omega_{1}\), by a different method.
Proof: For every \(n,\bar{k}\) there is, by 1.1, 3.1 a homogeneous unbounded \(I_{\bar{k}}^{n}\subseteq\alpha\), by the coloring \(f_{\bar{k}}^{n}\) of \((\alpha,\bar{P})\), so there is \(t_{\bar{k}}^{n}\) such that for every \(\beta<\gamma\in I_{\bar{k}}^{n}\), \(Th_{\bar{k}}^{n}((\alpha,\bar{P})\!\upharpoonright\![\beta,\gamma))=t_{\bar{k}} ^{n}\). Let \(J_{\bar{k}}^{n}\) be the set of accumulation points of \(I_{\bar{k}}^{n}\), and \(J=\bigcap_{n,\bar{k}}J_{\bar{k}}^{n}\). Clearly \(J\) is a closed and unbounded subset of \(\alpha\).
Let \(\beta<\alpha\), and \(\beta_{\bar{k}}^{n}\) be the first ordinal \(>\beta\) in \(I_{\bar{k}}^{n}\). Then for any \(\gamma\in J,\gamma>\beta,\operatorname{cf}(\gamma)=\omega\), and for every \(n,\bar{k}\) we can find \(\gamma_{\ell}\in I_{\bar{k}}^{n},\gamma_{\ell}<\gamma_{\ell+1},\lim_{\ell \rightarrow\omega}\gamma_{\ell}=\gamma\) and \(\gamma_{0}=\beta_{\bar{k}}^{n}\). Therefore
\[\begin{array}{ll}Th_{\bar{k}}^{n}((\alpha,\bar{P})\!\upharpoonright\![\beta, \gamma))&=Th_{\bar{k}}^{n}((\alpha,\bar{P})\!\upharpoonright\![\beta,\beta_{ \bar{k}}^{n}))+\sum_{\ell<\omega}Th_{\bar{k}}^{n}((\alpha,\bar{P})\! \upharpoonright\![\gamma_{\ell},\gamma_{\ell+1}))\\ &=Th_{\bar{k}}^{n}((\alpha,\bar{P})\!\upharpoonright\![\beta,\beta_{\bar{k}}^{ n}))+\sum_{\ell<\omega}t_{\bar{k}}^{n}.\end{array}\]
So, \(Th_{\bar{k}}^{n}((\alpha,\bar{P})\!\upharpoonright\![\beta,\gamma)\) does not depend on the particular \(\gamma\).
**Definition 4.2**.: \(ATH_{\bar{k}}^{n}(\beta,(\alpha,\bar{P}))\) for \(\beta<\alpha,\alpha\) a limit ordinal of cofinality \(>\omega\) is \(Th_{\bar{k}}^{n}((\alpha,\bar{P})\!\upharpoonright\![\beta,\gamma))\) for every \(\gamma\in J,\gamma>\beta,\operatorname{cf}(\gamma)=\omega\); where \(J\) is from Lemma 4.1.
Remark: As \(D_{\alpha}\) is a filter, this definition does not depend on the choice of \(J\).
**Definition 4.3**.: We define \(WTh_{\bar{k}}^{n}(\alpha,\bar{P})\):
1. if \(\alpha\) is a successor or has cofinality \(\omega\), it is \(\varnothing\),
2. otherwise we define it by induction on \(n\): for \(n=0\): \(WTh^{n}_{\bar{k}}(\alpha,\bar{P})=\{t:\{\beta<\alpha:th(\beta,\bar{P})=t\}\) is a stationary subset of \(\alpha\}\); for \(n+1\): let \(WTh^{n+1}_{\bar{k}}(\alpha,\bar{P})=\{\langle S_{1}(\bar{Q}),S_{2}(\bar{Q})\rangle:\bar{Q}\in\underline{P}(\alpha)^{\bar{k}(n+1)}\}\) where \(S_{1}(\bar{Q})=WTh^{n}_{\bar{k}}(\alpha,\bar{P},\bar{Q})\) and \(S_{2}(\bar{Q})=\{\langle t,s\rangle:\{\beta<\alpha:WTh^{n}_{\bar{k}}((\alpha,\bar{P},\bar{Q})\!\upharpoonright\!\beta)=t,\ th(\beta,\bar{P}{}^{\frown}\bar{Q})=s\}\) is a stationary subset of \(\alpha\}\).
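For instance (an illustrative sketch of the case \(n=0\)): take \(\alpha=\omega_{1}\) and \(\bar{P}=\langle P_{1}\rangle\) with \(P_{1}\) the set of limit ordinals \(<\omega_{1}\). The limit ordinals form a closed unbounded, hence stationary, set, while their complement is disjoint from this club and so is non-stationary; therefore
\[WTh^{0}_{\bar{k}}(\omega_{1},\bar{P})=\bigl\{\{x\in X_{1}\}\bigr\}.\]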
this is sufficient. Let \(g_{\bar{k}}^{n}(\bar{P}^{1}{}^{\frown}\bar{Q}^{1})=\bar{Q}^{*1},g_{\bar{k}}^{n+1}(\bar{P}^{1})=\bar{P}^{*1},g_{\bar{k}}^{n+1}(\bar{P}^{2})=\bar{P}^{*2}\). Define \(\bar{r}(n+1)=\ell(g_{\bar{k}}^{n}(\bar{P}^{1}{}^{\frown}\bar{Q}^{1}))=\ell(\bar{Q}^{*1})\) and \(\bar{r}\upharpoonright(n+1)=r_{1}(n,m+\ell(\bar{P}^{1}),\bar{k})\).
By the assumptions and Definition 4.3, there is \(\bar{Q}^{*2}\in\underline{P}(\alpha^{2})^{\bar{k}(n+1)}\) such that (for our \(n,\bar{r}\) and \(\alpha^{2},\bar{P}^{*2}\); \(\alpha^{1},\bar{P}^{*1}\)), \(S_{\ell}(\bar{Q}^{*1})=S_{\ell}(\bar{Q}^{*2})\) for \(\ell=1,2\). (The notation is inaccurate, but should be clear.) So, for \(\ell=1\), we get \(WTh_{\bar{r}}^{n}(\alpha^{1},\bar{P}^{*1},\bar{Q}^{*1})=WTh_{\bar{r}}^{n}( \alpha^{2},\bar{P}^{*2},\bar{Q}^{*2})\), and without loss of generality \(0\in Q_{s}^{*1}\leftrightarrow 0\in Q_{s}^{*2}\). (From now on we can replace \(\bar{r}\) by \(\bar{r}\upharpoonright(n+1)\).) So by Lemma 4.3, for \(\ell=1,2,\bar{Q}^{*\ell}\) is a partition of \(\alpha^{\ell}\) refining \(\bar{P}^{*\ell}\), hence for every \(\beta<\alpha^{\ell}\) there is a unique \(s_{\ell}(\beta)\) such that \(\beta\in Q_{s_{\ell}(\beta)}^{*\ell}\).
Now, for \(\ell=1,2\), choose a closed unbounded subset \(J_{\ell}\) of \(\alpha^{\ell}\) such that:
(0) every member of \(J_{\ell}\) which is not an accumulation point of \(J_{\ell}\) has cofinality \(\omega\),
(1) for any \(s\), if \(Q_{s}^{*\ell}\) is not a stationary subset of \(\alpha^{\ell}\) then \(Q_{s}^{*\ell}\cap J_{\ell}=\varnothing\),
(2) if \(\beta<\gamma\in J_{\ell}\), \(\operatorname{cf}(\gamma)=\omega\), then \(Th_{\bar{k}}^{n+1}((\alpha^{\ell},\bar{P}^{\ell})\upharpoonright[\beta,\gamma))=ATH_{\bar{k}}^{n+1}(\beta,(\alpha^{\ell},\bar{P}^{\ell}))\) (use Lemma 4.1),
(3) for every \(\gamma\in J_{\ell},\operatorname{cf}(\gamma)=\omega\), \[Th_{\bar{k}}^{n+1}((\alpha^{\ell},\bar{P}^{\ell})\upharpoonright[0,\gamma))=ATH_{\bar{k}}^{n+1}(0,(\alpha^{\ell},\bar{P}^{\ell})),\]
(4) if \(Q_{s}^{*\ell}\cap J_{\ell}\neq\varnothing,\beta\in J_{\ell}\), then there is \(\gamma\in J_{\ell},\gamma>\beta,s_{\ell}(\gamma)=s\), such that \(\{\xi\in J_{\ell}:\beta\leqq\xi\leqq\gamma\}\) is finite,
(5) for any \(s,t\), if \(\{\beta<\alpha^{\ell}:t=WTh_{\bar{r}}^{n}((\alpha^{\ell},\bar{Q}^{*\ell})\upharpoonright\beta),s=th(\beta,\bar{Q}^{*\ell})\}\) is not a stationary subset of \(J_{\ell}\), then it is disjoint from \(J_{\ell}\).
Remark: Note that (5) just strengthens (1).
Now we define \(\bar{Q}^{2}\) by parts. That is, for every \(\beta<\gamma\in J_{2}\cup\{0\},\gamma\) is the successor of \(\beta\) in \(J_{2}\), we define \(\bar{Q}^{2}\upharpoonright[\beta,\gamma)\) such that
\[s_{2}(\beta)=Th_{\bar{k}}^{n}((\alpha^{2},\bar{P}^{2}{}^{\frown}\bar{Q}^{2})\upharpoonright[\beta,\gamma)).\]
This is possible as by definition of \(s_{2}(\beta),\beta\in Q_{s_{\ell}(\beta)}^{*2}\), hence
\[s_{2}(\beta)\in ATH_{\bar{k}}^{n+1}(\beta,(\alpha^{2},\bar{P}^{2})).\]
We now prove
(*) if \(\beta<\gamma\in J_{2}\cup\{0\},\operatorname{cf}(\gamma)=\omega\), then \[s_{2}(\beta)=Th_{\bar{k}}^{n}((\alpha^{2},\bar{P}^{2},\bar{Q}^{2})\upharpoonright[\beta,\gamma)).\] We prove it by induction on \(\gamma\) for all \(\beta\). (i) By (0) the first \(\gamma>\beta\), \(\gamma\in J_{2}\), has cofinality \(\omega\), and by the definition of \(\bar{Q}^{2}\), (*) is satisfied. (ii) Let \(\beta<\xi<\gamma\), \(\xi\in J_{2}\), \(\operatorname{cf}(\xi)=\omega\), and \(\xi<\zeta<\gamma\) for no \(\zeta\in J_{2}\). Then by the induction hypothesis \(Th_{\bar{k}}^{n}((\alpha^{2},\bar{P}^{2},\bar{Q}^{2})\upharpoonright[\beta,\xi))=s_{2}(\beta)\) and \[Th_{\bar{k}}^{n}((\alpha^{2},\bar{P}^{2},\bar{Q}^{2})\upharpoonright[\xi,\gamma))=s_{2}(\xi).\]
We should now show that \(s_{2}(\beta)+s_{2}(\xi)=s_{2}(\beta)\). So it suffices to find \(\beta^{\prime}<\xi^{\prime}<\gamma^{\prime}\in J_{1},s_{1}(\beta^{\prime})=s_{2}(\beta),\mathrm{cf}(\xi^{\prime})=\omega=\mathrm{cf}(\gamma^{\prime}),s_{1}(\xi^{\prime})=s_{2}(\xi)\); and by the choice of \(J_{2}\) and \(\bar{Q}^{*2}\), \(Q^{*1}_{s_{2}(\beta)}\) is a stationary subset of \(\alpha^{1}\), hence for some \(\beta^{\prime}\in J_{1},\beta^{\prime}\in Q^{*1}_{s_{2}(\beta)}\), hence \(s_{1}(\beta^{\prime})=s_{2}(\beta)\). As \(\xi\in J_{2}\),
\[\{\zeta\in Q^{*2}_{s_{2}(\xi)}:WTh^{n}_{\bar{r}}((\alpha^{2},\bar{P}^{*2},\bar{Q}^{*2})\!\upharpoonright\!\zeta)=\varnothing\}\]
is stationary, hence we can find \(\xi^{\prime}\in J_{1},\operatorname{cf}(\xi^{\prime})=\omega,s_{1}(\xi^{\prime})=s_{2}(\xi)\).
(iii) If \(\gamma\) is an accumulation point of \(J_{2}\) the proof is similar to that of (ii). Choose \(\xi_{m},m<\omega,\beta<\xi_{m}<\xi_{m+1}<\gamma,\lim_{m}\xi_{m}=\gamma,\mathrm{cf}(\xi_{m})=\omega\), and \(s_{2}(\xi_{m})=s_{2}(\xi_{m+1})\) (use (4)). Then \[\begin{array}{ll}Th^{n}_{\bar{k}}((\alpha^{2},\bar{P}^{2},\bar{Q}^{2})\!\upharpoonright\![\beta,\gamma))&=Th^{n}_{\bar{k}}((\alpha^{2},\bar{P}^{2},\bar{Q}^{2})\!\upharpoonright\![\beta,\xi_{0}))\\ &+\sum_{m<\omega}Th^{n}_{\bar{k}}((\alpha^{2},\bar{P}^{2},\bar{Q}^{2})\!\upharpoonright\![\xi_{m},\xi_{m+1}))\\ &=s_{2}(\beta)+\sum_{m<\omega}s_{2}(\xi_{0}).\end{array}\] We should prove this sum is \(s_{2}(\beta)\), and this is done as in (ii).
(iv) There are \(\xi\in J_{2},\beta<\xi<\gamma,\gamma\) the successor of \(\xi\) in \(J_{2}\), and \(\mathrm{cf}(\xi)>\omega\). As before we can find \(\beta^{\prime}<\xi^{\prime}<\gamma^{\prime}\in J_{1},s_{1}(\beta^{\prime})=s_{2}(\beta),WTh^{n}_{\bar{r}}((\alpha^{1},\bar{P}^{*1})\!\upharpoonright\!\xi^{\prime})=WTh^{n}_{\bar{r}}((\alpha^{2},\bar{P}^{*2})\!\upharpoonright\!\xi),s_{1}(\xi^{\prime})=s_{2}(\xi),\mathrm{cf}(\xi^{\prime})>\omega,\mathrm{cf}(\gamma^{\prime})=\omega\). So clearly \[Th^{n}_{\bar{k}}((\alpha^{2},\bar{P}^{2},\bar{Q}^{2})\!\upharpoonright\![\xi,\gamma))=s_{2}(\xi)=s_{1}(\xi^{\prime})=Th^{n}_{\bar{k}}((\alpha^{1},\bar{P}^{1},\bar{Q}^{1})\!\upharpoonright\![\xi^{\prime},\gamma^{\prime})).\] Now also \[Th^{n}_{\bar{k}}((\alpha^{2},\bar{P}^{2},\bar{Q}^{2})\!\upharpoonright\![\beta,\xi))=Th^{n}_{\bar{k}}((\alpha^{1},\bar{P}^{1},\bar{Q}^{1})\!\upharpoonright\![\beta^{\prime},\xi^{\prime}))\] by the induction hypothesis on \(n\) and on \(\gamma\). So we have proved (*) and \(g^{n}_{\bar{k}}((\alpha^{2},\bar{P}^{2},\bar{Q}^{2}))=(\alpha^{2},\bar{Q}^{*2})\).
Now by the induction hypothesis on \(n\) it follows that \(Th^{n}_{\bar{k}}(\alpha^{1},\bar{P}^{1},\bar{Q}^{1})=Th^{n}_{\bar{k}}(\alpha^{2 },\bar{P}^{2},\bar{Q}^{2})\).
**Theorem 4.7**.: _If \(\mathrm{cf}(\alpha)>\omega\),_
\[t_{1}=WTh^{n}_{\bar{r}}(g^{n}_{\bar{k}}(\bar{P})),\quad t_{2}=ATH^{n}_{\bar{k}}(0,(\alpha,\bar{P})),\quad\bar{r}=\bar{r}_{1}(n,\ell(\bar{P}),\bar{k}),\]
_then we can effectively compute \(Th^{n}_{\bar{k}}(\alpha,\bar{P})\) from \(t_{1},t_{2}\)._
Proof: The proof is similar to that of 4.4.
_Conclusion 4.8_.: If \(\lambda\) is a regular cardinal, and we know \(ATH^{n}_{\bar{k}}(0,\lambda),\ WTh^{n}_{\bar{r}}(\lambda),\ (\bar{r}=r_{1}(n,0,\bar{k}))\), then we can compute \(Th^{n}_{\bar{k}}(\lambda)\).
**Lemma 4.9**.: _If \(\lambda\) is a regular cardinal \(>\omega,\bar{r}=r(n,0,\bar{k})\), then, letting \(T_{1}=\{Th^{n}_{\bar{r}}(\mu):\omega<\mu<\lambda,\mu\) a regular cardinal\(\},T_{2}=\{Th^{n}_{\bar{r}}(\alpha):\alpha<\lambda\}\), we can compute effectively \(ATH^{n}_{\bar{k}}(0,\lambda)\) from \(T_{1}\); and we can compute \(T_{1}\) effectively from \(T_{2}\)._
Proof: Let \(T_{1}=\{t_{1},\ldots,t_{\ell}\}\), and if \(t_{i}=Th^{n}_{\bar{r}}(\mu)\) let \(t^{\prime}_{i}=Th^{n}_{\bar{k}}(\mu^{q}),q=q(n,\bar{k})\) (we can compute it effectively: see the proof of 3.5(B) for the definition of \(q(n,\bar{k})\)), and let \(t=t^{\prime}_{1}+\ldots+t^{\prime}_{\ell}\); then
\[\sum_{m<\omega}t=t\omega=ATH^{n}_{\bar{k}}(0,\lambda).\]
(The second phrase of the lemma is immediate.)
_Conclusion 4.10_.: Let \(\lambda\) be a regular cardinal. If the monadic theory of \(\{\alpha:\alpha<\lambda\}\), and \(\{WTh^{n}_{k}(\lambda):n,\bar{k}\}\) are given then we can compute effectively the monadic theory of \(\lambda\).
**Lemma 4.11**.: _For a regular \(\lambda,\{WTh^{n}(\lambda):n<\omega\}\) and the first-order theory of \(M^{\lambda}=(\underline{P}(\lambda)/D_{\lambda},\cup,\cap,-,\varnothing,1, \ldots,R^{\lambda}_{t},\ldots)\) are recursive one in the other, where \(R^{\lambda}_{t}(P,\bar{Q})\) holds if and only if_
\(\{\beta<\lambda:\beta\in P\)_, and for some \(n,t=WTh^{n}((\lambda,\bar{Q})\!\upharpoonright\!\beta)\}\neq\varnothing( \operatorname{mod}\,D_{\lambda})\)._
Remark: Note that for every \(t\) there is at most one possible \(n\).
Proof: Immediate, similar to the proof of Lemma 2.4.
_Conclusion 4.12_.: If the monadic theory of \(\{\alpha:\alpha<\lambda\}\) and the first-order theory of \(M^{\lambda}\) are decidable, then so is the monadic theory of \(\lambda\).
Using 4.12 we can try to prove the decidability of the monadic theory of \(\lambda\) by induction on \(\lambda\).
For \(\lambda=\omega\) we know it by 3.4.
For \(\lambda=\omega_{1}\) the \(R^{\omega_{1}}_{t}\)'s are trivial (because each \(\beta<\omega_{1}\) is a successor or \(\operatorname{cf}(\beta)=\aleph_{0}\), hence by Definition 4.4(1), \(R^{\omega_{1}}_{t}(P,\bar{Q})\) holds if and only if \(t=\varnothing\)). So it suffices to prove the decidability of \((\underline{P}(\omega_{1})/D_{\omega_{1}},\cap,\cup,-,\varnothing,1)\). But by Ulam [11] this is an atomless Boolean algebra, so its theory is decidable. Hence we reprove the theorem of Buchi [1].
_Conclusion 4.13_.: The monadic theory of \(\omega_{1}\) is decidable.
Now we can proceed to \(\lambda=\omega_{2}\). Looking more closely at the proof for \(\omega_{1}\), we see that \(WTh^{n}_{k}(\omega_{1},\bar{P})\) can be computed from the set of atoms in the Boolean algebra generated by the \(P_{i}\) which are stationary subsets of \(\omega_{1}\); and we can replace \(\omega_{1}\) by any ordinal of cofinality \(\omega_{1}\). So all the \(R^{\omega_{2}}_{t}\) can be defined by the function \(F/D_{\omega_{2}}\),
\[F(I)=\{\alpha<\omega_{2}:\operatorname{cf}(\alpha)=\omega_{1},\alpha\backslash I \cap\omega_{2}\notin D_{\alpha}\}.\]
_Conclusion 4.14_.: The first order theory of
\[M^{\omega_{2}}_{1}=(\underline{P}(\omega_{2})/D_{\omega_{2}},\cap,\cup,-, \varnothing,1,F/D_{\omega_{2}})\]
is decidable if and only if the monadic theory of \(\omega_{2}\) is decidable.
Notice that \(F(I\cup J)=F(I)\cup F(J)\), and that for \(M^{\omega_{2}}_{1}\) to have a decidable theory, it suffices that it have elimination of quantifiers. For this it suffices
* for any stationary \(A\subseteqq\{\alpha<\omega_{2}:\operatorname{cf}(\alpha)=\omega\}\) and \(B,C\) such that \(F(A)=B\cup C\), there are stationary \(A^{\prime},B^{\prime}\), \(A=A^{\prime}\cup B^{\prime},A^{\prime}\cap B^{\prime}=\varnothing\), \(F(A^{\prime})=B(\operatorname{mod}D_{\omega_{2}})\) and \(F(B^{\prime})=C(\operatorname{mod}D_{\omega_{2}})\).
_Conjecture 4.15_.: (*) is consistent with ZFC.
## 5. From orders to uniform orders
An equivalence relation \(E\) on an ordered set \(N\) is _convex_ if \(xEy,\ x<z<y\in N\), implies \(xEz\), i.e., every equivalence class is convex. On \(N/E=\{a/E:a\in N\}\) a natural ordering is defined. If \(J\) is a convex subset of a model \((M,\bar{P})\) then \(th(J,\bar{P})\) is \(\langle\ell,s_{1},s_{2}\rangle\) such that if there is no last (first) element in \(J\), \(s_{2}=1\ (s_{1}=1)\); if \(b\) is the last (first) element, \(s_{2}=th(b,\bar{P})\ (s_{1}=th(b,\bar{P}))\) (for the definition, see the beginning of Section 4); and \(\ell=\min(|J|,2)\).
**Definition 5.1**.:
1. \(\kappa(M)\) is the first cardinal \(\kappa\), such that neither \(\kappa\) nor \(\kappa^{*}\) is embeddable in \(M\).
2. \(\kappa(K)\) is the l.u.b. of \(\{\kappa(M):M\in K\}\).
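For instance,
\[\kappa(\omega)=\kappa(Q)=\kappa(R)=\aleph_{1},\]
since \(\omega\) is embeddable in each of them while neither \(\omega_{1}\) nor \(\omega_{1}^{*}\) is; in particular every suborder of the reals satisfies \(\kappa(M)\leqq\aleph_{1}\).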
**Definition 5.2**.: We define for every \(n,\bar{k}\), the class \(U^{n}_{\bar{k}}\) and \(UTh^{n}_{\bar{k}}((M,\bar{P}))\) for \(M\in U^{n}_{\bar{k}}\)
1. \(U^{n}_{\bar{k}}=\{(M,\bar{P}):M\) is a dense order with no first nor last element and there are \(t_{0}\) and a dense \(I\subseteqq|M|\) such that for every \(a<b\in I\): \[t_{0}=Th^{n}_{\bar{k}}((M,\bar{P})\!\upharpoonright\!(a,b))\text{ and }th(a,\bar{P})=th(b,\bar{P})\}.\] Now we define \(UTh^{n}_{\bar{k}}(M,\bar{P})\) by induction on \(n\).
2. \(UTh^{0}_{\bar{k}}(M,\bar{P})=Th^{0}_{\bar{k}}(M,\bar{P})\).
3. \(UTh^{n+1}_{\bar{k}}(M,\bar{P})=\langle S_{1},S_{2},\operatorname{com}\rangle\) where 1. \(S_{1}=\{UTh^{n}_{\bar{k}}(M,\bar{P},\bar{Q}):\bar{Q}\in\underline{P}(M)^{\bar{ k}(n+1)},(M,\bar{P},\bar{Q})\in U^{n}_{\bar{k}}\}\), 2. Before we define \(S_{2}\), we make some conventions: 1. \(T_{1}(T_{2})\) is the set of formally possible \(th(J,\bar{P}^{1}),J\neq\varnothing\), and \(\ell(\bar{P}^{1})=\ell(\bar{P}),(\ell(\bar{P}^{1})=\ell(\bar{P})+\bar{k}(n+1))\); 2. \(T_{3}=\{\langle\ell,s_{1},t,s_{2}\rangle:\langle\ell,s_{1},s_{2}\rangle\in T_{ 2},t\in T(n,\ell(\bar{P})+\bar{k}(n+1),\bar{k})\) and \(\ell=1\) if and only if \(t\) is the "theory" of the empty model); 3. If \(\langle\ell,s_{1},s_{2}\rangle\in T_{1},\langle\ell^{\prime},s^{\prime}_{1},t, s^{\prime}_{2}\rangle\in T_{3}\) then \(\langle\ell,s_{1},s_{2}\rangle\leqq\langle\ell^{\prime},s^{\prime}_{1},t,s^{ \prime}_{2}\rangle\) when: \(\ell=\ell^{\prime}\) and \(s_{1}=1\Leftrightarrow s^{\prime}_{1}=1,s_{2}=1\Leftrightarrow s^{\prime}_{2}=1\) and \(s_{1}\neq 1\to s_{1}\subseteq s^{\prime}_{1},s_{2}\neq 1\to s_{2}\subseteq s^{\prime}_{2}\); 4. At last let \(\bar{r}=\bar{r}(n,\ell(\bar{P}),\bar{k})\) be from 2.13, \(S_{2}=\{UTh^{n}_{\bar{r}}(M/E,\bar{P}^{*},\bar{Q}^{*}):E\text{ a non-trivial convex equivalence relation over }|M|,(M/E,\bar{P}^{*},\bar{Q}^{*})\in U^{n}_{\bar{r}},\bar{P}^{*}= \langle\ldots,P^{*}_{t},\ldots\rangle_{t\in T_{1}}\), where \(P^{*}_{t}=\{a/E:a\in|\bar{M}|,th(a/E,\bar{P})=t\}\) and \(Q^{*}=\langle\ldots,Q^{*}_{t},\ldots\rangle_{t\in T_{3}}\) is a partition of \(|M|/E\) refining \(\bar{P}^{*}\) and \(\emptyset\neq Q^{*}_{t(1)}\subseteqq P^{*}_{t}\) implies \(t(1)\leqq t\}\). 3. Com is \(+\) if \(M\) is a complete order, and \(--\) otherwise.
**Lemma 5.3**.:
1. _From_ \(Th^{n+2}_{\bar{k}}(M,\bar{P})\) _we can check whether_ \((M,\bar{P})\in U^{n}_{\bar{k}}\) _and compute_ \(UTh^{n}_{\bar{k}}(M,\bar{P})\)_._
2. _Also the parallel to 2.3 holds._
**Lemma 5.4**.: _For every dense \(N\in K,\|N\|>1,n,\bar{k}\), there is a convex submodel \(M\) of \(N\) which belongs to \(U^{n}_{\bar{k}},\|M\|>2\)._
Proof: By Theorem 1.3, and 2.7
**Lemma 5.5**.: _Suppose \(N\) is a dense order, \(\kappa(N)\leqq\aleph_{1}\); \(I\subseteqq|N|\) is a dense subset, and for every \(a<b\in I,t_{0}=Th^{n}_{\bar{k}}((N,P)\!\upharpoonright\![a,b))\). Then there is \(t_{1}\) such that_
1. _for every_ \(a<b\in|N|,t_{1}=Th^{n}_{\bar{k}}((N,\bar{P})\!\upharpoonright\!(a,b))\)_._
2. _Moreover for every convex_ \(J\subseteqq|N|\)_, with no first nor last element,_ \(t_{1}=Th^{n}_{\bar{k}}((N,\bar{P})\!\upharpoonright\!J)\)_._
Proof: Clearly it suffices to prove (2). Choose \(a_{0}\in J\cap I\). Now define \(a_{n},0<n<\omega\) such that \(a_{n}\in J\cap I,a_{n}<a_{n+1}\) and \(\{a_{n}:n<\omega\}\) is unbounded in \(J\) (this is possible as \(\kappa(N)\leqq\aleph_{1}\)). Now define similarly, \(a_{n}\in J\cap I,n\) a negative integer so that \(a_{n-1}<a_{n}<a_{0}\) and \(\{a_{n}:n\) is a negative integer\(\}\) is unbounded from below in \(J\).
So, letting \(Z\) be the integers,
\[Th^{n}_{\bar{k}}((N,\bar{P})\!\upharpoonright\!J)=\sum_{n\in Z}Th^{n}_{\bar{k} }((N,\bar{P})\!\upharpoonright\![a_{n},a_{n+1}))=\sum_{n\in Z}t_{0}\stackrel{{ def}}{{=}}t_{1}.\]
**Theorem 5.6**.: _Let \(M\) be an order, \(\kappa(M)\leqq\aleph_{1}\)._
1. _Knowing_ \(t\) _and that_ \(t=UTh^{n}_{\bar{k}}(M,\bar{P}),(M,\bar{P})\in U^{n}_{\bar{k}}\) _we can effectively compute_ \(F(t)=Th^{n}_{\bar{k}}(M,\bar{P})\)_._
2. _If_ \((M^{i},\bar{P}^{i})\in U^{n}_{\bar{k}}\) _for_ \(i=1,2\)_, and_ \(UTh^{n}_{\bar{k}}(M^{1},\bar{P}^{1})=UTh^{n}_{\bar{k}}(M^{2},\bar{P}^{2})\) _then_ \(Th^{n}_{\bar{k}}(M^{1},\bar{P}^{1})=Th^{n}_{\bar{k}}(M^{2},\bar{P}^{2})\)_._
Proof: Clearly (A) implies (B). So we prove (A) by induction on \(n\).
For \(n=0\) it is trivial.
Suppose we have proved the theorem for \(n\), and we shall prove it for \(n+1\).
Let \(UTh^{n+1}_{\bar{k}}(M,\bar{P})=\langle S_{1},S_{2},\mbox{\rm com}\rangle\). We should find
\[T=\{Th^{n}_{\bar{k}}(M,\bar{P},\bar{Q}):\bar{Q}\in\underline{P}(M)^{\bar{k}(n+ 1)}\}.\]
If \(t\in S_{1}\), then for some \(\bar{Q}\in\underline{P}(M)^{\bar{k}(n+1)},(M,\bar{P},\bar{Q})\in U^{n}_{\bar{k}}\) and \(t=UTh^{n}_{\bar{k}}(M,\bar{P},\bar{Q})\), hence, by the induction hypothesis \(F(t)=Th^{n}_{\bar{k}}(M,\bar{P},\bar{Q})\), so \(F(t)\in T\). We can conclude that \(T^{\prime}=\{F(t):T\in S_{1}\}\subseteqq T\).
Now if \(t^{*}\in S_{2}\), then there is a convex equivalence relation \(E\) on \(M\), such that \(t^{*}=UTh^{n}_{\bar{r}}(M/E,\bar{P}^{*},\bar{Q}^{*})\) where the conditions of \(S_{2}\) are satisfied. If \(Q^{*}_{\langle\ell,s_{1},t,s_{2}\rangle}\neq\varnothing\), and \(\ell>1\) implies \(t\in T\) then we can define \(\bar{Q}\in\underline{P}(M)\) such that for \(a/E\in Q^{*}_{\langle\ell,s_{1},t,s_{2}\rangle}\):
1. \(UTh^{n}_{\bar{k}}((M,\bar{P},\bar{Q})\!\upharpoonright\!\operatorname{int}(a/E))\ldots\)
By the definition of \(E,Th^{n}_{\bar{k}}((M,\bar{P},\bar{Q})\!\!\restriction\!(a_{n},a_{n+1}))\in T^{*}\), hence by (d),
\[Th^{n}_{\bar{k}}((M,\bar{P},\bar{Q})\!\!\restriction\!\{x\in\operatorname{int}(a/ E):a_{0}<x\})\in T^{*}.\]
Similarly,
\[Th^{n}_{\bar{k}}((M,\bar{P},\bar{Q})\!\!\restriction\!\{x\in\operatorname{int}( a/E):x<a_{0}\})\in T^{*}.\]
So by (c),
\[Th^{n}_{\bar{k}}(M,\bar{P},\bar{Q})\!\!\restriction\!\operatorname{int}(a/E)) \in T^{*}.\]
Similarly, by (c),(e) in \(M/E\) there are no two successive elements, so \(M/E\) is a dense order.
Define \(\bar{P}^{*}=\langle\ldots,P_{\langle\ell,s_{1},s_{2}\rangle},\ldots\rangle, \bar{Q}^{*}=\langle\ldots,Q^{*}_{\langle\ell,s_{1},t,s_{2}\rangle},\ldots\rangle\) such that
1. \(a/E\in P_{\langle\ell,s_{1},s_{2}\rangle}\) if and only if \(th(a/E,\bar{P})=\langle\ell,s_{1},s_{2}\rangle\),
2. \(a/E\in Q^{*}_{\langle\ell,s_{1},t,s_{2}\rangle}\) if and only if \(Th^{n}_{\bar{k}}((M,\bar{P},\bar{Q})\!\upharpoonright\!\operatorname{int}(a/E))=t\) and \(th(a/E,\bar{P}{}^{\frown}\bar{Q})=\langle\ell,s_{1},s_{2}\rangle\).
By Lemma 5.4, \((M/E,\bar{P}^{*},\bar{Q}^{*})\) either has only one element or it has an interval \((a/E,b/E)\neq\varnothing\) such that \((M/E,\bar{P}^{*},\bar{Q}^{*})\!\upharpoonright\!(a/E,b/E)\in U^{n}_{\bar{r}}\).
Now we prove \(aEb\) and so show that this case does not occur and \(E\) has only one equivalence class, hence \(Th^{n}_{\bar{k}}(M,\bar{P},\bar{Q})\in T^{*}\) and so we shall finish.
Let \(a\leqq a^{\prime}<b^{\prime}\leqq b\), then let
\[\begin{array}{l}J_{2}=\{c\in M:a^{\prime}/E<c/E<b^{\prime}/E\},\\ J_{1}=\{c\in M:a^{\prime}<c\in\operatorname{int}(a^{\prime}/E)\},\\ J_{3}=\{c\in M:b^{\prime}>c\in\operatorname{int}(b^{\prime}/E)\}.\end{array}\]
By (b), \(Th^{n}_{\bar{k}}((M,\bar{P},\bar{Q})\!\!\restriction\!J_{2})\in T^{*}\); by (d) \(Th^{n}_{\bar{k}}((M,\bar{P},\bar{Q})\!\!\restriction\!J_{i})\in T^{*}\) for \(i=1,3\). Hence by (c) and (e) \(Th^{n}_{\bar{k}}((M,\bar{P},\bar{Q})\!\!\restriction\!(a^{\prime},b^{\prime})) \in T^{*}\). So \(aEb\), and we finish.
**Theorem 5.7**.:
1. _If_ \(\kappa(K)\leqq\aleph_{1}\)_, and for every_ \(M\in K\)_, there is_ \(N\in K\cap U^{n+1}\) _extending_ \(M\)_, then from_ \(UTh^{n+1}_{\bar{k}}(K)=\{UTh^{n+1}_{\bar{k}}(M):M\in K\cap U^{n+1}_{\bar{k}}\}\)_, we can compute_ \(Th^{n}_{\bar{k}}(K)\)_. Hence if_ \(UTh^{n}(K)\) _is recursive in_ \(n\)_, then the monadic theory of_ \(K\) _is decidable._
2. _Suppose_ \(\kappa(K)\leqq\aleph_{1}\)_,_ \(K\) _is closed under_ \(M+N\)_,_ \(\sum_{n<\omega}M_{n}\)_,_ \(\sum_{\begin{subarray}{c}n\in Z\\ n\leqq 0\end{subarray}}M_{n}\)_,_ \(\sum_{i\in Q}M_{i}\)_, convex submodels, and division by convex equivalence relations. Then from_ \(UTh^{n}_{\bar{r}}(K)\)__\((\bar{r}=r(n,0,\bar{k}))\) _we can compute_ \(Th^{n}_{\bar{k}}(K)\)_. Hence if_ \(UTh^{n}(K)\) _is recursive in_ \(n\)_, then the monadic theory of_ \(K\) _is decidable._
Proof:
1. Immediate.
2. Essentially the same as the proof of 5.4.
Remark: Of course there are other versions of (B), e.g., for a class of complete orders.
## 6. Applications of Section 5 to dense orders
**Definition 6.1**.: \(K_{S}\) is the class of orders \(M\) such that no submodel of \(M\) is isomorphic to \(\omega_{1}\) or \(\omega_{1}^{*}\) or an uncountable subset of the reals11
Footnote 11: Those are the Specker orders; we get them from Aronszajn trees.
**Lemma 6.2**.:
1. \(K_{S}\) _satisfies the hypothesis of 5.7(B). Also no member of_ \(K_{S}\) _is complete, except the finite ones._
2. \(K_{S}\) _has uncountable members, but_ \(M\in K_{S}\) _implies_ \(\|M\|\leqq\aleph_{1}\)_._
Proof:
1. Immediate.
2. The Specker orders. See e.g., [11]12 for existence.
Footnote 12: There is some overlapping between \(S_{1}\) and \(S_{2}\).
**Theorem 6.3**.:
1. _The monadic theory of_ \(K_{S}\) _is decidable._
2. _All dense orders from_ \(K_{S}\)_, with no first nor last element, have the same monadic theory._
Proof: We shall show that for \((M,\bar{P})\in U^{0}(K)\), \(\bar{P}\) a partition, \(pUTh^{1}(M,\bar{P})\) can be computed from \(pUTh^{0}(M,\bar{P})\) (hence the latter uniquely determines the former). Then by the parallel to Lemma 2.5, clause (B) follows immediately and (A) follows by 5.7(B).
So let \(t=pUTh^{0}(M,\bar{P})\) be given; that is, we know that \(\bar{P}\) is a partition of \(M\) into dense or empty subsets, \(M\in U^{0}\), hence \(M\) is dense with no first and no last element, \(M\in K\), and we know \(\{i:P_{i}\neq\varnothing\}\). So without loss of generality, \(P_{i}\neq\varnothing\) for every \(i\), and also \(M\neq\varnothing\) and each \(P_{i}\) is dense. Let \(pUTh^{1}(M,\bar{P})=\langle S_{1},S_{2},\mbox{com}\rangle\), so we should compute com, \(S_{1},S_{2}\).
Part (1) com: As \(M\in K\), and as clearly the rational order is embeddable in \(M,M\) cannot be complete.
Part (2) \(S_{1}\): It suffices to prove that any dense subset \(P\) of \(M\) can be split into two disjoint dense subsets of \(M\).
So we shall prove more.
1. If \(M\) is a dense order, \(I\subseteqq|M|\) is a dense subset, _then_ we can partition \(I\) into two dense subsets of \(M\). That is, there are \(J_{1},J_{2}\), \(I=J_{1}\cup J_{2}\), \(J_{1}\cap J_{2}=\varnothing\) and \(J_{1},J_{2}\) are dense subsets of \(M\). We define an equivalence relation \(E\) on \(I\): \(aEb\) if \(a=b\) or there are \(a_{0}<a,b<b_{0}\) such that \(a_{0}<a^{\prime}<b^{\prime}<b_{0}\) implies \(|\{c\in I:a^{\prime}<c<b^{\prime}\}|=|\{c\in I:a<c<b\}|\) (and they are infinite by assumption). Now for every \(E\)-equivalence class \(a/E\) with more than one element, let \(\lambda=|\{c\in I:b^{\prime}<c<c^{\prime}\}|\) for every \(b^{\prime}<c^{\prime}\in a/E\).
Case I: \(|a/E|=\lambda>0\).
Then let \(\{\langle b_{i},c_{i}\rangle:i<\lambda\}\) be an enumeration of all pairs \(\langle b,c\rangle\) such that \(b,c\in a/E,b<c\). Define by induction on \(i<\lambda,a_{i}^{1},a_{i}^{2}\in a/E\). If we have
defined them for \(j<i\), choose
\[a_{i}^{1}\in\{d\in I:b_{i}<d<c_{i}\}\backslash\{a_{j}^{2}:j<i\},\]
\[a_{i}^{2}\in\{d\in I:b_{i}<d<c_{i}\}\backslash\{a_{j}^{1}:j\leqq i\}.\]
By cardinality considerations this is possible. Define \(J_{1}(a/E)=\{a_{i}^{1}:i<\lambda\}\).
Case II: \(\lambda<|a/E|\).
Then clearly \(|a/E|=\lambda^{+}\), and we can partition \(a/E\) into \(\lambda^{+}\) convex subsets \(A_{i},i<\lambda^{+}\), each of power \(\lambda\). So on each we can define \(J_{1}(A_{i})\) such that \(J_{1}(A_{i}),A_{i}\backslash J_{1}(A_{i})\) are dense subsets of \(A_{i}\). Let \(J_{1}(a/E)=\bigcup_{i<\lambda^{+}}J_{1}(A_{i})\).
Case III: \(\lambda=0\), so \(|a/E|=1\).
Let \(J_{1}(a/E)=\varnothing\). Let \(J_{1}=\bigcup_{a\in I}J_{1}(a/E),J_{2}=I\backslash J_{1}\).
It is easy to check that \(J_{1},J_{2}\) are the desired subsets.
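For instance, taking \(M=I\) to be the order of the rationals, one may let \(J_{1}\) be the set of dyadic rationals and \(J_{2}\) the remaining rationals; both are dense in \(M\), which illustrates the claim in the simplest case.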
Part (3) \(S_{2}\): By (2) it suffices to find the possible \(UTh^{0}(M/E,\bar{P}^{*})\), where \(\bar{P}^{*}=\langle\ldots,P^{*}_{\langle\ell,s_{1},s_{2}\rangle},\ldots\rangle,P^{*}_{\langle\ell,s_{1},s_{2}\rangle}=\{a/E:th(a/E,\bar{P})=\langle\ell,s_{1},s_{2}\rangle\}\), and \((M/E,\bar{P}^{*})\in U^{0}(K)\); so \(W_{E}=\{\langle\ell,s_{1},s_{2}\rangle:P^{*}_{\langle\ell,s_{1},s_{2}\rangle}\neq\varnothing\}\) contains all the relevant information. Clearly \(W_{E}\neq\varnothing\) and \(\langle\ell,s_{1},s_{2}\rangle\in W_{E}\Rightarrow\ell>0\) and we can also discard the case \(\langle\ell,s_{1},s_{2}\rangle\in W_{E}\Rightarrow\ell=1\). Also if \(\langle\ell,s_{1},s_{2}\rangle\in W_{E}\), then \(\langle\ell,s_{1},s_{2}\rangle\) is formally possible.
Suppose \(W\) satisfies all those conditions, and we shall find a suitable \(E\) such that \(W_{E}=W\). Let \(W=\{\langle\ell^{i},s_{1}^{i},s_{2}^{i}\rangle:i<q<\omega\}\). Choose a \(J\subseteqq|M|\), countable, dense in itself, unbounded in \(M\) from above and from below, such that each \(P_{j}\cap J\) is a dense subset of \(J\), and for no \(a\in|M|\backslash J\) is there a first (last) element in \(\{b\in J:b>a\}\) (\(\{b\in J:b<a\}\)). \(J\) defines \(2^{\aleph_{0}}\) Dedekind cuts, but as \(M\in K\), only \(\leqq\aleph_{0}\) of them are realized. Let \(\{a_{n}:n<\omega\}\) be a set of representatives from those cuts (that is, for every \(a\in|M|\backslash J\) there is \(n<\omega\) such that \([a,a_{n}]\) or \([a_{n},a]\) is disjoint to \(J\)). Let \(J=\{b_{n}:n<\omega\}\). Now we define by induction on \(n\) a set \(H_{n}\) of convex disjoint subsets of \(M\), such that:
1. \(H_{n}\subseteqq H_{n+1};H_{n}\) is finite.
2. If \(I_{1}\neq I_{2}\in H_{n}\) then \(I_{1}<I_{2}\) or \(I_{2}<I_{1}\) and between them there are infinitely many members of \(J\).
3. If \(I\in H_{n},I\) has no last element, then for every \(a\in|M|\backslash J,a>I\), there is \(b\in J,I<b<a\), and also \(J\cap I\) is unbounded in \(I\).
4. The same holds for the converse order.
5. If \(I_{1}<I_{2}\in H_{n},i<q\) then there is \(I\in H_{n+1}\) with \(th(I,\bar{P})=\langle\ell^{i},s_{1}^{i},s_{2}^{i}\rangle\).13
6. \(a_{n},b_{n}\in\bigcup\{I:I\in H_{n}\}\).
7. If \(I\in H_{n}\) has a first (last) element then this element belongs to \(J\).

It is not hard to define the \(H_{n}\)'s. Clearly \(\bigcup_{n}\bigcup_{I\in H_{n}}I=|M|\). So define \(E\) as follows:
Footnote 13: Also, \(I_{1}<I<I_{2}\), and \(I_{0}\in H_{n}\) implies \(th(I_{0},\bar{P})\in W\).
\(aEb\) if and only if \(a=b\) or for some \(n<\omega,I\in H_{n},a,b\in I\).
It is not hard to check that \(W_{E}=W\). So we finish the proof.
Along similar lines we can prove
**Theorem 6.4**.: _Suppose \(M\) is a dense order with no first nor last elements, \(M\) is a submodel of the reals, and for every perfect set \(P\) of reals, \(P\cap|M|\) is countable, or even \(<2^{\aleph_{0}}\). Then the monadic theory of \(M\) is the monadic theory of rationals._
Remark 1: We can integrate the results of 6.3, 6.4. Always some \(M\) satisfies the hypothesis of 6.4. If \(2^{\aleph_{0}}>\aleph_{1}\), any dense \(M\subseteqq R,|M|<2^{\aleph_{0}}\), and if \(2^{\aleph_{0}}=\aleph_{1}\), the existence can be proved.
Remark 2: In 6.4 we can demand less of \(|M|\): For all countable, disjoint and dense sets \(Y_{1},\ldots Y_{n}(n<\omega)\) there is a perfect set \(P\) of reals such that \(Y_{i}\) is dense in \(P\) for \(1\leqq i\leqq n\) and \(P\cap|M|\) is \(<2^{\aleph_{0}}\) (see Section 7 for definition).
The proof of 5.4 is easily applied to the monadic theory of the reals. (We should only notice that \(R\) is complete.)
_Conclusion 6.5_.: If we can compute the \(UTh^{n}(R)\) for \(n<\omega\) then the monadic theory of the real order is decidable.
Remark: Similar conclusions hold if we add to the monadic quantifier (or replace it by) (\(\exists^{<\aleph_{1}}X\)) (i.e., there is a countable \(X\)). Notice that if \(E\) is a convex equivalence relation over \(R\), then \(\{a/E:|a/E|>1\}\) is countable.
Grzegorczyk [10] asked whether the lattice of subsets of reals with the closure operation has a decidable theory. One of the corollaries of Rabin [12] is that the theory of the reals with quantification over closed sets, and quantification over \(F_{\sigma}\) sets is decidable.
By our methods we can easily prove
**Theorem 6.6**.: _The reals, with quantifications over countable sets, has a decidable theory. (We can replace "\(X\) countable" by "\(|X|<2^{\aleph_{0}}\)" or "\((\forall P)\) (\(P\) closed nowhere dense \(\to|P\cap X|<2^{\aleph_{0}}\))")._
_As every closed set is a closure of a countable set, this proves again the result of Rabin [12] concerning Grzegorczyk's question. We can also prove by our method Rabin's stronger results, but with more technical difficulties._
## 7. Undecidability of the monadic theory of the real order
Our main theorem here is
**Theorem 7.1**.:
1. _(CH) The monadic theory of the real order is undecidable._
2. _(CH) The monadic theory of order is undecidable._
**Theorem 7.2**.: _(CH) The monadic theory of \(K_{n}=\{(R,Q_{1},\ldots,Q_{n}):Q_{i}\subseteqq R\}\), where the set quantifier ranges over countable sets, \(1\leqq n\), is undecidable. (We can even restrict ourselves to sets of rationals.)_
_Let \(2^{\leqq\omega}\) be the set of sequences of ones and zeros of length \(\leqq\omega\); let \(\leqq\) be the partial ordering of \(2^{\leqq\omega}\) by "being an initial segment", and \(\prec\) the lexicographic order._
**Theorem 7.3**.:
1. _(CH) The monadic theory of_ \((2^{\leqq\omega},\leqq,\prec)\) _is undecidable._
2. _(CH) The monadic theory of_ \(K_{n}=\{(2^{\leqq\omega},\leqq,\prec,Q_{1},\ldots,Q_{n}):Q_{i}\subseteqq 2^{\leqq\omega}\}\)_, where the set quantifier ranges over sets,_ \(1\leqq n\)_, is undecidable._ _(We can even restrict ourselves to subsets of_ \(2^{<\omega}\)_)._
_Instead of the continuum hypothesis, we can assume only:_
1. _"The union of_ \(<2^{\aleph_{0}}\) _sets of the first category in not_ \(R\)_"._ _This is a consequence of Martin's axiom (see_ _[_11_]__) hence weaker than CH, but also its negation is consistent, (see Hechler_ _[_11_]_ _and Mathias_ _[_12_]_ _and Solovay_ _[_13_]__). Aside from countable sets, we can use only a set constructible from any well-ordering of the reals. Remember that by Rabin_ _[_14_]_ _quantification over closed and_ \(F_{\sigma}\) _sets gives us still a decidable theory._
Conjecture 7A: The monadic theory of \((2^{\leqq\omega},\leqq,\prec)\), where the set quantifier ranges over Borel sets only, is decidable.
This should be connected to the conjecture on Borel determinacy (see Davis [15], Martin [12] and Paris [13]).14 This conjecture implies
Footnote 14: Meanwhile Martin [12] proved the Borel determinacy.
Conjecture 7B: The monadic theory of the reals, where the set quantifier ranges over Borel sets, is decidable (by Rabin [14]).
Conjecture 7C: We can prove 7.1-7.3 in ZFC.
Theorems 7.1(A),(B), 7.3(A) answer well known problems (see e.g., Buchi [1, p.38, Problems 1, 2a, 2b, 4a]). Theorem 7.3(B) answers a question of Rabin and the author.
Unless mentioned otherwise, we shall use CH or (*).
Notation: \(R\) denotes the reals. A _perfect_ set is a closed, nowhere dense set of reals, with no isolated points and at least two points (this is a somewhat deviant definition). We use \(P\) to denote perfect sets. Let \(x\) be an inner point of \(P\) if \(x\in P\), and for every \(\epsilon>0,(x-\epsilon,x)\cap P\neq\varnothing,(x,x+\epsilon)\cap P\neq\varnothing\). Let \(D\subseteqq R\) be dense in \(P\) if for all inner points \(x<y\) of \(P\), there is an inner \(z\in P\cap D,x<z<y\). Note that if \(D\) is dense in \(P,P\) is the closure of \(P\cap D\). Real intervals will be denoted by \((a,b)\) where \(a<b\), or by \(I\); \((a,b)\) is an interval of \(P\) if in addition \(a,b\) are inner points of \(P\).
**Lemma 7.4**.: _Let \(J\) be an index-set, the \(D_{i}\)\((i\in J)\) countable dense subsets of \(R\), and \(D=\bigcup_{i\in J}D_{i}\); and for every \(P,|D\cap P|<2^{\aleph_{0}}\). Then there is \(Q\subseteq R\backslash D,Q=Q\{D_{i}:i\in J\}\), such that_
1. _if_ \(P\cap D\subseteqq D_{i}\) \((i\in J)\) _and_ \(D_{i}\) _is dense in_ \(P\) _(_\(P\) _is, of course, perfect) then_ \(|P\cap Q|<2^{\aleph_{0}}\)_;_
2. _if for no (interval)_ \(I\) _of_ \(P\)_, and_ \(i\in J,P\cap D\cap I\subseteqq D_{i}\) _but_ \(D\) _is dense in_ \(P\) _then_ \(P\cap Q\neq\varnothing\)_._
Proof: Let \(\{P_{\alpha}:0<\alpha<2^{\aleph_{0}}\}\) be any enumeration of the perfect sets. We define \(x_{\alpha},\alpha<2^{\aleph_{0}}\) by induction on \(\alpha\).
For \(\alpha=0,x_{\alpha}\in R\) is arbitrary.
For any \(\alpha>0\), if \(P_{\alpha}\) does not satisfy the assumptions of (B) then let \(x_{\alpha}=x_{0}\) and if \(P_{\alpha}\) satisfies the assumptions of (B) let \(x_{\alpha}\in P_{\alpha}\setminus\bigcup\{P_{\beta}:\beta<\alpha,(\exists i\in J)(P_{\beta}\cap D\subseteqq D_{i}\) and \(D\) is dense in \(P_{\beta})\}\setminus D\).
This is possible because for any \(\beta,i\), if \(P_{\beta}\cap D\subseteqq D_{i},D\) is dense in \(P_{\beta},P_{\beta}\cap P_{\alpha}\) is a closed nowhere dense subset of \(P_{\alpha}\). As otherwise for some interval \(I\) of \(P_{\alpha},P_{\beta}\cap P_{\alpha}\) is dense in \(P_{\alpha}\), so by the closedness of \(P_{\beta}\cap P_{\alpha},P_{\beta}\cap P_{\alpha}\cap I=P_{\alpha}\cap I\); therefore
\[D_{i}\supseteqq P_{\beta}\cap D\supseteqq P_{\alpha}\cap I\cap D,\]
a contradiction of the assumption on \(P_{\alpha}\). So by (*) and the hypothesis \(|P_{\alpha}\cap D|<2^{\aleph_{0}}\) there exists such \(x_{\alpha}\).
Now let \(Q=\{x_{\alpha}:\alpha<2^{\aleph_{0}}\}\). If \(P\) satisfies the assumption of (A), then \(P\in\{P_{\alpha}:0<\alpha<2^{\aleph_{0}}\}\). Hence for some \(\alpha,P=P_{\alpha}\), hence \(P\cap Q\subseteqq\{x_{\beta}:\beta\leqq\alpha\}\), so \(|P\cap Q|<2^{\aleph_{0}}\). If \(P=P_{\alpha}\) satisfies the assumption of (B) then \(x_{\alpha}\in P_{\alpha},x_{\alpha}\in Q\), hence \(P_{\alpha}\cap Q\neq\varnothing\). So we have proved the lemma.
**Lemma 7.5**.: _There is a dense \(D\subseteqq R\) and \(\{D_{i}:i\in J\},|J|=2^{\aleph_{0}}\) such that_
1. \(|D\cap P|<2^{\aleph_{0}}\) _for every perfect_ \(P\)_._
2. _The_ \(D_{i}\) _are pairwise disjoint._
3. \(D_{i}\subseteqq D,D_{i}\) _is dense._
Proof: Let \(\{P_{\alpha}:\alpha<2^{\aleph_{0}}\}\) enumerate the perfect subsets of \(R\), and let \(\{I_{n}:n<\omega\}\) enumerate the rational intervals of \(R\), and if \(\alpha=\delta+n\) (\(n<\omega,\delta\) a limit ordinal) choose \(x_{\alpha}\in I_{n}\setminus\bigcup_{\beta<\alpha}P_{\beta}\setminus\{x_{\beta}:\beta<\alpha\}\) and let \(D=\{x_{\beta}:\beta<2^{\aleph_{0}}\}\), \(D_{\alpha}=\{x_{\omega\alpha+n}:n<\omega\}\).
Notation: \(J\) will be an index set; \([J]^{n}=\{U:U\subseteqq J,|U|=n\}\), and if \(D_{i}\) is defined for \(i\in J\), let \(D_{U}=\bigcup_{i\in U}D_{i}\). Subsets of \([J]^{n}\), i.e., symmetric \(n\)-place relations over \(J\), are denoted by \(S\); and if we know \(\{D_{i}:i\in J\},Q_{S}\) will be \(Q\{D_{U}:U\in S\cup[J]^{n-1}\}\) from 7.4.
**Definition 7.6**.: Let \(\varphi_{n}(X,D,Q,I^{*})\) be the monadic formula saying
1. \(X\) is a dense set in \(I^{*}\) and \(X\subseteqq D\).
2. For every interval \(I\subseteqq I^{*}\) and sets \(Y_{i}\), \(i=1,\ldots,n+1\), if \(Y_{i}\cap I\subseteqq X\) and the \(Y_{i}\) are pairwise disjoint and each \(Y_{i}\) is dense in \(I\), then there is a perfect set \(P\), \(P\cap Q=\varnothing\), such that each \(Y_{i}\cap I\) is dense in \(P\).
Remark: We can represent the interval \(I_{0}\) as a convex set.
**Lemma 7.7**.: _Let \(D,\{D_{i}:i\in J\}\) be as in 7.5, \(I^{*}\) an interval, \(S\subseteqq[J]^{n}\), and \(Q_{S}\) as in the notation above. Then for any \(X\subseteqq R\), \(R\models\varphi_{n}[X,D,Q_{S},I^{*}]\) if and only if_
1. _\(X\) is dense in \(I^{*}\) and \(X\subseteqq D\),_
2. _for every interval \(I\subseteqq I^{*}\) there is a subinterval \(I_{0}\subseteqq I\) and \(U\in S\cup[J]^{n-1}\) such that \(X\cap I_{0}\subseteqq D_{U}\)._

Proof: The main point is the following claim, for pairwise disjoint \(Y_{1},\ldots,Y_{n+1}\subseteqq D\), each dense in an interval \(I\):

(*) We can find perfect \(P_{\alpha}(\alpha<2^{\aleph_{0}})\) such that each \(Y_{k}\) (\(1\leqq k\leqq n+1\)) is dense in \(P_{\alpha}\) and \(\alpha\neq\beta\) implies \(P_{\alpha}\cap P_{\beta}\subseteqq\bigcup_{k=1}^{n+1}Y_{k}\).
Proof of (*): For \(\eta\) a finite sequence of ones and zeros \(X_{\eta}\) will be a set of closed-open intervals and singletons with endpoints in \(\bigcup_{k=1}^{n+1}Y_{k}\), which are pairwise disjoint. We define \(X_{\eta}\) by induction on \(\ell(\eta)\). Let \(X_{\langle\rangle}=\{[a,b)\}\), where \(a,b\in Y_{1}\), and if \(X_{\eta}\) is defined, for each interval \([a,b)\in X_{\eta}\), choose a decreasing sequence \(x_{i}^{a}\) (\(i<\omega\)) whose limit is \(a\), and \(x_{0}^{a}<b\) and \(x_{i}^{a}\in Y_{k}\) if and only if \(\ell(\eta)=k\) mod \(n+1,1\leqq k\leqq n+1\). Let, for \(m=0,1\):
\[X_{\eta^{-}\langle m\rangle}=\{(x_{i+1}^{a},x_{i}^{a}):\text{ for some }b,[a,b)\in X_{\eta}\text{ and }i=m\text{ mod }2\}\]
\[\cup\{\{a\}:\text{ for some }b,[a,b)\in X_{\eta},\text{ or }\{a\}\in X_{\eta}\}.\]
For \(\eta\) a sequence of ones and zeros of length \(\omega\), \(P_{\eta}=\bigcap_{n<\omega}(\bigcup X_{\eta\restriction n})\).
Because \(|P\cap D|<2^{\aleph_{0}}\) for some \(\alpha,P_{\alpha}\cap D\subseteqq\bigcup_{k=1}^{n+1}Y_{k}\); so by 7.4 (and the choice of \(Q\)'s), \(|P_{\alpha}\cap Q_{S}|<2^{\aleph_{0}}\). We can find \(P_{\alpha}^{\beta}(\beta<2^{\aleph_{0}})\) such that each \(Y_{k}\) is dense in \(P_{\alpha}^{\beta}\) and \(\beta\neq\gamma\Rightarrow P_{\alpha}^{\beta}\cap P_{\alpha}^{\gamma}\subseteqq \bigcup_{k=1}^{n+1}Y_{k}\). So for some \(\beta,P_{\alpha}^{\beta}\cap Q\subseteqq\bigcup_{k=1}^{n+1}Y_{k}\subseteqq D\), but \(Q\subseteqq R\backslash D\) hence \(P_{\alpha}^{\beta}\cap Q=\varnothing\), and we finish.
**Definition 7.8**.: Let \(\psi_{n}(X,D,Q,I^{*})\) be the monadic formula saying
* \(\varphi_{n}(X,D,Q,I^{*})\),
* for any interval \(I_{1}\subseteqq I^{*}\), if \(Y\) is disjoint to \(X\) and dense in \(I_{1}\) then \(\neg\varphi_{n}(X\cup Y,D,Q,I_{1})\).
**Lemma 7.9**.: _Let \(D,J,D_{i},S,Q_{S}\) be as in 7.7. Then for any \(X\subseteqq R,R\models\psi_{n}[X,D,Q_{S},I^{*}]\) if and only if_
* \(X\) _is dense in_ \(I^{*},X\subseteqq D\)_,_
* _for any interval_ \(I\subseteqq I^{*}\) _there is a subinterval_ \(I_{1}\) _and_ \(U\in S\cup\{V\in[J]^{n-1}:(\forall i\in J)(V\cup\{i\}\notin S)\}\) _such that_ \(X\cap I_{1}=D_{U}\cap I_{1}\)_._
Proof:
* Suppose \(R\models\psi_{n}[X,D,Q_{S},I^{*}]\), then clearly condition (A) holds. For condition (B) let \(I\subseteqq I^{*}\) be an interval. By Definition 7.8(A), \(R\models\varphi_{n}[X,D,Q_{S},I^{*}]\), hence by Lemma 7.7(B), \(I\) has a subinterval \(I_{0}\) such that \(X\cap I_{0}\subseteqq D_{U}\) where \(U\in S\cup[J]^{n-1}\). If \((D_{U}\backslash X)\cap I_{0}\) is somewhere dense, let it be dense in \(I_{1}\subseteq I_{0}\), and let \(Y=(D_{U}\backslash X)\cap I_{1}\), which gives us a contradiction to Definition 7.8(B). If \(U\in[J]^{n-1}\), and for some \(i\in J,V=U\cup\{i\}\in S\), we can get a similar contradiction by \(Y=(D_{V}\backslash X)\cap I_{0}\) in the interval \(I_{0}\) (as \(D_{i}\subseteq D_{V}\backslash X,Y\) is dense). We can conclude that: \(U\in S\) or \(U\in[J]^{n-1}\) and \(U\cup\{i\}\notin S\) for every \(i\in J\) and that \((D_{U}\backslash X)\cap I_{0}\) is nowhere dense. Hence for some \(I_{1}\subseteq I_{0},(D_{U}\backslash X)\cap I_{1}=\varnothing\) hence \(X\cap I_{1}=D_{U}\cap I_{1}\).
* Now suppose that conditions (A),(B) hold; by Lemma 7.7 it is easy to see that \(R\models\psi_{n}[X,D,Q_{S},I^{*}]\).
**Definition 7.10**.: Let \(\chi_{1}(D,Q,I^{*})\) be the monadic formula saying:
1. \(D\) is dense in \(I^{*},I^{*}\) an interval;
2. if \(I\subseteq I^{*},X,Y\) are dense in \(I\) and \[R\models\psi_{1}[X,D,Q,I]\wedge\psi_{1}[Y,D,Q,I]\] then for some \(I_{1}\subseteq I\), \[X\cap Y\cap I=\varnothing\mbox{ or }X\cap I_{1}=Y\cap I_{1}.\]
**Lemma 7.11**.:
1. _If_ \(D,\{D_{i}:i\in J\}\)_, are as in_ 7.5 _then for any interval_ \(I^{*},R\models\chi_{1}[D,Q_{J},I^{*}]\)_._
2. _If_ \(R\models\chi_{1}[D,Q,I^{*}]\) _then we can find_ \(I\subseteqq I^{*}\)_, and_ \(X_{i},i<\alpha_{0}\) _such that_ 1. _each_ \(X_{i}\) _is a dense subset of_ \(I\) _and_ \(R\models\psi_{1}[X_{i},D,Q,I]\)_,_ 2. _if_ \(I_{0}\subseteqq I\)_, and_ \(X\subseteqq I_{0}\) _is dense in_ \(I_{0}\) _and_ \(R\models\psi_{1}[X,D,Q,I_{0}]\) _then there are_ \(i<\alpha_{0}\) _and_ \(I_{1}\subseteqq I_{0}\) _such that_ \(X\cap I_{1}=X_{i}\cap I_{1}\)_._
3. _In (B),_ \(|\alpha_{0}|\) _is uniquely defined by_ \(D,Q,I\)_._
Proof:
1. By 7.9 it is immediate.
2. Let \(\{X_{i}:i<\alpha_{0}\}\) be a maximal family satisfying (1) and (2) for \(I=I^{*}\). If for some interval \(I\) there are no subintervals \(I^{1}\) and dense \(X^{*}\subseteqq X\cap I^{1}\) such that \((\forall i<\alpha_{0})\) (\(X_{i}\cap X^{*}\) is nowhere dense)15 we are finished. Otherwise we can choose inductively on \(n\) intervals \(I^{n}\subseteqq I^{*}\) disjoint to \(\bigcup_{\ell<n}I^{\ell}\) and \(X^{*}_{n}\subseteqq X\cap I^{n}\) such that \((\forall i<\alpha_{0}),X_{i}\cap X^{*}_{n}\) is nowhere dense16, and such that \(\bigcup_{n<\omega}I^{n}\) is dense in \(I\). Then we could have defined \(X_{\alpha_{0}}=\bigcup_{n<\omega}X^{*}_{n}\), a contradiction. Footnote 15: and \(R\models\psi_{1}[X^{*},D,Q,I^{1}]\).
3. Easy.
**Definition 7.12**.: Let \(\chi^{n}(Q_{1},D,Q,I^{*})\) be the monadic formula saying
1. \(D\) is dense in \(I^{*}\), which is an interval.
2. Suppose \(I_{0}\subseteqq I^{*},X_{\ell}\subseteqq I_{0}(\ell<n)\) and \(R\models\bigwedge_{\ell<n}\psi_{1}(X_{\ell},D,Q,I_{0})\). Then there is \(I_{1}\subseteqq I_{0}\) such that for all \(I_{2}\subseteqq I_{1}\) \[R\models\psi_{n}(\bigcup_{\ell<n}X_{\ell},D,Q_{1},I_{1})\equiv\psi_{n}(\bigcup_{\ell<n}X_{\ell},D,Q_{1},I_{2}).\]
**Lemma 7.13**.: _If \(D,\{D_{i}:i\in J\}\) are as in Lemma 7.5, \(S\subseteqq[J]^{n}\) then for any interval \(I^{*},R\models\chi^{n}[Q_{S},D,Q,I^{*}]\)._
Proof: Immediate.
**Theorem 7.14**.: _The set \(A_{r}\) is recursive in the monadic theory of order; where \(A_{r}=\{\theta:\theta\) is a first order sentence which has an \(\omega\)-model i.e., a model \(M\) such that \((|M|,R_{1})\) is isomorphic to \((\omega,x+1=y)\}\)._
_Conclusion 7.15_.: True first order arithmetic is recursive in the monadic theory of order.
Proof: It suffices to define for every first order sentence \(\theta\), a monadic sentence \(G(\theta)\) so that \(R\models G(\theta)\) if and only if \(\theta\) has an \(\omega\)-model.
By using Skolem-functions and then encoding them by relations, we can define effectively the sentence \(G_{1}(\theta)\) such that \(\theta\) has an \(\omega\)-model if and only if \(G_{1}(\theta)\) has an \(\omega\)-model and
\[G_{1}(\theta)=(\forall x_{1},\ldots,x_{n(0)})(\exists x_{n(0)+1},\ldots,x_{n( 1)})(\bigvee_{i}\bigwedge_{j}\theta_{ij}),\]
\(\theta_{ij}\) is an atomic, or a negation of an atomic, formula; only the relations \(R_{0},\ldots,R_{n(2)}\) appear in it; \(R_{0}\) is the equality; and \(R_{i}\) has \(m(i)\)-places.
Define (where \(X,Y,D,Q\) are variables ranging over sets, \(I\) is a variable ranging over intervals and \(x,y\) are individual variables):
1. \(G_{2}(X_{k}=X_{\ell})=(\forall I^{1}\subseteqq I^{*})(\exists I^{2} \subseteqq I^{1})(X_{k}\cap I^{2}=X_{\ell}\cap I^{2}),\)
2. \(G_{2}[R_{\ell}(X_{k(1)},\ldots,X_{k(m(\ell))})]=(\exists Y)(Y\subseteqq D\backslash D^{*}\wedge\bigwedge_{i=1}^{m(\ell)}\psi_{2}(X_{k(i)}\cup Y,D,Q_{i}^{\ell},I^{*}))\) (for \(\ell>0\)),
3. \(G_{3}(\theta)=(\forall X_{1},\ldots,X_{n(0)})(\exists X_{n(0)+1},\ldots,X_{n(1)})\) \((\forall I^{0}\subseteqq I)(\exists I^{*}\subseteqq I^{0})[\bigwedge_{\ell=1}^{n(0)}\psi_{1}(X_{\ell},D,Q^{*},I^{*})\bigwedge\bigwedge_{\ell=1}^{n(0)}X_{\ell}\subseteqq D^{*}\) \(\rightarrow\bigwedge_{\ell=n(0)+1}^{n(1)}X_{\ell}\subseteqq D^{*}\wedge\bigwedge_{\ell=n(0)+1}^{n(1)}\psi_{1}(X_{\ell},D,Q^{*},I^{*})\wedge\bigwedge_{i}\bigvee_{j}G_{2}(\theta_{ij})].\)
4. Let \(\chi^{*}\) be the conjunction of the following formulas: (\(\alpha\)) \(D,D^{*}\) are dense in \(I,D^{*}\subseteqq D\), (\(\beta\)) \(\chi_{1}(D,Q^{*},I)\), (\(\gamma\)) \(\chi^{2}(Q_{\ell}^{i},D,Q^{*},I)\). Let us denote \(\tilde{R}_{1}(X,Y,Q_{1}^{1},Q_{1}^{2},I^{\prime})=(X\subseteqq D^{*}\wedge Y\subseteqq D^{*}\wedge X\cap Y=\varnothing\wedge\) \(\psi_{1}(X,D,Q^{*},I^{\prime})\wedge\psi_{1}(Y,D,Q^{*},I^{\prime})\wedge(\exists Z)[Z\subseteqq D\backslash D^{*}\wedge\psi_{1}(Z,D,Q^{*},I^{\prime})\wedge\) \(\psi_{2}(X\cup Z,D,Q_{1}^{1},I^{\prime})\wedge\psi_{2}(Y\cup Z,D,Q_{1}^{2},I^{\prime})]\) and (\(\delta\)) \(\psi_{1}(X_{0},D,Q^{*},I)\wedge X_{0}\subseteqq D^{*}\wedge(\forall Y)[\psi_{1}(Y,D,Q^{*},I)\wedge Y\subseteqq D^{*}\rightarrow(\exists Y_{1})\tilde{R}_{1}(Y,Y_{1})]\wedge(\forall I^{\prime}\subseteqq I)(\forall Y)\neg\tilde{R}_{1}(Y,X_{0},Q_{1}^{1},Q_{1}^{2},I^{\prime})\wedge\) \((\forall Y_{1}Y_{2}Y_{3})(\forall I^{0}\subseteqq I)[\tilde{R}_{1}(Y_{1},Y_{2},Q_{1}^{1},Q_{1}^{2},I^{0})\wedge\tilde{R}_{1}(Y_{1},Y_{3},Q_{1}^{1},Q_{1}^{2},I^{0})\) \(\rightarrow(\forall I^{1}\subseteqq I^{0})(\exists I^{2}\subseteqq I^{1})Y_{2}\cap I^{2}=Y_{3}\cap I^{2}].\) (\(\epsilon\)) The formula saying that if \((\delta)\) holds when we replace \(Q_{1}^{1},Q_{1}^{2}\) by \(\tilde{Q}_{1}^{1},\tilde{Q}_{1}^{2}\) resp. then \((\forall X)(\forall Y)(\forall I^{\prime}\subseteqq I)[\tilde{R}_{1}(X,Y,Q_{1}^{1},Q_{1}^{2},I^{\prime})\rightarrow\tilde{R}_{1}(X,Y,\tilde{Q}_{1}^{1},\tilde{Q}_{1}^{2},I^{\prime})].\)
5. \(G(\theta)=(\exists Q^{*},D,D^{*},X_{0},\ldots,Q_{\ell}^{i},\ldots)(\forall I)[ \chi^{*}\wedge G_{3}(\theta)].\) Now we should prove only that \(\theta\) has an \(\omega\)-model if and only if \(R\models G(\theta)\).
1. Suppose \(M\) is an \(\omega\)-model of \(\theta\). Let \(J=\omega+\omega,D_{i}(i<\omega+\omega)\) be countable, pairwise disjoint, dense subsets of \(R\). Choose symmetric and reflexive relations \(S^{i}_{\ell}\) on \(\omega+\omega\) so that \[M\models R_{\ell}(x_{1},\ldots,x_{m(\ell)})\Leftrightarrow(\exists y\in\omega+\omega)(\bigwedge_{i=1}^{m(\ell)}\langle y,x_{i}\rangle\in S^{i}_{\ell}\wedge y\notin\omega).\] To prove \(R\models G(\theta)\), let \(D=\bigcup_{i<\omega+\omega}D_{i},D^{*}=\bigcup_{i<\omega}D_{i},Q^{i}_{\ell}=Q_{(S^{i}_{\ell})},X_{0}=D_{0}\), and \(Q^{*}=Q_{\omega+\omega}\). Let \(I\) be any interval. It is not hard to check that under those assignments \(R\models\chi^{*}\wedge G_{3}(\theta)\).
2. Now suppose \(R\models G(\theta)\). Let \(Q^{*},D,D^{*},X_{0},Q^{i}_{\ell}\) be such that \(R\models(\forall I)(\chi^{*}\wedge G_{3}(\theta))\). By (4) (\(\beta\)), clearly \(R\models(\forall I)\chi_{1}(D,Q^{*},I)\). Hence by Lemma 7.11(B) there are \(I\) and \(D_{i},i<\alpha_{0}\) satisfying (1),(2),(3) from 7.11(B). As \(R\models(\forall I)(\chi^{*}\wedge G_{3}(\theta))\), then in particular \(R\models\chi^{*}\wedge G_{3}(\theta)\). By (4)(\(\delta\)), \(R\models\psi_{1}(X_{0},D,Q^{*},I)\), so we can choose \(D_{0}=X_{0}\). (See the proof of 7.11.) By (4)(\(\delta\)) we can also assume that \(R\models\tilde{R}_{1}(D_{n},D_{n+1})\) for \(n<\omega\). By (4)(\(\epsilon\)) necessarily \(D_{i}\subseteqq D^{*}\Leftrightarrow i<\omega\). Let \(\{\tilde{j}_{\ell}:\ell<\omega\}\) enumerate all sequences \(j=\langle j(1),\ldots,j(n(0))\rangle\) of natural numbers. As \(R\models G_{3}(\theta)\) for every \(\tilde{j}_{\ell}\) we can choose \(X_{i}=D_{j_{\ell}(i)}\), and so there is an assignment \(X_{i}\to D^{\ell,i}\) for \(n(0)<i\leqq n(1)\) showing that \(R\models G_{3}(\theta)\). So we can define by induction on \(n<\omega\) intervals \(I_{n}\) so that: \(I_{n+1}\subseteqq I_{n},I_{0}\subseteqq I\), and for every \(n(0)<i\leqq n(1)\) for some \(j_{n}(i)<\alpha_{0},D^{\ell,i}\cap I_{n+1}=D_{j_{n}(i)}\cap I_{n+1}\). Now we define a model \(M:|M|=\omega\), and \(M\models R_{\ell}[j(1),\ldots,j(m(\ell))]\Leftrightarrow\) for some \(n,R\models(\exists Y)[Y\subseteqq D\setminus D^{*}\bigwedge\bigwedge_{i=1}^{m(\ell)}\psi_{2}(D_{j(i)}\cup Y,D,Q^{i}_{\ell},I_{n})]\). It is easy to check that \(M\models\theta\).
Remark: By some elaboration, we can add to the definition of \(A_{r}\) also the demand
"\(R_{2}\) is a well-founded two-place relation"
(also for uncountable structures). Thus, e.g., there are sentences \(\theta_{n}\), such that MA implies: \(R\models\theta_{n}\) if and only if \(2^{\aleph_{0}}=\aleph_{n}\).
**Theorem 7.16**.: _The set of first-order sentences which have a model is recursive in the monadic theory of \(\{(R,Q):Q\subseteqq R\}\), where the set-variables range over subsets of the rationals._
Remark: Notice that a quantification over \(P\) such that \(D\) is dense in \(P\) can be interpreted by a quantification over \(P\cap D\), as the property "\(x\) is in the closure of \(X\)" is first-order. Hence \(\varphi_{n},\psi_{n}\) are expressible in our restricted monadic theory.
By 7.14,7.15, Theorems 7.1,7.2 and 7.3 are in fact immediate. Theorem 7.1(B) can also be proved by the following observation of Litman [14], which is similar to 3.6(B)(1):
**Lemma 7.17**.: _The monadic theory of the real order is recursive in the monadic theory of order._
Proof: For every monadic sentence \(\theta\) let \(G(\theta)\) be the monadic sentence saying:
"If the set \(X\) is completely ordered, is dense and has no first nor last elements then some \(Y\subseteqq X\) has those properties and in addition \((Y,<)\models\theta\)."
As every complete dense order contains a subset isomorphic to \(R\), and any complete dense order \(\subseteqq R\) with no first nor last element is isomorphic to \(R\), clearly \(G(\theta)\) is satisfied by all orders if and only if \(R\models\theta\); so our result is immediate.
Conjecture 7D: The monadic theory of \(R\) and the (pure) second-order theory of \(2^{\aleph_{0}}\) are recursive in each other.17
Footnote 17: Gurevich proved it when \(V=L\).
Conjecture 7E: The monadic theory of \(\{(R,Q):Q\subseteqq R\}\) with the set-quantifiers ranging over subsets of the rationals; and the (pure) second-order theory of \(\aleph_{0}\) are recursive in each other. Gurevich notes that if \(V=L\) the intersection of 7D,E holds.
Conjecture 7F: The monadic theory of order and the (pure) second-order theory are recursive in each other.
In conjectures 7D,E,F use (*) or CH if necessary.
Conjecture 7G: If \(D_{\ell}\) is a dense subset of \(R\), and for every \(P,|P\cap D_{\ell}|<2^{\aleph_{0}}\), for \(\ell=1,2\) then \((R,D_{1}),(R,D_{2})\) have the same monadic theory.18
Footnote 18: Gurevich disproved it.
|
2308.11899 | Amplification and Excitation of Surface Plasmon Polaritons via Four-Wave
Mixing Process | We suggest a scheme for the excitation and amplification of surface plasmon
polaritons (SPPs) along the interface between metal and semiconductor quantum
well (SQW), employing a four-wave mixing (FWM) process. The SQW consists of
four-level asymmetric double quantum wells that exhibit quantum interference
effects, which leads to the coupler-free excitation of SPPs. In our proposed
system, the inherent losses of SPPs are compensated by introducing gain through
the FWM process. This results in a significant enhancement in the propagation
length and large penetration depth of SPPs. We further analyze the effect of
gain on the long-range and short-range SPPs and observe that the propagation
distance and lifetime of both types of SPPs are enhanced. | Andleeb Zahra, Muqaddar Abbas, Rahmat Ullah | 2023-08-23T03:55:04Z | http://arxiv.org/abs/2308.11899v1 | # Amplification and Excitation of Surface Plasmon Polaritons via Four-Wave Mixing Process
###### Abstract
We suggest a scheme for the excitation and amplification of surface plasmon polaritons (SPPs) along the interface between metal and semiconductor quantum well (SQW), employing a four-wave mixing (FWM) process. The SQW consists of four-level asymmetric double quantum wells that exhibit quantum interference effects, which leads to the coupler-free excitation of SPPs. In our proposed system, the inherent losses of SPPs are compensated by introducing gain through the FWM process. This results in a significant enhancement in the propagation length and large penetration depth of SPPs. We further analyze the effect of gain on the long-range and short-range SPPs and observe that the propagation distance and lifetime of both types of SPPs are enhanced.
## 1 Introduction
Surface Plasmon Polaritons (SPPs) are surface electromagnetic waves that arise from the interaction of the incident light with free-electron oscillations in a metal, and propagate along the metal-dielectric interface within the frequency range from the visible to the infrared (IR). These surface waves have evanescent-like behavior in the plane normal to the interface. The history of SPPs goes back hundreds of years [1, 2], with the initial observation credited to Robert Wood in 1902 [3]. However, intensive research started after the investigation of the plasmonic properties of silver and gold nanoparticles [4]. SPPs have the peculiar property of confining light to the nanoscale, relaxing the classical diffraction limit. This makes them attractive for the fabrication of on-chip integrated plasmonic devices [5]. So far, SPPs are widely used in biomedical and chemical sensors, photo-lithography [6], and nonlinear nano-scale photonics [7]. The propagating nature of SPPs is also exploited for information transfer, including the plasmon-assisted transmission of entangled photons [8, 9] and SPP-mediated quantum teleportation [10]. However, there is a characteristic trade-off between the increased confinement of SPPs at the interface and their large propagation distance. Typically, a single-interface structure with two semi-infinite media sustains only a single mode of SPPs. On the contrary, the dispersion relation for propagating SPPs in thin films surrounded by two media with similar or different refractive indices splits into two branches [11, 12, 13, 14], indicating two distinct (symmetric and anti-symmetric) modes of SPPs. The existence of these modes on unsupported thin metal films is predicted theoretically and also verified experimentally [15]. The former mode has a comparatively larger propagation length than single-interface SPPs, and these are referred to as long-range SPPs (LR-SPPs) [16, 17], whereas the short-range SPPs (SR-SPPs) show increased confinement to the metal film. Since SPPs are sensitive to surface conditions, they decay rapidly and have small propagation distances, which limits their practical applications. Earlier on, structured metallic films [18] or gratings [19] are used to slow down SPPs and enhance their propagation length. A few years later, electromagnetically-induced transparency (EIT) is harnessed to slow down the SPPs at the interface of a dielectric and an active negative-index metamaterial [20].
The excitation of SPPs requires energy and momentum conservation. The dispersion relation of SPPs is given by \(k_{SPP}=\frac{\omega}{c}\sqrt{\frac{\epsilon_{m}\,\epsilon_{d}}{\epsilon_{m}+\epsilon_{d}}}\)[21] (where \(\epsilon_{m}\), \(\epsilon_{d}\) are the relative permittivities of the metal and dielectric medium, respectively). This dispersion relation shows that the momentum of an SPP is typically much larger than the momentum of the incident light. This momentum mismatch prevents the direct excitation of SPPs by light, which is a challenge for the practical implementation of SPPs. Several schemes are thus proposed to circumvent the challenges associated with SPPs. Initially, Otto and Kretschmann proposed prism-based configurations (a prism being a tool to increase the momentum of light) that exploit frustrated total internal reflection (FTIR). However, geometrical challenges are inherent to these methods. In 2015, EIT is employed for the very first time to realize the coupler-free excitation of SPPs from light [22]. It is shown that the direct excitation of SPPs is possible only if the dielectric medium is EIT-based and the real part of its permittivity is less than one. Moreover, nonlinear [23] and free-space [24] excitation of SPPs is demonstrated by means of the four-wave mixing (FWM) process. However, the losses present in these systems are not negligible, which leads to a weak SPR. Using a gain medium is one of the possible solutions for the compensation of losses. Many coupler-based schemes are already proposed in this regard [25, 26, 27], and quantum wells are found to be good candidates as the gain medium.
In general, SQWs have discrete energy levels and atomic-vapors-like optical properties. During the past few decades, quantum coherence phenomena including lasing without inversion [28, 29, 30], coherent population trapping [31], EIT [32, 33, 34], enhancement of refractive index [35], and slow light [36] are intensively studied both theoretically and experimentally in SQWs. In addition, the FWM process is explored in several schemes of SQWs [37, 38, 39, 40]. The FWM process, a flourishing nonlinear process, is the subject of growing research for the past few years, due to its potential applications in quantum information [41], nonlinear optics [42] and spectroscopy [43].
In this article, we explore the amplification and quantum-coherence-driven excitation of SPPs along the metal-SQW interface via a FWM process. Furthermore, we introduce a thin metal film to exploit the propagation of long-range (LR) and short-range (SR) SPPs. Our proposed scheme is based on a three-layer structure, where the bottom layer is the asymmetric SQW that exhibits an EIT-based FWM process. In our system, not only does the SQW act as a gain medium, but the fourth wave generated through the FWM process also provides sufficient compensation for the losses, leading to the resonant excitation and amplification of SPPs with an increased propagation distance along the metal-SQW interface. In addition, by means of the FWM process, LR-SPPs propagate for a longer period and over a larger distance. Also, our system is more feasible than its atomic counterpart, as the quantum interference effects and other parameters are easily tunable.
## 2 Model and Equations
The schematic to study the SPPs for coupler-free SPR with relatively enhanced propagation length due to the FWM process is shown in Fig. 1. The system consists of a three-layer structure, where a metal film separates the transparent top layer (vacuum/air) from an EIT-based bottom layer that is composed of four-level asymmetric double SQWs. \(\epsilon_{t},\epsilon_{m}\) and \(\epsilon_{s}\) are the relative permittivities of the top, middle and bottom layers, respectively. To initiate the excitation of the SPPs at the metal-SQWs interface, three electromagnetic fields including a weak probe field, a strong control field, and a pump field are incident with angles \(\theta_{p}\), \(\theta_{c}\), and \(\theta_{b}\), respectively, at the top-layer-metal interface. Owing to EIT conditions, the permittivity of SQWs is coherently controllable and can lead to the resonant excitation of SPPs with reduced propagation losses. Also, our structure supports two SPPs modes (symmetric (Long-range) and anti-symmetric (Short-range)) with different propagation lengths in the limit of very thin metallic films (where
\(q<<\lambda_{0}\)).
Based on recent experimental conditions [44], we consider an asymmetric double SQW following a four-level configuration [45, 46]. Fig. 1 illustrates the structure of an asymmetric double SQW that can be fabricated by using 10 pairs of a 51-monolayer (145 Angstrom) thick wide well (WW) and a 35-monolayer (100 Angstrom) thick narrow well, which are separated by a thin AlGaAs barrier having a thickness of 9 monolayers (25 Angstrom). As shown in Fig. 1, the valence band consists of levels \(|1\rangle\) and \(|2\rangle\), which are localized hole states. The conduction band consists of levels \(|3\rangle\) and \(|4\rangle\); these are delocalized bonding and anti-bonding electronic states, respectively, having energy difference \(\omega_{s}\). These electronic states are generated due to tunneling (through a thin barrier) between the two quantum wells. Two coherent fields, i.e., the probe and control fields, govern the transitions among the electronic and hole states and induce EIT. The probe field with amplitude \(\mathcal{E}_{p}\) and Rabi frequency \(\Omega_{p}\) initiates the transition \(|3\rangle\leftrightarrow|1\rangle\) with transition frequency \(\omega_{31}\), whereas a control field of amplitude \(\mathcal{E}_{c}\) and Rabi frequency \(\Omega_{c}\) is coupled to the transition from electronic state \(|3\rangle\) to hole state \(|2\rangle\) having transition frequency \(\omega_{32}\). To drive the intersubband transition \(|4\rangle\leftrightarrow|2\rangle\), with natural frequency \(\omega_{42}\), another field, named the pump field, having amplitude \(\mathcal{E}_{b}\) and Rabi frequency \(\Omega_{b}\), is applied. These three fields then induce a FWM process \(|1\rangle\rightarrow|3\rangle\rightarrow|2\rangle\rightarrow|4\rangle\rightarrow|1\rangle\); eventually, a coherent radiation field with Rabi frequency \(\Omega_{s}\) is generated.
Under the rotating-wave approximation, the interaction picture Hamiltonian is given by ( \(\hbar=1\))
\[H_{I}=-\Delta_{p}|3\rangle\langle 3|-(\Delta_{p}-\Delta_{c})|2\rangle\langle 2|-(\Delta_{p}-\Delta_{c}+\Delta_{b})|4\rangle\langle 4|-(\Omega_{p}e^{i\mathbf{k_{p}}.\mathbf{r}}|3\rangle\langle 1|+\Omega_{c}e^{i\mathbf{k_{c}}.\mathbf{r}}|3\rangle\langle 2|+\Omega_{b}e^{i\mathbf{k_{b}}.\mathbf{r}}|4\rangle\langle 2|+\Omega_{s}e^{i\mathbf{k_{s}}.\mathbf{r}}|4\rangle\langle 1|+H.c.), \tag{1}\]
where \(\Delta_{p}=\omega_{p}-\omega_{31}\), \(\Delta_{c}=\omega_{c}-\omega_{32}\) and \(\Delta_{b}=\omega_{b}-\omega_{42}\) are the detunings of probe, control, and pump fields, respectively and \(\omega_{p}\), \(\omega_{c}\), and \(\omega_{b}\) are their corresponding angular frequencies.
Figure 1: (Color online) Schematic of SPR system with SQW as a quantum medium. The system consists of a three-layer structure with a transparent top layer (blue), a metal film (gray), and a bottom layer of SQW (green), comprised of four-level asymmetric double quantum wells. The inset shows the energy level configuration of the SQWs, enabling the EIT-based FWM process.
Then, by using the linear Schrödinger wave equation, \(i\frac{\partial|\psi\rangle}{\partial t}=H_{I}|\psi\rangle\), with \(|\psi\rangle=A_{1}|1\rangle+A_{2}e^{i(\mathbf{k_{p}}-\mathbf{k_{c}}).\mathbf{r}}|2\rangle+A_{3}e^{i\mathbf{k_{p}}.\mathbf{r}}|3\rangle+A_{4}e^{i(\mathbf{k_{p}}-\mathbf{k_{c}}+\mathbf{k_{b}}).\mathbf{r}}|4\rangle\), we get the following equations of motion for probability amplitudes
\[\frac{\partial A_{1}}{\partial t}=i\Omega_{p}^{*}A_{3}+i\Omega_{s}^{*}e^{i \delta\mathbf{k}.\mathbf{r}}A_{4}, \tag{2}\]
\[\frac{\partial A_{2}}{\partial t}=i[(\Delta_{p}-\Delta_{c})+i\gamma_{2}]A_{2} +i\Omega_{c}^{*}A_{3}+i\Omega_{b}^{*}A_{4}, \tag{3}\]
\[\frac{\partial A_{3}}{\partial t}=i(\Delta_{p}+i\gamma_{3})A_{3}+i\Omega_{p}A_{1}+i\Omega_{c}A_{2}+\kappa A_{4}, \tag{4}\]
\[\frac{\partial A_{4}}{\partial t}=i[(\Delta_{p}-\Delta_{c}+\Delta_{b})+i \gamma_{4}]A_{4}+i\Omega_{s}e^{-i\delta\mathbf{k}.\mathbf{r}}A_{1}+i\Omega_{ b}A_{2}+\kappa A_{3}, \tag{5}\]
where, \(\delta\mathbf{k}=\mathbf{k}_{p}+\mathbf{k}_{b}-\mathbf{k}_{c}-\mathbf{k}_{s}\), is the phase mismatching factor. Under the steady-state and the phase matching condition \(\delta\mathbf{k}=0\), the optical susceptibility of the SQW is given by
\[\chi=\frac{N|\mu_{13}|^{2}}{\epsilon_{0}\hbar\Omega_{p}}\times(A_{3}A_{1}^{*}), \tag{6}\]
after substitution, we get
\[\chi=\frac{N|\mu_{13}|^{2}}{\epsilon_{0}\hbar\Omega_{p}}\times\frac{(\Omega_{b}^{2}-d_{2}d_{4})\Omega_{p}-(\Omega_{c}^{*}\Omega_{b}+d_{2}i\kappa)\Omega_{s}}{(-i\kappa(\Omega_{c}^{*}\Omega_{b}+\Omega_{c}\Omega_{b}^{*})-(d_{4}\Omega_{c}^{2}+d_{3}\Omega_{b}^{2})+d_{2}(d_{3}d_{4}+\kappa^{2}))}, \tag{7}\]
where \(N\) is the number density of electrons in the conduction band of SQW. And, \(d_{2}=(\Delta_{p}-\Delta_{c})+i\gamma_{2}\), \(d_{3}=\Delta_{p}+i\gamma_{3}\), and \(d_{4}=(\Delta_{p}-\Delta_{c}+\Delta_{b})+i\gamma_{4}\) with \(\gamma_{j}=\gamma_{jl}+\gamma_{jd}\) (j=2, 3, 4) that describes the total decay rate of level \(|j\rangle\), where \(\gamma_{jl}\) is the decay rate due to longitudinal-optical (LO) phonon emission, and \(\gamma_{jd}\) is dephasing decay rate between levels \(|i\rangle\leftrightarrow|j\rangle\). The parameter \(\kappa=\sqrt{\gamma_{3l}\gamma_{4l}}\) defines the cross-coupling of the state \(|3\rangle\) and \(|4\rangle\), and it gives rise to interference between bonding and anti-bonding states [40].
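A minimal numerical sketch of this steady-state solution is given below: assuming the weak-probe limit \(A_{1}\approx 1\), it sets the time derivatives in Eqs. (3)-(5) to zero and solves the resulting linear system for \(A_{2},A_{3},A_{4}\); the function name and the parameter values (taken from the Results section) are illustrative.

```python
import numpy as np

def steady_state_amplitudes(Delta_p, Delta_c, Delta_b, Om_p, Om_c, Om_b, Om_s,
                            gamma2, gamma3, gamma4, kappa):
    """Steady state of Eqs. (3)-(5) with A1 ~ 1 (weak-probe assumption)."""
    d2 = (Delta_p - Delta_c) + 1j * gamma2
    d3 = Delta_p + 1j * gamma3
    d4 = (Delta_p - Delta_c + Delta_b) + 1j * gamma4
    # Linear system M @ [A2, A3, A4] = rhs obtained by setting the time
    # derivatives in Eqs. (3)-(5) to zero and dividing by i.
    M = np.array([[d2,   np.conj(Om_c), np.conj(Om_b)],
                  [Om_c, d3,            -1j * kappa ],
                  [Om_b, -1j * kappa,   d4          ]], dtype=complex)
    rhs = np.array([0.0, -Om_p, -Om_s], dtype=complex)
    return np.linalg.solve(M, rhs)

# Illustrative call (all rates and detunings in meV); A3 feeds the coherence in Eq. (6).
A2, A3, A4 = steady_state_amplitudes(-1.73, 0.0, 0.0, 1.0, 4.0, 2.0, 1.0,
                                     0.0, 4.65, 4.65, 2.07)
print(A3)
```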
The permittivity of the SQW related to the susceptibility is written as [47, 48]
\[\epsilon_{s}=1+\frac{\chi}{1-\frac{1}{3}\chi}. \tag{8}\]
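A minimal sketch evaluating Eqs. (7) and (8) is given below; the prefactor \(N|\mu_{13}|^{2}/\epsilon_{0}\hbar\) is an assumed constant that only scales \(\chi\), and all rates and detunings are in meV, following the illustrative values quoted in the Results section.

```python
import numpy as np

# Illustrative constants (meV); the prefactor N*|mu_13|^2/(eps_0*hbar) is an
# assumed scale factor and only sets the overall magnitude of chi.
PREFACTOR = 1.0
OM_P, OM_S, OM_C, OM_B = 1.0, 1.0, 4.0, 2.0
GAMMA2, GAMMA3, GAMMA4 = 0.0, 2.07 + 2.58, 2.07 + 2.58
KAPPA = (2.07 * 2.07) ** 0.5
DELTA_C = DELTA_B = 0.0

def chi(delta_p):
    """FWM susceptibility of Eq. (7) at probe detuning delta_p."""
    d2 = (delta_p - DELTA_C) + 1j * GAMMA2
    d3 = delta_p + 1j * GAMMA3
    d4 = (delta_p - DELTA_C + DELTA_B) + 1j * GAMMA4
    num = (OM_B**2 - d2 * d4) * OM_P - (np.conj(OM_C) * OM_B + 1j * d2 * KAPPA) * OM_S
    den = (-1j * KAPPA * (np.conj(OM_C) * OM_B + OM_C * np.conj(OM_B))
           - (d4 * OM_C**2 + d3 * OM_B**2) + d2 * (d3 * d4 + KAPPA**2))
    return (PREFACTOR / OM_P) * num / den

def eps_s(delta_p):
    """SQW permittivity, Eq. (8)."""
    x = chi(delta_p)
    return 1.0 + x / (1.0 - x / 3.0)

detunings = np.linspace(-10.0, 10.0, 2001)         # probe detuning grid (meV)
eps = np.array([eps_s(d) for d in detunings])
print(eps[np.argmin(np.abs(detunings + 1.73))])    # permittivity near -1.73 meV
```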
Typically, the excitation of SPPs is explored through the reflection of the probe beam, and the transmission gives the electromagnetic field enhancement within the small volume. So, by Fresnel equations, the reflection coefficient for our proposed scheme is given by [49]
\[r_{tms}=\frac{r_{tm}+r_{ms}exp(2ik_{mx}q)}{1+r_{tm}r_{ms}exp(2ik_{mx}q)}, \tag{9}\]
where q is the thickness of the metallic film and \(r_{ij}\) is the reflection coefficients for a single interface. In case of TM-polarized light, it is expressed as
\[r_{ij}=\frac{\epsilon_{j}k_{ix}-\epsilon_{i}k_{jx}}{\epsilon_{j}k_{ix}+\epsilon _{i}k_{jx}}, \tag{10}\]
where \((i)j=t,m,s\) represent top-layer, metal film, and semiconductor medium, respectively, and \(k_{jx}\) is the normal wave vector which is defined as
\[k_{(i)jx}=\sqrt{k_{0}^{2}\epsilon_{(i)j}-k_{z}^{2}}, \tag{11}\]
where \(k_{z}\) is the in-plane component of the wave vector that depends on the free space wave vector of incident probe beam \(k_{0}\), top-medium refractive index \(n_{t}\) and probe field angle of incidence \(\theta_{p}\) that can be defined as
\[k_{z}=k_{0}n_{t}\text{sin}\theta_{p}. \tag{12}\]
Likewise, the transmission coefficient is given by
\[t_{tms}=\frac{t_{tm}t_{ms}exp(ik_{mx}q)}{1+r_{tm}r_{ms}exp(2ik_{mx}q)}, \tag{13}\]
where \(t_{ij}\) is the two-layer transmission coefficient that is related to the corresponding single-interface reflection coefficient by
\[t_{ij}=1+r_{ij}. \tag{14}\]
The reflectivity \(R\) is the absolute square of the reflection coefficient, i.e., \(R=|r_{tms}|^{2}\), and the transmission \(T\) can be calculated as \(T=|t_{tms}|^{2}\). Since the incident light is assumed to be TM-polarized to excite SPPs, the electric field enhancement factor for a TM-mode is related to the magnetic field enhancement factor and can be written as [49]
\[T_{el}=\frac{\epsilon_{t}}{\epsilon_{s}}|t_{tms}|^{2}. \tag{15}\]
In the following, Eqs. (9)-(15) are used to observe the excitation of SPPs.
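The following sketch assembles Eqs. (9)-(15) into reflectivity and field-enhancement values for the three-layer stack; the function name is ours, and the metal permittivity, wavelength, film thickness and SQW permittivity in the example call are illustrative (the latter would normally come from Eq. (8)).

```python
import numpy as np

def three_layer(theta_p, lam0, q, eps_t, eps_m, eps_s_val):
    """Reflectivity R and TM field-enhancement T_el of the top/metal/SQW stack, Eqs. (9)-(15).

    theta_p: probe incidence angle (rad), lam0: vacuum wavelength (m), q: film thickness (m).
    """
    k0 = 2 * np.pi / lam0
    kz = k0 * np.sqrt(eps_t) * np.sin(theta_p)                       # Eq. (12)
    kx = {j: np.sqrt(k0**2 * e - kz**2 + 0j)                         # Eq. (11)
          for j, e in (("t", eps_t), ("m", eps_m), ("s", eps_s_val))}

    def r(i, j, ei, ej):                                             # Eq. (10), TM polarization
        return (ej * kx[i] - ei * kx[j]) / (ej * kx[i] + ei * kx[j])

    r_tm, r_ms = r("t", "m", eps_t, eps_m), r("m", "s", eps_m, eps_s_val)
    t_tm, t_ms = 1 + r_tm, 1 + r_ms                                   # Eq. (14)
    phase = np.exp(1j * kx["m"] * q)
    r_tms = (r_tm + r_ms * phase**2) / (1 + r_tm * r_ms * phase**2)   # Eq. (9)
    t_tms = (t_tm * t_ms * phase) / (1 + r_tm * r_ms * phase**2)      # Eq. (13)
    R = abs(r_tms)**2
    T_el = abs(eps_t / eps_s_val) * abs(t_tms)**2                     # Eq. (15); magnitude taken for complex eps_s
    return R, T_el

# Example: 50 nm silver film at 589.1 nm with an assumed SQW permittivity.
R, T_el = three_layer(np.deg2rad(77.0), 589.1e-9, 50e-9, 1.0, -13.3 + 0.883j, 0.9 + 0.01j)
print(R, T_el)
```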
## 3 Results and Discussions
This section contains an analysis of the proposed scheme for FWM-based amplification and excitation of SPPs and the propagation of symmetric and anti-symmetric modes for the thin silver film. First, we explore the SPP's excitation by investigating the required condition for coupler-free excitation. At the interface between the top medium (vacuum or air) with \(\epsilon_{t}=1\), and the metal the wave vector of SPPs becomes \(k_{SPP}>k_{0}\), where \(k_{SPP}=k_{0}\sqrt{\frac{\epsilon_{t}\,\epsilon_{m}}{\epsilon_{t}+\epsilon_{m}}}\), therefore, the momentum mismatch (\(\hbar k_{SPP}>\hbar k_{0}\)) arises that prevents the excitation of SPPs directly from light. To overcome this obstacle, an earlier approach employed a medium possessing a refractive index of \(n_{c}\) as a coupler to increase the momentum of light to \(n_{c}\hbar k_{0}\) to enable the excitation of SPPs via light. However, in our proposed system, we consider the FWM-based coupler-free excitation of SPPs at the metal-SQWs interface. If the permittivity of SQWs \(\epsilon_{s}\) becomes less than unity due to EIT effects, then it satisfies the resonance condition \(k_{SPP}=k_{0}n_{t}\text{sin}\theta_{p}\) for a particular angle of incidence \(\theta_{p}\). This yields the excitation of SPPs via FWM without using any coupler.
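Reusing the `eps_s` function sketched in the Model section, the resonant probe detuning at a given incidence angle can be located by scanning for the point where \(\mathrm{Re}[k_{SPP}]\) matches \(k_{0}n_{t}\sin\theta_{p}\); the sketch below assumes the illustrative parameters listed in this section.

```python
import numpy as np

def k_spp(eps_m, eps_s_val, lam0):
    """Complex SPP wave vector at the metal-SQW interface."""
    k0 = 2 * np.pi / lam0
    return k0 * np.sqrt(eps_m * eps_s_val / (eps_m + eps_s_val) + 0j)

lam0, n_t, theta_p = 589.1e-9, 1.0, np.deg2rad(77.0)
k0 = 2 * np.pi / lam0
eps_m = -13.3 + 0.883j
detunings = np.linspace(-5.0, 5.0, 5001)            # meV
mismatch = [abs(k_spp(eps_m, eps_s(d), lam0).real - k0 * n_t * np.sin(theta_p))
            for d in detunings]
print("resonant detuning (meV):", detunings[int(np.argmin(mismatch))])
```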
In the absence of the pump field \(\Omega_{b}\), our system reduces to a standard three-level \(\Lambda\)-type configuration, which has already been discussed in atomic vapors for the coupler-free excitation of SPPs [22]. In Fig. 2(a), we plot the real and imaginary parts of the permittivity \(\epsilon_{s}\) of the SQWs as a function of probe detuning \(\Delta_{p}\) for \(\Omega_{b}=0\). We here choose an optimal thickness of the metal film, \(q=50\) nm, that allows the sharp SPR resonance and supports only single-interface-like SPPs. The other parameters are as follows: \(\Omega_{c}=4\) meV, \(\Omega_{p}=\Omega_{s}=1\) meV, \(\gamma_{2}=0\), \(\gamma_{3l}=\gamma_{4l}=2.07\) meV, \(\gamma_{3d}=\gamma_{4d}=2.58\) meV, \(\Delta_{b}=\Delta_{c}=0\), \(n_{t}=\epsilon_{t}=1\), \(\epsilon_{m}=-13.3+0.883i\), \(\theta_{p}=77^{\circ}\), \(\lambda_{0}=589.1\) nm. The steep normal dispersion (red curve) and the transparency window (purple curve) in the vicinity of resonance, i.e., \(\Delta_{p}=0\), show the real and imaginary parts of the SQW permittivity \(\epsilon_{s}\). The SPR condition \(k_{SPP}=k_{0}n_{t}\text{sin}\theta_{p}\) is satisfied at \(\Delta_{p}=-1.73\) meV for a probe angle of incidence \(\theta_{p}=77^{\circ}\), where \(\text{Re}[\epsilon_{s}]<1\) but \(\text{Im}[\epsilon_{s}]\neq 0\). The losses in the system, which are represented by the imaginary part of the permittivity of the SQWs (\(\text{Im}[\epsilon_{s}]\neq 0\)), reduce the sharp SPR and limit their propagation length.
and control field induces a FWM process, which is responsible for the gain. Fig. 2(b) illustrates the effect of the pump field \(\Omega_{b}\) on the imaginary part of the permittivity \(\epsilon_{s}\) for the fixed probe detuning at \(\Delta_{p}=-1.73\) meV. Clearly, the absorption decreases with an increase in pump field, and at \(\Omega_{b}=2\) meV, it reduces to zero. A further increase in the pump field indicates gain in the system, which reaches its maximum value at \(\Omega_{b}=2.5\) meV; the gain then tends to decrease and zero absorption is retrieved at \(\Omega_{b}=3.2\) meV, see Fig. 2(b) (the inset shows the zoomed-in (gain) region).
In Fig. 3, we examine the reflectivity \(R\) and the field enhancement factor \(T_{el}\) of the probe field with respect to the probe detuning \(\Delta_{p}\). We also investigate the effect of the pump field on the reflectivity \(R\) and the field enhancement factor \(T_{el}\). Fig. 3 (a, b) shows the results obtained in the absence of the pump field. In this case, the field enhancement factor is less prominent, and reflectivity is not zero, therefore, SPR effects are not so pronounced. In contrast, when \(\Omega_{b}\) is tuned to 2 meV, the absorption reduces to zero, and the field enhancement factor \(T_{el}\) attains a sharp peak with a maximum value, and the reflectivity \(R\) turns to exactly zero with a sharp dip at negative probe detuning (\(\Delta_{p}=-1.73\) meV), indicating the resonant excitation of SPPs as shown in Fig. 3 (c, d).
In the following, we inspect the angle spectrum of the field enhancement factor \(T_{el}\) and reflectivity \(R\) with respect to the pump field. Owing to the pump field, significantly sharp field enhancement (see Fig. 4(a)) and exactly zero reflectivity (see Fig. 4(b)) are obtained equivalently at the points of zero absorption, i.e., \(\Omega_{b}=2\) and \(\Omega_{b}=3.2\) meV, at two different angles, called resonance angles. This shows the symmetric nature of the angle spectra about \(\theta_{p}=0\). The weak SPR effect at \(\Omega_{b}=0\) confirms that the control field alone is not adequate to achieve sharp SPR. Interestingly, within the gain region (from 2 to 3.2 meV), there is zero field enhancement and total reflection, as shown in Fig. 4(a, b), respectively. In essence, excited SPPs radiate into the top layer and are absorbed by the metal. In this case, the energy from the gain medium is transferred to the SPPs, which is then radiated into the reflected wave, see Ref. [50] for details.
Next, we are interested in observing the propagation length of SPPs by virtue of the gain induced by the FWM process. Essentially, the propagation length is the distance traveled by SPPs before their intensity decreases to 1/e of its original value. It is obtained from the imaginary part of the SPP wave vector, i.e., \(L=1/(2\,\text{Im}[k_{SPP}])\). The small propagation length has remained a major challenge for the practical applications of SPPs for years. In our proposed
scheme, the gain due to the generated fourth wave is one of the factors that enhance the propagation length significantly. The optical gain power is defined as \(G=-k_{0}\mathrm{Im}[\epsilon_{s}]/\mathrm{Re}[\epsilon_{s}]\)[51]. In Fig. 5, we plot (a) the optical gain and (b) the propagation length of SPPs as a function of the pump field \(\Omega_{b}\), which is a controllable parameter for the gain. The gain increases gradually with an increase in pump field and eventually saturates at \(\Omega_{b}=2.5\) meV, as shown in Fig. 5(a). Initially, at zero absorption, the propagation length is only 20 \(\mu\)m. However, at the maximum value of the gain, losses are suppressed, and thereby the propagation length is enhanced to 60 \(\mu\)m, see Fig. 5(b).
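A short sketch of these two quantities, assuming an illustrative SQW permittivity with a small negative imaginary part (i.e., gain), is given below.

```python
import numpy as np

def gain_and_propagation(eps_s_val, eps_m, lam0):
    """Optical gain G = -k0*Im[eps_s]/Re[eps_s] and propagation length L = 1/(2|Im k_SPP|)."""
    k0 = 2 * np.pi / lam0
    G = -k0 * eps_s_val.imag / eps_s_val.real
    k_spp = k0 * np.sqrt(eps_m * eps_s_val / (eps_m + eps_s_val) + 0j)
    L = 1.0 / (2.0 * abs(k_spp.imag))
    return G, L

# Example with an assumed SQW permittivity whose negative imaginary part mimics gain.
G, L = gain_and_propagation(0.90 - 0.002j, -13.3 + 0.883j, 589.1e-9)
print(f"G = {G:.3e} m^-1, L = {L * 1e6:.1f} um")
```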
Figure 4: (Color online) 3D angle spectrum of (a) the field enhancement factor and (b) reflectivity of the probe field with respect to the pump field \(\Omega_{b}\) and probe incident angle \(\theta_{p}\) at fixed probe detuning \(\Delta_{p}=-1.73\) meV. Other parameters are the same as given in Fig. 2.
Figure 3: (Color online) Spectrum of the (a, c) field enhancement factor and (b, d) reflectivity of the probe field as a function of probe detuning \(\Delta_{p}\) for (a, b) \(\Omega_{b}=0\), and (c, d) for \(\Omega_{b}=2\) meV. Other parameters are the same as in Fig. 2.
SPPs have evanescent fields associated with them that penetrate into the surrounding media. The penetration depth is the distance over which the field penetrates into the metal or the dielectric layer and is defined by \(\delta_{j}=\frac{1}{k_{0}}\left|\sqrt{\frac{\epsilon_{m}+\epsilon_{s}}{\epsilon_{j}^{2}}}\right|\)[52], where \(j=m,s\) represents metal and SQW, respectively. The penetration depth of SPPs both in the SQW and the metal medium is plotted as a function of the pump field in Fig. 6(a) and (b), respectively. It is noteworthy that the field concentration in the SQW is greater than that in the metal. A larger penetration depth in the dielectric medium (SQW) is appreciated due to the lower rate of losses associated with it, which corresponds to a larger propagation length. On the contrary, the SPP energy absorbed by the metal is considered as a loss due to the lossy nature of the metal. There is a characteristic trade-off between the high confinement of the SPPs at the interface and their propagation length. Therefore, it is crucial to have an optimized penetration depth for a large propagation length of SPPs and for their strong confinement at the interface. In our proposed scheme, the penetration depths of the SPP field inside both media, as shown in Fig. 6, indicate the high field concentration at the interface.
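A sketch of this penetration-depth estimate, with assumed permittivity values, is given below.

```python
import numpy as np

def penetration_depths(eps_m, eps_s_val, lam0):
    """Penetration depth of the SPP field into the metal (delta_m) and the SQW (delta_s)."""
    k0 = 2 * np.pi / lam0
    delta_m = abs(np.sqrt((eps_m + eps_s_val) / eps_m**2 + 0j)) / k0
    delta_s = abs(np.sqrt((eps_m + eps_s_val) / eps_s_val**2 + 0j)) / k0
    return delta_m, delta_s

# Illustrative call; eps_s here is an assumed SQW permittivity.
d_m, d_s = penetration_depths(-13.3 + 0.883j, 0.9 + 0.0j, 589.1e-9)
print(f"delta_m = {d_m * 1e9:.1f} nm, delta_s = {d_s * 1e9:.1f} nm")
```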
### Long-range and Short-range SPPs
For the symmetric structure presented in Fig. 1, where the permittivities of the top medium and the SQW are comparable, and the thickness of the film \(q\) is much smaller than the optical wavelength \(\lambda_{0}\), i.e., \(q<<\lambda_{0}\), the probe field gives rise to symmetric (LR-SPP) and anti-symmetric (SR-SPP) modes of SPPs.
Using the appropriate boundary conditions for the aforementioned symmetry, the following dispersion relation for LR-SPP is obtained [53].
\[\tanh(k_{mx}q/2)=-\frac{\epsilon_{m}k_{sx}}{\epsilon_{s}k_{mx}}. \tag{16}\]
Similarly for the anti-symmetric mode (SR-SPP), it is given by
\[\coth(k_{mx}q/2)=-\frac{\epsilon_{m}k_{sx}}{\epsilon_{s}k_{mx}}, \tag{17}\]
where \(k_{ix}=\sqrt{k_{j}^{2}-\epsilon_{i}(\omega/c)^{2}}\), for \(i=m,s\) and \(j=LR,SR\), which represent LR- and SR-SPPs, respectively. In general, \(k_{j}\) cannot be expressed explicitly. However, in the limit of sufficiently
Figure 5: (Color online) (a) Optical gain power \(G\) (blue curve) and (b) the propagation length (red curve) of SPPs versus pump field \(\Omega_{b}\). Other parameters are given in Fig. 2
thin films (\(q<40\) nm) [54], the dispersion relation can be simplified using the small angle approximation, i.e., \(\tanh x\approx x\). This gives the following explicit forms for LR-SPP and SR-SPP [54]
\[k_{LR}\approx k_{0}\sqrt{\epsilon_{s}+(k_{0}\epsilon_{s}q/2)^{2}[1-(\epsilon_{s}/\epsilon_{m})]^{2}}, \tag{18}\]
\[k_{SR}\approx k_{0}\sqrt{\epsilon_{s}+[2\epsilon_{s}/qk_{0}\epsilon_{m}]^{2}}. \tag{19}\]
Eqs. (18) and (19) are utilized to exploit several properties of LR-SPPs and SR-SPPs.
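The thin-film approximations of Eqs. (18) and (19), together with the corresponding propagation lengths, can be evaluated as in the sketch below; the SQW permittivity used in the example call is an assumed value.

```python
import numpy as np

def k_lr_sr(eps_m, eps_s_val, q, lam0):
    """Thin-film approximations of Eqs. (18) and (19) for the LR- and SR-SPP wave vectors."""
    k0 = 2 * np.pi / lam0
    k_lr = k0 * np.sqrt(eps_s_val + (k0 * eps_s_val * q / 2) ** 2
                        * (1 - eps_s_val / eps_m) ** 2 + 0j)
    k_sr = k0 * np.sqrt(eps_s_val + (2 * eps_s_val / (q * k0 * eps_m)) ** 2 + 0j)
    return k_lr, k_sr

# Illustrative call for a 36.8 nm film; the SQW permittivity here is an assumed value.
k_lr, k_sr = k_lr_sr(-13.3 + 0.883j, 0.95 + 0.005j, 36.8e-9, 589.1e-9)
L_lr, L_sr = 1 / (2 * abs(k_lr.imag)), 1 / (2 * abs(k_sr.imag))
print(f"L_LR = {L_lr * 1e6:.1f} um, L_SR = {L_sr * 1e6:.2f} um")
```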
We start by analyzing the propagation of SPPs along a thin silver film at a certain angle of incidence. The losses experienced by SPPs are strongly linked to the thickness of the metal film [49], which therefore influences their excitation and propagation length. Although the presence of the gain has already significantly enhanced the propagation length of SPPs, a two-interface structure with a thin metal film surrounded by air and a semiconductor quantum medium supports the LR-SPP, which propagates over a relatively larger distance and survives for a longer period. Therefore, to observe such SPPs in our proposed scheme, we reduce the thickness of the metal film to 36.8 nm. This is the minimum thickness required for our system at which the condition for coupler-free excitation (\(k_{LR/SR}<k_{0}\)) holds for the LR-SPPs and SR-SPPs. Since the penetration depth in the metal medium is less than its thickness, the SPPs propagating at
Figure 6: (Color online) Penetration depth of SPPs in (a) SQW and in (b) metal film as a function of the pump field \(\Omega_{b}\) at fixed probe detuning \(\Delta_{P}=-1.73\) meV. Other parameters are the same as in Fig. 2
Figure 7: (Color online) The normalized imaginary part of the dispersion relation of the (a) LR-SPPs and (b) SR-SPPs against the pump field \(\Omega_{b}\) at fixed probe detuning \(\Delta_{P}=-1.73\) meV.
the air-metal and metal-SQW interfaces begin to overlap, giving rise to LR- and SR-SPPs.
In Fig. 7, we plot the imaginary part of the dispersion relations given in Eqs. (16) and (17) as a function of the pump field \(\Omega_{b}\) for the LR-SPP and SR-SPP. We fix the thickness at \(q=36.8\) nm while keeping all the other parameters the same as given in Fig. 2. Evidently, around \(\Omega_{b}=2\) meV, the absorption becomes zero, and a further increase in the pump field indicates gain for LR-SPPs, with its maximum value at \(\Omega_{b}\approx 2.5\) meV, as shown in Fig. 7(a). For short-range SPPs, however, the absorption approaches a minimum value but never reduces to zero, see Fig. 7(b), which indicates the propagation losses experienced by the short-range SPPs.
Next, in Fig. 8, we plot the propagation length (\(L_{LR(SR)}=1/(2\,\text{Im}[k_{LR(SR)}])\)) of both LR- and SR-SPPs for two different cases: (a) \(\Omega_{b}=0\), and (b) \(\Omega_{b}=2\) meV. For a smaller thickness of the silver film, less energy is localized inside the metal film. This results in less confinement at the interface, and therefore less dissipation and a larger propagation length for LR-SPPs, as shown in Fig. 8(a). In contrast, reducing the film thickness for short-range SPPs results in a larger energy distribution inside the metal film, and thus a high concentration at the interface. This further leads to strong dissipation and a small propagation length, see Fig. 8(b). In the absence of the pump field, the overall propagation length is rather small even for LR-SPPs. However, the presence of the pump field enhances the propagation length significantly, up to 350 \(\mu\)m for LR-SPPs, see Fig. 8(c), which is far larger than that of the SPPs propagating at the thick metal-SQW interface (shown in Fig. 5(b)). The maximum concentration of LR-SPPs in the SQW (dielectric) medium is responsible for their larger propagation length. On the contrary, the propagation length of SR-SPPs remains short even in the presence of the pump field, as shown in Fig. 8(d).
Subsequently, the effect of the metal thickness \(q\) on the lifetime of LR-SPPs and SR-SPPs is investigated. The lifetime is the duration for which SPPs persist before their absorption into the surrounding medium. It is defined as \(\tau_{LR(SR)}=L_{LR(SR)}/v_{g}\), where \(v_{g}\) is the group velocity, obtained as \([\frac{\partial k_{LR(SR)}}{\partial\omega}]^{-1}\). It is important to emphasize that the increased lifetime is
Figure 8: (Color online) Propagation length of (a,c) LR-SPPs and (b,d) SR-SPPs versus the metal film thickness \(q\) for (a,b) \(\Omega_{b}=0\) and (c,d) \(\Omega_{b}=2\) meV. Other parameters are the same as in Fig. 2
essential to the effective slowdown of SPPs. In Fig. 9, we analyze the effect of the pump field on the lifetime of LR- and SR-SPPs. Figs. 9(a, b) show the lifetime of LR-SPPs and SR-SPPs in the absence of the pump field. Interestingly, the lifetime behaves in a similar manner to the propagation length for both modes. The high concentration of these waves at the interface increases their coupling strength with light, leading to a high energy dissipation rate. Therefore, in the absence of the pump field, LR-SPPs have a lifetime of femtoseconds, which is maximum for extremely small thicknesses. Conversely, a small thickness corresponds to a shorter lifetime for SR-SPPs. However, as shown in Fig. 9(c), the presence of the pump field increases the lifetime of LR-SPPs from femtoseconds to picoseconds, which is much longer than the lifetime of SR-SPPs (see Fig. 9(d)). Hence, the FWM process plays a crucial role in sustaining the SPPs for a relatively longer period with an extended propagation distance for sufficiently thin films. Based on their different propagation lengths and concentrations at the interface, LR-SPPs and SR-SPPs can be utilized for various purposes in practical applications.
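As a practical note on how the lifetime can be evaluated, the short Python sketch below estimates \(\tau=L/v_{g}\) by finite differences on a numerically sampled dispersion; the toy dispersion used here is only a placeholder standing in for Eq. (18) or (19) with frequency-dependent permittivities.

```python
import numpy as np

def lifetime(omega, k):
    """tau = L / v_g with L = 1/(2|Im k|) and v_g = (d Re[k]/d omega)^(-1)."""
    L = 1.0 / (2.0 * np.abs(np.imag(k)))
    v_g = 1.0 / np.gradient(np.real(k), omega)
    return L / np.abs(v_g)

# toy dispersion sample (placeholder numbers, just to show the call pattern)
omega = np.linspace(2.8e15, 3.0e15, 50)          # rad/s
k = (1.07 + 1e-4j) * omega / 3e8                 # nearly light-like toy branch
tau = lifetime(omega, k)
print(f"lifetime range: {tau.min():.1e} s to {tau.max():.1e} s")
```

With the toy numbers above the lifetime comes out in the picosecond range, i.e. the order of magnitude quoted for the pump-assisted LR-SPPs.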
## 4 Conclusion
In this paper, we have proposed a scheme to observe the amplification of SPPs with an enhanced propagation length using a four-level asymmetric semiconductor quantum well structure that exhibits the FWM process. We have also investigated the excitation of SPPs through quantum interference effects by using an optimally thick metal film. In our system, the simultaneous interaction of the three applied electromagnetic fields generates a fourth wave through the EIT-based FWM process, which plays a significant role in compensating the losses. As a result, a sharp SPR resonance and a significantly enhanced SPP propagation length are observed. In addition, the sensitivity of the SPPs to the probe detuning and the angle of incidence is also investigated. Further, we have explored the formation of two distinct SPP modes for the thin metal film in our proposed scheme. We find that the propagation length and lifetime of these modes are significantly enhanced at zero absorption due to the FWM process. Evidently,
Figure 9: (Color online) The lifetime of (a,c) LR-SPPs and (b,d) SR-SPPs versus the metal film thickness \(q\) for (a,b) \(\Omega_{b}=0\) and (c,d) \(\Omega_{b}=2\) meV. Other parameters are given in Fig. 2.
the excitation and amplification of SPPs along the metal-SQW interface in the visible region are achieved. Our system also supports the LR-SPPs with their relatively larger propagation length and longer lifetime.
Disclosures.The authors declare no conflicts of interest.
Data availability.Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. |
2303.05546 | Weakly-Supervised HOI Detection from Interaction Labels Only and
Language/Vision-Language Priors | Human-object interaction (HOI) detection aims to extract interacting
human-object pairs and their interaction categories from a given natural image.
Even though the labeling effort required for building HOI detection datasets is
inherently more extensive than for many other computer vision tasks,
weakly-supervised directions in this area have not been sufficiently explored
due to the difficulty of learning human-object interactions with weak
supervision, rooted in the combinatorial nature of interactions over the object
and predicate space. In this paper, we tackle HOI detection with the weakest
supervision setting in the literature, using only image-level interaction
labels, with the help of a pretrained vision-language model (VLM) and a large
language model (LLM). We first propose an approach to prune non-interacting
human and object proposals to increase the quality of positive pairs within the
bag, exploiting the grounding capability of the vision-language model. Second,
we use a large language model to query which interactions are possible between
a human and a given object category, in order to force the model not to put
emphasis on unlikely interactions. Lastly, we use an auxiliary
weakly-supervised preposition prediction task to make our model explicitly
reason about space. Extensive experiments and ablations show that all of our
contributions increase HOI detection performance. | Mesut Erhan Unal, Adriana Kovashka | 2023-03-09T19:08:02Z | http://arxiv.org/abs/2303.05546v1 | # Weakly-Supervised HOI Detection from Interaction Labels Only and Language/Vision-Language Priors
###### Abstract
Human-object interaction (HOI) detection aims to extract interacting human-object pairs and their interaction categories from a given natural image. Even though the labeling effort required for building HOI detection datasets is inherently more extensive than for many other computer vision tasks, weakly-supervised directions in this area have not been sufficiently explored due to the difficulty of learning human-object interactions with weak supervision, rooted in the combinatorial nature of interactions over the object and predicate space. In this paper, we tackle HOI detection with the weakest supervision setting in the literature, using only image-level interaction labels, with the help of a pretrained vision-language model (VLM) and a large language model (LLM). We first propose an approach to prune non-interacting human and object proposals to increase the quality of positive pairs within the bag, exploiting the grounding capability of the vision-language model. Second, we use a large language model to query which interactions are possible between a human and a given object category, in order to force the model not to put emphasis on unlikely interactions. Lastly, we use an auxiliary weakly-supervised preposition prediction task to make our model explicitly reason about space. Extensive experiments and ablations show that all of our contributions increase HOI detection performance.
## 1 Introduction
Human-object interaction (HOI) detection is formally defined as correctly localizing interacting human-object pairs and classifying their interaction in a given natural image. The problem has been formulated in different ways, either end-to-end, or more commonly as a two-step procedure wherein all human and object instances get detected first, then interacting human-object pairs are identified. Regardless of its formulation, researchers rely on strong supervision to tackle HOI detection. This strong supervision is in the form of bounding box annotations for interacting human-object pairs as well as semantic labels for their interactions, which are costly to acquire and cognitively demanding for annotators as they require one to fully understand the image content1. Despite the excessive cost of gathering annotations for HOI detection, weakly-supervised directions to relax this strong supervision need have not
Figure 1: **Top-left:** Existing HOI detection methods need costly annotations which contain bounding boxes for interacting human-object pairs as well as their interaction categories. **Top-right:** Our method relies on image-level interaction labels, without any information on where, between whom and how many times those interactions occur. **Bottom:** During training, our approach utilizes image captions to prune non-interacting human/object proposals with the help of a vision-language model. Remaining human and object proposals will be paired for classification and a large language model will verify if predicted interactions are plausible. Best viewed in color with zoom.
been fully explored, due to the combinatorial complexity of object interactions over object and predicate space.
In this paper, we tackle weakly-supervised HOI detection using the weakest supervision in the literature, namely image-level interaction labels (e.g. ride). This supervision level is less costly and more natural to acquire than ones required in existing efforts, as annotators would be required to answer a simple question: "What are the individuals doing in this picture?". To make learning possible, our approach utilizes free-form captions paired with images to weakly-supervise an auxiliary task and to prune non-interacting humans and objects. We query a large language model (LLM) to eliminate unlikely human-object interactions (e.g. riding toothbrush). To increase the spatial reasoning capability of our model, we further formulate an auxiliary preposition prediction task. In this task, our model learns to assign one of the predefined prepositions to each human-object pair during training via weak supervision. Having free-form captions in hand also gives us the ability of extracting image-level interaction labels using a language parser, and hence further relax the level of supervision. Our code will be released upon publication. To summarize, our main contributions are as follows:
* We formulate a weakly-supervised HOI detection setting where supervision comes from image-level interaction labels (e.g. ride, eat). This weak supervision has not previously been used in the literature.
* We utilize free-form captions paired with images to exploit the implicit grounding capability of a vision language model (VLM) in order to prune non-interacting human and object proposals.
* We make use of a large language model (LLM) to verify if a given \(<\)interaction, object\(>\) pair is plausible.
* To further increase our model's spatial reasoning capability, we formulate a weakly-supervised preposition prediction task.
* For the first time in the literature, we train an HOI detection model using image-caption pairs which are abundant on the web.
## 2 Related Work
### Human-object interaction detection
The problem of detecting interactions between humans and objects was originally introduced in [11] and has drawn immense attention in the computer vision community since then. Most of the research efforts on this topic [9, 23, 38, 20, 22, 41] use a two-stage solution in which human/object locations are extracted along with their semantic labels by an off-the-shelf object detector first, and an interaction classification model is learnt on pairwise human-object features. Apart from human/object appearances, there exist models that make use of contextual features [9, 38], spatial layouts [23, 38, 41] and human pose estimations [23]. Inspired by one-stage object detection efforts, researchers lately try to formulate end-to-end HOI detection approaches where human/instance localization and interaction classification are performed in parallel [24, 17, 18, 19]. These methods are analogous to CNN-based (e.g. YOLO[33]) and Transformer-based (e.g. DETR[3]) end-to-end object detectors. PPDM[24] takes a step forward and drops the need for heuristically created "anchors", formulating HOI detection as a point matching problem between human and object locations.
Regardless of being one-stage or two-stage, these methods rely on strong supervision, which is costly to acquire. This supervision is in the form of quadruplets that contain interacting human-object locations, object category and interaction category. Even though HOI detection is extremely costly to supervise, there is a lack of weakly-supervised efforts in the literature. Among the existing weakly-supervised methods, MX-HOI [21] proposes a momentum-independent learning framework where they utilize both weak and strong supervision. Additionally, AlignFormer [16] formulates an alignment layer in a transformer framework that generates pseudo-aligned human-object pairs from weak annotations, conditioning on geometric and visual priors. Both of these methods utilize image-level \(<\)interaction, object\(>\) annotations (e.g. {eat banana}) as weak supervision as opposed to the much weaker supervision we use in our work, namely image-level interaction labels (e.g. {eat}).
### Using cues from vision-language models
Following vision-language models' breakthrough, researchers have explored their usage in aiding diverse computer vision tasks. For example, one of the most popular VLMs, namely CLIP [32], has been researched extensively in the context of image generation [31], cross-modal retrieval [1, 8], image classification [1], object detection [10], HOI detection [7, 25] and image captioning [4], thanks to its robust image-text joint space learned on a massive dataset.
Even though [7, 25] also utilize CLIP in the context of HOI detection, how CLIP is employed within our approach is quite different. [7] uses CLIP's text encoder to initialize context-aware HOI queries within a fully-supervised Transformer-based HOI detector. [25] utilize CLIP as a teacher within their model and distill knowledge for both visual and textual understanding of interactions. The most similar work to ours in terms of how CLIP is employed is ProposalCLIP [36], where authors prune low-quality object proposals produced by a static algorithm (e.g. Edge
Boxes [42]). Their method runs cropped proposal regions along with captions produced for object categories (i.e. \(\{\text{``a photo of a }c_{i}^{(obj)}\text{''}\}_{i=1}^{|C^{obj}|}\)) through CLIP and removes proposals based on the alignment entropy over the caption set. In our work, on the other hand, we need to quantify if a given proposal is part of an interaction or not. Instead of running CLIP on a large number of proposals, we run the whole image through CLIP once and create grounding maps to calculate an interaction score for each proposal.
## 3 Method
### Formulating HOI detection with weak supervision
Assume an object detector that outputs a set of human and object predictions for a given image, \(\mathcal{H}=\{h_{i}\}_{i=1}^{N}\) and \(\mathcal{O}=\{o_{j}\}_{j=1}^{M}\) respectively. Each of these predictions is in the form of \(\{x^{(1)},y^{(1)},x^{(2)},y^{(2)},c^{o},s^{o}\}\) where \(x^{(1)},y^{(1)}\) and \(x^{(2)},y^{(2)}\) denote the top-left and bottom-right corner coordinates of the proposal bounding box respectively, \(c^{o}\) is the semantic category assigned by the object detector ("person" for each proposal in \(\mathcal{H}\)) and \(s^{o}\in[0,1]\) is the confidence score.
Given (1) the above set of human and object proposals, and (2) provided image-level interaction labels (e.g. ride), our goal is to learn an HOI detection model, \(F(\cdot)\), that can map each human-object pair \(\{h,o\}\in\mathcal{H}\times\mathcal{O}\) to an interaction class \(c^{v}\) that belongs to a predefined set of classes \(C^{v}=\{c^{v,(k)}\}_{k=1}^{K}\) and a confidence score \(s^{v}\in[0,1]\), yielding \(\mathcal{H}\times\mathcal{O}\xrightarrow{F(\cdot)}\{c_{i,j}^{v},s_{i,j}^{v}\}_{i=1,j=1}^{N,M}\). Here the \(v\) superscript denotes "verb" (interaction), while \(k\) denotes the specific verb/interaction class. Please note that this definition of weakly-supervised HOI detection is slightly different from the one given in Sec. 1, as we offload localization and semantic labeling of humans and objects to an object detector, as in every two-stage HOI detection work. We can further rewrite \(F\) as a composition of two separate functions, \(F_{1}\) and \(F_{2}\), where \(F_{1}\) is responsible for extracting pairwise features and modeling interactions while \(F_{2}\) performs classification, yielding \(F=F_{2}\circ F_{1}\).
In the fully-supervised case, learning is facilitated by giving the model access to the correct HOI targets \(Y=\{\text{bbox}_{i}^{\text{human}},\text{bbox}_{i}^{\text{object}},c_{i}^{v}, c_{i}^{v}\}_{i=1}^{L}\) which contain ground-truth human and object locations as bounding box coordinates, as well as the semantic categories for the object and the interaction. When one has ground-truth targets at hand, an HOI detection model can be trained to increase the likelihood of \(\{h,o\}\) pairs that spatially and semantically overlap with a HOI target to have the same interaction class as that target. In our case, however, the model can only access a set of ground-truth interaction classes for a given image, without even knowing if a certain interaction happens once or multiple times in the image (see Figure 1).
Inspired by existing weakly-supervised object detection (WSOD) literature, we formulate weakly-supervised HOI detection as a multiple instance learning (MIL) problem. In this formulation, each image is considered as a bag of human-object pairs (i.e. \(\mathcal{H}\times\mathcal{O}\)). If a bag (i.e. image) is labeled positive for a certain interaction, it has to contain at least one \(\{h,o\}\) pair of that interaction. Similar to WSDDN [2], we split the final classification layer, \(F_{2}\), into a two-stream head (i.e. \(F_{2}^{(1)}\) and \(F_{2}^{(2)}\)) where one models "what is the most probable interaction class for a given human-object pair?" (i.e. \(P(C^{v}\,|\,\{h,o\},\,F_{2}^{(1)})\)) while the other models "what is the most probable human-object pair for a given interaction class?" (i.e. \(P(\{h_{i},o_{j}\}_{i=1,j=1}^{N,M}\,|\,c^{v},\,F_{2}^{(2)})\)). Assuming we get \(d\)-dimensional feature for each pair through \(F_{1}\) as row vectors in a feature matrix \(\mathbf{Z}\), such as \(F_{1}(\mathcal{H}\times\mathcal{O})=\mathbf{Z}\in\mathbb{R}^{NM\times d}\), the aforementioned probabilities can be calculated by mapping \(\mathbf{Z}\) to a \(|C^{v}|\)-dimensional space first and then applying softmax on different dimensions. Hence, we can formulate \(F_{2}^{(1)}\) as a mapping \(\mathbb{R}^{d}\rightarrow\mathbb{R}^{|C^{v}|}\) followed by a row-wise softmax on \(\mathbf{Z}\) while \(F_{2}^{(2)}\) can be formulated as a mapping \(\mathbb{R}^{d}\rightarrow\mathbb{R}^{|C^{v}|}\) followed by a column-wise softmax on \(\mathbf{Z}\). Then, we can write \(F_{2}\)'s output \(\mathbf{Z}^{\text{HOI}}\) as:
\[\begin{split}\mathbf{Z}^{\text{HOI}}=F_{2}^{(1)}(\mathbf{Z}) \odot F_{2}^{(2)}(\mathbf{Z})\in\mathbb{R}^{NM\times|C^{v}|}& \text{where}\\ F_{2}^{(1)}(\mathbf{Z})=\sigma^{\rightarrow}(\mathbf{Z}\ast W_{F _{2}^{(1)}})&\\ F_{2}^{(2)}(\mathbf{Z})=\sigma^{\downarrow}(\mathbf{Z}\ast W_{F _{2}^{(2)}})&\end{split} \tag{1}\]
where \(\odot\) represents Hadamard product, \(N\) is the number of human proposals, \(M\) is the number of object proposals, \(|C^{v}|\) is the number of interaction classes, \(W_{F_{2}^{(1)}},W_{F_{2}^{(2)}}\in\mathbb{R}^{d\times|C^{v}|}\) are weight matrices, and \(\sigma^{\rightarrow}\) and \(\sigma^{\downarrow}\) are row-wise and column-wise softmax operations, respectively.
Finally, we can formulate our learning objective as minimizing \(|C^{v}|\) binary classification losses, one for each interaction class \(c^{v}\in C^{v}\):
\[\begin{split}\mathcal{L}^{\text{HOI}}(\hat{Y}^{v},Y^{v})& =\frac{1}{|C^{v}|}\sum_{k=1}^{|C^{v}|}\ell(\hat{y}^{v,(k)},y^{v,(k)})\\ \hat{Y}^{v}&=\sum^{NM}\mathbf{Z}^{\text{HOI}}\end{split} \tag{2}\]
where \(Y^{v}\) is the binary image-level interaction labels and \(y^{v,(k)}=1\) iff \(c^{v,(k)}\) is apparent in the image, \(0\) otherwise. Let \(\mathbf{Z}^{\text{HOI}}_{[i\times j]}\) denote the \(|C^{v}|\)-dimensional row vector in \(\mathbf{Z}^{\text{HOI}}\) that corresponds to the pair \(\{h_{i},o_{j}\}\). Then one can obtain the pair's interaction class \(c_{i,j}^{v}\) and confidence score \(s_{i,j}^{v}\) during inference as:
\[c_{i,j}^{v} =c^{v,(\arg\max_{k}\mathbf{Z}_{[i\times j]}^{\text{HOI},(k)})} \tag{3}\] \[s_{i,j}^{v} =\max(\mathbf{Z}_{[i\times j]}^{\text{HOI}})\cdot s_{h_{i}}^{o}\cdot s_{o_{j}}^{o}\]
where \(s_{h_{i}}^{o}\) and \(s_{o_{j}}^{o}\) are confidence scores assigned to \(h_{i}\) and \(o_{j}\) by the object detector, respectively.
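For concreteness, a minimal PyTorch sketch of the two-stream head and the image-level aggregation in Eqs. (1)-(3) is given below; the class and variable names are illustrative, \(F_{1}\) is assumed to have already produced the pairwise feature matrix \(\mathbf{Z}\), and a plain binary cross-entropy is used in place of the focal-loss variant mentioned in Sec. 4.1.

```python
import torch
import torch.nn.functional as F

class TwoStreamHOIHead(torch.nn.Module):
    """Two-stream MIL head: F_2^(1) (row-wise softmax) x F_2^(2) (column-wise softmax)."""
    def __init__(self, d, num_interactions):
        super().__init__()
        self.cls_stream = torch.nn.Linear(d, num_interactions, bias=False)  # F_2^(1)
        self.det_stream = torch.nn.Linear(d, num_interactions, bias=False)  # F_2^(2)

    def forward(self, Z):                            # Z: [N*M, d] pairwise features
        s1 = F.softmax(self.cls_stream(Z), dim=1)    # interaction distribution per pair
        s2 = F.softmax(self.det_stream(Z), dim=0)    # pair distribution per interaction
        z_hoi = s1 * s2                              # Hadamard product, Eq. (1)
        y_hat = z_hoi.sum(dim=0)                     # image-level scores, Eq. (2)
        return z_hoi, y_hat

# toy usage: 3 humans x 4 objects = 12 pairs, 256-d features, 26 interaction classes
head = TwoStreamHOIHead(d=256, num_interactions=26)
z_hoi, y_hat = head(torch.randn(12, 256))
target = torch.zeros(26); target[3] = 1.0            # image-level label Y^v
loss = F.binary_cross_entropy(y_hat.clamp(1e-6, 1 - 1e-6), target)
scores, classes = z_hoi.max(dim=1)                    # per-pair prediction, Eq. (3)
```

Note that, because each column of \(F_{2}^{(2)}\)'s output sums to one over the pairs, the aggregated image-level scores automatically stay within \([0,1]\); the clamp above is only numerical safety for the logarithm.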
### Extracting interaction labels from captions
Our weakly-supervised HOI detector learning procedure requires image-level interaction labels for supervision. However, one can utilize captions to extract those annotations to further relax the level of annotation required. In this work, we demonstrate how one can train an HOI detector on a dataset scraped from the web and contains noisy captions, using a simple technique.
We start by extracting nouns and verbs from captions using a POS tagger [14]. Consider that we have a set of predefined interaction categories \(C^{v}\), and verb and noun sets for a particular image, \(\mathcal{V}=\{v_{i}\}_{i=1}^{A}\) and \(\mathcal{N}=\{n_{i}\}_{i=1}^{B}\), respectively. For each image where "person" \(\in\mathcal{N}\), we construct its label as \(Y^{v}=\{v\,|\,v\in\mathcal{V}\,\text{and}\,v\in C^{v}\}\). We use a synonym list to match "person", as given in [30].
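A minimal sketch of this extraction step is given below, using spaCy as a stand-in POS tagger; the synonym list and the interaction set are abridged, illustrative placeholders.

```python
import spacy

nlp = spacy.load("en_core_web_sm")                      # stand-in POS tagger
PERSON_SYNONYMS = {"person", "man", "woman", "boy", "girl", "people"}   # abridged
INTERACTIONS = {"ride", "eat", "hold", "kick", "sit"}                   # subset of C^v

def image_level_labels(caption):
    doc = nlp(caption.lower())
    nouns = {t.lemma_ for t in doc if t.pos_ == "NOUN"}   # N
    verbs = {t.lemma_ for t in doc if t.pos_ == "VERB"}   # V
    if not (nouns & PERSON_SYNONYMS):                     # require a person mention
        return set()
    return verbs & INTERACTIONS                           # Y^v = V intersected with C^v

print(image_level_labels("A man is riding a horse while holding a hat."))
# -> {'ride', 'hold'}
```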
### Pruning non-interacting proposals
Learning an HOI detector using only image-level interaction labels is inherently a difficult task. A model needs to learn how to identify interacting human-object pairs among a large candidate pool and classify their interactions correctly. Without bounding box supervision, the model is left by itself to learn what an interacting human or object should look like, and what combination of those maps to a certain interaction class. For instance, consider learning the interaction "kick" with object "ball". We can expect that most of the images containing this particular interaction and object would portray a game field where more than one person is apparent. **To a weakly-supervised model, each person would be equally likely to be the subject of the "kick" interaction.** One can try to build coarse heuristic rules (e.g. interacting human-object pairs should be close in space) or more fine-grained ones (e.g. human-object pairs for the kick interaction should be close in space, but they can be farther apart for another interaction) to reduce the search space, but it is impossible to precisely develop rules for every natural interaction. To this end, we propose to exploit the implicit grounding capability of a vision-language model to prune non-interacting human and object proposals. In this work, we employ CLIP [32] and produce visual grounding maps for image-text pairs using [6].
Consider that we have access to free-form captions for the images in our training data; we do _not_ require captions at inference time. Given an image and its caption, we first extract all verbs \(\mathcal{V}=\{v_{i}\}_{i=1}^{A}\) and nouns \(\mathcal{N}=\{n_{i}\}_{i=1}^{B}\) out of the caption using a POS tagger [14]. We then create
Figure 2: **Overview of our method during training. Retrieving human and object proposals from an object detector, our method first prunes non-interacting human/object proposals with the help of a vision-language model, calculating an interaction score for each proposal. Next, we pair remaining human-object proposals and run those pairs through a two-stream feed-forward neural net (\(F_{2}\)) that operates on \(F_{1}\)’s output space. Finally, image-level predictions are calculated by summing \(F_{2}\)’s output over region pairs. We query a large-language model to restrict our model’s output space only to meaningful interactions. In order to improve our model’s spatial reasoning capability, we formulate a weakly-supervised preposition prediction task wherein supervision comes from preposition extracted from captions. During inference, we drop proposal pruning and preposition prediction modules, requiring only an image to detect HOI instances.**
human captions as \(HC=\{\text{``a person is }v_{i}\text{-ing''}\}_{i=1}^{A}\) and object captions as \(OC=\{\text{``a }n_{i}\text{ is being }v_{j}\text{-ed''}\}_{i=1,j=1}^{B,A}\). We run the image and the created captions through CLIP to produce a grounding map per caption and resize the maps to the original image dimensions via bilinear interpolation. Finally, the grounding maps are min-max normalized to map their values into the \([0,1]\) range.
Retrieving grounding maps \(GH\) and \(GO\) for human and object captions respectively, one can calculate a grounding score, \(g\), for each human and object proposal, \(h\in\mathcal{H},o\in\mathcal{O}\). Intuitively, \(g\) should measure how likely a certain proposal is to engage in an interaction. We calculate \(g\) for each proposal as follows:
\[g_{h} =\frac{1}{(x_{h}^{(2)}-x_{h}^{(1)})(y_{h}^{(2)}-y_{h}^{(1)})} \frac{1}{|GH|}\sum_{k=1}^{|GH|}\sum_{i=x_{h}^{(1)},j=y_{h}^{(1)}}^{x_{h}^{(2)},y _{h}^{(2)}}GH_{i,j}^{(k)}\] \[g_{o} =\frac{1}{(x_{o}^{(2)}-x_{o}^{(1)})(y_{o}^{(2)}-y_{o}^{(1)})} \frac{1}{|GO|}\sum_{k=1}^{|GO|}\sum_{i=x_{o}^{(1)},j=y_{o}^{(1)}}^{x_{o}^{(2)},y_{o}^{(2)}}GO_{i,j}^{(k)} \tag{4}\]
The above equations simply calculate the average grounding score that falls into each human/object proposal region using the corresponding grounding maps. Finally, the interaction score for a human proposal \(h_{i}\) or an object proposal \(o_{j}\) is calculated as the multiplication of its grounding score \(g\) and confidence score given by the object detector \(s^{o}\):
\[I_{h_{i}}=g_{h_{i}}\cdot s_{h_{i}}^{o}\,,\,I_{o_{j}}=g_{o_{j}}\cdot s_{o_{j}}^{o} \tag{5}\]
The reason behind multiplying \(g\) and \(s^{o}\) for interaction score calculation is quite simple. In our experiments, we have seen that the generated grounding maps usually focus on the most distinct parts of the interacting human/object, which would result in proposals covering only those distinct areas to get the highest interaction scores if only \(g\) was used.
Lastly, we sort human/object proposals in descending order of their interaction scores \(I_{h}\)/\(I_{o}\) and keep top \(50\%\) as is while assigning a special "background" class to others. These "background" proposals still get paired with human proposals within the model and will serve as negatives.
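A compact NumPy sketch of this scoring and pruning step (Eqs. (4)-(5) plus the top-\(50\%\) selection) is shown below; array shapes are illustrative, the keep ratio of \(50\%\) follows the text, and the grounding maps are assumed to be already resized and normalized as described above.

```python
import numpy as np

def interaction_scores(grounding_maps, boxes, det_scores):
    """grounding_maps: [K, H, W] in [0,1]; boxes: [P, 4] as (x1, y1, x2, y2) pixels."""
    mean_map = grounding_maps.mean(axis=0)                 # average over the K captions
    scores = []
    for (x1, y1, x2, y2), s in zip(boxes.astype(int), det_scores):
        g = mean_map[y1:y2, x1:x2].mean()                  # Eq. (4): mean grounding in box
        scores.append(g * s)                               # Eq. (5): combine with detector score
    return np.asarray(scores)

def prune(boxes, det_scores, grounding_maps, keep_ratio=0.5):
    scores = interaction_scores(grounding_maps, boxes, det_scores)
    order = np.argsort(-scores)
    n_keep = max(1, int(len(order) * keep_ratio))
    return order[:n_keep], order[n_keep:]   # kept proposals, "background" negatives
```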
### Suppressing implausible interactions
Previous work [12, 41] has shown that it can be beneficial to restrict a model's output space only to meaningful interactions, conditioning on some type of lookup table in which plausible interactions are encoded. While [12] proposed to learn these conditions within the model optimizing an indicator function over possible interactions given human and object proposals, [41] compute them directly on data, iterating over ground-truth HOI targets. There also exist works that learn such conditions by modeling interactions as phrases (e.g. "person eat banana") in a textual [29] or multi-modal space [40]. Unlike these methods, ours does not require subject-predicate-object annotations nor multi-modal training.
In this work, we propose to use a large language model (LLM) to query which interactions are plausible for a given object category. Our hypothesis is that these models would have learnt natural co-occurrences throughout their training on massive text, and this information would also be applicable to the visual domain. We consider two natural approaches for how an LLM can be used for this purpose: (1) inputting "A person is [MASK] \(c^{o}\)" caption to the model (where \(c^{o}\) denotes a particular object category) and calculating a probability distribution over possible interaction categories \(C^{v}=\{c^{v,(k)}\}_{k=1}^{K}\) at the masked-language modeling (MLM) head to obtain the [MASK] token, ignoring the rest of the vocabulary, and (2) plugging "What a person do with \(c^{o}\)?" as a question and interaction classes \(C^{v}=\{c^{v,(k)}\}_{k=1}^{K}\) as an answer set, then retrieving the language model's output distribution over \(C^{v}\) at the multiple choice question answering (MCQA) head. After obtaining a probability distribution over interaction classes given an object category i.e. \(P(C^{v}\,|\,c^{o})\), we create a binary lookup table for each object category, wherein interaction categories are encoded as plausible (if their probability is larger than average) or otherwise implausible.
\[\Phi_{c^{o}} =\{\phi(c^{o},c^{v,(k)})\}_{k=1}^{|C^{v}|}\] \[\phi(c^{o},c^{v,(k)}) =\left\{\begin{array}{cc}1&P(c^{v,(k)}\,|\,c^{o})>\frac{1}{|C^{ v}|}\sum^{|C^{v}|}P(c^{v}\,|\,c^{o})\\ 0&\text{otherwise}\end{array}\right. \tag{6}\]
Lastly, we double the confidence score of a human-object pair \(\{h_{i},o_{j}\}\) if its predicted label is plausible given the object category of \(o_{j}\):
\[s_{i,j}^{v^{\prime}}=s_{i,j}^{v}\cdot(1+\phi(c_{o_{j}}^{o},c_{i,j}^{v})) \tag{7}\]
We use RoBERTa [28] to query if a given \(<\)interaction, object\(>\) pair is plausible. We build \(\Phi_{c^{o}}\) for each dataset before training, instead of querying the LLM repeatedly.
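The MLM-based variant of this query can be sketched as follows with the Hugging Face fill-mask pipeline; the prompt follows the description above, the interaction subset is an illustrative placeholder, and thresholding the raw scores at their average reproduces Eq. (6) up to normalization.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")
INTERACTIONS = ["ride", "eat", "kick", "hold", "read"]        # subset of C^v

def plausible_interactions(obj_category):
    prompt = f"A person is {fill.tokenizer.mask_token} {obj_category}."
    outs = fill(prompt, targets=[" " + v for v in INTERACTIONS])
    probs = {o["token_str"].strip(): o["score"] for o in outs}
    avg = sum(probs.values()) / len(probs)
    return {v: int(p > avg) for v, p in probs.items()}         # binary row Phi_{c^o}

print(plausible_interactions("horse"))    # e.g. {'ride': 1, ..., 'read': 0}
```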
### Formulating weakly-supervised preposition prediction
Prior work [15] demonstrates that encoding pairwise spatial relations as discrete labels (e.g. inside of, contains) within a model improves performance on tasks that require explicit spatial understanding, such as TextVQA [37]. Inspired from but different from their work, we formulate a preposition prediction task in which the model is forced to learn a mapping from pairwise features to discrete spatial
labels in a weakly-supervised manner, in the unique context of human-object interaction.
Similar to our weakly-supervised HOI detection formulation given in Sec. 3.1, we employ a two-stream head, \(F_{3}\), that operates on \(F_{1}\)'s output space. Assuming our pre-defined preposition set is \(C^{p}=\{c^{p,(k)}\}_{k=1}^{K}\) and we get a \(d\)-dimensional feature for each human-object pair through \(F_{1}\) as row vectors in a feature matrix \(\mathbf{Z}\), such as \(F_{1}(\mathcal{H}\times\mathcal{O})=\mathbf{Z}\in\mathbb{R}^{NM\times d}\), we formulate \(F_{3}^{(1)}\) as a mapping \(\mathbb{R}^{d}\rightarrow\mathbb{R}^{|C^{p}|}\) followed by a row-wise softmax on \(\mathbf{Z}\) while \(F_{3}^{(2)}\) is formulated as a mapping \(\mathbb{R}^{d}\rightarrow\mathbb{R}^{|C^{p}|}\) followed by a column-wise softmax on \(\mathbf{Z}\). Then, we can write \(F_{3}\)'s output \(\mathbf{Z}^{\text{PREP}}\) as:
\[\begin{split}\mathbf{Z}^{\text{PREP}}=F_{3}^{(1)}(\mathbf{Z}) \odot F_{3}^{(2)}(\mathbf{Z})\in\mathbb{R}^{NM\times|C^{p}|}\qquad\text{where} \\ F_{3}^{(1)}(\mathbf{Z})=\sigma^{\rightarrow}(\mathbf{Z}*W_{F_{3 }^{(1)}})\\ F_{3}^{(2)}(\mathbf{Z})=\sigma^{\downarrow}(\mathbf{Z}*W_{F_{3 }^{(2)}})\end{split} \tag{8}\]
where \(\odot\) represents Hadamard product, \(N\) is the number of human proposals, \(M\) is the number of object proposals, \(|C^{p}|\) is the number of preposition classes, \(W_{F_{3}^{(1)}},W_{F_{3}^{(2)}}\in\mathbb{R}^{d\times|C^{p}|}\) are weight matrices, and \(\sigma^{\rightarrow}\) and \(\sigma^{\downarrow}\) are row-wise and column-wise softmax operations, respectively.
Finally, we formulate our learning objective as minimizing \(|C^{p}|\) binary classification losses, one for each preposition class \(c^{p}\in C^{p}\):
\[\begin{split}\mathcal{L}^{\text{PREP}}(\hat{Y}^{p},Y^{p})& =\frac{1}{|C^{p}|}\sum_{k=1}^{|C^{p}|}\ell(\hat{y}^{p,(k)},y^{p,(k )})\\ \hat{Y}^{p}&=\sum^{NM}\mathbf{Z}^{\text{PREP}}\end{split} \tag{9}\]
where \(Y^{p}\) is the binary image-level preposition labels and \(y^{p,(k)}=1\) iff \(c^{p,(k)}\) is apparent in the image, \(0\) otherwise. Adding a new task in the model, our overall training objective now becomes minimizing both \(\mathcal{L}^{\text{HOI}}\) (Eq. 2) and \(\mathcal{L}^{\text{PREP}}\):
\[\mathcal{L}=\mathcal{L}^{\text{HOI}}+\lambda\mathcal{L}^{\text{PREP}} \tag{10}\]
Since none of the datasets we use in our experiments comes with such preposition annotations, we utilize captions to extract them. Specifically, we run captions through a scene graph parser (e.g. Stanford Scene Graph Parser [35]) to extract \(<\)subject, predicate, object\(>\) triplets. We filter out triplets whose subject is not "person" or whose predicate is not in \(C^{p}\), which we curated by hand by collecting the \(32\) most common prepositions. We use the same synonym list for "person" mentioned in Sec. 3.2. After the filtering process, the unique predicates from the remaining triplets are used as image-level preposition labels for their corresponding images.
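Once the image-level preposition labels are available, the overall objective of Eq. (10) can be assembled as in the short sketch below; plain binary cross-entropy stands in for the focal-loss variant used for the HOI term (Sec. 4.1), and the tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def joint_loss(y_hat_hoi, y_hoi, y_hat_prep, y_prep, lam=0.1):
    """L = L_HOI + lambda * L_PREP (Eq. 10), with BCE standing in for both terms."""
    l_hoi = F.binary_cross_entropy(y_hat_hoi.clamp(1e-6, 1 - 1e-6), y_hoi)
    l_prep = F.binary_cross_entropy(y_hat_prep.clamp(1e-6, 1 - 1e-6), y_prep)
    return l_hoi + lam * l_prep

# toy usage with |C^v| = 26 interactions and |C^p| = 32 prepositions
y_hat_hoi, y_hat_prep = torch.rand(26), torch.rand(32)   # image-level scores from the heads
y_hoi, y_prep = torch.zeros(26), torch.zeros(32)
y_hoi[3] = 1.0
y_prep[[0, 5]] = 1.0
loss = joint_loss(y_hat_hoi, y_hoi, y_hat_prep, y_prep)
```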
## 4 Experiments
### Setup
**Datasets and metrics.** We use the well-established HOI detection benchmark datasets, HICO-DET [5] and V-COCO [11], in our experiments. HICO-DET contains \(37,633\) training and \(9,546\) test images with bounding box annotations for interacting human-object pairs and their interaction labels. There are \(80\) object (same as in MS COCO [27]) and \(117\) interaction categories in HICO-DET with \(600\) unique \(<\)interaction, object\(>\) pairs. As HICO-DET instances do not come with a paired caption, we use a state-of-the-art image captioning model, OFA [39], to generate one for each image in the training split. On the other hand, V-COCO is a relatively smaller dataset with \(5,400\) images in trainval and \(4,946\) images in test split. There are \(80\) object and \(26\) interaction categories. As V-COCO is a subset of MS COCO, each image is paired with \(5\) captions. We use standard metrics for each dataset which are Agent AP and Role AP for V-COCO, and Full mAP for HICO-DET. Please note that HICO-DET's Full mAP is analogous to V-COCO's Role AP, which requires predicted human and object bounding boxes to have at least \(0.5\) IoU with corresponding HOI target, and predicted interaction category should be the same as the target interaction label. V-COCO's Agent AP, on the other hand, requires correct localization of humans (i.e. IoU \(>0.5\)) engaging in a particular interaction.
Furthermore, for the first time in the literature we learn an HOI detection model on a small subset of Conceptual Captions, which consists of roughly \(18,000\) image-caption pairs, without any HOI-related annotation. We extract image-level interaction labels from captions as explained in Sec. 3.2. We use the V-COCO test split to evaluate models trained on this new dataset.
**Baseline and training procedure.** Please note that our proposed approach as a whole is applicable to any existing two-stage HOI detection method. In our experiments, we use SCG [41] as our baseline and implement our main contributions on top of it. We choose SCG because it is one of the best-performing fully-supervised two-stage HOI detectors with a publicly-available implementation. Unless noted otherwise, we use the same hyperparameter settings as [41]. We use Faster R-CNN [34] with ResNet50-FPN [13] pretrained on MS COCO to generate detections. We train all models on \(4\times\) NVIDIA Quadro RTX 5000 GPUs with an initial learning rate of \(1e-4\) and a batch size of \(16\) (\(4\) images per GPU). On V-COCO and HICO-DET, all models are trained for \(8\) epochs, reducing the learning rate to \(1e-5\) after the \(6^{\text{th}}\) epoch. On the other hand, we train models on the Conceptual Captions subset for \(5\) epochs, without applying any decay strategy to the learning rate. For the weakly-supervised HOI detection task, we use a binary adaptation of the focal loss [26]
(\(\ell\) in Eq. 2) following the baseline, and a binary cross-entropy loss for the weakly-supervised preposition prediction task (\(\ell\) in Eq. 9). We set the weight of the weakly-supervised preposition prediction task to \(0.1\) (\(\lambda\) in Eq. 10). Interested readers may consult the original paper for additional details on the model implementation and training procedure.
### Comparison with the SOTA
In this subsection, we compare our model against the state-of-the-art HOI detection efforts. We also include fully-supervised approaches to inform readers on the performance gap between the fully- and weakly-supervised HOI detection literature, and to show that our baseline SCG [41] is comparable with SOTA when trained with strong supervision. We would like to stress that there are not many weakly-supervised works in the HOI detection literature, and existing approaches (e.g. MX-HOI [21] and AlignFormer [16]) use image-level \(<\)interaction, object\(>\) annotations as weak supervision. Careful readers would have already noticed that this specific definition of "weak supervision" is considerably stronger than the one in our formulation, as it reduces the search space on object proposals to match with a specific interaction category in a given image. Consider a natural image that portrays a person riding a bike while another rides a motorcycle in an urban setting. For that particular image, image-level \(<\)interaction, object\(>\) annotations will yield {ride-bike, ride-motorcycle} while the image-level interaction annotation (our supervision) will be {ride} only. If \(<\)interaction, object\(>\) annotations are exploited, object proposals that can be matched with "ride" will be narrowed down to bike and motorcycle. In our case, however, the object space is considerably larger, including other objects that may be apparent in the image (e.g. bus, car, etc.).
In Table 1, we compare HOI detection performance on the V-COCO test split among the models trained on the V-COCO trainval set (except Ours-CC). Our method improves by an absolute \(9.54\%\) over the weakly-supervised variant of SCG, which we build our contributions upon, and by an absolute \(15.54\%\) over AlignFormer, which uses stronger supervision in the form of image-level \(<\)interaction, object\(>\) labels. Our method trained on the Conceptual Captions subset (Ours-CC) also surpasses AlignFormer and achieves a comparable performance to weakly-supervised SCG, even though we extract image-level interaction labels from captions to supervise its learning (Sec. 3.2) and use an MS COCO-trained object detector to produce human/object proposals.
Similarly, in Table 2, we compare HOI detection performance on the HICO-DET test split among the models trained on the HICO-DET training set. Results show that both weakly-supervised SCG and Ours fail to sustain their improvement over AlignFormer, due to the increased number of interaction categories over V-COCO (26 vs 117). Unsurprisingly, AlignFormer was not affected heavily by the increased combinatorial complexity over the \(<\)interaction, object\(>\) joint space thanks to its stronger supervision than ours. It is also worth noting that both AlignFormer and MX-HOI use an object detector fine-tuned on HICO-DET (denoted by \(\dagger\)) while we do not. Regardless of its unsustained performance on HICO-DET, our method still improves over the baseline weakly-supervised SCG by
\begin{table}
\begin{tabular}{l|l|l|c} Method & Sup. & Backbone & Role AP \\ \hline \hline iCAN [9] & Full & RN50 & 52.04 \\ VSGNet [38] & Full & RN152 & 57.00 \\ SCG [41] & Full & RN50 FPN & 58.02 \\ IDN [22] & Full & RN50 & 60.30 \\ HOTR [18] & Full & RN50+Transformer & 64.40 \\ MSTR [19] & Full & RN50+Transformer & **65.20** \\ \hline MX-HOI [21] & Weak+ & RN101 & **-** \\ AlignFormer [16] & Weak+ & RN50 & **14.15** \\ \hline Baseline [41] (§3.1) & Weak & RN50 FPN & 20.05 \\ Ours & Weak & RN50 FPN & **29.59** \\ \hline Ours-CC & Weak- & RN50 FPN & **17.71** \\ \end{tabular}
\end{table}
Table 1: V-COCO test Role AP performance among methods trained on the V-COCO trainval split (except Ours-CC). Ours outperforms AlignFormer by a large margin (absolute \(15.54\%\)), even though AlignFormer is supervised with image-level \(<\)interaction, object\(>\) labels (**Weak+**) rather than the image-level \(<\)interaction\(>\)-only labels (**Weak**) we use. It also greatly improves (absolute \(9.54\%\)) over Baseline, which is close to SOTA when trained fully-supervised, verifying the effectiveness of our contributions. Trained on a dataset scraped from the web, extracting image-level \(<\)interaction\(>\)-only labels from captions (**Weak-**), our method (Ours-CC) still outperforms AlignFormer by an absolute \(3.56\%\). RN denotes ResNet. MX-HOI did not report V-COCO results. **Bolding** shows the best method within each supervision level.
\begin{table}
\begin{tabular}{l|l|l|c} Method & Sup. & Backbone & mAP \\ \hline \hline iCAN [9] & Full & RN50 & 14.84 \\ VSGNet [38] & Full & RN152 & 19.80 \\ SCG [41] & Full & RN50 FPN & 21.85 \\ IDN [22] & Full & RN50 & 23.36 \\ HOTR [18] & Full & RN50+Transformer & 23.46 \\ MSTR [19] \(\dagger\) & Full & RN50+Transformer & **31.17** \\ \hline MX-HOI [21] \(\dagger\) & Weak+ & RN101 & 16.14 \\ AlignFormer [16] \(\dagger\) & Weak+ & RN50 & **19.26** \\ \hline Baseline [41] (§3.1) & Weak & RN50 FPN & 7.05 \\ Ours & Weak & RN50 FPN & **8.38** \\ \end{tabular}
\end{table}
Table 2: Full mAP (Default) comparison on HICO-DET. Unsurprisingly, AlignFormer and MX-HOI benefit from having much stronger supervision, namely image-level \(<\)interaction, object\(>\) labels (**Weak+**), when the combinatorial complexity over the interaction space is increased moving from V-COCO to HICO-DET. However, Ours still improves over Baseline, which uses the same image-level \(<\)interaction\(>\)-only labels (**Weak**), verifying the effectiveness of our contributions. RN denotes ResNet. \(\dagger\) denotes using an object detector fine-tuned on HICO-DET.
an absolute \(1.33\%\) (relative \(18.87\%\)); the baseline has been trained with the same level of supervision as Ours.
### Ablation study
To demonstrate the effectiveness of our contributions, we incrementally ablate them over the baseline weakly-supervised SCG on V-COCO, HICO-DET and Conceptual Captions. The results are shown in Tables 3, 4 and 5. While all of our contributions clearly improve the performance over the baseline, the results also show that the caption-dependent parts of our method (Sec. 3.3 & Sec. 3.5) are not heavily affected by the caption source. Independent of whether captions are collected in a controlled setting (V-COCO), scraped from the web (Conceptual Captions) or generated by a captioning model (HICO-DET), our model can utilize them to boost model performance.
## 5 Conclusion
In this work, we tackle the HOI detection problem with the weakest supervision setting in the literature, using image-level interaction labels only (e.g. "ride"). We exploit the implicit grounding capability of a vision-language model in order to prune non-interacting human and object proposals. We restrict our model's output space to natural interactions only, querying a large language model whether a given \(<\)interaction, object\(>\) pair is plausible. We lastly formulate a weakly-supervised preposition prediction task to explicitly improve the spatial reasoning capability of our model. For the first time in the literature, we learn an HOI detector on image-caption pairs, extracting image-level interaction labels out of captions.
**Ethical concerns.** VLMs and LLMs can contain implicit biases inherited from their training data. Even though their usage within this work's context did not pose any explicit harm during our experimentation, we would like to warn users that their usage in a different context may expose people to potentially unethical content.
\begin{table}
\begin{tabular}{l|l|c} Method & Sup. & mAP (\(\Delta\)) \\ \hline \hline Baseline [41] (§3.1) & Weak & 7.05 \\ \hline +Pruning (§3.3) & Weak & 7.55 (+0.50) \\ +Suppressing (§3.4) & Weak & 7.81 (+0.76) \\ +Preposition (§3.5) & Weak & 8.38 (+1.33) \\ \end{tabular}
\end{table}
Table 4: Incremental ablations on HICO-DET. \(\Delta\) denotes the performance difference over Baseline. All three of our contributions help improve HOI detection performance.
\begin{table}
\begin{tabular}{l|l|c c} Method & Sup. & Agent AP (\(\Delta\)) & Role AP (\(\Delta\)) \\ \hline \hline Baseline [41] (§3.1) & Weak & 32.41 & 20.05 \\ \hline +Pruning (§3.3) & Weak & 33.88 (+1.47) & 21.80 (+1.75) \\ +Suppressing (§3.4) & Weak & 37.04 (+4.63) & 28.28 (+8.23) \\ +Preposition (§3.5) & Weak & 40.53 (+8.12) & 29.59 (+9.54) \\ \end{tabular}
\end{table}
Table 3: Incremental ablations on V-COCO. \(\Delta\) denotes the performance difference over Baseline. All three of our contributions help improve HOI detection performance.
Figure 3: **Qualitative examples** sampled from the V-COCO test split. Black and white boxes show interacting humans and objects, respectively. Our method successfully detects more interactions than Baseline, especially when the same human is the subject of more than one interaction. Moreover, the 3rd example shows that it selects the better object proposal for the “sit” interaction (horse). Interaction label explanations can be found in the original V-COCO paper [11], Table 1.
\begin{table}
\begin{tabular}{l|l|c c} Method & Sup. & Agent AP (\(\Delta\)) & Role AP (\(\Delta\)) \\ \hline \hline Baseline [41] (§3.1) & Weak & 17.71 & 14.33 \\ \hline +Pruning (§3.3) & Weak & 19.44 (+1.73) & 15.95 (+1.62) \\ +Suppressing (§3.4) & Weak & 20.00 (+2.29) & 18.23 (+3.90) \\ +Preposition (§3.5) & Weak & 20.75 (+3.04) & 17.71 (+3.38) \\ \end{tabular}
\end{table}
Table 5: Incremental ablations on Conceptual Captions. \(\Delta\) denotes performance difference over Baseline. While all three contributions help improve performance over Baseline, the preposition prediction task slightly decreases Role AP when added on top of implausible interaction suppression (but still boosts Agent AP). |
2308.07023 | Dark Coloured Scalars Impact on Single and Di-Higgs Production at the
LHC | The search for Dark Matter (DM) at colliders is primarily pursued via the
detection of missing energy in particular final states. These searches are
based on the production and decay processes where final states include DM
particles and at least one Standard Model (SM) particle. DM will then reveal
itself as missing energy. An alternative form to get a hint of a dark sector is
via loop contribution to SM processes. In this case, it is not even relevant if
the new particles have their origin in the dark sector of the model. In this
work we discuss the impact of an arbitrary number of coloured scalars in single
Higgs and double Higgs production at the Large Hadron Collider (LHC), and we
show their complementarity. We determine the range of variation of the
corrections relative to the SM for an arbitrary number of coloured scalars $n$,
and discuss in more detail the cases $n=1$ and $n=2$. | Pedro Gabriel, Margarete Mühlleitner, Daniel Neacsu, Rui Santos | 2023-08-14T09:15:52Z | http://arxiv.org/abs/2308.07023v1 | # Dark Coloured Scalars Impact on Single and Di-Higgs Production at the LHC
###### Abstract
The search for Dark Matter (DM) at colliders is primarily pursued via the detection of missing energy in particular final states. These searches are based on the production and decay processes where final states include DM particles and at least one Standard Model (SM) particle. DM will then reveal itself as missing energy. An alternative form to get a hint of a dark sector is via loop contribution to SM processes. In this case, it is not even relevant if the new particles have their origin in the dark sector of the model. In this work we discuss the impact of an arbitrary number of coloured scalars in single Higgs and double Higgs production at the Large Hadron Collider (LHC), and we show their complementarity. We determine the range of variation of the corrections relative to the SM for an arbitrary number of coloured scalars \(n\), and discuss in more detail the cases \(n=1\) and \(n=2\).
KA-TP-18-2023
Introduction
Any extension of the Standard Model (SM) aiming at solving the Dark Matter (DM) puzzle has to include at least one DM candidate. One of the simplest ways to address this problem is to enlarge the scalar sector of the SM by including a dark sector, usually using a discrete symmetry, and a portal coupling that connects the two sectors. Once a minimal model that provides a DM candidate is built, one needs to make sure that it is in agreement with the current measurement of the relic density and with all results from direct and indirect detection together with the constraints imposed by collider experiments. Models with a dark sector can then be further extended to explain other unsolved issues of the SM. Ultimately, any complete extension of the SM has to be in agreement with all available experimental data.
In recent years many models have been proposed to solve other discrepancies between the SM predictions and the experimental results. A particular class of models manages to solve two of these problems simultaneously, the B-physics anomalies, related essentially to the \(b\to s\mu^{+}\mu^{-}\) transition [1, 2], and the muon \(g-2\) anomaly [3, 4, 5, 6, 7], while providing a sound DM candidate. However, a very recent reanalysis by the LHCb collaboration completely washed out the discrepancy with the SM prediction in the \(b\to s\mu^{+}\mu^{-}\) transition [8, 9]. Still, this type of model can be made compatible with these new results for \(b\to s\mu^{+}\mu^{-}\) (compatible with the SM predictions) while still solving the DM and \(g-2\) problems.
The existence of this type of models prompted us to study the contribution of the new coloured scalars, that live in the dark sector, to single and di-Higgs production. The models were discussed in great detail in [10, 11] and are based on a previous model proposed in [12]. They introduce massive coloured scalar fields which, depending on the charge assignments and \(SU(2)\) quantum numbers, can lead to one or several coloured scalars. A discrete \(Z_{2}\) symmetry is imposed such that the new fields from the dark sector are odd under \(Z_{2}\) while the SM fields are even under this symmetry. In Ref. [10], three new fields were added to the SM, one \(SU(3)_{c}\) coloured scalar, \(\Phi_{3}\), one colourless scalar, \(\Phi_{2}\), and one vectorlike fermion, \(\chi\), with an integer electric charge of \(0\) or \(\pm 1\). The scalars are \(SU(2)_{L}\) singlets and the fermions form an \(SU(2)_{L}\) doublet. This model was dubbed Model 5. In Ref. [11] a different scenario was studied with the scalars as \(SU(2)_{L}\) doublets and the fermion as an \(SU(2)_{L}\) singlet, and called Model 3.
As the dark sector communicates with the SM via the Higgs potential, the new scalars couple to the Higgs boson. In fact, only two types of interactions are relevant to our discussion: the Higgs couplings to the new coloured scalars and the strong couplings of the coloured scalars with the gluons with origin in the covariant derivative. Therefore the one-loop single Higgs and di-Higgs production only depend on very specific terms in the Higgs potential, the ones that connect the coloured scalars with the SM Higgs doublet. Besides that, the SM Higgs coupling to the fermions (and also the Higgs self-couplings) remain exactly the SM ones - there is no mixing of the Higgs with the other scalars as they have different quantum numbers. The coloured scalars contribute to the gluon fusion single Higgs and di-Higgs production with only one coloured scalar of electric charge \(2/3\), \(\phi_{q}^{+2/3}\), in Model 5, while for Model 3 there are two coloured scalars contributing with electric charges \(2/3\) and \(5/3\), \(\phi_{q}^{+2/3}\) and \(\phi_{q}^{+5/3}\), respectively. We also generalise our results to the case of an arbitrary number of coloured scalars. Note that single Higgs production is a clean probe of the Higgs portal coupling in a scenario where the extension of the SM only includes an arbitrary number of coloured scalars. The di-Higgs cross section can then be used to further confirm the structure suggested by single Higgs production. From now on we will drop the old nomenclature and just refer to the model by the number of
coloured scalars.
The LHC has performed numerous searches for DM. The only truly model-independent bound in the case of coloured scalar production and decay (depending only on the mass of the coloured scalar) would come from monojet events, which rely only on the strong gauge coupling. These bounds would be valid in a scenario where the couplings of the coloured scalars to the quarks and vector-like fermions are negligible or where the branching ratios that lead to visible final states are too small to be detected. However, according to [12], the best bounds are obtained in the searches for DM produced in association with top and bottom quarks [13]. These are more restrictive than a re-interpretation of the searches for squarks at the LHC. They conclude in [12] that the masses of the coloured scalars have a rough lower bound of 1 TeV. We will use this bound in our analysis.
We finalise this section by noting that the only new coupling present in the processes to be analysed is the portal coupling. Hence, in the case \(n=1\) all results will depend on only two variables, the portal coupling and the coloured scalar mass. For an arbitrary \(n\) we will have \(n\) portal couplings and \(n\) coloured scalar masses.
The paper is organised as follows. In Sec. 2 we present the single Higgs production mode, and in Sec. 3 the di-Higgs production mode is discussed. In Sec. 4 we compare the contributions of the new physics models to single Higgs and double Higgs production. Our conclusions are given in Sec. 5.
## 2 Single Higgs Production
We consider \(n\) independent coloured complex scalars \(\phi_{q}^{i=1,..,n}\) transforming in the fundamental representation of \(SU(3)_{c}\). After electroweak symmetry breaking, the potential relevant to this work is given by
\[V=\sum_{i=1}^{n}\bigg{[}\underbrace{(\mu_{\phi_{q}^{i}}^{2}+\frac{v^{2}}{2} \lambda_{h\phi_{q}^{i}})}_{m_{\phi_{q}^{i}}^{2}}|\phi_{q}^{i}|^{2}+\frac{1}{2 }\lambda_{h\phi_{q}^{i}}h^{2}|\phi_{q}^{i}|^{2}+v\lambda_{h\phi_{q}^{i}}h|\phi _{q}^{i}|^{2}+\lambda_{\phi_{q}^{i}}|\phi_{q}^{i}|^{4}+...\bigg{]}+...\;\;, \tag{1}\]
where the couplings \(\lambda_{h\phi_{q}^{i}}\) and \(\lambda_{\phi_{q}^{i}}\) are real and we have defined the masses of the fields by
\[m_{\phi_{q}^{i}}^{2}=\mu_{\phi_{q}^{i}}^{2}+\frac{v^{2}}{2}\lambda_{h\phi_{q}^ {i}}\;. \tag{2}\]
Note that there are in total \(3n\) independent parameters. If we also consider that these \(n\) fields form an \(SU(2)_{L}\) multiplet, \(\Phi_{q}=\left(\phi_{q}^{1}\;\;\phi_{q}^{2}\;\;...\;\;\phi_{q}^{n}\right)^{ \mathbf{T}}\), this would impose the following constraints: \(\mu_{\phi_{q}^{i}}^{2}=\mu_{\phi_{q}^{k}}^{2}\equiv\mu_{\Phi_{q}}^{2}\) and \(\lambda_{\phi_{q}^{i}}=\lambda_{\phi_{q}^{k}}\equiv\lambda_{\Phi_{q}}\).1 We are now left with only \(n+2\) degrees of freedom. This implies that for equal portal couplings \(\lambda_{h\phi_{q}^{i}}\) the masses \(m_{\phi_{q}^{i}}^{2}\) given by Eq. (2) are also equal and vice-versa. For this work we will consider the more general case of \(n\) independent fields but still assuming that they never have the exact same quantum numbers.
Footnote 1: We use uppercase \(\Phi\) and lowercase \(\phi\) to distinguish between the parameters defined for the multiplet \(\Phi\) and the scalars \(\phi\).
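As an illustration of the counting above, for \(n=2\) the \(n+2\) inputs are \(\mu_{\Phi_{q}}^{2}\), \(\lambda_{\Phi_{q}}\) and the two portal couplings \(\lambda_{h\phi_{q}^{1,2}}\), the two masses \(m_{\phi_{q}^{1,2}}^{2}\) then being fixed by Eq. (2).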
Single Higgs production via gluon fusion, which is the main production process at the LHC, proceeds at leading order (LO) in the SM via quark loops [14] as shown in Fig. 1a, with the heavier quarks giving the major contribution. In the new models, which we will refer to as BDM models, two new diagrams emerge as shown in Figs. 1b and 1c.
The amplitude for this process can be cast into the form
\[{\cal M}_{\triangle}^{gg\to h}=\frac{g_{s}^{2}m_{h}^{2}}{16\pi^{2}}\left(\sum_{Q}g_{Q}^{h}F_{\triangle}^{Q}+\sum_{\phi_{q}^{i}}g_{\phi_{q}^{i}}^{h}F_{\triangle}^{\phi_{q}^{i}}\right)\,A_{1\mu\nu}\,\epsilon_{a}^{\mu}\epsilon_{b}^{\nu}\,\delta_{ab}, \tag{3}\]
where the indices \(a\) and \(b\) are associated with the incoming gluons, \(A_{1}^{\mu\nu}=g^{\mu\nu}-p_{b}^{\mu}p_{a}^{\nu}/p_{a}\cdot p_{b}\) and the quark and scalar form factors are given by [15]
\[F_{\triangle}^{Q} =\tau_{Q}\left(1+(1-\tau_{Q})f(\tau_{Q})\right), g_{Q}^{h} =\frac{1}{v}, \tag{4}\] \[F_{\triangle}^{\phi_{q}^{i}} =-\frac{1}{2}\tau_{\phi_{q}^{i}}\left(1-\tau_{\phi_{q}^{i}}f(\tau_{\phi_{q}^{i}})\right), g_{\phi_{q}^{i}}^{h} =\frac{\lambda_{h\phi_{q}^{i}}v}{2m_{\phi_{q}^{i}}^{2}}, \tag{5}\]
with \(\tau_{X}=4m_{X}^{2}/m_{h}^{2}\) (\(X=Q,\phi_{q}^{i}\)) and \(f(\tau)\) defined as
\[f(\tau)=\left\{\begin{array}{ll}\arcsin\left(\frac{1}{\sqrt{\tau}}\right)^{ 2}&\tau\geq 1\\ -\frac{1}{4}\left[\log\left(\frac{1+\sqrt{1-\tau}}{1-\sqrt{1-\tau}}\right)-i \pi\right]^{2}&\tau<1\end{array}\right.. \tag{6}\]
In the limit of large masses the form factors approach a constant value,
\[\lim_{m_{Q}^{2}\rightarrow\infty}F_{\triangle}^{Q} = \frac{2}{3}\, \tag{7}\] \[\lim_{m_{\phi_{q}^{i}}^{2}\rightarrow\infty}F_{\triangle}^{\phi_{q}^{i}} = \frac{1}{6}\, \tag{8}\]
and therefore the large mass behaviour is determined solely by the coupling pre-factors \(g_{X}^{h}\). Consequently, for large masses, the scalar loop contribution to the amplitude is suppressed by a factor of \(1/m_{\phi_{q}^{i}}^{2}\). Because the quark Yukawa couplings are proportional to their masses, the quark loop contribution approaches a constant value for large masses. Thus, although this process can be used to determine how many heavy quarks are present in the model, the same is not true for the coloured scalars. In Fig. 2 we present \(f(\tau)\) as a function of \(\tau\) in the left plot and the quark and scalar form factors as a function of \(\tau\) in the right plot, which nicely shows that the two form factors approach constant values in the large mass limit.
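As a quick numerical illustration (our own sketch, independent of the HIGLU implementation), the following short Python snippet evaluates the loop function of Eq. (6) and the two form factors of Eqs. (4) and (5) for large \(\tau\), confirming the limits of Eqs. (7) and (8):

```python
# Numerical check of the large-mass limits in Eqs. (7)-(8): the quark form factor
# tends to 2/3 and the scalar form factor to 1/6.
import cmath

def f(tau):
    """Loop function of Eq. (6)."""
    if tau >= 1.0:
        return cmath.asin(1.0 / cmath.sqrt(tau)) ** 2
    x = cmath.sqrt(1.0 - tau)
    return -0.25 * (cmath.log((1.0 + x) / (1.0 - x)) - 1j * cmath.pi) ** 2

for tau in (1e2, 1e4, 1e6):
    F_quark  = tau * (1.0 + (1.0 - tau) * f(tau))    # Eq. (4)
    F_scalar = -0.5 * tau * (1.0 - tau * f(tau))     # Eq. (5)
    print(f"tau = {tau:.0e}:  F_quark = {F_quark.real:.4f},  F_scalar = {F_scalar.real:.4f}")
```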
Figure 1: Generic single Higgs production diagrams. (a) - SM quark loops; (b)/(c) - BDM coloured scalars loops.
### The LHC Production Cross Section
The calculation of the gluon fusion production cross section is performed at LO by implementing the new form factors for the coloured scalars (Eq. (5)) in the program HIGLU[16] which can be used to calculate the single Higgs production cross section at the LHC in the SM and in the Minimal Supersymmetric extension of the SM (MSSM). In the SM the \(gg\) initiated production is much larger than its quark counterpart making the latter negligible in SM-like models, such as the ones discussed in this work. We can therefore write the hadronic cross section as
\[\sigma(pp\to h)=\sigma_{0}^{h}\tau_{h}\frac{\mathrm{d}\mathcal{L}^{gg}}{ \mathrm{d}\tau_{h}},\qquad\qquad\sigma_{0}^{h}=\frac{\pi}{16m_{h}^{4}}\left| \mathcal{M}^{gg\to h}_{\triangle}\right|^{2}, \tag{9}\]
where \(\frac{\mathrm{d}\mathcal{L}^{gg}}{\mathrm{d}\tau_{h}}\) is the gluon luminosity and \(\tau_{h}=m_{h}^{2}/s\), with \(s\) denoting the total hadronic c.m. energy squared. In order to reduce the impact of the important higher-order (HO) effects we calculate the relative deviation of the new physics (NP) cross section in our model from the SM cross section, defined as
\[\delta_{h}=\frac{\sigma_{NP}-\sigma_{SM}}{\sigma_{SM}}. \tag{10}\]
We hence assume that the relative HO corrections to the new physics cross section in our model do not deviate significantly from those of the SM case which can safely be assumed for the QCD corrections2 while for the EW corrections3 this is not necessarily the case. The latter are, however, small compared to the QCD corrections. Using Eqs. (3-5) and Eq. (9), we can write \(\delta_{h}\) as
Footnote 2: The gluon fusion cross section is known at next-to-leading order (NLO) QCD including the full mass dependences [17, 18, 19, 20, 21, 22, 23, 24]. Within the heavy top-quark limit the next-to-next-to-leading order (NNLO) [25, 26, 27, 28, 29, 30] and next-to-next-to leading order (N\({}^{3}\)LO) [31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41] QCD corrections have been calculated. An explicit large top-mass expansion has estimated the missing quark-mass effects beyond NLO to be less than 1% [42, 43, 44, 45].
Footnote 3: The NLO EW corrections have been calculated in [46, 47, 48, 49, 50, 51, 52] and the mixed QCD-EW corrections in [53].
\[\delta_{h}=\frac{\left|\sum_{Q}F_{\triangle}^{Q}+v\sum_{\phi_{q}^{i}}g_{\phi_{q}^{i}}^{h}F_{\triangle}^{\phi_{q}^{i}}\right|^{2}-\left|\sum_{Q}F_{\triangle}^{Q}\right|^{2}}{\left|\sum_{Q}F_{\triangle}^{Q}\right|^{2}}=2v\sum_{\phi_{q}^{i}}g_{\phi_{q}^{i}}^{h}\;\mathrm{Re}\left[\frac{F_{\triangle}^{\phi_{q}^{i}}}{\sum_{Q}F_{\triangle}^{Q}}\right]+v^{2}\frac{\left|\sum_{\phi_{q}^{i}}g_{\phi_{q}^{i}}^{h}F_{\triangle}^{\phi_{q}^{i}}\right|^{2}}{\left|\sum_{Q}F_{\triangle}^{Q}\right|^{2}}. \tag{11}\]
For the following numerical analysis we include the bottom, charm and top quark loops in single Higgs production, while in double Higgs production only top and bottom quark loops
Figure 2: Left: \(f(\tau)\) as a function of \(\tau\); right: quark and scalar form factors as a function of \(\tau\).
are taken into account. We use the following input values for the Higgs, top, bottom and charm quark masses, respectively:
\[m_{h}=125\;{\rm GeV},\quad m_{t}=172.5\;{\rm GeV},\quad m_{b}=4.75\;{\rm GeV}, \quad m_{c}=1.43\;{\rm GeV}. \tag{12}\]
We use the LO pdfs NNPDF40_lo_as_01180 [54, 55] and the LO strong coupling constant
\[\alpha_{s}=0.118. \tag{13}\]
The cross sections are calculated for a c.m. energy of \(\sqrt{s}=14\) TeV. Note that the dependence on \(\sqrt{s}\) cancels out in \(\delta_{h}\).
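For orientation, a minimal numerical sketch of Eq. (11) is given below. This is our own illustration, not the HIGLU implementation; the portal coupling and scalar mass in the example call are arbitrary illustrative choices, and we take \(v\simeq 246\) GeV for the vacuum expectation value.

```python
# LO relative deviation delta_h of Eq. (11) for n coloured scalars,
# using the form factors of Eqs. (4)-(6) and the inputs of Eq. (12).
import cmath

MH, MT, MB, MC, V = 125.0, 172.5, 4.75, 1.43, 246.22  # GeV (V is the assumed SM vev)

def f(tau):
    if tau >= 1.0:
        return cmath.asin(1.0 / cmath.sqrt(tau)) ** 2
    x = cmath.sqrt(1.0 - tau)
    return -0.25 * (cmath.log((1.0 + x) / (1.0 - x)) - 1j * cmath.pi) ** 2

def F_quark(m):
    tau = 4.0 * m**2 / MH**2
    return tau * (1.0 + (1.0 - tau) * f(tau))                  # Eq. (4)

def F_scalar(m):
    tau = 4.0 * m**2 / MH**2
    return -0.5 * tau * (1.0 - tau * f(tau))                   # Eq. (5)

def delta_h(couplings, masses):
    """Eq. (11): portal couplings lambda_{h phi_q^i} and masses (GeV) of the scalars."""
    FQ = sum(F_quark(m) for m in (MT, MB, MC))                 # top, bottom, charm loops
    S  = sum(lam * V / (2.0 * m**2) * F_scalar(m)              # g^h_{phi_q^i} * F_triangle
             for lam, m in zip(couplings, masses))
    return (abs(FQ + V * S) ** 2 - abs(FQ) ** 2) / abs(FQ) ** 2

# e.g. one scalar with lambda = 1 and m = 1 TeV gives delta_h of roughly +1.6%
print(delta_h([1.0], [1000.0]))
```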
### Model with One Scalar versus a Model with Two Scalars
Let us start by considering the scenarios \(n=1\) (just one coloured scalar) and \(n=2\) (two coloured scalars). As already discussed, all scalar masses will be taken to be above 1 TeV. In the case \(n=1\) and considering here, for the sake of the discussion, only the top quark contribution (the bottom contribution is only at the percent level), the following simplified form for \(\delta_{h}\) is obtained
\[\delta_{h}=\lambda_{h\phi_{q}^{1}}\frac{v^{2}}{m_{\phi_{q}^{1}}^{2}}\left(\frac{F_{\triangle}^{\phi_{q}^{1}}}{F_{\triangle}^{Q}}\right)+\lambda_{h\phi_{q}^{1}}^{2}\frac{v^{4}}{4m_{\phi_{q}^{1}}^{4}}\left(\frac{F_{\triangle}^{\phi_{q}^{1}}}{F_{\triangle}^{Q}}\right)^{2}. \tag{14}\]
Any extension with more than one coloured scalar will have one more effective Higgs-scalar coupling \(\lambda_{h\phi_{q}^{i}}\) and one more scalar mass \(m_{\phi_{q}^{i}}\) for each new scalar added to the model. Thus, in order to simplify the presentation of the results, we impose the constraint of equal coloured scalar masses for any extension with more than one coloured scalar. As we will show later, for masses above 5 TeV the new physics contributions to the cross sections will be very small unless the number of scalars becomes very large. Hence the interesting mass range is in fact rather narrow. Note that in the plots presented later we will always include the bottom, charm and top contributions.
In the BDM models, the quartic coupling \(\lambda_{h\phi_{q}^{i}}\) that enters the calculation of the cross section is an effective coupling in the following sense: in the case \(n=1\) it is just the portal coupling between the Higgs and the singlet coloured scalar; for \(n=2\), the two effective couplings are the sum of combinations of three portal couplings (in the case of an \(SU(2)\) representation). In more detail, for \(n=1\) the coloured scalar is an \(SU(2)\) singlet and the portal coupling with the Higgs doublet can be written as
\[V_{\rm portal}^{n=1}=\lambda_{H\Phi_{q}}|H|^{2}|\Phi_{q}|^{2}\, \tag{15}\]
and the effective coupling takes the form
\[\lambda_{h\phi_{q}^{1}}=\lambda_{H\Phi_{q}}\,. \tag{16}\]
In the scenario \(n=2\) the coloured scalar is an \(SU(2)\) doublet and the portal couplings are now
\[V_{\rm portal}^{n=2}=\lambda_{H\Phi_{q}}|H|^{2}|\Phi_{q}|^{2}+\lambda_{H\Phi_ {q}}^{\prime}|H^{\dagger}\Phi_{q}|^{2}+y_{H\Phi_{q}}|H^{\dagger}i\sigma_{2} \Phi_{q}|^{2}, \tag{17}\]
which results in two effective couplings,
\[\lambda_{h\phi_{q}^{1}}=\lambda_{H\Phi_{q}}+\lambda_{H\Phi_{q}}^{\prime}\, \qquad\lambda_{h\phi_{q}^{2}}=\lambda_{H\Phi_{q}}+y_{H\Phi_{q}}. \tag{18}\]
We have also checked that the same applies to the triplet representation of \(SU(2)\)[56]. However, one should stress that what is relevant here is that we will discuss any type of model with an arbitrary number of scalars, each with an effective portal coupling and a given mass. The results can then be translated to any specific model of this kind.
Since all form factors are positive and strictly decreasing for \(m_{\phi^{i}_{q}}>1\) TeV, the highest contributions to the cross sections will be achieved when these form factors are at their highest value, corresponding to the lowest mass for all the scalars. Under the equal masses constraint (\(m_{\phi^{i}_{q}}=m_{\phi^{k}_{q}}\equiv m_{\phi_{q}}\Rightarrow F^{\phi^{i}_{q}}_{\triangle}=F^{\phi^{k}_{q}}_{\triangle}\equiv F^{\phi_{q}}_{\triangle}\)) we can write
\[\delta_{h}=\left(\sum_{i}\lambda_{h\phi^{i}_{q}}\right)\frac{v^{2}}{m_{\phi_{q}}^{2}}\left(\frac{F^{\phi_{q}}_{\triangle}}{F^{Q}_{\triangle}}\right)+\left(\sum_{i}\lambda_{h\phi^{i}_{q}}\right)^{2}\frac{v^{4}}{4m_{\phi_{q}}^{4}}\left(\frac{F^{\phi_{q}}_{\triangle}}{F^{Q}_{\triangle}}\right)^{2}. \tag{19}\]
With all masses equal, \(\delta_{h}\) is not sensitive to individual couplings but only to their total sum, \(\sum_{i}\lambda_{h\phi^{i}_{q}}\). Further, taking all couplings equal, \(\lambda_{h\phi^{i}_{q}}=\lambda_{h\phi^{k}_{q}}\equiv\lambda_{h\phi_{q}}\), we still cover the full range of possible values for \(\delta_{h}\) because for any particular choice of couplings \(\{\lambda_{h\phi^{1}_{q}},...,\lambda_{h\phi^{n}_{q}}\}\) there is always a single coupling \(\lambda_{h\phi_{q}}\) such that \(\sum_{i}\lambda_{h\phi^{i}_{q}}=n\lambda_{h\phi_{q}}\), which gives equivalent results for \(\delta_{h}\). With this approximation the coupling \(\lambda_{h\phi_{q}}\) will just be rescaled by a factor \(n\) when going from the case \(n=1\) to arbitrary \(n\).
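For instance, for \(n=3\) and equal masses, the coupling choices \(\{-1,\,2,\,5\}\) and \(\{2,\,2,\,2\}\) have the same sum and therefore yield the same \(\delta_{h}\).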
Before presenting the results we will discuss the allowed values for the couplings. As the upper bound we will consider the perturbativity bound of \(4\pi\). For the lower bound, one of the conditions for the potential of Eq. (1) to be bounded from below, following the same procedure as in [57], gives rise to the following constraint
\[\lambda_{h\phi^{i}_{q}}\geq-\frac{m_{h}}{v}\sqrt{2\lambda_{\phi^{i}_{q}}}\,, \tag{20}\]
where \(m_{h}\) is the SM Higgs boson mass and \(\lambda_{\phi^{i}_{q}}\) is the \(\phi^{i}_{q}\) quartic self coupling parameter that must be positive, \(\lambda_{\phi^{i}_{q}}\geq 0\), and obey the perturbativity bound of \(\lambda_{\phi^{i}_{q}}\leq 4\pi\). Therefore we will vary the relevant parameters \(\lambda_{h\phi^{i}_{q}}\) between the lower value given by Eq. (20) and the upper value \(4\pi\).
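Numerically, taking \(m_{h}=125\) GeV and \(v\simeq 246\) GeV and saturating the perturbativity bound \(\lambda_{\phi^{i}_{q}}=4\pi\) in Eq. (20), this range reads approximately
\[-\frac{m_{h}}{v}\sqrt{8\pi}\;\simeq\;-2.5\;\leq\;\lambda_{h\phi^{i}_{q}}\;\leq\;4\pi\;\simeq\;12.6\;.\]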
In Fig. 3 we present the results for \(\delta_{h}\) as a function of the effective portal coupling \(\lambda_{h\phi_{q}}\) for a mass of \(m_{\phi_{q}}=1\) TeV and for \(n=1\) and \(n=2\). The single Higgs cross section was calculated with HIGLU for a c.m. energy of 14 TeV resulting in a SM LO cross section of \(\sigma_{SM}^{h}=15.76\) pb for the above given input values. It is evident that \(\delta_{h}\) varies linearly with the effective coupling \(\lambda_{h\phi_{q}}\), which means that, in this range, the interference term between the SM and NP form factors is dominant. The large scalar masses we are working with and the fact that the interference term is proportional to \(1/m_{\phi_{q}}^{2}\) while the purely NP contributions are suppressed by a factor of \(1/m_{\phi_{q}}^{4}\) (cf. Eq. (14)) are the reason behind this behaviour.
In Fig. 4 we show the results for \(\delta_{h}\) as a function of the coloured scalar mass for the minimum value of the coupling (left) and the maximum value of the coupling (right) and for \(n=1\) and \(n=2\). Since, as argued above, the interference term is dominant, \(\delta_{h}\) behaves approximately as \(1/m_{\phi_{q}}^{2}\) for fixed \(\lambda_{h\phi_{q}}\). Over the allowed range of the parameters, the deviation relative to the SM lies between about \(-10\%\) and \(+40\%\).
The NP term only becomes comparable to the interference term in the limit
\[\lambda_{h\phi_{q}}=\frac{4m_{\phi_{q}}^{2}}{v^{2}}\frac{F_{\triangle}^{Q}}{F_{\triangle}^{\phi_{q}}}\;\xrightarrow{\;m_{Q}\rightarrow\infty\;}\;\frac{16m_{\phi_{q}}^{2}}{v^{2}}, \tag{21}\]
which means that for a mass of \(m_{\phi_{q}}=1\) TeV, \(\lambda_{h\phi_{q}}\approx 260\) for \(n=1\). As more scalars are added the picture can change. As the interference term scales with \(n\) and the NP term scales as \(n^{2}\), for a number of scalars above 20 and all masses equal to 1 TeV the NP term starts to dominate.
Figure 4: \(\delta_{h}\) as a function of the coloured scalar mass for the minimum value of the coupling (left) and the maximum value of the coupling (right) and for \(n=1\) and \(n=2\).
### Models with \(n\) Coloured Scalars
In the previous section we have set all masses to be equal. Relaxing this condition forces us to return to the more general expression given in Eq. (11). However, we can follow a different approach in order to simplify the final expression by taking advantage of the large scalar masses and using the limit for \(F_{\triangle}^{\phi^{i}_{q}}\) given in Eq. (8). For scalar masses above 1 TeV the error in \(F_{\triangle}^{\phi^{i}_{q}}\) by using this limit is only about 0.2%. With this approximation \(\delta_{h}\) can be written as
\[\delta_{h}=\frac{1}{\left|\sum_{Q}F_{\triangle}^{Q}\right|}\frac{v^{2}}{6} \sum_{i}\frac{\lambda_{h\phi^{i}_{q}}}{m^{2}_{\phi^{i}_{q}}}+\frac{1}{\left| \sum_{Q}F_{\triangle}^{Q}\right|^{2}}\frac{v^{4}}{144}\,\left(\sum_{i}\frac{ \lambda_{h\phi^{i}_{q}}}{m^{2}_{\phi^{i}_{q}}}\right)^{2}, \tag{22}\]
where we have \(\left|\sum_{Q}F_{\triangle}^{Q}\right|\approx 0.641\) when including the top, bottom and charm quarks. Including only the top quark and the limit in Eq. (7) would imply an error in \(\left|\sum_{Q}F_{\triangle}^{Q}\right|\) of around \(~{}4\%\). This approximation has the advantage of allowing us to write the results as a function of the ratio \(x_{i}=\lambda_{h\phi^{i}_{q}}/m^{2}_{\phi^{i}_{q}}\) where the index \(i\) represents each scalar4. It is now clear that we can show \(\delta_{h}\) as a function of the sum \((\sum_{i}x_{i})\). As previously discussed, as long as we span all possible values for this sum we will also have fully explored all values that \(\delta_{h}\) can take. In order to do this let us first note that the minimum and maximum of \((\sum_{i}x_{i})\) are achieved when all \(x_{i}\) are at their minimum and maximum values, respectively. Hence, to generate all values for the sum and consequently for \(\delta_{h}\), we can make the simple choice of \(x_{i}=x_{j}\) with the limits of \(\min(x_{i})=\min\left(\lambda_{h\phi^{i}_{q}}/m^{2}_{\phi^{i}_{q}}\right)=-(8 \pi)^{1/2}m_{h}/v\,\text{TeV}^{-2}\) and \(\max(x_{i})=\max\left(\lambda_{h\phi^{i}_{q}}/m^{2}_{\phi^{i}_{q}}\right)=4\pi\) TeV\({}^{-2}\) where we have considered \(\min(m_{\phi^{i}_{q}})=1\) TeV.
Footnote 4: This approximation is not strictly necessary. In the general case the ratio would be \(x_{i}=\lambda_{h\phi^{i}_{q}}\frac{F_{\triangle}^{\phi^{i}_{q}}}{m^{2}_{\phi^ {i}_{q}}}\). All conclusions in this section are only dependent on the fact that \(x_{i}\) decreases with mass, a behaviour present whether we use the approximation or not since \(F_{\triangle}^{\phi^{i}_{q}}\) approaches a constant value for large masses and \(\lambda_{h\phi^{i}_{q}}\) takes a constant value between its boundaries.
In Fig. 5 we show \(\delta_{h}\) as a function of \(\sum_{i}x_{i}\). The minimum and maximum limits for a model with \(n\) scalars are indicated by the coloured zones, where a minimum mass of 1 TeV is considered and the couplings are varied between their minimum and maximum allowed values. The horizontal lines represent the relative \(1\sigma\) experimental uncertainty of the measured cross section for Higgs production via gluon fusion. In the left plot the lines are taken from the ATLAS combination [58] at 13 TeV and 80 fb\({}^{-1}\), leading to \(\delta_{h}\in[-5,13]\%\). In the right plot we present just the case \(n=1\) for a better understanding of the bounds on \(\sum_{i}x_{i}\) for \(n=1\). Considering \(n=1\) we can see that approximately \(-3.2<\sum_{i}x_{i}<8\). This in turn means that for a mass of 1 TeV the coupling is also constrained to be \(-3.2<\lambda_{h\phi^{i}_{q}}<8\). Therefore the bounds are not very strong at the moment, but the upper bound is already better than the perturbativity limit. Still, as the mass grows the bound on the coupling gets weaker. For \(n>1\), if the couplings are all of the same order, the constraints will be stronger, again provided the masses are all of the order of 1 TeV. But there is always the possibility of having all couplings very small except one, recovering the \(n=1\) constraints for the larger coupling. Furthermore, if the couplings have different signs we end up with a larger freedom than for the case \(n=1\). These scenarios will have to be studied for the specific model in question using all other available information on the model.
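As a rough cross-check of these numbers (our own sketch, independent of HIGLU), one can invert the large-mass approximation of Eq. (22), treating the quoted \(1\sigma\) window as a window on the relative deviation:

```python
# Translate a 1-sigma window on delta_h into a window on sum_i x_i (x_i in TeV^-2),
# using the large-mass approximation of Eq. (22).
import math

V  = 0.24622   # vacuum expectation value in TeV (assumed input)
FQ = 0.641     # |sum_Q F_triangle^Q| with top, bottom and charm loops (from the text)
a  = V**2 / (6.0 * FQ)        # coefficient of the linear term in Eq. (22)
b  = V**4 / (144.0 * FQ**2)   # coefficient of the quadratic term in Eq. (22)

def sum_x(delta):
    """Root of b*S^2 + a*S - delta = 0 continuously connected to S = 0."""
    return (-a + math.sqrt(a**2 + 4.0 * b * delta)) / (2.0 * b)

# ATLAS window delta_h in [-5%, +13%]  ->  roughly -3.2 < sum_i x_i < 8
print(sum_x(-0.05), sum_x(0.13))
```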
In Fig. 6 we show the allowed values of \(\sum_{i}x_{i}\) (left) and \(\delta_{h}\) (right) at \(1\sigma\) and \(2\times 1\sigma\) using the present experimental limits from ATLAS [58], CMS [59], and the predictions for the future HL-LHC [60]. The predictions for the HL-LHC show that a precision of the order of \(\delta_{h}\in[-1.6,1.6]\%\) will be attained.
Figure 5: \(\delta_{h}\) as a function of \(\sum_{i}x_{i}\). The minimum and maximum limits for a model with \(n\) scalars are indicated by the coloured zones, where a minimum mass of \(m_{\phi_{q}^{i}}=1\) TeV is considered and the couplings are varied between the lower bound of \(-(8\pi)^{1/2}m_{h}/v\) and the perturbativity upper bound of \(4\pi\). In the left plot the horizontal lines are taken from the ATLAS combination [58] and show the \(1\sigma\) results for Higgs production via gluon fusion. In the right plot we present just the case \(n=1\) for a better understanding of the bounds on \(\sum_{i}x_{i}\) for \(n=1\).
Figure 6: Allowed values of \(\delta_{h}\) (left) and \(\sum_{i}x_{i}\) (right) at \(1\sigma\) and \(2\times 1\sigma\) using the present experimental limits from ATLAS [58], CMS [59], and the predictions for the future HL-LHC [60].
## 3 Double Higgs Production
Similar to the single Higgs case, the production of a pair of Higgs bosons is dominated by the gluon fusion process, which at LO is given by a triangle and a box diagram with heavy quarks running in the loop [61]. The new coloured scalars will contribute to di-Higgs production by similar loop diagrams. Due to the new 2 gluon-2 coloured scalars and 2 Higgs-2 coloured scalars couplings, however, there are now additional topologies that contribute to the process.
### The Leading-Order Amplitude
The complete set of diagrams is given by the ones involving the trilinear Higgs self-coupling, shown in Fig. 7, and diagrams that do not depend on it, depicted in Fig. 8. The new topologies arising in our model are given in Fig. 7 (b) and (c) and in Fig. 8 (b)-(e). As in the SM, we have triangle and box topologies and now additionally also a self-energy-like topology.
The LO amplitude can be decomposed into two different tensor structures, which correspond to total gluon spin 0 and 2, respectively, along the collision axis. They are given by [62]
\[A_{1}^{\mu\nu} =g^{\mu\nu}-\frac{p_{b}^{\mu}p_{a}^{\nu}}{(p_{a}p_{b})} \tag{23}\] \[A_{2}^{\mu\nu} =g^{\mu\nu}+\frac{1}{p_{T}^{2}(p_{a}p_{b})}\left[\left(p_{c}^{2} \right)\,p_{b}^{\mu}p_{a}^{\nu}-2\left(p_{b}p_{c}\right)\,p_{c}^{\mu}p_{a}^{\nu }-2\left(p_{a}p_{c}\right)\,p_{b}^{\mu}p_{c}^{\nu}+2\left(p_{a}p_{b}\right)\,p _{c}^{\mu}p_{c}^{\nu}\right] \tag{24}\]
with
\[A_{1}\cdot A_{1}=A_{2}\cdot A_{2}=2\,,\quad A_{1}\cdot A_{2}=0 \tag{25}\]
and
\[p_{T}^{2}=2\frac{\left(p_{a}p_{c}\right)\,\left(p_{b}p_{c}\right)}{\left(p_{a} p_{b}\right)}-p_{c}^{2}, \tag{26}\]
where \(p_{a,b}\) denote the four-momenta of the two incoming gluons, and \(p_{c,d}\) those of the outgoing Higgs bosons. The LO amplitude given by the diagrams in Fig. 7, which contain the trilinear Higgs self-coupling, can be cast in the form
\[\mathcal{M}_{hhh}^{gg\to hh}=\frac{g_{s}^{2}\,s}{16\pi^{2}}\;C_{\triangle}\,\left(\sum_{Q}g_{Q}^{h}F_{\triangle}^{Q}+\sum_{\phi_{q}^{i}}g_{\phi_{q}^{i}}^{h}F_{\triangle}^{\phi_{q}^{i}}\right)A_{1\mu\nu}\epsilon_{a}^{\mu}\epsilon_{b}^{\nu}\;\delta_{ab}\;, \tag{27}\]
Figure 7: Generic diagrams contributing to double Higgs production involving the trilinear Higgs self-coupling: (a) - SM quark loop; (b)/(c) - coloured scalars loop.
where
\[C_{\triangle}=\frac{3m_{h}^{2}/v}{s-m_{h}^{2}}\, \tag{28}\]
\(\epsilon_{a,b}^{\mu/\nu}\) represent the gluon polarisation vectors and \(g_{s}\) denotes the strong coupling constant. The first term in Eq. (27) corresponds to the first diagram and the second one to the last two diagrams.5 The form factors \(F_{\triangle}^{Q/\phi_{q}^{i}}\) and the couplings \(g_{Q/\phi_{q}^{i}}^{h}\) are given in Eqs. (4) and (5). The amplitude independent of the Higgs self-coupling can be written as
Footnote 5: In accordance with the FeynArts[63, 64] notation, we call triangle diagrams loops with three legs attached and box diagrams loops with four legs attached.
\[\mathcal{M}_{\text{no\ hhh}}^{gg\to hh}=\frac{g_{s}^{2}s}{16\pi^{2}}\ C_{\square}\Bigg{[}\sum_{Q}\Big{(}(g_{Q}^{h})^{2}F_{\square}^{Q}A_{1\mu\nu}+(g_{Q}^{h})^{2}G_{\square}^{Q}A_{2\mu\nu}\Big{)}\] \[+\sum_{\phi_{q}^{i}}\bigg{(}\bigg{(}(g_{\phi_{q}^{i}}^{h})^{2}F_{\square_{1}}^{\phi_{q}^{i}}+g_{\phi_{q}^{i}}^{hh}F_{\square_{2}}^{\phi_{q}^{i}}\bigg{)}\ A_{1\mu\nu}+(g_{\phi_{q}^{i}}^{h})^{2}G_{\square_{1}}^{\phi_{q}^{i}}A_{2\mu\nu}\bigg{)}\Bigg{]}\epsilon_{a}^{\mu}\epsilon_{b}^{\nu}\ \delta_{ab}\, \tag{29}\]
where \(C_{\square}=1\), the prefactors \(g_{Q/\phi_{q}^{i}}^{h}\) are given in Eqs. (4) and (5) and
\[g_{\phi_{q}^{i}}^{hh}=\frac{\lambda_{h\phi_{q}^{i}}}{2m_{\phi_{q}^{i}}^{2}}. \tag{30}\]
The quark form factors \(F_{\square}^{Q}\) and \(G_{\square}^{Q}\) corresponding to Fig. 8 (a), which have been calculated in the literature before (cf. e.g. [62]), are deferred to Appendix B, while the new form factors
Figure 8: Generic diagrams contributing to double Higgs production independent of the trilinear Higgs self-coupling. (a) - SM quark loop; (b-e) - coloured scalars loop.
are given here. The form factor \(F^{\phi^{i}_{q}}_{\square_{1}}\) sums the contributions of the diagrams in Figs. 8 (b) and (c) proportional to \(A^{\mu\nu}_{1}\), \(F^{\phi^{i}_{q}}_{\square_{2}}\) stems from the sum of the contributions of Figs. 8 (d) and (e), and \(G^{\phi^{i}_{q}}_{\square_{1}}\) is the sum of the contributions of Figs. 8 (b) and (c) proportional to \(A^{\mu\nu}_{2}\). They read explicitly6
Footnote 6: See also e.g. [65] and [66]. In the former paper, the authors focused on the impact of light coloured scalars on di-Higgs production while in the latter the effect of light coloured scalar leptoquarks was analysed.
\[G^{\phi_{q}}_{\square_{1}}=\frac{4m^{4}_{\phi_{q}}}{s}\left( \frac{1}{tu-m^{4}_{h}}\right)\left(s(t+u)C^{m^{2}_{\phi_{q}}}_{ab}+(2t)(t-m^{2 }_{h})C^{m^{2}_{\phi_{q}}}_{ac}+(2u)(u-m^{2}_{h})C^{m^{2}_{\phi_{q}}}_{bc}\right. \\ -(t^{2}+u^{2}-2m^{4}_{h})C^{m^{2}_{\phi_{q}}}_{cd}-(st^{2}+2m^{2} _{\phi_{q}}(tu-m^{4}_{h}))D^{m^{2}_{\phi_{q}}}_{bac}\] \[\left.\hskip 113.811024pt-(su^{2}+2m^{2}_{\phi_{q}}(tu-m^{4}_{h}))D ^{m^{2}_{\phi_{q}}}_{abc}-(2m^{2}_{\phi_{q}}(tu-m^{4}_{h}))D^{m^{2}_{\phi_{q}}} _{acb}\right) \tag{31}\]
\[F^{\phi_{q}}_{\square_{1}}=\frac{4m^{4}_{\phi_{q}}}{s}\Bigg{(} \frac{2}{s}(t-m^{2}_{h})C^{m^{2}_{\phi_{q}}}_{ac}+\frac{2}{s}(u-m^{2}_{h})C^{ m^{2}_{\phi_{q}}}_{bc}\\ -(2m^{2}_{\phi_{q}})(D^{m^{2}_{\phi_{q}}}_{abc}+D^{m^{2}_{\phi_{q} }}_{bac})-(2m^{2}_{\phi_{q}}+\frac{1}{s}(tu-m^{4}_{h}))D^{m^{2}_{\phi_{q}}}_{ acb}\Bigg{)} \tag{32}\]
where we have suppressed the index \(i\) for convenience, and
\[F^{\phi^{i}_{q}}_{\square_{2}}=F^{\phi^{i}_{q}}_{\triangle}\, \tag{33}\]
with the latter given in Eq. (5). The Mandelstam variables \(s,t,u\) and the scalar integrals \(C_{ij}\) and \(D_{ijk}\) are defined in the appendix.
### The Leading-Order Cross Section
The amplitude squared for the computation of the cross section can be separated into two different parts, one for each spin projection,7 so that the differential partonic cross section can be cast into the form
Footnote 7: The interference term vanishes as for the tensor structures \(A_{1}\) and \(A_{2}\) we have \(A_{1}\cdot A_{2}=0\).
\[\frac{d\hat{\sigma}^{hh}}{d\hat{t}}=\frac{G^{2}_{F}\alpha^{2}_{s}}{256(2\pi)^{ 3}}\left[|\mathcal{M}_{F}|^{2}+|\mathcal{M}_{G}|^{2}\right]\, \tag{34}\]
where \(G_{F}\) denotes the Fermi constant, \(\alpha_{s}\) the strong coupling constant, and \(\hat{t}\) the momentum transfer squared from one of the initial state gluons to one of the final state Higgs bosons. Each of the partial amplitudes \(\mathcal{M}_{F/G}\) contains only the terms constructed with the \(F/G\) form factors, respectively. Hence
\[\mathcal{M}_{F}\ =\ \sum_{Q}\Big{(}C_{\triangle}\,g^{h}_{Q}F^{Q}_{ \triangle}+C_{\square}\,(g^{h}_{Q})^{2}F^{Q}_{\square}\Big{)}+\sum_{\phi^{i}_{ q}}\left(C_{\triangle}\,g^{h}_{\phi^{i}_{q}}F^{\phi^{i}_{q}}_{\triangle}+C_{ \square}\,\left((g^{h}_{\phi^{i}_{q}})^{2}F^{\phi^{i}_{q}}_{\square_{1}}+g^{hh}_ {\phi^{i}_{q}}F^{\phi^{i}_{q}}_{\square_{2}}\right)\right) \tag{35}\]
\[\mathcal{M}_{G}\ =\ C_{\square}\left(\sum_{Q}(g^{h}_{Q})^{2}G^{Q}_{\square}+\sum_{\phi^{i}_{q}}(g^{h}_{\phi^{i}_{q}})^{2}G^{\phi^{i}_{q}}_{\square_{1}}\right). \tag{36}\]
The total cross section for \(hh\) production through gluon fusion at the LHC is obtained by integrating Eq. (34) over the scattering angle and the gluon luminosity,
\[\sigma(pp\to hh)=\int_{4m_{h}^{2}/s}^{1}\mathrm{d}\tau_{h}\frac{\mathrm{d} \mathcal{L}^{gg}}{\mathrm{d}\tau_{h}}\hat{\sigma}^{hh}(\hat{s}=\tau_{h}s) \tag{37}\]
where \(s\) is the c.m. energy at the LHC. The numerical evaluation of the total production cross section is performed at LO with the program HPAIR[62, 67] where we have implemented the new form factors. The Fortran code HPAIR was originally written for the SM and the MSSM and calculates the double Higgs production through gluon fusion at LO and NLO in the heavy quark limit.
Also for double Higgs production we present our results as a ratio with respect to the SM value in order to minimise the contribution of HO effects, that is \(\delta_{hh}\) is defined as
\[\delta_{hh}=\frac{\sigma_{NP}-\sigma_{SM}}{\sigma_{SM}}\,. \tag{38}\]
This assumes that the HO corrections in our model do not differ significantly from those of the SM, which is a rather good approximation for the QCD corrections8, whereas this is not necessarily the case for the EW corrections, for which at present only first partial results exist9 and which are expected to be less important. In contrast to single Higgs production we cannot find a simple analytic formula for this quantity due to the more involved form of the amplitudes, and consequently of the cross sections, and the dependence of the form factors on the c.m. energy.
Footnote 8: After first results in the heavy-top limit [67], the NLO QCD corrections including the full top quark mass dependence have been provided in [68, 69, 70, 71, 72]. The NNLO corrections have been obtained in the large \(m_{t}\) limit [73, 74], the results at next-to-next-to-leading logarithmic accuracy (NNLL) became available in [75, 76], and the corrections up to N\({}^{3}\)LO were presented in [77, 78, 79, 80] for the heavy top-mass limit. For a review of higher-order corrections to SM di-Higgs production, see e.g. [81].
Footnote 9: First results on the electroweak corrections have been provided in [82, 83, 84, 85].
### Phenomenological Analysis of the Cases \(n=1\) and \(n=2\)
Let us start with the simpler scenarios with one or two coloured scalars. The dependence of the form factors on the mass is not trivial. Since the NP contributions should decouple for very large masses, \(\delta_{hh}\) would eventually behave as a strictly decreasing function of the coloured scalar mass.
We will follow the same approach as for single Higgs production and choose all coloured scalar masses to be equal to 1 TeV. As for the couplings, while in single Higgs production with equal masses only the total sum of the couplings was relevant, in di-Higgs production the amplitude now depends on both \(\lambda_{h\phi_{q}^{k}}\) and \(\lambda_{h\phi_{q}^{k}}^{2}\) terms. For now we will impose the constraint of equal couplings for \(n=2\). In Fig. 9 we present \(\delta_{hh}\) as a function of the effective portal coupling \(\lambda_{h\phi_{q}}\) for a mass of \(m_{\phi_{q}}=1\) TeV and for \(n=1\) and \(n=2\). The double Higgs cross section was calculated with HPAIR for a c.m. energy of 14 TeV. As expected, for \(\lambda_{h\phi_{q}}=0\) the NP and SM LO cross sections coincide, where the SM LO cross section calculated with HPAIR amounts to 16.37 fb.
In Fig. 10 we now present \(\delta_{hh}\) as a function of the coloured scalar mass for the minimum (left) and maximum (right) value of the effective portal coupling \(\lambda_{h\phi_{q}}\) and for \(n=1\) and \(n=2\).10 As expected, the models share similar behaviours when the \(n=2\) case is reduced to a single coupling and mass under the equal-parameter constraints, which approximately doubles the deviation for two coloured scalars relative to the \(n=1\) scenario. Since \(\delta_{hh}\) depends generally on powers of \((\lambda_{h\phi_{q}})^{p}\) with \(1\leq p\leq 4\), this is an indication that the linear terms seem to be the most significant ones for these results: doubling the couplings approximately doubles the deviation. Linear terms can only originate from the diagrams proportional to \(\lambda_{h\phi_{q}}\) and their
Figure 10: \(\delta_{hh}=(\sigma_{NP}-\sigma_{SM})/\sigma_{SM}\) as a function of the coloured scalar mass for the minimum value of the coupling (left) and maximum value of the coupling (right) and for \(n=1\) (brown) and \(n=2\) (blue). The double Higgs cross section was calculated with HPAIR for a c.m. energy of 14 TeV.
interference with the SM ones. This is further supported by the observation that, when \(\lambda_{h\phi_{q}}>0\), the contributions to the Higgs pair production cross section are negative and, hence, odd powers of the coupling are involved. On the other hand, the shape of \(\delta_{hh}\) is clearly described by a non-linear function in \(\lambda_{h\phi_{q}}\). Contrary to what happened in single Higgs production, this is no longer necessarily a sign that the interference terms are insufficient to describe the results. This is due to the fact that a dependence on \(\lambda_{h\phi_{q}}^{2}\) can originate from either the square of the purely NP diagrams proportional to \(\lambda_{h\phi_{q}}\) (see diagrams 7b, 7c, 8d, 8e) or from the SM interference with the NP diagrams proportional to \(\lambda_{h\phi_{q}}^{2}\) (see diagrams 8b, 8c). The interference term depends on \(\sum_{k}(\lambda_{h\phi_{q}^{k}})^{2}\), while the term originating from squaring the NP diagrams depends on \((\sum_{k}\lambda_{h\phi_{q}^{k}})^{2}\). For \(n=1\) the two dependencies are identical while for \(n\geq 2\) the former represents an extra degree of freedom for \(\delta_{hh}\) for a fixed \(\sum_{k}\lambda_{h\phi_{q}^{k}}\). This is an important observation if we want to present the results as a function of the sum of the couplings, \(\sum_{k}\lambda_{h\phi_{q}^{k}}\), as we did in the single Higgs case.
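For instance, for \(n=2\) the choices \((\lambda,\lambda)\) and \((2\lambda,0)\) have the same sum \(\sum_{k}\lambda_{h\phi_{q}^{k}}=2\lambda\) but \(\sum_{k}(\lambda_{h\phi_{q}^{k}})^{2}\) equal to \(2\lambda^{2}\) and \(4\lambda^{2}\), respectively, and therefore lead in general to different values of \(\delta_{hh}\).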
HPAIR has further been altered with the option to turn on or off particular sets of diagrams. Naturally, we will separate the ones proportional to \(\lambda_{h\phi_{q}}\) and \(\lambda_{h\phi_{q}}^{2}\). We further separate the two pairs of diagrams 7b-7c and 8d-8e, since their form factors are the same as in single Higgs production. The sets of diagrams chosen serve the purpose of separating the contributions of the form factors \(F_{\triangle}^{\phi_{q}}\), \(F_{\square_{2}}^{\phi_{q}}\), which are linear in \(g_{\phi_{q}}^{h}\) and \(g_{\phi_{q}}^{hh}\), respectively, and \(F_{\square_{1}}^{\phi_{q}}\) and \(G_{\square_{1}}^{\phi_{q}}\), which are proportional to the squared coupling \((g_{\phi_{q}}^{h})^{2}\).
The results for \(n=2\) are presented in Fig. 11 for a fixed mass of 1 TeV as a function of the coupling (top), for the minimum coupling as a function of the mass (middle) and for the maximum coupling as a function of the mass (bottom). The left plots show the individual contributions and the interference terms while the right plots present how the individual contributions behave with the couplings (top) and with the mass (middle and bottom). The black line represents the sum of all contributions, while the coloured lines represent the individual coloured scalar form factor contributions, separated as indicated by the legend. Note that the SM contributions drop out in \(\delta_{hh}\). More specifically, the contributions denoted by the different colours are proportional to the following coloured form factors and couplings,
\[\begin{array}{lll}\mathrm{blue}/F_{\triangle}:&\sim\{F_{\triangle}^{\phi_{q}},|F_{\triangle}^{\phi_{q}}|^{2}\}&\sim\{G_{\phi_{q}}^{h},(G_{\phi_{q}}^{h})^{2}\}\\ \mathrm{red}/F_{\square_{2}}:&\sim\{F_{\square_{2}}^{\phi_{q}},|F_{\square_{2}}^{\phi_{q}}|^{2}\}&\sim\{G_{\phi_{q}}^{hh},(G_{\phi_{q}}^{hh})^{2}\}\\ \mathrm{green}/F_{\square_{1}}+G_{\square_{1}}:&\sim\{F_{\square_{1}}^{\phi_{q}},G_{\square_{1}}^{\phi_{q}},|F_{\square_{1}}^{\phi_{q}}|^{2},|G_{\square_{1}}^{\phi_{q}}|^{2}\}&\sim\{G_{\phi_{q}}^{h,2},G_{\phi_{q}}^{h,2},(G_{\phi_{q}}^{h,2})^{2},(G_{\phi_{q}}^{h,2})^{2}\}\\ \mathrm{violet}/F_{\triangle}\cdot F_{\square_{2}}:&\sim 2\mathrm{Re}(F_{\triangle}^{\phi_{q}}F_{\square_{2}}^{\phi_{q}*})&\sim G_{\phi_{q}}^{h}\cdot G_{\phi_{q}}^{hh}\\ \mathrm{orange}/(F_{\triangle}+F_{\square_{2}})\cdot F_{\square_{1}}:&\sim\{2\mathrm{Re}(F_{\triangle}^{\phi_{q}}F_{\square_{1}}^{\phi_{q}*}),2\mathrm{Re}(F_{\square_{1}}^{\phi_{q}}F_{\square_{2}}^{\phi_{q}*})\}&\sim\{G_{\phi_{q}}^{h}\cdot G_{\phi_{q}}^{h,2},G_{\phi_{q}}^{h,2}\cdot G_{\phi_{q}}^{hh}\}\end{array} \tag{39}\]
where we introduced the abbreviations
\[G_{\phi_{q}}^{h}\equiv\sum_{\phi_{q}^{i}}g_{\phi_{q}^{i}}^{h}\,\quad G_{\phi_{q}}^{hh} \equiv\sum_{\phi_{q}^{i}}g_{\phi_{q}^{i}}^{hh}\quad G_{\phi_{q}}^{h,2}\equiv \sum_{\phi_{q}^{i}}(g_{\phi_{q}^{i}}^{h})^{2}. \tag{40}\]
Note that \(G_{\phi_{q}}^{hh}\) and \(G_{\phi_{q}}^{h}\) only differ by a factor \(1/v\). The terms linear in the form factors of the blue, red and green contribution stem from the interference with the SM form factors. The violet and orange contributions (dashed lines) hence denote the interference terms between
Figure 11: \(\delta_{hh}\) for \(n=2\) and a fixed mass of 1 TeV as a function of the coupling (top), for the minimum coupling as a function of the mass (middle) and for the maximum coupling as a function of the mass (bottom). Left: individual coloured form factor contributions and interference terms. Right: dependence of the individual contributions on the couplings (top) and the masses (middle and bottom). Black line: sum of all contributions; coloured lines: individual coloured scalar form factor contributions, separated as indicated by the legend and described in Eq. (39). The dashed lines are for the interference terms. Grey dashed lines: asymptotic behaviour in the scenario where the interference terms with the SM are the dominant ones. The grey full line at 0 in all plots is there to guide the eyes.
the coloured contributions. The grey dashed lines in the right upper plot show the asymptotic behaviour in the coupling in the scenario where the interference terms with the SM are the dominant ones (where we generically denote by \(\lambda\) the couplings \(G_{\phi_{q}}^{h}\) (blue line) and \(G_{\phi_{q}}^{hh}\) (red line) and by \(\lambda^{2}\) the coupling \(G_{\phi_{q}}^{h,\,2}\) (green line)). We can infer from the plot that for masses of 1 TeV and higher, both the \(F_{\triangle}\) (blue line) and the \(F_{\square_{1}}+G_{\square_{1}}\) (green line) contributions are rather well described by only considering their interference with the SM form factors. The \(F_{\square_{2}}\) contribution (red line), however, is not well approximated by the interference with the SM contribution only. This observation is also confirmed by the middle and lower right plots which show the asymptotic behaviours in coloured masses for fixed coupling for the case that the interference term dominates.
We end this section by presenting in Fig. 12 the double Higgs corrections \(\delta_{hh}\) as a function of the scalar mass \(m_{\phi_{q}}\) for \(n=2\). The couplings are varied between the two extreme values as discussed previously. The new physics impact due to the additional coloured loops is below 10% already for a mass of 1 TeV and falls steeply with rising mass. Therefore the effect of only two extra coloured scalars will be extremely hard to probe even at the HL-LHC.
### Model \(n=2\) for Different Masses
We very briefly look at the implications of relaxing the condition of equal masses. For this, we calculated the full range of \(\delta_{hh}\) for ratios between the two masses of
\[m_{\phi_{q}^{2}}=\eta\ m_{\phi_{q}^{1}}\, \tag{41}\]
while scanning over all values for the couplings. The results for \(\delta_{hh}\), obtained with HPAIR, are displayed in Fig. 13. The plot is for \(n=2\), with \(\eta\) varied between 1 and 2, and we find similar conclusions to the ones discussed in the previous section. There, we found that, when increasing the two masses equally above 1 TeV, the range of values for \(\delta_{hh}\) would always shrink. Naturally, when increasing only one mass, we expect the same to happen, although the effect is milder, as can be seen in the figure. We have checked that for \(n=2\),
\[\delta_{hh}^{\rm max/min}(m_{\phi_{q}^{1}},m_{\phi_{q}^{2}})\approx\left[ \delta_{hh}^{\rm max/min}(m_{\phi_{q}^{1}},m_{\phi_{q}^{1}})+\delta_{hh}^{\rm max /min}(m_{\phi_{q}^{2}},m_{\phi_{q}^{2}})\right]/2. \tag{42}\]
Figure 12: \(\delta_{hh}\) as a function of the scalar mass \(m_{\phi_{q}}\) for \(n=2\). The couplings are varied between the two extreme values as discussed previously.
This is a consequence of the more general observation that
\[\delta_{hh}(m_{\phi_{q}^{1}},m_{\phi_{q}^{2}},\lambda_{h\phi_{q}})\approx\left[\delta_{hh}(m_{\phi_{q}^{1}},m_{\phi_{q}^{1}},\lambda_{h\phi_{q}})+\delta_{hh}(m_{\phi_{q}^{2}},m_{\phi_{q}^{2}},\lambda_{h\phi_{q}})\right]/2\,. \tag{43}\]
We can conclude that relaxing the equal masses condition will not result in any additional behaviour of note. Reducing one mass has the same effect as reducing both but with the obvious difference that the effect is less significant. Consequently, we will also not obtain a larger range of values for \(\delta_{hh}\) by adding this extra freedom. We can now extrapolate this conclusion for higher values of \(n\). This scenario will be discussed in the next section.
### Models with \(n\) Coloured Scalars
We finalise this section by looking in more detail at double Higgs production in the case of an arbitrary number of scalars. The parameter space comprises \(n\) effective couplings \(\lambda_{h\phi_{q}^{k}}\) to the Higgs boson and \(n\) scalar masses \(m_{\phi_{q}^{k}}^{2}\) (\(k=1,...,n\)), one for each of the coloured scalars, resulting in \(2n\) input parameters (\(\lambda_{k}\equiv\lambda_{h\phi_{q}^{k}}\) from now on). We again start with the condition of equal masses, \(m_{\phi_{q}^{k}}^{2}=m_{\phi_{q}^{l}}^{2}\equiv m_{\phi_{q}}^{2}\), reducing the number of input parameters to \(n+1\). As discussed in the previous section, this condition should be sufficient in order to fully explore \(\delta_{hh}\). For single Higgs production, this resulted in a simple dependence of the corrections on only the total sum of the couplings. In the case of Higgs pair production, the amplitude now contains both \(\lambda_{k}\) and \(\lambda_{k}^{2}\) terms and thus there are now two relevant quantities: the total sum of the couplings and the total sum of the squared couplings. Naturally, taking these two sums over the couplings as our parameters is advantageous, as it allows us to reduce the number of input parameters from \(n+1\) to only \(3\), in order to properly study \(\delta_{hh}\) for any model. The cases \(n=1,2\) have two and three independent input parameters, respectively, and were studied in the previous sections.
We will now proceed to write both the cross section \(\sigma_{hh}\) and the relative deviation from the SM cross section, \(\delta_{hh}\), as a function of the two effective quantities, \(\sum\lambda_{k}\) and \(\sum\lambda_{k}^{2}\). Assuming a
common fixed coloured mass \(m_{\phi_{q}}^{2}\), as we do from now on, we note that because single Higgs production only depends on \(\sum\lambda_{k}\), if one is able to write the relative deviations \(\delta_{h,hh}\) as a function of the same variable, the two results can be combined. Even under the simplification of equal masses, we have now \(\delta_{hh}\) as a function of two parameters, \(\delta_{hh}\equiv\delta_{hh}\left(\sum\lambda_{k},\sum\lambda_{k}^{2}\right)\). Therefore, the model limits are represented by a 2-dimensional region in the parameter space of these two sums. By taking the approach where we consider the sum \(\sum\lambda_{k}\) as the independent variable, the limits on this sum are easily obtained. Applying the same constraints, \(\lambda_{min}\leq\lambda_{k}\leq\lambda_{max}\), to all the couplings of a model with \(n\) coloured scalars, the sum of the couplings will be limited by
\[n\lambda_{min}\leq\sum\lambda_{k}\leq n\lambda_{max}. \tag{44}\]
As for the limits on \(\sum\lambda_{k}^{2}\) as a function of \(\sum\lambda_{k}\), we need to solve a constrained extremum problem: the extrema of \(\sum\lambda_{k}^{2}\) subject to the constraints \(\sum\lambda_{k}=c\) and \(\lambda_{min}\leq\lambda_{k}\leq\lambda_{max}\). Within the \(n\)-dimensional space of the individual couplings, \((\lambda_{1},\lambda_{2},...,\lambda_{n})\), the region of interest is an \((n-1)\)-dimensional hyperplane defined by \(\sum\lambda_{k}=c\), constrained by the \(n\)-dimensional hypercube resulting from the bounded couplings, \(\lambda_{min}\leq\lambda_{k}\leq\lambda_{max}\). For a fixed sum (\(\sum\lambda_{k}=c\)) the minimum of \(\sum\lambda_{k}^{2}\) is attained when the couplings are all equal,
\[\sum\lambda_{k}^{2}\geq\frac{c^{2}}{n}=\frac{\left(\sum\lambda_{k}\right)^{2} }{n}\, \tag{45}\]
which is just a Cauchy-Schwarz type of inequality.
The determination of the maximum is more involved. The solution is given by the edges of the hypercube or, more simply, by configurations where all but one coupling are fixed to \(\lambda_{min}\) or \(\lambda_{max}\). This can be cast in the form,
\[\sum_{k=1}^{n}\lambda_{k}^{2}\leq\sum_{j=0}^{n-1}\left\{\,\left[j \lambda_{max}^{2}+\left(n-1-j\right)\lambda_{min}^{2}+\left(\sum_{k=1}^{n} \lambda_{k}-\left(j\lambda_{max}+\left(n-1-j\right)\lambda_{min}\right) \right)^{2}\right]\,\times\\ \left[\Theta\left(\sum_{k=1}^{n}\tilde{\lambda}_{k}-j\right)- \Theta\left(\sum_{k=1}^{n}\tilde{\lambda}_{k}-\left(j+1\right)\right)\right] \,\right\}\,, \tag{46}\]
where \(\tilde{\lambda}_{k}=(\lambda_{k}-\lambda_{min})/(\lambda_{max}-\lambda_{min})\) and \(\Theta(x)\) is the Heaviside function. The derivation of this formula can be found in [86].
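A compact numerical sketch of this constrained extremum problem (our own illustration; the example values in the final line are arbitrary) is:

```python
# Extrema of sum_k lambda_k^2 at fixed sum_k lambda_k = c with lam_min <= lambda_k <= lam_max:
# the minimum is c^2/n (all couplings equal, Eq. (45)); the maximum is attained with all
# but one coupling at a boundary, which is what Eq. (46) encodes.
import math

def sum_sq_extrema(n, c, lam_min, lam_max):
    assert n * lam_min <= c <= n * lam_max, "sum outside the range of Eq. (44)"
    minimum = c**2 / n
    maximum = None
    for j in range(n):                                 # j couplings at lam_max, n-1-j at lam_min
        r = c - j * lam_max - (n - 1 - j) * lam_min    # value of the single remaining coupling
        if lam_min <= r <= lam_max:
            cand = j * lam_max**2 + (n - 1 - j) * lam_min**2 + r**2
            maximum = cand if maximum is None else max(maximum, cand)
    return minimum, maximum

# e.g. n = 4 couplings in [-2.5, 4*pi] with total sum 10
print(sum_sq_extrema(4, 10.0, -2.5, 4.0 * math.pi))
```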
In Fig. 14 we present an example of the region determined by the above conditions. We show the allowed regions for each model defined by the number \(n\) of coloured scalars. The left plot depicts the borders of the labelled regions for even values of \(n\). The odd values in-between are represented by a dashed grey line. The right plot focuses on the lower values of \(n\), representing all up to \(n=4\). Higher values are represented by grey dashed lines.
The next step is to calculate \(\delta_{hh}\) as a function of the two sums \(\sum\lambda_{k}\) and \(\sum\lambda_{k}^{2}\). This can be done by discretising the two variables in \(N\) points which would involve a computational time of \(\mathcal{O}(N^{2})\). We will instead present an approach that can recycle the previous results from Fig. 11 with \(n=2\) and a fixed mass, which can be computed in \(\mathcal{O}(N)\) time. We separate the individual contributions to \(\delta_{hh}\) into three components as follows:
\[\begin{array}{lll}\delta_{n,\,\text{eq}}^{\,A}(\lambda):&\sim\{F_{\triangle}^{\phi_{q}},|F_{\triangle}^{\phi_{q}}|^{2},F_{\square_{2}}^{\phi_{q}},|F_{\square_{2}}^{\phi_{q}}|^{2},2\text{Re}(F_{\triangle}^{\phi_{q}}F_{\square_{2}}^{\phi_{q}*})\}&\sim\{G_{\phi_{q}}^{h},(G_{\phi_{q}}^{h})^{2},G_{\phi_{q}}^{hh},(G_{\phi_{q}}^{hh})^{2},G_{\phi_{q}}^{h}\cdot G_{\phi_{q}}^{hh}\}\\ \delta_{n,\,\text{eq}}^{\,B}(\lambda):&\sim\{F_{\square_{1}}^{\phi_{q}},G_{\square_{1}}^{\phi_{q}},|F_{\square_{1}}^{\phi_{q}}|^{2},|G_{\square_{1}}^{\phi_{q}}|^{2}\}&\sim\{G_{\phi_{q}}^{h,2},G_{\phi_{q}}^{h,2},(G_{\phi_{q}}^{h,2})^{2},(G_{\phi_{q}}^{h,2})^{2}\}\\ \delta_{n,\,\text{eq}}^{\,C}(\lambda):&\sim\{2\text{Re}(F_{\triangle}^{\phi_{q}}F_{\square_{1}}^{\phi_{q}*}),2\text{Re}(F_{\square_{1}}^{\phi_{q}}F_{\square_{2}}^{\phi_{q}*})\}&\sim\{G_{\phi_{q}}^{h}\cdot G_{\phi_{q}}^{h,2},G_{\phi_{q}}^{h,2}\cdot G_{\phi_{q}}^{hh}\}\end{array} \tag{47}\]
Figure 14: Allowed regions for each model defined by the number \(n\) of coloured scalars. The left plot depicts the borders of the labelled regions for even values of \(n\). The odd values in-between are represented by a dashed grey line. Of note is that, for \(n=1\), the limits are not a region but just a single line (represented as a dashed red line). The right plot focuses on the lower values of \(n\), representing all up to \(n=4\). Higher values are represented by grey dashed lines.
Figure 15: \(\Sigma\lambda_{i}^{2}\) as a function of \(\Sigma\lambda_{i}\) with the value of \(\delta_{hh}\) in the colour bar. Left (right): \(m_{\phi_{q}}=1(2)\) TeV. Note that the colour scale is not the same in the two figures. The contours represent the allowed values for \(\delta_{hh}\) for each value of \(n\). The allowed region for \(n=5\) has not been represented as it would make the identification of the \(n=4\) and \(n=6\) regions more difficult.
where the label "eq" indicates the equal coupling condition (\(\lambda_{k}=\lambda_{l}\equiv\lambda\)) and there is only one independent parameter, \(\lambda\), due to this condition. We have already found that all three components can be significant and must be taken into account. The equivalence between these three components for a model with \(n\) couplings with an arbitrary model with \(n^{\prime}\) couplings, \(\{\lambda^{\prime}_{1},...,\lambda^{\prime}_{n^{\prime}}\}\), is given by the following formula:
\[\delta^{n^{\prime}}_{hh}(\{\lambda^{\prime}_{1},...,\lambda^{\prime}_{n^{ \prime}}\})=\delta^{A}_{n,\;\mathrm{eq}}(\lambda)\bigg{|}_{\lambda=\frac{1}{n }\sum\lambda^{\prime}_{k}}+\delta^{B}_{n,\;\mathrm{eq}}(\lambda)\bigg{|}_{ \lambda=\sqrt{\frac{1}{n}\sum\lambda^{\prime 2}_{k}}}+\delta^{C}_{n,\; \mathrm{eq}}(\lambda)\bigg{|}_{\lambda=\sqrt[3]{\left(\frac{1}{n}\sum\lambda^ {\prime}_{k}\right)\left(\frac{1}{n}\sum\lambda^{\prime 2}_{k}\right)}}\;\;. \tag{48}\]
In other words, the \(\delta^{n^{\prime}}_{hh}\) for a model with \(n^{\prime}\) couplings can be obtained from the results for a model with \(n\) equal couplings \(\lambda\), by taking the \(A\), \(B\), and \(C\) contributions at the \(\lambda\) values indicated by the vertical bars.
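Schematically, and assuming the three equal-coupling components \(\delta^{A,B,C}_{n,\,\mathrm{eq}}\) of Eq. (47) are available as functions of \(\lambda\) (for instance as interpolations of a one-dimensional HPAIR scan at a fixed common mass), Eq. (48) can be applied as in the following sketch, where the real (signed) cube root is used for negative arguments:

```python
# Recycle equal-coupling scans: combine delta_A, delta_B, delta_C of Eq. (47) according
# to Eq. (48) for an arbitrary set of portal couplings at a fixed common scalar mass.
import math

def delta_hh_arbitrary(couplings, n, delta_A, delta_B, delta_C):
    """couplings: the n' portal couplings of the model of interest;
    n: number of scalars of the equal-coupling reference scan behind delta_A/B/C."""
    s1 = sum(couplings)                        # sum_k lambda'_k
    s2 = sum(l * l for l in couplings)         # sum_k lambda'_k^2
    lam_A = s1 / n
    lam_B = math.sqrt(s2 / n)
    x = (s1 / n) * (s2 / n)
    lam_C = math.copysign(abs(x) ** (1.0 / 3.0), x)    # signed real cube root
    return delta_A(lam_A) + delta_B(lam_B) + delta_C(lam_C)
```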
In Fig. 15 we show \(\Sigma\lambda_{i}^{2}\) as a function of \(\Sigma\lambda_{i}\) with the value of \(\delta_{hh}\) in the colour bar. The left plot is for a coloured scalar mass of 1 TeV while the right plot is for 2 TeV. Note that the colour scale is not the same in the two figures. The contours represent the allowed values for \(\delta_{hh}\) for each value of \(n\). The comparison of the two plots shows that, as expected, the range of variation of \(\delta_{hh}\) decreases with increasing value of \(m_{\phi_{q}}\). Furthermore, the dependence of \(\delta_{hh}\) on \(\sum\lambda_{k}^{2}\) decreases with increasing coloured mass which is due to the fact that the terms proportional to \(\sum\lambda_{k}^{2}\) are suppressed by a factor of \(1/m_{\phi_{q}}^{4}\).
In Fig. 16 we now show \(\delta_{hh}\) as a function of the sum of couplings, \(\sum\lambda_{k}\). The range of variation is related to the freedom in \(\sum\lambda_{k}^{2}\) and was calculated with the results and model limits from Fig. 15. As the mass grows the term in \(\lambda_{k}\) becomes increasingly important and for a mass of 2 TeV the variation in \(\lambda_{k}^{2}\) almost vanishes. Therefore, for large masses the dependence for single and double Higgs production becomes very similar. Note that, since the interference is destructive for positive couplings in the case of double Higgs production, the maximum \(\delta_{hh}\) occurs for the smaller (negative) values of the couplings.
Finally, in Fig. 17 left (right) we present \(\delta_{hh}\) as a function of the number of scalars \(n\) for three (five) coloured scalar masses. The left plot shows the scenarios from \(n=1\) to \(n=10\) while
Figure 16: \(\delta_{hh}\) for a scalar mass of 1 TeV (left) and 2 TeV (right) as a function of \(\sum\lambda_{i}\). This encompasses the possible range from the freedom in \(\sum\lambda_{i}^{2}\) and was calculated with the results and model limits from Figure 15.
the right plot shows larger values of \(n\). For small \(n\) the deviations from the SM are small as we had seen before but they can be extremely large for very large values of \(n\), even if the coloured scalar masses are large.
Contrary to single Higgs production, the experimental limits on double Higgs production are very weak and at the moment unlikely to be useful in constraining the parameter space. The strongest observed limit on double Higgs production, reported by the ATLAS collaboration to be 6.9 times the SM cross section, is equivalent to a \(\delta_{hh}\) of 590% [87] for a c.m. energy of 13 TeV. With a mass of 1 TeV, this would constrain only \(n\gtrsim 45\). As for possible future improvements we can consider the HL-LHC projections [60]. For the \(hh\to b\bar{b}b\bar{b}\) channel a value as low as 1.6 times the SM cross section, equivalent to a \(\delta_{hh}\) of 60%, can be achieved, assuming that the overall uncertainty scales with the luminosity as \(1/\sqrt{L}\). This would bring the previous threshold value of \(n\) down to around 13. This means that certain combinations of the masses and the number of scalars will certainly be constrained by future measurements.
## 4 Single Higgs vs. Double Higgs Production
In the previous sections we have discussed in detail the contribution of an arbitrary number of coloured scalars to the single Higgs and di-Higgs production processes via gluon fusion at the LHC. We will now discuss the complementarity between the two processes. One should note, however, that although we expect a good precision in the measurement of the single Higgs process, this is not the case for di-Higgs production.
The first point to note is that the NP contribution to the single Higgs mode has a constructive interference for positive \(\lambda_{k}\) while for di-Higgs it is destructive. The reason for the positive interference term for \(\lambda_{k}>0\) is that both the SM and the NP form factors in single Higgs production are positive. For double Higgs production this is no longer the case. The reason behind this is the destructive interference between the \((F_{\triangle}^{Q/\phi_{q}},F_{\square_{2}}^{\phi_{q}})\) and \((F_{\square}^{Q},F_{\square_{1}}^{\phi_{q}})\) form factors. It is already well known that the SM triangle and box form factors interfere destructively as can be
Figure 17: \(\delta_{hh}\) as a function of the number of scalars \(n\) for three (left) and five (right) coloured scalar masses with \(n=1,...,10\) (left) and \(n=10,20,50,100\) (right).
read off from their values in the heavy quark limit, \(F_{\triangle}^{Q}=\frac{2}{3}+\mathcal{O}(m_{Q}^{-2})\) and \(F_{\square}^{Q}=-\frac{2}{3}+\mathcal{O}(m_{Q}^{-2})\). To understand why this also applies to our coloured scalars we can make use of the Low Energy Theorem as was done in [88, 89] (for squarks) to deduce the sign of \(F_{\square_{1}}^{\phi_{q}}\). By this theorem, \(F_{\square_{1}}^{\phi_{q}}\) is given by the derivative in mass of the term \(F_{\triangle}^{\phi_{q}}/m_{\phi_{q}}^{2}\). Since we already know that the triangle form factor for large scalar masses decreases with the mass, the sign of \(F_{\square_{1}}^{\phi_{q}}\) will be negative. Therefore the negative contributions for positive couplings we are observing are due to the interference terms of the NP form factors, \(F_{\triangle}^{\phi_{q}}\cdot F_{\square_{1}}^{\phi_{q}}\) and \(F_{\square_{2}}^{\phi_{q}}\cdot F_{\square_{1}}^{\phi_{q}}\), but also to the interference between SM and NP form factors, \(F_{\square}^{Q}\cdot F_{\triangle}^{\phi_{q}}\), \(F_{\square}^{Q}\cdot F_{\square_{2}}^{\phi_{q}}\) and \(F_{\triangle}^{Q}\cdot F_{\square_{1}}^{\phi_{q}}\). The remaining \(F\cdot F\) terms involving at least one NP form factor are positive. As for \(G_{\square}^{Q}\cdot G_{\square_{1}}^{\phi_{q}}\), its contribution to the amplitude is suppressed by \((1/m_{Q}^{2})\cdot(1/m_{\phi_{q}}^{6})\), where the latter factor stems from the \(G_{\square_{1}}^{\phi_{q}}\) dependence \(\sim 1/m_{\phi_{q}}^{2}\) multiplied by the coupling factor \((g_{\phi_{q}}^{h})^{2}\sim 1/m_{\phi_{q}}^{4}\).
In Fig. 18 we present \(\delta_{hh}\) (blue) and \(\delta_{h}\) (brown) as a function of the averaged coloured coupling \(\sum\lambda_{k}/n\) for \(n=1\) (left) and \(n=2\) (right). The mass of the coloured scalars has been chosen equal and set to 1 TeV. We note that with the chosen input values given above we obtain at \(\sqrt{s}=14\) TeV at LO for the SM the single Higgs cross section value \(\sigma_{SM}^{h}=15.76\) pb calculated with HIGLU including the bottom, charm and top quark loops, and the double Higgs cross section value \(\sigma_{SM}^{hh}=16.37\) fb calculated with HPAIR including the bottom and top quark loops. The complementarity between the dependence of \(\delta_{h}\) and \(\delta_{hh}\) w.r.t. the coupling \(\lambda_{k}\) is very clear from the figure. We also note that for \(n=1\) the \(\delta_{h}\) and \(\delta_{hh}\) values are lines while for \(n=2\) there is an allowed region for \(\delta_{hh}\) due to the additional dependence on \(\sum_{k}\lambda_{k}^{2}\). This leads to the observation that, with a single Higgs measurement very close to the SM value constraining \(\sum\lambda_{k}/n\) to small values, any significant excess of di-Higgs production would provide a strong indication that \(n\geq 2\).
We finalise this section with a plot (Fig. 19) where we show the region of the coloured mass versus the number of scalars that leads to a maximal deviation of 1% (black) or 0.1% (red) in \(\delta_{h}\) (left) and to a maximal deviation of 1% in \(\delta_{hh}\) (right) of single, respectively, double Higgs production from the corresponding SM value, while varying the couplings within their allowed theoretical bounds. This gives us a feeling for the region where it will not be possible to probe these models even in the long run. For double Higgs production we indicate the 1% region only,
Figure 18: \(\delta_{hh}\) (blue) and \(\delta_{h}\) (brown) as a function of the averaged coupling for \(n=1\) (left) and for \(n=2\) (right). For these plots only, the minimum coupling used was \(-4\pi\) instead of the previous bounded-from-below condition.
since the predictions for the HL-LHC are significantly above this threshold (\(\delta_{hh}^{\text{HL-LHC}}=60\%\)). For single Higgs production, the predictions (\(\delta_{h}^{\text{HL-LHC}}=1.6\%\)) indicate that a precision of 1% could be attainable. Thus we also present the 0.1% region in this case as the region that cannot be probed by experiment. As single Higgs production can be constrained more stringently (possibly up to 0.1%) than Higgs pair production, larger coloured masses can be probed in single than in di-Higgs production. Independently of the experimental precisions, the plots show that single Higgs production is more sensitive to coloured scalars than di-Higgs production. For \(n=10\), for example, a value of \(\delta_{h}=1\%\) probes masses of about 12 TeV, whereas \(\delta_{hh}=1\%\) probes masses of 5 TeV only. Finally, we have checked that the lower border of the regions where \(\delta=1\%\) or 0.1% follows the relationship \(n\propto m_{\phi_{q}}^{2}\) very closely. The large masses required for these low values of \(\delta\) ensure that the terms proportional to \(n/m_{\phi_{q}}^{2}\) are dominant, which is why this behaviour is observed.
## 5 Conclusions
We have calculated the relative changes \(\delta_{h}\) of SM single and \(\delta_{hh}\) of SM double Higgs production when including new heavy coloured scalars. Our calculations are based on the LO cross sections at the LHC using the Fortran codes HIGLU and HPAIR where we included our new physics contributions. We have found that for an arbitrary number of scalars and taking their masses to be equal \(\delta_{h}\) can be written as a function of only two variables, given by the sum of the couplings of the coloured particles to the Higgs boson, \(\sum_{i}\lambda_{i}\), and their masses \(m_{i}\). As for the double Higgs case, the \(\delta_{hh}\) dependence extends now to three variables, the extra variable being \(\sum_{i}\lambda_{i}^{2}\). We devised a way to find the limits on this new variable in terms of \(\sum_{i}\lambda_{i}\), again for equal masses. We have discussed the limits on these variables for single Higgs production, where the results already constrain some of the parameter space. For di-Higgs production the bounds are still very loose and we have to wait until the end of Run3 to hopefully get some bounds on the couplings.
We have shown that if we relax the condition of equal masses, the range of allowed values for \(\delta_{hh}\) is smaller than the range obtained by taking all masses equal to the smallest mass of the \(n\) coloured scalars. We have also shown that taking equal
Figure 19: Regions where \(\delta_{h}\) (left) and \(\delta_{hh}\) (right) fall below 1% as a function of the coloured scalar mass and the number of scalars. For the single Higgs production (left) the threshold of 0.1% has also been included. For these calculations the previous BFB condition from Eq. (20) was used.
couplings and performing a scan between their minimum and maximum values is sufficient to obtain the complete range for \(\delta_{hh}\).
Another important point to note is the complementarity between single and di-Higgs processes. Once the value of the coupling is fixed, the relative deviations from the SM move in different directions. That is, if \(\delta_{h}\) increases with the coupling then \(\delta_{hh}\) decreases, and vice-versa. The extra freedom of \(\delta_{hh}\) also provides another avenue for determining the number of scalars from observations. An excess of single or di-Higgs production could indicate the existence of \(n\geq 1\) coloured scalars. But an excess of di-Higgs production paired with a single Higgs deviation \(\delta_{h}\) close to zero would point to \(n>1\).
One final and very important point to note is that in direct searches for DM at the LHC we do not have access to the number of DM fields because we only look for missing energy associated with some SM particle. On the contrary, in our approach the number of fields is a variable that influences the results.
## HPAIR Extension to Coloured Scalars
In the following we present our implementation of the contributions from the coloured scalars to Higgs pair production at leading order in the code HPAIR. It has been made available at [90]. All changes of the original source code are contained entirely within \(\mathtt{hpair.fi}\). The code is compiled by using make and then running the executable run which will assume the input and output files \(\mathtt{hpair.in}\) and \(\mathtt{hpair.out}\) respectively. For the compilation process the LHAPDF libraries required by HPAIR must be supplied and their installation path indicated with the variable LIBS in the makefile.
The BDM input options are contained within the original input file of HPAIR, \(\mathtt{hpair.in}\). The following lines delimit where these new options are found:
```
156   ! BDM OPTIONS:
      (...)
190   ! BDM OPTIONS END
```
By setting the variable \(\mathtt{ibdm}\) to 1, the coloured scalar form factors are added to the SM amplitude. The type of model, characterised by the number of scalars \(n\), is selected with the variable \(\mathtt{ibdmtype}\). These are found in the following block:
```
159   ibdm     = 1       ! IF IBDM = 1 THEN THE NEW SCALARS DIAGRAMS WILL BE ADDED
161   ibdmtype = ...     ! SELECTS THE MODEL, I.E. THE NUMBER n OF COLOURED SCALARS
      (...)
181   full     = 1
182   triang   = 1
183   boxTri   = 1
184   boxQuad  = 0
```

The variable full is used to select whether all the NP form factors, as presented in Eqs. (27) and (29), are automatically included in the amplitude (full = 1) or not (full = 0). In the latter case, the following three variables triang, boxTri and boxQuad are used to determine which form factors are to be included in the calculations:
* triang =1: Will include the triangle form factor \(g^{h}_{\phi_{q}}F^{\phi_{q}}_{\triangle}\) (Eq. (5)) originating from the triangle diagrams (Fig. 7b-7c)
* boxTri =1: Will include the box form factors \((g^{h}_{\phi_{q}})^{2}G^{\phi_{q}}_{\square_{1}}\) and \((g^{h}_{\phi_{q}})^{2}F^{\phi_{q}}_{\square_{1}}\) (Eqs. (31-32)) originating from the box diagrams with the triple couplings between one Higgs and two coloured scalars (Fig. 8b-8c)
* boxQuad =1: Will include the box form factor \((g^{hh}_{\phi_{q}})F^{\phi_{q}}_{\square_{2}}\) (Eq. (5), as \(F^{\phi_{q}}_{\square_{2}}=F^{\phi_{q}}_{\triangle}\)) originating from the box diagrams with the quartic couplings between two Higgs and two coloured scalars (Fig. 8d-8e)

Some care in the formatting must be taken when changing the values of the parameters. The code uses the line numbers to identify the input parameters and thus they must be preserved. The names of the variables indicated in the input file have no impact. However, the number of characters before the equal sign must always be nine in total.
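For illustration only, a small Python sketch (not part of the HPAIR distribution; the parameter names are taken from the listing above) that writes parameter lines obeying this fixed-width rule:

```python
def format_hpair_line(name: str, value) -> str:
    """Pad the name field so that exactly nine characters precede the '=' sign.

    This only illustrates the formatting rule quoted above; it is not
    HPAIR code, and the names below are placeholders from the listing.
    """
    if len(name) > 9:
        raise ValueError("name too long for the nine-character field")
    return f"{name:<9}= {value}"

for name, value in [("ibdm", 1), ("full", 1), ("boxQuad", 0)]:
    line = format_hpair_line(name, value)
    assert line.index("=") == 9  # nine characters before the equal sign
    print(line)
```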
\[G_{\Box}^{Q}=\frac{2m_{Q}^{2}}{s} \left(\frac{1}{tu-m_{h}^{4}}\right)\left(\frac{s}{2}(t^{2}+u^{2}+2m_ {h}^{4}-8m_{Q}^{2}(t+u))C_{ab}^{m_{Q}^{2}}\right.\] \[+(t^{2}+m_{h}^{4}-8tm_{Q}^{2})(t-m_{h}^{2})C_{ac}^{m_{Q}^{2}}+(u^{ 2}+m_{h}^{4}-8um_{Q}^{2})(u-m_{h}^{2})C_{bc}^{m_{Q}^{2}}\] \[\left.\hskip 14.226378pt-\frac{1}{2}(t^{2}+u^{2}-2m_{h}^{4})(t+u- 8m_{Q}^{2})C_{cd}^{m_{Q}^{2}}\right.\] \[-(st(t^{2}/2+m_{h}^{4}/2-4tm_{Q}^{2})+m_{Q}^{2}(t+u-8m_{Q}^{2})(tu -m_{h}^{4}))D_{bac}^{m_{Q}^{2}}\] \[-(su(u^{2}/2+m_{h}^{4}/2-4um_{Q}^{2})+m_{Q}^{2}(t+u-8m_{Q}^{2})( tu-m_{h}^{4}))D_{abc}^{m_{Q}^{2}}\] \[\left.\hskip 14.226378pt-m_{Q}^{2}(t+u-8m_{Q}^{2})(tu-m_{h}^{4})D _{acb}^{m_{Q}^{2}}\right)\,, \tag{50}\]
where the Mandelstam variables \(s,t,u\) are defined as:
\[s=(p_{a}+p_{b})^{2}\,\quad t=(p_{a}-p_{c})^{2}\,\quad u=(p_{b}-p_{c})^{2}. \tag{51}\]
They also involve the following scalar integrals:
\[C_{ab}=\int\frac{d^{4}q}{i\pi^{2}}\frac{1}{\left(q^{2}-m_{X}^{2 }\right)\left((q+p_{a})^{2}-m_{X}^{2}\right)\left((q+p_{a}+p_{b})^{2}-m_{X}^{ 2}\right)}\, \tag{52}\] \[D_{abc}=\int\frac{d^{4}q}{i\pi^{2}}\frac{1}{\left(q^{2}-m_{X}^{ 2}\right)\left((q+p_{a})^{2}-m_{X}^{2}\right)\left((q+p_{a}+p_{b})^{2}-m_{X}^{ 2}\right)\left((q+p_{a}+p_{b}+p_{c})^{2}-m_{X}^{2}\right)}\, \tag{53}\]
where \(X\) stands for the quark or coloured scalar as appropriate. The exact formula for \(C_{ab}\) has been determined and is given by:
\[C_{ab}=-\frac{2}{s}f(\tau)\, \tag{54}\]
where \(f(\tau)\) is given in Eq. (6) and \(\tau=\frac{4m_{X}^{2}}{s}\).
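For a quick numerical evaluation of this closed form one only needs \(f(\tau)\); the sketch below assumes the standard one-loop expression for \(f(\tau)\) (Eq. (6) itself is not reproduced in this appendix), which is real above \(\tau=1\) and develops an imaginary part below it:

```python
import cmath
import math

def f_tau(tau: float) -> complex:
    """Assumed standard one-loop function f(tau): arcsin^2 above tau = 1, log form below."""
    if tau >= 1.0:
        return complex(math.asin(1.0 / math.sqrt(tau)) ** 2)
    root = math.sqrt(1.0 - tau)
    return -0.25 * (cmath.log((1.0 + root) / (1.0 - root)) - 1j * math.pi) ** 2

def C_ab(s: float, m_X: float) -> complex:
    """Scalar triangle integral in the closed form C_ab = -(2/s) f(tau), tau = 4 m_X^2 / s."""
    tau = 4.0 * m_X ** 2 / s
    return (-2.0 / s) * f_tau(tau)

# example: a 1 TeV loop particle at sqrt(s) = 500 GeV (tau > 1, so the result is real)
print(C_ab(500.0 ** 2, 1000.0))
```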
### Acknowledgements
PG is supported by the Portuguese Foundation for Science and Technology (FCT) with a PhD Grant No. 2022.11377.BD. PG, DN and RS are partially supported by FCT under Contracts no. UIDB/00618/2020, UIDP/00618/2020, PTDC/FIS-PAR/31000/2017 and CERN/FIS-PAR/0014/2019. The work of MM is supported by the DFG Collaborative Research Center TRR257 "Particle Physics Phenomenology after the Higgs Discovery".
|
2301.04021 | Need for "special" states in a deterministic theory of quantum mechanics | There are several theories or processes which may underlie quantum mechanics
and make it deterministic. Some references are given in the main text. Any such
theory, plus a number of reasonable assumptions, implies the existence of what
I have called ``special" states. The assumptions are conservation laws,
obedience (up to a point) of Schrodinger's equation, and a single world, in the
sense of the many worlds interpretation (the last one a consequence of any
deterministic theory). This article also, for clarity, gives an example of a
``special" state. There is an experimental test of the ``special" state theory. | L. S. Schulman | 2023-01-08T17:19:11Z | http://arxiv.org/abs/2301.04021v2 | # Need for "special" states in a deterministic theory of quantum mechanics
###### Abstract
There are several theories or processes which may underlie quantum mechanics and make it deterministic. Some references are given in the main text. Any such theory, plus a number of reasonable assumptions, implies the existence of what I have called "special" states. The assumptions are conservation laws, obedience (up to a point) of Schrodinger's equation, and a single world, in the sense of the many worlds interpretation (the last one a consequence of any deterministic theory). This article also, for clarity, gives an example of a "special" state. There is an experimental test of the "special" state theory.
## 1 Introduction
Determinism is a loophole in Bell's [1] ideas, which he was aware of. I unwittingly exploited it in 1984 [18] with what I will call the "special" state theory of measurement. (In Sec. 2 an example of a "special" state is given.) The present article reports new motivation for "special" states.
There have been quite a few attempts to find an underlying process that would make the Schrodinger equation deterministic. I am _not_ referring to Bohm's interpretation [2] or that of his followers. Rather I have in mind those theories which would restore determinism, such as (_not exclusively_) those of 'tHooft [22, 23], Palmer [16], De la Pena Auerbach and Cetto [13], Cavalleri et al. [5], Cufaro-Petroni and Vigier [6] and Marshall [15]. For at least some of these the Schrodinger equation is an approximation--a good approximation, but an approximation nevertheless. There has also been discussion about the consequences of determinism [22, 16, 12, 4, 11].
There is an experimental test of the "special" state theory, which, if successful, would lend credence to some of the theories advanced. If negative, it would be challenging to maintain determinism.
## 2 "Special" states
Most of this section is review. It may be skipped by those familiar with the kind of "special" states that I have in mind.
We take, as an example of a "special" state, a spin, initially pointing in the positive \(z\) direction with a 50% probability of overturning at some given time, say at 0.15 (since all is determined the time of observation is also fixed). Moreover, we don't deal with "registration" of the measurement; that will be accomplished by additional degrees of freedom.1 The Hamiltonian is
Footnote 1: The irreversible “registration” of the result of a measurement by the observer has been studied in many contexts. For example, in [9] the “measurement” is accompanied by the bath’s (not the same as the bath in the current Eq. (1)) changing in an irreversible fashion. Other models of measurement (e.g., [3, 10]) show the same feature. As a result, our considerations in the present article do not pursue the registration issue once the observer is coupled to the system, that coupling taking place (in our forthcoming example) at 0.15 time units.
\[H=\frac{\varepsilon}{2}\left(1+\sigma_{z}\right)+\omega a^{\dagger}a+\beta \sigma_{x}(a^{\dagger}+a)\,. \tag{1}\]
The Pauli matrices \(\sigma_{x}\) and \(\sigma_{z}\) are the operators for the 2-state spin system, \(a\) and \(a^{\dagger}\) are the boson operators and \(\varepsilon\), \(\beta\) and \(\omega\) are parameters.
_"Special" states_ are particular initial conditions of the bath such that the _microscopic_ final state of the spin is (either) _all_\(\mathfrak{up}\), \(\left(\begin{smallmatrix}e^{i\phi_{1}}\\ 0\end{smallmatrix}\right)\), or _all_\(\mathfrak{down}\), \(\left(\begin{smallmatrix}0\\ e^{i\phi_{2}}\end{smallmatrix}\right)\). "Final" refers to the time of measurement, namely when (even) more degrees of freedom are involved (we use parameter values \(\epsilon=0.5\), \(\omega=0.1\), \(\beta=0.6\) and a time of 0.15).
The system begins in all \(\mathfrak{up}\) and ordinarily at time 0.15 has equal probability of being up or down. As indicated, that is _not_ the case for these "special" initial conditions. If the probabilities are as in Fig. 1a, with fixed phases (not shown), then the system will be found in an \(\mathfrak{up}\) state. If the initial state is as shown in Fig. 1b (again with particular phases, not shown) then the system will be found in a down state.
There are three points to be raised: the first is what about residual amplitudes? The amplitude for (say) the \(\mathfrak{up}\) state is not perfect and for the given cutoff of the bosons at 250 is about \(10^{-4}\); the same is true for the state which is "fully" decayed. The second question has to do with Schrodinger's cat. And the third issue is how do you find these states?
Now \(10^{-4}\) is a big number, especially since the final state of one interaction is the initial state for the next. I could improve that number if I had better computer power, but I doubt if it could be zero. But it doesn't have to be zero! It only needs to be accurate as far as the Schrodinger equation has been checked. And I don't think it has been checked to \(10^{-12}\) (which I am reasonably confident I could get the discrepancy down to).
The second issue I mentioned is, what about Schrodinger cats? The (possibly) decaying spin could be the determinant of whether the poison is released.2 With "special" states the cat is either alive or dead. It should be noticed that there are only what 'tHooft calls "ontological" states [22, 23]. I believe "special" and "ontological" have the same meaning in this case.
Footnote 2: I assume familiarity with the Schrödinger cat paradox.
Finally, there is the question of how "special" states are found. You can define a projection operator (cf. [19]) on the spin: \(P\equiv\left(\psi_{\mathrm{up}}\psi_{\mathrm{up}}^{\dagger}\right)\otimes \mathbbm{1}_{\mathrm{boson\ bath}}\). Using this operator, the probability of being all \(\mathfrak{up}\) at time \(t\) is
\[\Pr(\mathfrak{up})=\langle\psi_{\mathrm{up}}\otimes\psi_{\mathrm{bath}}|U^{ \dagger}PU|\psi_{\mathrm{up}}\otimes\psi_{\mathrm{bath}}\rangle=\langle\psi_ {\mathrm{up}}\otimes\psi_{\mathrm{bath}}|PU^{\dagger}PPUP|\psi_{\mathrm{up}} \otimes\psi_{\mathrm{bath}}\rangle\,, \tag{2}\]
with \(U\equiv\exp\left(-iHt/\hbar\right)\) and where \(PP=P\) is used. Defining \(A\equiv PUP\) and using \(P^{\dagger}=P\), we have \(\Pr(\mathfrak{up})=\langle\psi_{\mathrm{up}}\otimes\psi_{\mathrm{bath}}|A^{ \dagger}A|\psi_{\mathrm{up}}\otimes\psi_{\mathrm{bath}}\rangle\). Defining \(B\equiv A^{\dagger}A\), it follows that the issue of whether any initial state (of the bath) can lead to a measurement of \(\mathfrak{up}\), using purely unitary time evolution
comes down to whether \(B\) has an eigenvalue equal to one. For a fully decayed state the corresponding eigenvalue (of \(B\)) must be zero. (Of course \(A\) and \(B\) are functions of \(t\), since \(U\) is.)
Remark: It also is true that the number of decay states and non-decay ("special") states is roughly equal at time \(0.15\).
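As an illustration of this procedure, the following Python sketch (a rough numerical check, assuming \(\hbar=1\) and using a boson cutoff of 60 rather than the 250 quoted above, purely to keep it small) builds the Hamiltonian of Eq. (1) with \(\varepsilon=0.5\), \(\omega=0.1\), \(\beta=0.6\), propagates to \(t=0.15\), and inspects the spectrum of \(B=A^{\dagger}A\) restricted to the spin-up block:

```python
import numpy as np

eps, omega, beta, t, N = 0.5, 0.1, 0.6, 0.15, 60   # parameters from the text; N is a reduced cutoff

sz = np.diag([1.0, -1.0])                 # spin-up is the first basis state
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # truncated boson annihilation operator

H = (eps / 2) * np.kron(np.eye(2) + sz, np.eye(N)) \
    + omega * np.kron(np.eye(2), a.T @ a) \
    + beta * np.kron(sx, a + a.T)

w, V = np.linalg.eigh(H)
U = (V * np.exp(-1j * w * t)) @ V.conj().T   # U = exp(-iHt), hbar = 1

U_upup = U[:N, :N]                 # <up| U |up> block, acting on the bath alone
B = U_upup.conj().T @ U_upup       # B = A^dagger A restricted to the spin-up sector

evals = np.linalg.eigvalsh(B)      # ascending order
print("largest eigenvalues (candidate non-decay 'special' bath states):", evals[-3:])
print("smallest eigenvalues (candidate decay 'special' bath states):   ", evals[:3])
```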
This is the main idea of the "special" state theory: no macroscopic superpositions because of particular initial conditions. There is also no entanglement. At time-\(0.15\) the spin state is wholly in one state or the other.
## 3 Determinism implies "special" states
The title of this section needs a bit of enhancement: you need a few more concessions to reality. Besides determinism you need conservation laws and Schrodinger's equation, at least to the extent that it's been checked. It is also understood that there is just one world. These rules, together with determinism, imply "special" states.
You start with a wave function describing some state, say a spin in a Stern-Gerlach experiment. Then it _must_ go to some particular outcome, say spin up. Presumably other coordinates were involved (such as the bosons in the above example) that fixed its outcome. The final state is definite. But the Schrodinger equation also holds. Therefore it could only have evolved to that final state. How can that be? There must have been a coordination of degrees of freedom in the initial state that forced it to its final form. That is, there must have been a "special" state.
## 4 Experiment
Finally, there is the matter of experiment. In [20] and [21] we have described in detail experimental tests of the "special" state theory. An example is the double Stern-Gerlach experiment
Figure 1: “Special” _time_-0 oscillator states. Figure (**a**) shows the (initial) probability of excitation of the oscillator states that contribute to the non-decay state. Only even states are shown, since the odd states have total amplitude zero. Phases of the states are not shown, but are also fixed by the non-decay condition. Figure (**b**) shows the probabilities for the state that decays; in this case (and for the same reason) only even oscillator states are shown. As in panel **a**, the phases, though not shown, are crucial to the “special” nature of the state.
([17, 8, 14, 7]) which requires the detection of a magnetic field of \(5\times 10^{-8}\) tesla in an environment of half a tesla, a challenging experiment. A firm absence of the small magnetic field would in my opinion spell the end of efforts to find a deterministic theory (but no-go theorems are made to be disproved).
## 5 Conclusions
You don't have to believe in any of the deterministic theories to reach the conclusion that "special" states are needed in any theory which is deterministic, goes from one "special" state into another, satisfies Schrodinger's equations (as far as has been measured), has a single world and satisfies conservation laws. You only have to believe that it's possible.
Three points are worth mentioning. First--and this is new--you don't need to eliminate "incorrect" choices (by "special" states) at the level of (say) \(10^{-12}\), since the Schrodinger equation has not been checked at that level. Second, there is an experimental test of the special state theory. Failure would eliminate deterministic theories (or leave people struggling for an explanation), while success would encourage attempts to find deterministic theories. Third, it may be that 'tHooft is right, and one should look to extremely small times and distances for theoretical support for determinism. However, given the fragmentary understanding of events at \(10^{-17}\:\mbox{cm}\) I'd be reluctant to make predictions about what happens at \(10^{-33}\:\mbox{cm}\).
|
2307.09853 | A proof of a conjecture of Mao on Beck's partition statistics modulo 8 | Beck introduced two partition statistics $NT(r,m,n)$ and
$M_{\omega}(r,m,n)$,which denote the total number of parts in the partition of
$n$ with rank congruent to $r$ modulo $m$ and the total number of ones in the
partition of $n$ with crank congruent to $r$ modulo $m$, respectively. In
recent years, a number of congruences and identities on $NT(r,m,n)$ and
$M_{\omega}(r,m,n)$ for some small $m$ have been established. In this paper, we
prove an identity on $NT(r,8,n)$ and $M_{\omega}(r,4,n)$ which confirms a
conjecture given by Mao. | Renrong Mao, Ernest X. W. Xia | 2023-07-19T09:21:51Z | http://arxiv.org/abs/2307.09853v1 | # A proof of a conjecture of Mao on Beck's partition statistics modulo \(8\)
# A proof of a conjecture of Mao on Beck's partition statistics modulo \(8\)
\({}^{1}\)Renrong Mao and \({}^{2}\)Ernest X.W. Xia
Department of Mathematics,
Soochow University,
Suzhou, 215006, People's Republic of China
\({}^{2}\)School of Mathematical Sciences,
Suzhou University of Science and Technology,
Suzhou, 215009, Jiangsu Province, P. R. China
Email: [email protected], [email protected]
**Abstract.** Beck introduced two partition statistics \(NT(r,m,n)\) and \(M_{\omega}(r,m,n)\), which denote the total number of parts in the partition of \(n\) with rank congruent to \(r\) modulo \(m\) and the total number of ones in the partition of \(n\) with crank congruent to \(r\) modulo \(m\), respectively. In recent years, a number of congruences and identities on \(NT(r,m,n)\) and \(M_{\omega}(r,m,n)\) for some small \(m\) have been established. In this paper, we prove an identity on \(NT(r,8,n)\) and \(M_{\omega}(r,4,n)\) which confirm a conjecture given by Mao.
**Keywords:** Beck's partition statistics, rank, crank, partition.
**AMS Subject Classification:** 11P81, 05A17
## 1. Introduction
A partition \(\pi=(\pi_{1},\pi_{2},\ldots,\pi_{k})\) of a positive integer \(n\) is a sequence of positive integers \(\pi_{1}\geq\pi_{2}\geq\cdots\geq\pi_{k}>0\) such that \(\pi_{1}+\pi_{2}+\cdots+\pi_{k}=n\). The \(\pi_{i}\) are called the parts of the partition [1]. In this paper, we shall write \(\pi\vdash n\) if \(\pi\) is a partition of \(n\). Let \(\#(\pi)\) and \(\lambda(\pi)\) denote the total number of parts of \(\pi\) and the largest part of \(\pi\), respectively. As usual, let \(p(n)\) denote the number of partitions of \(n\) and set \(p(0)=1\). The following three famous congruences for \(p(n)\) were discovered by Ramanujan [24]:
\[p(5n+4) \equiv 0\pmod{5},\] \[p(7n+5) \equiv 0\pmod{7},\] \[p(11n+6) \equiv 0\pmod{11}.\]
In order to explain the above three congruences combinatorially, two partition statistics, rank and crank, were defined by Dyson [13], and Andrews and Garvan [3], respectively. In 1944, Dyson [13] defined the rank of a partition to be the largest part minus the number of parts, i.e.,
\[\operatorname{rank}(\pi):=\lambda(\pi)-\#(\pi).\]
For example, the rank of the partition \(2+1+1+1\) is \(2-4=-2\). In 1988, Andrews and Garvan [3] defined the crank by
\[\operatorname{crank}(\pi):=\left\{\begin{array}{ll}\lambda(\pi),&\text{if } \omega(\pi)=0,\\ \mu(\pi)-\omega(\pi),&\text{if }\omega(\pi)>0,\end{array}\right.\]
where \(\omega(\pi)\) counts the number of ones in \(\pi\) and \(\mu(\pi)\) counts the number of parts larger than \(\omega(\pi)\). For example, the crank of the partition \(2+1+1+1\) is \(0-3=-3\) while the crank of the partition \(4+2+2\) is \(4\).
Recently, Andrews [2] mentioned that George Beck defined two partition statistics \(NT(r,m,n)\) and \(M_{\omega}(r,m,n)\), which count the total number of parts in the partition of \(n\) with rank congruent to \(r\) modulo \(m\) and the total number of ones in the partition of \(n\) with crank congruent to \(r\) modulo \(m\), respectively. Utilizing the results on rank differences obtained in [5], Andrews [2] proved the following interesting congruences conjectured by Beck:
\[\sum_{m=1}^{4}mNT(m,5,5n+1)\equiv\sum_{m=1}^{4}mNT(m,5,5n+4)\equiv 0\pmod{5}\]
and for \(i\in\{1,5\}\),
\[NT(1,7,7n+i) -NT(6,7,7n+i)+NT(2,7,7n+i)-NT(5,7,7n+i)\] \[-NT(3,7,7n+i)+NT(4,7,7n+i)\equiv 0\pmod{7}.\]
Motivated by Andrews' work, a number of identities and congruences on \(NT(r,m,n)\) and \(M_{\omega}(r,m,n)\) and their variations have been proved; see for example [9, 10, 11, 12, 14, 15, 17, 19, 20, 21, 22, 23, 27, 28]. Very recently, Mao [20] proved some identities on the total number of parts functions associated to ranks of overpartitions. At the end of his paper [20], Mao conjectured five identities on \(NT(r,m,n)\) and \(M_{\omega}(r,m,n)\) and three of them were proved by Jin, Liu and Xia [17], and Mao and Xia [23]. The remaining two conjectural identities are listed as follows.
**Conjecture 1.1**.: _For \(n\geq 0\),_
\[NT(2,8,4n)-NT(6,8,4n) =M_{\omega}(1,4,4n)-M_{\omega}(3,4,4n),\] \[NT(6,8,4n+2)-NT(2,8,4n+2) =M_{\omega}(1,4,4n+2)-M_{\omega}(3,4,4n+2).\]
The aim of this paper is to present a proof of the following theorem which implies Conjecture 1.1.
**Theorem 1.2**.: _For \(n\geq 0\),_
\[NT(2,8,2n)-NT(6,8,2n)=(-1)^{n}\left(M_{\omega}(1,4,2n)-M_{\omega}(3,4,2n) \right).\]
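Since both sides of Theorem 1.2 are finite sums over partitions, the identity can be spot-checked directly for small \(n\). The following brute-force Python sketch (a sanity check only, not part of the proof) enumerates partitions and applies the definitions of rank, crank, \(NT\) and \(M_{\omega}\) recalled above:

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples of positive integers."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def rank(p):                      # largest part minus number of parts
    return p[0] - len(p)

def crank(p):                     # Andrews-Garvan crank, as defined above
    ones = p.count(1)
    if ones == 0:
        return p[0]
    mu = sum(1 for part in p if part > ones)
    return mu - ones

def NT(r, m, n):                  # total number of parts over partitions with rank = r (mod m)
    return sum(len(p) for p in partitions(n) if p and rank(p) % m == r % m)

def M_omega(r, m, n):             # total number of ones over partitions with crank = r (mod m)
    return sum(p.count(1) for p in partitions(n) if p and crank(p) % m == r % m)

for n in range(9):                # check Theorem 1.2 for 2n = 0, 2, ..., 16
    lhs = NT(2, 8, 2 * n) - NT(6, 8, 2 * n)
    rhs = (-1) ** n * (M_omega(1, 4, 2 * n) - M_omega(3, 4, 2 * n))
    print(2 * n, lhs, rhs, lhs == rhs)
```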
## 2. The Generating Function for \(M_{\omega}(1,4,2n)-M_{\omega}(3,4,2n)\)
The aim of this section is to establish a generating function for \(M_{\omega}(1,4,2n)-M_{\omega}(3,4,2n)\).
Recall some \(q\)-series notations
\[(a)_{\infty}:=(a;q)_{\infty}: =\prod_{n=0}^{\infty}(1-aq^{n}),\] \[(a_{1},a_{2},\ldots,a_{k})_{\infty}:=(a_{1},a_{2},\ldots,a_{k};q) _{\infty}: =(a_{1};q)_{\infty}(a_{2};q)_{\infty}\cdots(a_{k};q)_{\infty},\] \[[a_{1},a_{2},\ldots,a_{k}]_{\infty}:=[a_{1},a_{2},\ldots,a_{k};q] _{\infty}: =(a_{1},q/a_{1},a_{2},q/a_{2},\ldots,a_{k},q/a_{k};q)_{\infty},\] \[J_{r,m}: =(q^{r},q^{m-r},q^{m};q^{m})_{\infty},\]
\[J_{m}:=(q^{m};q^{m})_{\infty}.\]
**Lemma 2.1**.: _We have_
\[\sum_{n\geq 0}(M_{\omega}(1,4,4n)-M_{\omega}(3,4,4n))q^{n}\] \[= \frac{1}{4J_{1}}A_{0}(q)B_{0}(q)(1-\varphi(q)^{2})+\frac{q}{J_{1} }\bigg{(}\frac{1}{4}A_{2}(q)B_{2}(q)(1-\varphi(q)^{2})-A_{2}(q)B_{1}(q)\psi(q) ^{2}\] \[+(A_{0}(q)B_{2}(q)+A_{2}(q)B_{0}(q))\psi(q^{2})^{2}\bigg{)}-\frac{ q^{2}}{J_{1}}A_{0}(q)B_{3}(q)\psi(q)^{2} \tag{2.1}\]
_and_
\[\sum_{n\geq 0}(M_{\omega}(1,4,4n+2)-M_{\omega}(3,4,4n+2))q^{n}\] \[= \frac{1}{4J_{1}}(A_{0}(q)B_{2}(q)+A_{2}(q)B_{0}(q))(\varphi(q)^{2 }-1)+\frac{1}{J_{1}}A_{0}(q)(B_{1}(q)\psi(q)^{2}-B_{0}(q)\psi(q^{2})^{2})\] \[-\frac{q}{J_{1}}A_{2}(q)B_{2}(q)\psi(q^{2})^{2}+\frac{q^{2}}{J_{1 }}A_{2}(q)B_{3}(q)\psi(q)^{2}, \tag{2.2}\]
_where_
\[A_{0}(q): =\frac{(q^{2},q^{6},q^{8};q^{8})_{\infty}}{(-q,-q^{7};q^{8})_{ \infty}},\ A_{2}(q):=\frac{(q^{2},q^{6},q^{8};q^{8})_{\infty}}{(-q^{3},-q^{5} ;q^{8})_{\infty}},\ B_{0}(q):=\frac{(q^{6},q^{10},q^{16};q^{16})_{\infty}}{(-q ^{3},-q^{13};q^{16})_{\infty}}, \tag{2.3}\] \[B_{1}(q): =\frac{(q^{2},q^{14},q^{16};q^{16})_{\infty}}{(-q,-q^{15};q^{16} )_{\infty}},\ B_{2}(q):=\frac{(q^{6},q^{10},q^{16};q^{16})_{\infty}}{(-q^{5},- q^{11};q^{16})_{\infty}},\ B_{3}(q):=\frac{(q^{2},q^{14},q^{16};q^{16})_{\infty}}{(-q^{7},- q^{9};q^{16})_{\infty}},\] (2.4) \[\varphi(q): =\sum_{n=-\infty}^{\infty}q^{n^{2}}=\frac{J_{2}^{5}}{J_{1}^{2}J_ {4}^{2}},\quad\psi(q):=\sum_{n=0}^{\infty}q^{n(n+1)/2}=\frac{J_{2}^{2}}{J_{1}}. \tag{2.5}\]
Proof.: In [23], Mao and Xia proved that
\[\sum_{n\geq 0}M_{\omega}(a,k,n)q^{n}= \frac{1}{k}\sum_{j=0}^{k-1}\zeta_{k}^{-aj}\frac{J_{1}}{(\zeta_{k} ^{j}q;q)_{\infty}(q/\zeta_{k}^{j};q)_{\infty}}\left(\sum_{n=1}^{\infty}\frac{ \zeta_{k}^{-j}q^{n}}{1-q^{n}\zeta_{k}^{-j}}-S(q)\right)\] \[= T(q)+\frac{1}{k}\sum_{j=1}^{k-1}\zeta_{k}^{-aj}\frac{J_{1}}{( \zeta_{k}^{j}q;q)_{\infty}(q/\zeta_{k}^{j};q)_{\infty}}\left(\sum_{n=1}^{ \infty}\frac{\zeta_{k}^{-j}q^{n}}{1-q^{n}\zeta_{k}^{-j}}-S(q)\right), \tag{2.6}\]
where \(\zeta_{k}=e^{2\pi\mathrm{i}/k}\) and
\[T(q):=\frac{q}{k(1-q)J_{1}},\qquad S(q):=\sum_{n=1}^{\infty}\frac{q^{n+1}}{1- q^{n+1}}. \tag{2.7}\]
It is easy to check that
\[\frac{J_{1}}{(\zeta_{4}^{j}q;q)_{\infty}(q/\zeta_{4}^{j};q)_{\infty}}=\begin{cases} \frac{J_{1}J_{2}}{J_{4}},&\text{if }j=1,3,\\ \frac{J_{1}^{3}}{J_{2}^{2}},&\text{if }j=2.\end{cases} \tag{2.8}\]
Moreover,
\[\sum_{n=1}^{\infty}\frac{\zeta_{4}^{-j}q^{n}}{1-q^{n}\zeta_{4}^{-j}}=\zeta_{4} ^{-j}\sum_{n=1}^{\infty}\frac{q^{n}}{1-q^{4n}}+\zeta_{4}^{-2j}\sum_{n=1}^{ \infty}\frac{q^{2n}}{1-q^{4n}}+\zeta_{4}^{-3j}\sum_{n=1}^{\infty}\frac{q^{3n}} {1-q^{4n}}+\sum_{n=1}^{\infty}\frac{q^{4n}}{1-q^{4n}}. \tag{2.9}\]
Setting \(k=4\) and \(a=1,3\) in (2.6) and employing (2.8) and (2.9), we deduce that
\[\sum_{n\geq 0}(M_{\omega}(1,4,n)-M_{\omega}(3,4,n))q^{n}= \frac{J_{1}J_{2}}{J_{4}}\sum_{n=1}^{\infty}\frac{q^{3n}-q^{n}}{1 -q^{4n}}=-\frac{J_{1}J_{2}}{J_{4}}\sum_{n=1}^{\infty}\frac{q^{n}}{1+q^{2n}}. \tag{2.10}\]
The following identity appears in Berndt's book [7, (3.2.9)]
\[\sum_{n=1}^{\infty}\frac{q^{n}}{1+q^{2n}}=\frac{1}{4}\left(\varphi(q)^{2}-1 \right), \tag{2.11}\]
where \(\varphi(q)\) is defined by (2.5). Combining (2.10) and (2.11) yields
\[\sum_{n\geq 0}(M_{\omega}(1,4,n)-M_{\omega}(3,4,n))q^{n}= \frac{1}{4}\frac{J_{1}J_{2}}{J_{4}}\left(1-\varphi(q)^{2}\right). \tag{2.12}\]
The following identity was proved by Xia and Yao [26, Lemma 3.2, (3.4)]
\[J_{2}=A_{0}(q^{4})-q^{2}A_{2}(q^{4}), \tag{2.13}\]
where \(A_{0}(q)\) and \(A_{2}(q)\) are defined by (2.3). Lewis [18, Corollary 6] proved that
\[J_{1}= B_{0}(q^{4})-qB_{1}(q^{4})-q^{2}B_{2}(q^{4})+q^{7}B_{3}(q^{4}), \tag{2.14}\]
where \(B_{0}(q)\), \(B_{1}(q)\), \(B_{2}(q)\) and \(B_{3}(q)\) are defined by (2.3) and (2.4). It follows from Entry 25 (v) and (vi) in Berndt's book [6, p. 40] that
\[\varphi(q)^{2}= \varphi(q^{2})^{2}+4q\psi(q^{4})^{2}\] \[= \varphi(q^{4})^{2}+4q\psi(q^{4})^{2}+4q^{2}\psi(q^{8})^{2}, \tag{2.15}\]
where \(\psi(q)\) is defined by (2.5). If we substitute (2.13), (2.14) and (2.15) into (2.12), then extract those terms in which the power of \(q\) is congruent to \(i\) (\(i=0,2\)) modulo \(4\), then divide by \(q^{i}\) and replace \(q^{4}\) by \(q\), we arrive at (2.1) and (2.2). This completes the proof of Lemma 2.1.
## 3. The Generating Function for \(NT(2,8,2n)-NT(6,8,2n)\)
In this Section, we establish the generating function for \(NT(2,8,2n)-NT(6,8,2n)\).
**Theorem 3.1**.: _We have_
\[\sum_{n=0}^{\infty}\left(NT(2,8,2n)-NT(6,8,2n)\right)q^{n}=R_{1}(q)+R_{2}(q), \tag{3.1}\]
_where_
\[R_{1}(q): =\left(\frac{[-q^{3};q^{8}]_{\infty}^{2}}{[-1;q^{8}]_{\infty}}-\frac{ q^{2}[-q;q^{8}]_{\infty}^{2}}{[-q^{4};q^{8}]_{\infty}}\right)\times\frac{[q^{2};q^{8}]_{ \infty}J_{8}^{3}}{2[-q^{2},q^{3};q^{8}]_{\infty}J_{1}^{2}}\]
_and_
\[R_{2}(q): =\left(\frac{[q^{2},-q^{3},-q^{3};q^{8}]_{\infty}}{2[-1,q,q,q,q,q^ {3};q^{8}]_{\infty}}-\frac{2[-q^{3},-q^{3},q^{4};q^{8}]_{\infty}}{[-1,-1,-q^{2},-q^{2},-q^{4};q^{8}]_{\infty}}\right.\] \[\qquad-\left.\frac{3q[q^{2},-q^{3}-q^{3};q^{8}]_{\infty}}{2[-1,q, q^{3},q^{3},q^{3};q^{8}]_{\infty}}-\frac{2q^{2}[-q,-q,q^{4};q^{8}]_{\infty}}{[-1,-q ^{2},-q^{2},-q^{4},-q^{4};q^{8}]_{\infty}}\right.\] \[\qquad-\left.\frac{q^{2}[-q,-q,q^{2};q^{8}]_{\infty}}{2[q,q,q,q^{ 3},-q^{4};q^{8}]_{\infty}}+\frac{3q^{3}[-q,-q,q^{2};q^{8}]_{\infty}}{2[q,q^{3},q^{3},q^{3},-q^{4};q^{8}]_{\infty}}\right)\times\frac{[q^{2};q^{8}]_{\infty}^ {3}J_{8}^{5}}{[-q^{2},q^{3};q^{8}]_{\infty}J_{1}^{2}}.\]
In order to prove Theorem 3.1, we first prove some lemmas.
**Lemma 3.2**.: _We have_
\[q[-q^{2};q^{16}]_{\infty}-[-q^{6};q^{16}]_{\infty} =-\frac{\left[q^{2},q^{4},q^{4},q^{6},q^{8};q^{16}\right]_{ \infty}J_{1}J_{16}}{J_{2}^{2}}, \tag{3.2}\] \[X\left(-q^{12};q^{16}\right) =\frac{1}{4}-\frac{[q^{4},q^{4},q^{8};q^{16}]_{\infty}J_{16}^{2}} {2[-q^{4};q^{16}]_{\infty}^{2}[-1,-q^{8};q^{16}]_{\infty}}, \tag{3.3}\]
_and_
\[X\left(q^{22};q^{16}\right) =\frac{7}{8}-\frac{3q^{2}[q^{4};q^{16}]_{\infty}^{3}J_{16}^{2}}{ 8[q^{6};q^{16}]_{\infty}^{3}[q^{2};q^{16}]_{\infty}}+\frac{[q^{4};q^{16}]_{ \infty}^{3}J_{16}^{2}}{8[q^{2};q^{16}]_{\infty}^{3}[q^{6};q^{16}]_{\infty}}, \tag{3.4}\]
_where_
\[X(a;q): =\sum_{n=0}^{\infty}\left(\frac{aq^{n}}{1-aq^{n}}-\frac{q^{n+1} /a}{1-q^{n+1}/a}\right). \tag{3.5}\]
Proof.: Equation (3.2) follows immediately from the two identities (see [4, Lemma 4.1] and [4, Eq.(5.5)], respectively):
\[\frac{\left(q^{16};q^{16}\right)_{\infty}}{\left(q^{2};q^{2}\right)_{\infty}^ {2}}\left(\left[-q^{6};q^{16}\right]_{\infty}+q\left[-q^{2};q^{16}\right]_{ \infty}\right)=\frac{1}{J_{1}} \tag{3.6}\]
and
\[\left[-q^{6};q^{16}\right]_{\infty}^{2}-q^{2}\left[-q^{2};q^{16} \right]_{\infty}^{2}=\left[q^{2},q^{4},q^{4},q^{6},q^{8};q^{16}\right]_{\infty}.\]
Recall [8, Eq. (3.2)]:
\[\frac{[ab,bc,ca]_{\infty}J_{1}^{2}}{[a,b,c,abc]_{\infty}}= 1+\sum_{k=0}^{\infty}\frac{aq^{k}}{1-aq^{k}}-\sum_{k=1}^{\infty} \frac{q^{k}/a}{1-q^{k}/a}+\sum_{k=0}^{\infty}\frac{bq^{k}}{1-bq^{k}} \tag{3.7}\] \[-\sum_{k=1}^{\infty}\frac{q^{k}/b}{1-q^{k}/b}+\sum_{k=0}^{\infty} \frac{cq^{k}}{1-cq^{k}}-\sum_{k=1}^{\infty}\frac{q^{k}/c}{1-q^{k}/c}\] \[-\sum_{k=0}^{\infty}\frac{abcq^{k}}{1-abcq^{k}}+\sum_{k=1}^{ \infty}\frac{q^{k}/abc}{1-q^{k}/abc}.\]
Replacing \(q\) by \(q^{16}\), setting \(a=b=-q^{4},c=-q^{8}\) and noting that
\[X(-q^{8};q^{16})=0,\ X(-q^{16};q^{16})=\frac{1}{2},\]
we obtain
\[\frac{1}{2}+2X\left(-q^{4};q^{16}\right)=\frac{[q^{4},q^{4},q^{8};q^{16}]_{ \infty}J_{16}^{2}}{[-q^{4};q^{16}]_{\infty}^{2}[-1,-q^{8};q^{16}]_{\infty}},\]
which together with \(X(-q^{4};q^{16})=-X(-q^{12};q^{16})\) gives (3.3).
Similarly, applying (3.7), we find that
\[1-3X\left(q^{22};q^{16}\right)-X\left(q^{-18};q^{16}\right) =\frac{[q^{-12};q^{16}]_{\infty}^{3}J_{16}^{2}}{[q^{-6};q^{16}]_{ \infty}^{3}[q^{-18};q^{16}]_{\infty}},\] \[4+3X\left(q^{-18};q^{16}\right)+X\left(q^{22};q^{16}\right) =\frac{[q^{-36};q^{16}]_{\infty}^{3}J_{16}^{2}}{[q^{-18};q^{16}] _{\infty}^{3}[q^{-54};q^{16}]_{\infty}}.\]
Then equation (3.4) follows.
Recall [21, Lemma 2.3]:
\[\frac{(q)_{\infty}^{2}}{[b_{1},b_{2},b_{3}]_{\infty}}\left[X(b_{1 };q)+X(b_{2};q)+X(b_{3};q)\right]\] \[=\frac{1}{[b_{2}/b_{1},b_{3}/b_{1}]_{\infty}}\sum_{n=-\infty}^{ \infty}\frac{(-1)^{n}b_{1}q^{3n(n+1)/2}}{(1-b_{1}q^{n})^{2}}\times\left(\frac{ b_{1}^{2}q}{b_{2}b_{3}}\right)^{n}\] \[\quad+\frac{1}{[b_{1}/b_{2},b_{3}/b_{2}]_{\infty}}\sum_{n=-\infty} ^{\infty}\frac{(-1)^{n}b_{2}q^{3n(n+1)/2}}{(1-b_{2}q^{n})^{2}}\times\left(\frac {b_{2}^{2}q}{b_{1}b_{3}}\right)^{n}\] \[\quad+\frac{1}{[b_{1}/b_{3},b_{2}/b_{3}]_{\infty}}\sum_{n=-\infty} ^{\infty}\frac{(-1)^{n}b_{3}q^{3n(n+1)/2}}{(1-b_{3}q^{n})^{2}}\times\left( \frac{b_{3}^{2}q}{b_{1}b_{2}}\right)^{n} \tag{3.8}\]
and
\[\frac{1}{2[b_{1},b_{2}]_{\infty}}\left\{\sum_{n=1}^{\infty}\frac{ -2q^{n}}{(1-q^{n})^{2}}+\mathcal{S}_{1}(b_{1},b_{2};q)\left[2-\mathcal{S}_{1} (b_{1},b_{2};q)\right]-\mathcal{S}_{2}(b_{1},b_{2};q)\right\}\] \[=\frac{1}{[b_{1},b_{2}]_{\infty}}\sum_{\begin{subarray}{c}n\,=- \infty\\ n\neq 0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{3n(n+1)/2}}{(1-q^{n})^{2}} \times\left(\frac{q}{b_{1}b_{2}}\right)^{n}\] \[\quad+\frac{1}{[b_{2}/b_{1},1/b_{1}]_{\infty}}\sum_{n=-\infty}^{ \infty}\frac{(-1)^{n}b_{1}q^{3n(n+1)/2}}{(1-b_{1}q^{n})^{2}}\times\left(\frac{ b_{1}^{2}q}{b_{2}}\right)^{n}\] \[\quad+\frac{1}{[b_{1}/b_{2},1/b_{2}]_{\infty}}\sum_{n=-\infty}^{ \infty}\frac{(-1)^{n}b_{2}q^{3n(n+1)/2}}{(1-b_{2}q^{n})^{2}}\times\left(\frac {b_{2}^{2}q}{b_{1}}\right)^{n}, \tag{3.9}\]
where
\[\mathcal{S}_{1}(b_{1},b_{2};q): =X(b_{1};q)+X(b_{2};q) \tag{3.10}\]
\[\mathcal{S}_{2}(b_{1},b_{2};q): =\sum_{n=0}^{\infty}\bigg{(}\frac{2b_{1}q^{n}-b_{1}^{2}q^{2n}}{(1-b _{1}q^{n})^{2}}+\frac{q^{2n+2}/b_{1}^{2}}{(1-q^{n+1}/b_{1})^{2}}+\frac{2b_{2}q^{ n}-b_{2}^{2}q^{2n}}{(1-b_{2}q^{n})^{2}}+\frac{q^{2n+2}/b_{2}^{2}}{(1-q^{n+1}/b_{2})^{ 2}}\bigg{)}. \tag{3.11}\]
Applying (3.8) and (3.9), we obtain the following.
**Lemma 3.3**.: _We have_
\[\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+n}}{(1-q ^{4n})^{2}}\] \[=-\frac{J_{1}}{J_{16}}\,\sum_{\begin{subarray}{c}n\,=\,-\infty \end{subarray}}^{\infty}\frac{(-1)^{n}q^{24n^{2}+72n+53}}{(1+q^{16n+22})^{2}}\] \[\quad+\frac{\big{[}X\left(q^{12};q^{16}\right)+X\left(-q^{22};q^ {16}\right)\big{]}\times\big{[}1-X\left(q^{12};q^{16}\right)-X\left(-q^{22};q ^{16}\right)\big{]}}{2}\] \[\quad-\sum_{n=-\infty}^{\infty}\bigg{(}\frac{q^{16n+12}}{2(1-q^{ 16n+12})^{2}}-\frac{q^{16n+22}}{2(1+q^{16n+22})^{2}}\bigg{)}-\sum_{n=1}^{ \infty}\frac{q^{16n}}{(1-q^{16n})^{2}}\] \[\quad-\frac{q^{3}[-q^{2};q^{16}]_{\infty}J_{16}^{2}}{[-q^{6},q^{ 8};q^{16}]_{\infty}}\times\big{[}X\left(q^{4};q^{16}\right)+X\left(-q^{22};q^ {16}\right)\big{]}\,, \tag{3.12}\]
\[\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+2n}}{(1-q ^{4n})^{2}}\] \[=\frac{J_{1}}{J_{16}}\,\sum_{\begin{subarray}{c}n\,=\,-\infty \end{subarray}}^{\infty}\frac{(-1)^{n}q^{24n^{2}+88n+75}}{(1+q^{16n+22})^{2}}\] \[\quad-\frac{\big{[}X\left(q^{12};q^{16}\right)+X\left(-q^{22};q^ {16}\right)-2\big{]}\times\big{[}X\left(q^{12};q^{16}\right)+X\left(-q^{22};q^ {16}\right)-1\big{]}}{2}\] \[\quad-\sum_{n=-\infty}^{\infty}\bigg{(}\frac{q^{16n+12}}{2(1-q^{ 16n+12})^{2}}-\frac{q^{16n+22}}{2(1+q^{16n+22})^{2}}\bigg{)}-\sum_{n=1}^{ \infty}\frac{q^{16n}}{(1-q^{16n})^{2}}\] \[\quad-\frac{q^{3}[-q^{2};q^{16}]_{\infty}J_{16}^{2}}{[-q^{6},q^{ 8};q^{16}]_{\infty}}\times\big{[}X\left(-q^{22};q^{16}\right)+X\left(q^{4};q ^{16}\right)-1\big{]}\,, \tag{3.13}\]
\[\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+9n}}{(1-q ^{4n})^{2}}\] \[=-\frac{J_{1}}{J_{16}}\,\sum_{\begin{subarray}{c}n\,=\,-\infty \end{subarray}}^{\infty}\frac{(-1)^{n}q^{24n^{2}+104n+97}}{(1+q^{16n+22})^{2}}\] \[\quad-\frac{\big{[}X\left(q^{12};q^{16}\right)+X\left(-q^{22};q^{16 }\right)-3\big{]}\times\big{[}X\left(q^{12};q^{16}\right)+X\left(-q^{22};q^{ 16}\right)-2\big{]}}{2}\]
\[-\sum_{n=-\infty}^{\infty}\left(\frac{q^{16n+12}}{2(1-q^{16n+12})^{2}} -\frac{q^{16n+22}}{2(1+q^{16n+22})^{2}}\right)-\sum_{n=1}^{\infty}\frac{q^{16n} }{(1-q^{16n})^{2}}\] \[-\sum_{n=-\infty}^{\infty}\left(\frac{q^{16n+12}}{2(1-q^{16n+12}) ^{2}}-\frac{q^{16n+22}}{2(1+q^{16n+22})^{2}}\right)-\sum_{n=1}^{\infty}\frac{q ^{16n}}{(1-q^{16n})^{2}}\] \[-\frac{q^{3}[-q^{2};q^{16}]_{\infty}J_{16}^{2}}{[-q^{6},q^{8};q^{ 16}]_{\infty}}\times\left[X\left(-q^{22};q^{16}\right)+X\left(q^{4};q^{16} \right)-2\right], \tag{3.14}\]
\[\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+10n}} {(1-q^{4n})^{2}}\] \[=\frac{J_{1}}{J_{16}}\sum_{n\,=\,-\infty}^{\infty}\frac{(-1)^{n} q^{24n^{2}+56n+31}}{(1+q^{16n+22})^{2}}\] \[-\frac{\left[X\left(q^{12};q^{16}\right)+X\left(-q^{22};q^{16} \right)\right]\times\left[X\left(q^{12};q^{16}\right)+X\left(-q^{22};q^{16} \right)+1\right]}{2}\] \[-\sum_{n=-\infty}^{\infty}\left(\frac{q^{16n+12}}{2(1-q^{16n+12} )^{2}}-\frac{q^{16n+22}}{2(1+q^{16n+22})^{2}}\right)-\sum_{n=1}^{\infty}\frac {q^{16n}}{(1-q^{16n})^{2}}\] \[-\frac{q^{3}[-q^{2};q^{16}]_{\infty}J_{16}^{2}}{[-q^{6},q^{8};q^{ 16}]_{\infty}}\times\left[X\left(-q^{22};q^{16}\right)+X\left(q^{4};q^{16} \right)+1\right], \tag{3.15}\]
\[\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+n}}{(1 +q^{4n})^{2}}\] \[=-\frac{J_{1}}{J_{16}}\sum_{n\,=\,-\infty}^{\infty}\frac{(-1)^{n} q^{24n^{2}+72n+53}}{(1-q^{16n+22})^{2}}\] \[\quad+\frac{[q^{4},-q^{6};q^{16}]_{\infty}J_{16}^{2}}{[-1,-q^{4}, q^{6};q^{16}]_{\infty}}\times\left[X\left(-q^{12};q^{16}\right)+X\left(q^{22};q^{16} \right)-\frac{1}{2}\right]\] \[\quad+\frac{q^{3}[-q^{2},q^{4};q^{16}]_{\infty}J_{16}^{2}}{[q^{6},-q^{8},-q^{4};q^{16}]_{\infty}}\times\left[X\left(-q^{12};q^{16}\right)-X \left(q^{22};q^{16}\right)\right], \tag{3.16}\]
\[\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+2n}} {(1+q^{4n})^{2}}\] \[=\frac{J_{1}}{J_{16}}\sum_{n\,=\,-\infty}^{\infty}\frac{(-1)^{n}q^ {24n^{2}+40n+15}}{(1-q^{16n+10})^{2}}\] \[\quad+\frac{[q^{4},-q^{6};q^{16}]_{\infty}J_{16}^{2}}{[-1,q^{6},- q^{4};q^{16}]_{\infty}}\times\left[\frac{3}{2}-X\left(-q^{12};q^{16}\right)-X \left(q^{22};q^{16}\right)\right]\] \[\quad-\frac{q^{3}[-q^{2},q^{4};q^{16}]_{\infty}J_{16}^{2}}{[q^{6},-q^{8},-q^{4};q^{16}]_{\infty}}\times\left[X\left(-q^{12};q^{16}\right)-X \left(q^{22};q^{16}\right)+1\right], \tag{3.17}\]
\[\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+9n}}{(1+q^{4n})^{2}}\] \[=-\frac{J_{1}}{J_{16}}\sum_{n\,=\,-\infty}^{\infty}\frac{(-1)^{n}q^{24n^{2}+8n-15}}{(1-q^{16n-10})^{2}}\] \[\quad+\frac{[q^{4},-q^{6};q^{16}]_{\infty}J_{16}^{2}}{[-1,q^{6},-q^{4};q^{16}]_{\infty}}\times\left[X\left(-q^{12};q^{16}\right)+X\left(q^{22};q^{16}\right)-\frac{5}{2}\right]\] \[\quad-\frac{q^{3}[-q^{2},q^{4};q^{16}]_{\infty}J_{16}^{2}}{[q^{6},-q^{8},-q^{4};q^{16}]_{\infty}}\times\left[-X\left(-q^{12};q^{16}\right)+X\left(q^{22};q^{16}\right)-2\right], \tag{3.18}\]
\[\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+10n}}{ (1+q^{4n})^{2}}\] \[=-\frac{J_{1}}{J_{16}}\sum_{n\,=\,-\infty}^{\infty}\frac{(-1)^{n} q^{24n^{2}+24n-13}}{(1-q^{16n-6})^{2}}\] \[\quad-\frac{[q^{4},-q^{6};q^{16}]_{\infty}J_{16}^{2}}{[-1,q^{6},- q^{4};q^{16}]_{\infty}}\times\left[X\left(-q^{12};q^{16}\right)+X\left(q^{22};q^{ 16}\right)+\frac{1}{2}\right]\] \[\quad-\frac{q^{3}[-q^{2},q^{4};q^{16}]_{\infty}J_{16}^{2}}{[q^{6},-q^{8},-q^{4};q^{16}]_{\infty}}\times\left[X\left(-q^{12};q^{16}\right)-X \left(q^{22};q^{16}\right)-1\right]. \tag{3.19}\]
Proof.: Split the series according to the summation index \(n\) modulo \(4\) to obtain
\[\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+n}}{(1-q^ {4n})^{2}} =\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{q^{24n^{2}+6n}}{(1-q^{16n})^{2}}-\sum_{ n\,=\,-\infty}^{\infty}\frac{q^{24n^{2}+18n+3}}{(1-q^{16n+4})^{2}}\] \[\quad+\sum_{n\,=\,-\infty}^{\infty}\frac{q^{24n^{2}+30n+9}}{(1-q^ {16n+8})^{2}}-\sum_{n\,=\,-\infty}^{\infty}\frac{q^{24n^{2}+42n+18}}{(1-q^{16n +12})^{2}}\] \[=:S_{0}-S_{1}+S_{2}-S_{3}. \tag{3.20}\]
Applying (3.8) with \((q,b_{1},b_{2},b_{3})\) replaced by \((q^{16},q^{4},q^{8},-q^{22})\), multiplying by
\[\frac{[q^{4},-q^{18};q^{16}]_{\infty}}{q}\]
on both sides of the resulting equation and simplifying yields
\[S_{1}-S_{2} =\frac{q^{3}[-q^{2};q^{16}]_{\infty}(q^{16};q^{16})_{\infty}^{2}} {[-q^{6},q^{8};q^{16}]_{\infty}}\times\left[X\left(q^{4};q^{16}\right)+X\left( -q^{22};q^{16}\right)\right]\] \[\quad+\frac{[q^{4};q^{16}]_{\infty}}{[-q^{2};q^{16}]_{\infty}}\sum _{n\,=\,-\infty}^{\infty}\frac{(-1)^{n}q^{24n^{2}+72n+53}}{(1+q^{16n+22})^{2}}. \tag{3.21}\]
Similarly, we apply (3.9) with \((q,b_{1},b_{2})\) replaced by \((q^{16},q^{12},-q^{22})\), multiply by
\[[q^{12},-q^{22};q^{16}]_{\infty}\]
on both sides of the resulting equation and simplify to obtain
\[S_{0}-S_{3} =\frac{\mathcal{S}_{1}(q^{12},-q^{22};q^{16})\left[2-\mathcal{S}_{1 }(q^{12},-q^{22};q^{16})\right]-\mathcal{S}_{2}(q^{12},-q^{22};q^{16})}{2}\]
\[-\sum_{n=1}^{\infty}\frac{q^{16n}}{(1-q^{16n})^{2}}+\frac{[q^{4};q^{16}]_{ \infty}}{[-q^{6};q^{16}]_{\infty}}\sum_{n\,=\,-\infty}^{\infty}\frac{(-1)^{n}q^{24 n^{2}+72n+54}}{(1+q^{16n+22})^{2}}. \tag{3.22}\]
By (3.11), we have
\[\mathcal{S}_{2}(q^{12},-q^{22};q^{16})\] \[=\sum_{n=0}^{\infty}\bigg{(}\frac{2q^{16n+12}-q^{32n+24}}{(1-q^{1 6n+12})^{2}}+\frac{q^{32n+8}}{(1-q^{16n+4})^{2}}-\frac{2q^{16n+22}-q^{32n+44}}{ (1+q^{16n+22})^{2}}+\frac{q^{32n-12}}{(1+q^{16n-6})^{2}}\bigg{)}.\]
Note that
\[\sum_{n=0}^{\infty}\bigg{(}\frac{2q^{16n+12}-q^{32n+24}}{(1-q^{16 n+12})^{2}}+\frac{q^{32n+8}}{(1-q^{16n+4})^{2}}\bigg{)}\] \[=\sum_{n=0}^{\infty}\bigg{(}\frac{q^{16n+12}}{(1-q^{16n+12})^{2}} +\frac{q^{16n+12}-q^{32n+24}}{(1-q^{16n+12})^{2}}+\frac{q^{32n+8}-q^{16n+4}}{( 1-q^{16n+4})^{2}}+\frac{q^{16n+4}}{(1-q^{16n+4})^{2}}\bigg{)}\] \[=\sum_{n=-\infty}^{\infty}\frac{q^{16n+12}}{(1-q^{16n+12})^{2}}+X \left(q^{12};q^{16}\right).\]
With a similar argument, one can verify that
\[\sum_{n=0}^{\infty}\bigg{(}\frac{q^{32n-12}}{(1+q^{16n-6})^{2}}- \frac{2q^{16n+22}-q^{32n+44}}{(1+q^{16n+22})^{2}}\bigg{)}\] \[=X\left(-q^{22};q^{16}\right)-\sum_{n=-\infty}^{\infty}\frac{q^{1 6n+22}}{(1+q^{16n+22})^{2}}.\]
Then
\[\mathcal{S}_{2}(q^{12},-q^{22};q^{16})\] \[=X\left(-q^{22};q^{16}\right)+X\left(q^{12};q^{16}\right)+\sum_{n =-\infty}^{\infty}\bigg{(}\frac{q^{16n+12}}{(1-q^{16n+12})^{2}}-\frac{q^{16n+ 22}}{(1+q^{16n+22})^{2}}\bigg{)}\,. \tag{3.23}\]
Substituting (3.23) into (3.22), invoking (3.10) and simplifying gives
\[S_{0}-S_{3} =\frac{[q^{4};q^{16}]_{\infty}}{[-q^{6};q^{16}]_{\infty}}\sum_{n \,=\,-\infty}^{\infty}\frac{(-1)^{n}q^{24n^{2}+72n+54}}{(1+q^{16n+22})^{2}}- \sum_{n=1}^{\infty}\frac{q^{16n}}{(1-q^{16n})^{2}}\] \[\quad+\frac{\left[X\left(q^{12};q^{16}\right)+X\left(-q^{22};q^{1 6}\right)\right]\times\left[1-X\left(q^{12};q^{16}\right)-X\left(-q^{22};q^{16 }\right)\right]}{2}\] \[\quad-\sum_{n=-\infty}^{\infty}\bigg{(}\frac{q^{16n+12}}{2(1-q^{1 6n+12})^{2}}-\frac{q^{16n+22}}{2(1+q^{16n+22})^{2}}\bigg{)}\,. \tag{3.24}\]
Substitute (3.21) and (3.24) into (3.20) and rearrange to obtain
\[\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+n}}{(1-q^ {4n})^{2}}\] \[=\bigg{(}\frac{q[q^{4};q^{16}]_{\infty}}{[-q^{6};q^{16}]_{\infty} }-\frac{[q^{4};q^{16}]_{\infty}}{[-q^{2};q^{16}]_{\infty}}\bigg{)}\sum_{n\,=\,- \infty}^{\infty}\frac{(-1)^{n}q^{24n^{2}+72n+53}}{(1+q^{16n+22})^{2}}\]
\[+\frac{\left[X\left(q^{12};q^{16}\right)+X\left(-q^{22};q^{16} \right)\right]\times\left[1-X\left(q^{12};q^{16}\right)-X\left(-q^{22};q^{16} \right)\right]}{2}\] \[-\sum_{n=-\infty}^{\infty}\left(\frac{q^{16n+12}}{2(1-q^{16n+12} )^{2}}-\frac{q^{16n+22}}{2(1+q^{16n+22})^{2}}\right)-\sum_{n=1}^{\infty} \frac{q^{16n}}{(1-q^{16n})^{2}}\] \[-\frac{q^{3}[-q^{2};q^{16}]_{\infty}(q^{16};q^{16})_{\infty}^{2}} {[-q^{6},q^{8};q^{16}]_{\infty}}\times\left[X\left(q^{4};q^{16}\right)+X\left(- q^{22};q^{16}\right)\right].\]
This together with (3.2) implies (3.12).
Proceeding with the same steps as in the foregoing proof, we can get (3.13)-(3.19).
We are now in a position to prove Theorem 3.1.
Proof of Theorem 3.1.: Lemma 2.1 of [21] gives that, for \(1\leq b\leq k-1\),
\[\sum_{n=0}^{\infty}\left(NT(b,k,n)-NT(k-b,k,n)\right)q^{n}\] \[=\frac{k}{J_{1}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+(b-1)n}(1 -q^{n})}{(1-q^{kn})^{2}}\] \[\quad-\frac{k-b}{J_{1}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+(b-1)n}(1 -q^{n})}{1-q^{kn}}. \tag{3.25}\]
Setting \((b,k)=(2,8)\) in (3.25), one obtains
\[\sum_{n=0}^{\infty}\left(NT(2,8,n)-NT(6,8,n)\right)q^{n}\] \[=\frac{8}{J_{1}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+n}(1-q^{ n})}{(1-q^{8n})^{2}}-\frac{6}{J_{1}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+n}(1-q^{ n})}{1-q^{8n}}\] \[=\frac{1}{J_{1}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+n}(1-q^{ n})\left\{8-6(1-q^{8n})\right\}}{(1-q^{8n})^{2}}.\]
Invoking
\[\frac{4}{(1-q^{8n})^{2}}=\frac{2-q^{4n}}{(1-q^{4n})^{2}}+\frac{2+q^{4n}}{(1+q^ {4n})^{2}}\]
and simplifying, we find that
\[\sum_{n=0}^{\infty}\left(NT(2,8,n)-NT(6,8,n)\right)q^{n}\]
\[=\frac{1}{2J_{1}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}}\left\{2q^ {n}-2q^{2n}-7q^{5n}+7q^{6n}+3q^{9n}-3q^{10n}\right\}}{(1-q^{4n})^{2}}\] \[\quad+\frac{1}{2J_{1}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}}\left\{2q ^{n}-2q^{2n}-5q^{5n}+5q^{6n}-3q^{9n}+3q^{10n}\right\}}{(1+q^{4n})^{2}}\] \[=\frac{1}{2J_{1}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}}\left\{9q ^{n}-9q^{2n}+3q^{9n}-3q^{10n}\right\}}{(1-q^{4n})^{2}}\] \[\quad+\frac{1}{2J_{1}}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{ \frac{n(3n+1)}{2}}\left\{7q^{n}-7q^{2n}-3q^{9n}+3q^{10n}\right\}}{(1+q^{4n})^ {2}}, \tag{3.26}\]
where the second equality follows from
\[\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+an}}{(1 \pm q^{4n})^{2}}=\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,\neq\,0\end{subarray}}^{\infty}\frac{(-1)^{n}q^{\frac{n(3n+1)}{2}+(7-a)n}}{( 1\pm q^{4n})^{2}}.\]
Substituting (3.12)-(3.19) into (3.26) and simplifying, we arrive at
\[\sum_{n=0}^{\infty}\left(NT(2,8,n)-NT(6,8,n)\right)q^{n}\] \[=-\frac{9}{2J_{16}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,=\,-\infty\end{subarray}}^{\infty}\frac{(-1)^{n}q^{24n^{2}+72n+53}}{(1+q^ {16n+22})^{2}}-\frac{9}{2J_{16}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,=\,-\infty\end{subarray}}^{\infty}\frac{(-1)^{n}q^{24n^{2}+88n+75}}{(1+q^{16 n+22})^{2}}\] \[-\frac{3}{2J_{16}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,=\,-\infty\end{subarray}}^{\infty}\frac{(-1)^{n}q^{24n^{2}+104n+97}}{(1+q^ {16n+22})^{2}}-\frac{3}{2J_{16}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,=\,-\infty\end{subarray}}^{\infty}\frac{(-1)^{n}q^{24n^{2}+56n+31}}{(1+q^{16 n+22})^{2}}\] \[-\frac{7}{2J_{16}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,=\,-\infty\end{subarray}}^{\infty}\frac{(-1)^{n}q^{24n^{2}+72n+53}}{(1-q^ {16n+22})^{2}}-\frac{7}{2J_{16}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,=\,-\infty\end{subarray}}^{\infty}\frac{(-1)^{n}q^{24n^{2}+40n+15}}{(1-q^{16 n+10})^{2}}\] \[+\frac{3}{2J_{16}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,=\,-\infty\end{subarray}}^{\infty}\frac{(-1)^{n}q^{24n^{2}+8n-15}}{(1-q^{16 n-10})^{2}}-\frac{3}{2J_{16}}\sum_{\begin{subarray}{c}n\,=\,-\infty\\ n\,=\,-\infty\end{subarray}}^{\infty}\frac{(-1)^{n}q^{24n^{2}+24n-13}}{(1-q^{16 n-6})^{2}}+L(q), \tag{3.27}\]
where
\[L(q): =\left\{2X\left(-q^{12};q^{16}\right)\times\left(\frac{[-q^{6};q ^{16}]_{\infty}}{[-1;q^{16}]_{\infty}}+\frac{q^{3}[-q^{2};q^{16}]_{\infty}}{[- q^{8};q^{16}]_{\infty}}\right)\right.\] \[\qquad+2X\left(q^{22};q^{16}\right)\times\left(\frac{[-q^{6};q ^{16}]_{\infty}}{[-1;q^{16}]_{\infty}}-\frac{q^{3}[-q^{2};q^{16}]_{\infty}}{[- q^{8};q^{16}]_{\infty}}\right)\] \[\qquad-\left(\frac{2[-q^{6};q^{16}]_{\infty}}{[-1;q^{16}]_{\infty }}-\frac{q^{3}[-q^{2};q^{16}]_{\infty}}{[-q^{8};q^{16}]_{\infty}}\right) \right\}\times\frac{2[q^{4};q^{16}]_{\infty}J_{16}^{2}}{[q^{6},-q^{4};q^{16}]_{ \infty}J_{1}}.\]
Note that none of the \(q\)-expansions of the series on the right side of (3.27) (except that of \(L(q)\)) contains terms of the form \(q^{2n}\). Hence we only need to study the 2-dissection of \(L(q)\). Invoking (3.3), (3.4) and (3.6) and collecting terms with even exponents, we prove (3.1).
## 4. Proof of Theorem 1.2
We rewrite (2.1) and (2.2) as follows:
\[\sum_{n\geq 0}(M_{\omega}(1,4,4n)-M_{\omega}(3,4,4n))q^{n}=f_{1}(q)+f_{2}(q), \tag{4.1}\]
with
\[f_{1}(q) :=\frac{1}{4J_{1}}A_{0}(q)B_{0}(q)+\frac{q}{4J_{1}}A_{2}(q)B_{2}(q),\] \[f_{2}(q) :=-\frac{1}{4J_{1}}A_{0}(q)B_{0}(q)\varphi(q)^{2}-\frac{q}{J_{1}} \biggl{(}\frac{1}{4}A_{2}(q)B_{2}(q)\varphi(q)^{2}+A_{2}(q)B_{1}(q)\psi(q)^{2}\] \[\qquad-(A_{0}(q)B_{2}(q)+A_{2}(q)B_{0}(q))\psi(q^{2})^{2}\biggr{)} -\frac{q^{2}}{J_{1}}A_{0}(q)B_{3}(q)\psi(q)^{2}\]
and
\[\sum_{n\geq 0}(M_{\omega}(1,4,4n+2)-M_{\omega}(3,4,4n+2))q^{n}=f_{3}(q)+f_{4}(q) \tag{4.2}\]
with
\[f_{3}(q) =-\frac{1}{4J_{1}}(A_{0}(q)B_{2}(q)+A_{2}(q)B_{0}(q)),\] \[f_{4}(q) =\frac{1}{4J_{1}}(A_{0}(q)B_{2}(q)+A_{2}(q)B_{0}(q))\varphi(q)^{2 }+\frac{1}{J_{1}}A_{0}(q)(B_{1}(q)\psi(q)^{2}-B_{0}(q)\psi(q^{2})^{2})\] \[\qquad-\frac{q}{J_{1}}A_{2}(q)B_{2}(q)\psi(q^{2})^{2}+\frac{q^{2} }{J_{1}}A_{2}(q)B_{3}(q)\psi(q)^{2}.\]
Applying (3.1), (4.1) and (4.2), we find that Theorem 1.2 is implied by
\[R_{1}(q)+R_{2}(q)=f_{1}(q^{2})+f_{2}(q^{2})-q(f_{3}(q^{2})+f_{4}(q^{2})).\]
Thus, it suffices to show that
\[R_{1}(q) =f_{1}(q^{2})-qf_{3}(q^{2}), \tag{4.3}\] \[R_{2}(q) =f_{2}(q^{2})-qf_{4}(q^{2}). \tag{4.4}\]
Multiplying by \(\frac{J_{3,16}^{3}J_{4,16}J_{5,16}^{3}J_{8,16}^{3/2}}{q^{6}J_{16}^{29/2}}\) on both sides of (4.3) and simplifying, we find that it is equivalent to
\[\frac{J_{6,16}^{2}J_{8,16}^{8}}{4q^{6}J_{1,16}^{2}J_{3,16}^{2}J_{ 4,16}^{2}J_{5,16}^{2}J_{7,16}^{2}}-\frac{J_{2,16}^{2}J_{8,16}^{6}}{2q^{4}J_{1, 16}^{4}J_{7,16}^{4}}\] \[-\frac{J_{3,64}^{3}J_{5,64}^{3}J_{8,64}^{12}J_{11,64}^{3}J_{12,64} ^{3}J_{13,64}^{3}J_{19,64}^{2}J_{20,64}^{3}J_{21,64}^{12}J_{24,64}^{3}J_{27,64} ^{3}J_{29,64}^{3}}{4q^{6}J_{10,64}J_{16,64}J_{22,64}J_{64}^{48}}\]
\[-\frac{J_{3,64}^{3}J_{4,64}J_{5,64}^{3}J_{8,64}^{12}J_{10,64}J_{11,64}^ {3}J_{12,64}J_{13,64}^{3}J_{19,64}^{3}J_{21,64}^{3}J_{22,64}J_{24,64}^{12}J_{27,64}^ {3}J_{28,64}J_{29,64}^{3}}{4q^{4}J_{2,64}J_{14,64}J_{16,64}J_{18,64}J_{30,64}J_{64} ^{48}}\] \[-\frac{J_{3,64}^{3}J_{5,64}^{3}J_{8,64}^{12}J_{11,64}^{3}J_{12,64} ^{2}J_{13,64}^{3}J_{19,64}^{3}J_{20,64}J_{21,64}^{3}J_{24,64}^{12}J_{27,64}^{3}J _{29,64}^{3}}{4q^{5}J_{6,64}J_{26,64}J_{48}^{48}}\] \[-\frac{J_{3,64}^{3}J_{4,64}J_{5,64}^{3}J_{6,64}J_{11,64}^{12}J_{13,64}^{3}J_{19,64}^{3}J_{19,64}^{3}J_{20,64}^{3}J_{21,64}^{12}J_{26,64}J_{27,64}^ {3}J_{28,64}J_{29,64}^{3}}{4q^{5}J_{2,64}J_{14,64}J_{16,64}J_{18,64}J_{30,64}J_{64 }^{48}}=0. \tag{4.5}\]
Using [25, Theorem 3], we verify that each term on the left side of (4.5) is a modular function on \(\Gamma_{1}(64)\). Then we can prove (4.5) with the MAPLE package _thetads_ [16]. For the Maple commands, see [https://github.com/dongpanghu/Code2/blob/main/code.md](https://github.com/dongpanghu/Code2/blob/main/code.md). This proves (4.3). One can obtain (4.4) by a completely similar argument, and the detailed proof is omitted. This completes the proof of Theorem 1.2.
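For readers without access to Maple, identities of this type can also be sanity-checked numerically by comparing truncated \(q\)-expansions coefficient by coefficient (a rigorous proof still requires a Sturm-type bound on how many coefficients must be checked). The sketch below is a minimal Python illustration, not the authors' verification script, and it does not encode (4.5) itself; it only shows how infinite-product building blocks can be expanded to a finite order, using Euler's identity \((q;q^{2})_{\infty}(-q;q)_{\infty}=1\) as a toy example.

```python
# Minimal sketch: check a q-series identity numerically by truncated expansion.
N = 100  # truncation order (arbitrary; a real check should exceed the Sturm bound)

def mul(a, b):
    """Multiply two truncated power series in q (coefficient lists of length N)."""
    out = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    out[i + j] += ai * bj
    return out

def pochhammer(a, b, sign=-1):
    """Truncated expansion of prod_{k>=0} (1 + sign*q^{a+kb}), e.g. (q^a; q^b)_infinity."""
    out = [0] * N
    out[0] = 1
    e = a
    while e < N:
        factor = [0] * N
        factor[0], factor[e] = 1, sign
        out = mul(out, factor)
        e += b
    return out

# Toy check of Euler's identity (q; q^2)_inf * (-q; q)_inf = 1, term by term.
lhs = mul(pochhammer(1, 2, -1), pochhammer(1, 1, +1))
assert lhs[0] == 1 and all(c == 0 for c in lhs[1:]), "identity fails within truncation"
print("verified up to q^%d" % (N - 1))
```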
**Acknowledgments.** This work was partially supported by the National Natural Science Foundation of China (12071331, 11971341 and 11971203) and the Natural Science Foundation of Jiangsu Province of China (BK20221383).
|
2306.06849 | Mitigating Transformer Overconfidence via Lipschitz Regularization | Though Transformers have achieved promising results in many computer vision
tasks, they tend to be over-confident in predictions, as the standard Dot
Product Self-Attention (DPSA) can barely preserve distance for the unbounded
input domain. In this work, we fill this gap by proposing a novel Lipschitz
Regularized Transformer (LRFormer). Specifically, we present a new similarity
function with the distance within Banach Space to ensure the Lipschitzness and
also regularize the term by a contractive Lipschitz Bound. The proposed method
is analyzed with a theoretical guarantee, providing a rigorous basis for its
effectiveness and reliability. Extensive experiments conducted on standard
vision benchmarks demonstrate that our method outperforms the state-of-the-art
single forward pass approaches in prediction, calibration, and uncertainty
estimation. | Wenqian Ye, Yunsheng Ma, Xu Cao, Kun Tang | 2023-06-12T03:47:43Z | http://arxiv.org/abs/2306.06849v2 | # Mitigating Transformer Overconfidence via Lipschitz Regularization
###### Abstract
Though Transformers have achieved promising results in many computer vision tasks, they tend to be over-confident in predictions, as the standard Dot Product Self-Attention (DPSA) can barely preserve distance for the unbounded input domain. In this work, we fill this gap by proposing a novel Lipschitz Regularized Transformer (LRFormer). Specifically, we present a new similarity function with the distance within Banach Space to ensure the Lipschitzness and also regularize the term by a contractive Lipschitz Bound. The proposed method is analyzed with a theoretical guarantee, providing a rigorous basis for its effectiveness and reliability. Extensive experiments conducted on standard vision benchmarks demonstrate that our method outperforms state-of-the-art single forward pass approaches in prediction, calibration, and uncertainty estimation.
## 1 Introduction
Deep learning (DL) has achieved remarkable performance, making it widely employed in various inference and decision-making systems. However, DL models still make mistakes, making trust and safety an increasingly important topic [1, 17], especially in critical applications like self-driving cars [16] and medical diagnosis [14]. One solution to this problem is for models to not only achieve high accuracy but also refrain from making overly confident predictions.
Transformer [20] and its variants, such as BERT [4], have made significant advances in Natural Language Processing (NLP). Similarly, Vision Transformers (ViT) [15] and their variants [13, 14] have recently achieved state-of-the-art performance on a variety of computer vision tasks [21, 16, 17, 18, 19]. Despite this, their propensity for overconfident predictions is cause for concern, especially as they become one of the foundation architectures of deep learning. To address this issue, we investigate the under-explored overconfidence problem in Transformers, which can aid subsequent tasks in the construction of reliable models.
Overconfidence is a common problem in many machine learning models for both in- and out-of-distribution inputs, including deep neural networks [22, 23]. When a model is overconfident, it tends to make highly confident predictions even when it is uncertain about the truth of a given input. This can lead to poor performance and inaccurate results, especially in real-world settings where uncertainty is prevalent. Uncertainty estimation is a promising approach for addressing the issue of overconfidence in machine learning models. By estimating uncertainty, a model can make more informed predictions and provide a measure of confidence for each prediction. This can help to improve the robustness and reliability of the model and enable it to perform more effectively in a variety of applications, including decision-making and risk assessment.
Previous techniques for estimating the model's predictive uncertainty include Bayesian deep learning [24], Blundell et al. (2015) and ensemble techniques [13, 14]. However, most of these methods require multiple forward passes at test time. In other words, these methods suffer from heavy memory and computation costs, which limits their adoption in real-world applications.
Recently, uncertainty quantification via single forward-pass neural networks, which have latency similar to that of a single deterministic network, has received considerable attention [13, 12, 22]. SNGP [13] replaces the dense output layer
with a Gaussian Process (GP) layer and applies Spectral Normalization (SN) [11] to the hidden residual layers. DUE [13] builds upon GPDNN [1] and introduces additional constraints to the feature extractor in the form of residual connections in combination with SN [11]. These methods perform well on uncertainty estimation. However, they only focus on bounding the Lipschitz constants of certain CNN modules, _i.e._, convolution and batch normalization [13] layers. Moreover, according to Lee et al. [2021], Transformer blocks are very sensitive to the magnitude of the Lipschitz constant, and if SN is employed in self-attention modules, training progresses exceptionally slowly. Although some recently proposed Transformer architectures have been proven to be Lipschitz continuous [1, 13, 14, 15, 16], they still do not solve the overconfidence problem of Transformers.
To address the issues above, we contribute as follows:
* We propose a novel regularization technique, termed Lipschitz Regularized Self-Attention (LRSA), that addresses distance awareness in both Lipschitzness and Contraction. LRSA replaces the dot product similarity with the distance within Banach Space and normalizes the term by a theoretical bound of the Lipschitz constant. Furthermore, we provide a theoretical analysis of how our method achieves these properties.
* We develop the LRSA based Transformer called LRFormer1, which integrates distance-preserving hidden mappings in transformer blocks via LRSA and utilizes an optional Gaussian Process (GP) distance-aware output layer for high-quality uncertainty estimation. Footnote 1: [https://github.com/SZCHAI/LRFormer](https://github.com/SZCHAI/LRFormer)
* We conduct extensive experiments on widely used OOD benchmarks, including CIFAR-10/-100 versus SVHN and CIFAR-10/-100 versus CIFAR-100/-10. Compared to state-of-the-art approaches, our experimental results demonstrate that the proposed LRFormer model is superior in terms of prediction, calibration, and uncertainty estimation, with minimal time complexity penalty.
## 2 Problem Statement
In the supervised multi-class classification setting, assume a data sample \((\mathbf{x},y)\in\mathcal{X}\times\mathcal{Y}\) is sampled from an unknown distribution, where \(\mathcal{Y}=\{1,\dots,K\}\) denote the label space with \(K\) classes and \(\mathcal{X}=\mathbb{R}^{d}\) denote the feature space. A learned classifier \(f^{\theta}\colon\mathcal{X}\to\Delta^{K}\) can produce a probability distribution for \(\mathbf{x}\) on \(K\) classes, where \(\Delta^{K}\) is the \(K-1\) dimensional unit simplex. In this context, we introduce a former definition of overconfidence for a general classifier.
**Definition 2.1** (Overconfidence).: Assume \(f^{\theta}\) is the composition of a non-probabilistic \(K\)-way classifier \(\mathbf{h}^{\theta}\) and a softmax function \(\sigma\), i.e. \(\mathbf{f}^{\theta}=\sigma\circ\mathbf{h}^{\theta}\). Given a test data sample \(\mathbf{x}\), \(\mathbf{f}^{\theta}\) provides its probability of assigning it to label \(i\) as \(\frac{\exp(h^{\theta}_{i}(\mathbf{x}))}{\sum_{k=1}^{K}\exp(h^{\theta}_{k}(\mathbf{x}))}\), where \(h^{\theta}_{i}(\mathbf{x})\) denotes the
Figure 1: Uncertainty heat map of LRFormer and baseline approaches on the two moons 2D classification benchmark. Orange and blue points are positive and negative training samples respectively. Background color visualizes the predictive uncertainty of each model, where yellow stands for confidence and blue indicates uncertainty. The proposed LRFormer (Figure 1(d)) achieves the closest to ideal uncertainty quantification on this benchmark. Detail refer to Section 4.4.
\(i\)-th element of the logit vector produced by \(\mathbf{h}^{\theta}\). Then, \(\hat{y}:=\arg\max_{i}f_{i}^{\theta}(\mathbf{x})\) can be returned as the predicted label and \(\hat{p}:=\max_{j\neq y}f_{j}^{\theta}(\mathbf{x})\) can be treated as the confidence score. Overconfidence appears when the prediction is wrong with high probability.
Based on Definition 2.1, the logit vector can be decomposed into two components: \(\mathbf{h}(\mathbf{x})=||\mathbf{h}(\mathbf{x})||\cdot\hat{\mathbf{h}}(\mathbf{x})\), where \(||\mathbf{h}(\mathbf{x})||\) is the L2-norm of the logit vector and \(\hat{\mathbf{h}}(\mathbf{x})\) is the unit vector in the same direction as \(\mathbf{h}(\mathbf{x})\). These two terms represent the magnitude and direction of the logit vector, respectively. It is evident that if \(\arg\max_{k}(h_{k})=c\), then \(\arg\max_{k}(\gamma\mathbf{h}_{k})=c\) always holds for any given constant value \(\gamma>1\). This indicates that the magnitude of the logit vector does not affect the predicted class \(c\). Additionally, for any given scalar \(\gamma>1\), if \(c=\arg\max_{k}(h_{k})\), then \(\sigma_{c}(\gamma\mathbf{h})\geq\sigma_{c}(\mathbf{h})\). From the above claims, we observe that increasing the magnitude \(||\mathbf{h}(\mathbf{x})||\) will lead to a higher softmax confidence score while leaving the final prediction unchanged.
During optimization, the cross-entropy loss is given as:
\[\mathcal{L}_{\mathrm{CE}}(\mathbf{h}(\mathbf{x};\theta),y)=-\log p(y\mid\mathbf{x})=-\log \frac{e^{\|\mathbf{h}\|\cdot\hat{h}_{y}}}{\sum_{i=1}^{K}e^{\|\mathbf{h}\|\cdot\hat{h}_{i}}}\]
While the direction \(\hat{\mathbf{h}}(\mathbf{x})\) remains constant, increasing the magnitude leads to a larger \(p(y\mid\mathbf{x})\) (and hence a smaller loss) whenever the sample is classified correctly. In the standard Transformer, optimization of the training loss thus encourages an increase in the magnitude of the network output to produce a higher softmax confidence score, resulting in a smaller loss.
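This effect is easy to reproduce numerically: scaling the logit vector by \(\gamma>1\) leaves the argmax unchanged while pushing the softmax confidence towards 1. A tiny illustration in Python (the logit values below are made up):

```python
import torch

logits = torch.tensor([2.0, 1.0, 0.5])          # hypothetical logit vector h(x)
for gamma in (1.0, 2.0, 5.0):
    p = torch.softmax(gamma * logits, dim=0)
    print(gamma, p.argmax().item(), round(p.max().item(), 3))
# the argmax stays 0 for every gamma, while the confidence max_i p_i grows
# (roughly 0.63 -> 0.84 -> 0.99), i.e. magnitude alone inflates confidence.
```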
Lipschitzness can help a classifier tackle the overconfidence issue by limiting the amount of change in the classifier's output when the input is perturbed slightly. When a classifier is overconfident, it tends to assign high confidence to incorrect predictions, which can result in poor performance on unseen data. However, if the classifier is Lipschitz continuous, then the amount of change in its output is limited when the input is perturbed, which can prevent the classifier from making overly confident predictions on Out-of-Distribution (OOD) samples.
**Definition 2.2** (Lipschitz Continuity).: Lipschitz constant of a function \(f\) is an upper bound on the ratio between the output and the input variations of a function \(f\). If \(L\in[0,+\infty)\) is such that, for every input \(x\in\mathbb{R}^{d}\) and perturbation \(\Delta x\in\mathbb{R}^{d}\),
\[\|f(x+\Delta x)-f(x)\|_{p}\leqslant L\|\Delta x\|_{p} \tag{1}\]
then \(L\) is a Lipschitz constant of \(f\), where \(\|\cdot\|_{p}\) denotes the p-norm. If the input domain is restricted to the \(\epsilon\)-ball at point \(x\), i.e., \(\mathcal{X}^{\prime}=\{x^{\prime}\ |\ \|x-x^{\prime}\|\leqslant\epsilon\}\), then \(L\) is the local Lipschitz constant of \(f\) at \(x\).
Furthermore, the Lipschitz condition can be extended to the Bi-Lipschitz condition. Given any two input samples \(x_{1},x_{2}\) and a non-probabilistic K-way classifier \(\mathbf{h}^{\theta}\), the Bi-Lipschitz condition can be defined as:
\[L_{1}\|x_{1}-x_{2}\|\leq\|\mathbf{h}^{\theta}(x_{1})-\mathbf{h}^{\theta}(x_{2})\| \leq L_{2}\|x_{1}-x_{2}\| \tag{2}\]
where \(\|.\|\) is a semantically meaningful distance on the data manifold, and \(0<L_{1}<1<L_{2}\) are positive and bounded Lipschitz constants. These bounds \(L_{1}\), \(L_{2}\) represent _sensitivity_ and _smoothness_ conditions, respectively, which prevent the hidden representations \(\mathbf{h}^{\theta}(x)\) from being unnecessarily invariant to semantically meaningful changes in the input manifold, or overly sensitive to semantically meaningless perturbations.
Dot-Product Self-Attention is a fundamental building block of the Transformer model. It enables the model to focus
Figure 2: By breaking down a Transformer Layer into its fundamental components, including GeLU Activation, MLP Layer, LayerNorm, and the Attention Module, we gain a comprehensive understanding of their individual contributions to the analysis of Lipschitzness in Transformers. Detailed analysis can be found in Section 3.3.
on the most relevant parts of the input sequence by weighing the contribution of each input vector to the output, as given by the equation \(\operatorname{Attention}(X)=S(X)\cdot V(X)=\operatorname{softmax}\left(\frac{Q \cdot K^{\top}}{\sqrt{d_{k}}}\right)\cdot V(X)\). This mechanism can be further generalized by incorporating a similarity function that measures the relevance between input vectors [11]. While several similarity functions, such as the cosine similarity [13] or the scaled dot product, have been used in the original formulation, they may not be optimal for all scenarios. Therefore, in this paper, we explore the use of a Lipschitz similarity function to mitigate overconfidence issues in the Transformer. We present a method for constructing a suitable Lipschitz similarity function and demonstrate its effectiveness in improving the robustness and accuracy of the model.
\[\operatorname{Attention}(\mathbf{x}_{i},\mathbf{x}_{j})=\operatorname{ softmax}\left(\frac{\operatorname{sim}(\mathbf{x}_{i},\mathbf{x}_{j})}{\sqrt{d_{k}}} \right)\cdot V \tag{3}\]
Our method aims to maintain a reasonable Lipschitz constant at the block level to address the issue of overconfidence in neural networks. While other methods, such as Bayesian approaches and label smoothing [14], have been proposed to tackle overconfidence, our method incorporates block-wise control. By constraining the Lipschitz constant at each block in the Transformer, we can limit the growth of the magnitude of the network output and reduce overconfidence. This block-wise design also allows our method to be easily integrated into various Transformer-based architectures, including those that have undergone large-scale pretraining. As a result, our method can be scaled up to handle a wide range of tasks and datasets.
## 3 Our Method
### Notations and Setup
* \(S^{(i)}:=\operatorname{diag}\left(S_{i:}\right)-S_{i:}^{\top}S_{i:}\in\mathbb{R}^{N\times N}\).
* Binary Matrix with one in the \((i,j)\) the entry and zeros elsewhere: \(E_{ij}\in\mathbb{R}^{N\times N}\)
* Kronecker delta: \(\delta_{ij}\in\{0,1\}\)
* \((\infty,2)\)-norm: \(\|M\|_{(\infty,2)}=\max_{i}\left(\sum_{j}M_{ij}^{2}\right)^{1/2}\)
* Frobenius norm: \(\|M\|_{F}=\left(\sum_{i,j}M_{ij}^{2}\right)^{1/2}\)
* Lipschitz constant \(L_{\mathbb{X},\mathbb{Y}}(f)\): for a function \(f:\mathbb{X}\rightarrow\mathbb{Y}\), \(L_{\mathbb{X},\mathbb{Y}}(f)=\sup_{X\in\mathbb{X}}\|\frac{\partial f(X)}{ \partial X}\|_{\mathbb{X},\mathbb{Y}}\)
### Lipschitz Regularization on Self Attention
Kim et al. [2021] proved that the Scaled Dot-Product Self-Attention does not satisfy the _bi-Lipschitz condition_. To extend the generality of self-attention with high-quality uncertainty estimation, we propose a new regularization method Lipschitz Regularized on Self Attention (LRSA) by replacing the self-attention function with a contractive Bi-Lipschitz expression without losing the original ability of representation. We will explicitly discuss separate aspects to see how to achieve both Lipschitzness and Contraction in our method.
#### 3.2.1 Lipschitzness
Given that _Dot-Product Self-Attention is not Lipschitz_, suppose there exists such mapping \(f(X)\), \(X\in\mathbb{R}^{N}\):
\[f(X)=S\cdot X=\operatorname{softmax}(aX\cdot X^{\top})\cdot X=\left[\begin{array} []{c}f_{1}(X)\\ \vdots\\ f_{N}(X)\end{array}\right]\]
Its Jacobian Matrix is \(J_{f}=[J_{ij}]_{N\times N}\), each entry can be written as:
\[J_{ij}=aX^{\top}S^{(i)}\left[E_{ji}X+\delta_{ij}X\right]+S_{ij}I\in\mathbb{R}^ {N\times 1}\]
Thus for \(i=j\):
\[J_{ii}=aX^{\top}S^{(i)}E_{ii}X+aX^{\top}S^{(i)}X+S_{ii} \tag{4}\]
\(X^{\top}S^{(i)}X\) is in the form of a variance of a discrete distribution. When \(\mathbf{x}_{i}=\mathbf{0}\) for some \(i\), some entries of the Jacobian of \(f\) grow proportionally to the sample variance of \(\mathbf{x}_{\neq i}\).(The softmax probabilities \(S_{i:}\) are constant with respect to \(\mathbf{x}_{\neq i}\) when \(\mathbf{x}_{i}=0\).) This will lead to an unbounded Jacobian matrix.
To avoid this pathology, we replace \(Q\cdot K^{\top}\) by \(\operatorname{sim}(\mathbf{x}_{i},\mathbf{x}_{j})=-\|\mathbf{x}_{i}^{\top}Q- \mathbf{x}_{j}^{\top}K\|_{2}^{2}\) in \(\operatorname{Attention}(X)\). Here, the new similarity measurement lies in the Banach Space (complete vector space with norm \(\|\cdot\|\)), which is a more generalized space over Hilbert Space (complete inner product space) [10]. This modification also gives a strong theoretical guarantee on Lipschitzness with easy matrix multiplications during training.
#### 3.2.2 Contraction
Contraction of the Scaled Dot-Product Self-Attention is another crucial issue for achieving well-calibrated uncertainty. Deriving such contraction scalar requires a theoretical lower bound of the Lipschitz constant on the Dot-Product Self-Attention function. A desirable contraction scalar could be non-strict but easy to compute during training.
**Theorem 3.1** (Dasoulas et al. [2021]).: _For \(\alpha\geq 0\), if \(\tilde{g}\) is Lipschitz and for all \(X\in\mathbb{R}_{d\times n}\), and \(\tilde{g}\) satisfy the following conditions:_
1. \(\|\tilde{g}(X)\|_{\infty}\leqslant\alpha c(X)\)_,_
2. \(\|X^{\top}\|_{(\infty,2)}\|\frac{\partial\tilde{g}(X)}{\partial X}\|_{F,(2, \infty)}\leqslant\alpha c(X)\)_,_
3. \(\|X^{\top}\|_{(\infty,2)}\|\frac{\partial c(X)}{\partial X}\|_{F,1}\|\tilde{g}( X)\|_{(2,\infty)}\leqslant\alpha c(X)^{2}\)_,_
_where \(c\) is a scalar function \(c:\mathbb{R}^{d\times n}\rightarrow\mathbb{R}_{+}\). Then \(g(X)\) is Lipschitz:_
\[g(X)=\frac{\alpha\tilde{g}(X)}{\max\left\{\|\tilde{g}(X)\|_{(2, \infty)},\|X^{\top}\|_{(\infty,2)}\,L_{F,(2,\infty)}(\tilde{g})\right\}} \tag{5}\]
Inspired by 3.1, we introduce a proper regularization scalar function with a Scalar Factor \(\alpha\) by replacing \(\tilde{g}(X)\) with \(Q\cdot K^{\top}\):
\[c(X)=\frac{\alpha}{\|Q\|_{F}\cdot\|X^{\top}\|_{(\infty,2)}} \tag{6}\]
Here, we treat \(\alpha\) as a hyperparameter that controls the corresponding Lipschitz constant and hence ensures proper contraction of the attention block. A small \(\alpha\) results in a loss of information, while a large \(\alpha\) pushes the model towards being non-Lipschitz.
#### 3.2.3 Summary
Here is the formal definition of the similarity function:
\[S_{ij}:=b\cdot c(X)=-\frac{\alpha\|\mathbf{x}_{i}^{\top}W_{Q}- \mathbf{x}_{j}^{\top}W_{K}\|_{2}^{2}}{\|Q\|_{F}\cdot\|X^{\top}\|_{(\infty,2)}} \tag{7}\]
This pair-wise operation can alternatively be implemented as a matrix version for improved computational efficiency:
\[S(X)=\mathrm{softmax}\left(-\alpha\cdot\frac{\|Q\|_{\mathrm{row}}^{2}-2QK^{\top}+\left(\|K\|_{\mathrm{col}}^{2}\right)^{\top}}{\|Q\|_{F}\cdot\|X^{\top}\|_{(\infty,2)}}\right) \tag{8}\]
LRSA Attention can be represented by the expression \(\mathrm{LRSA}(X)=S(X)\cdot V(X)\), where \(S(X)\) denotes the similarity scores and \(V(X)\) represents the value embeddings. In the following section, we define the Lipschitz Constant of \(S(X)\) as \(L_{\mathrm{LRSA}}\). From Supplementary Material, we can conclude that \(L_{\mathrm{LRSA}}\) is bounded by \(\frac{6\alpha}{\|X\|_{F}}\cdot\frac{(\|W_{Q}\|_{2}+\|W_{K}\|_{2})^{2}}{\|W_{Q} \|_{F}}\).
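A minimal single-head sketch of the similarity in (7)-(8) is given below (Python/PyTorch). The tensor layout, the reading of \(\|X^{\top}\|_{(\infty,2)}\) as the largest per-token \(\ell_{2}\) norm, and all layer sizes are our assumptions for illustration; this is not the authors' released implementation.

```python
import torch

def lrsa_attention(X, W_Q, W_K, W_V, alpha=500.0):
    """Single-head sketch of LRSA: L2-distance similarity scaled by
    alpha / (||Q||_F * ||X^T||_{(inf,2)}), following Eqs. (7)-(8).
    X: (N, d) token features; W_Q, W_K, W_V: (d, d_k) projection matrices."""
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    # matrix form of the pairwise squared distances, as in Eq. (8):
    # ||q_i||^2 - 2 q_i.k_j + ||k_j||^2
    dist2 = (Q.pow(2).sum(-1, keepdim=True) - 2 * Q @ K.T + K.pow(2).sum(-1)).clamp(min=0)
    # contraction scalar: Frobenius norm of Q times the largest per-token norm of X (assumption)
    denom = torch.linalg.norm(Q) * X.norm(dim=-1).max() + 1e-12
    S = torch.softmax(-alpha * dist2 / denom, dim=-1)
    return S @ V

# toy usage
X = torch.randn(16, 32)
W_Q, W_K, W_V = (0.02 * torch.randn(32, 32) for _ in range(3))
print(lrsa_attention(X, W_Q, W_K, W_V).shape)  # torch.Size([16, 32])
```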
### Bottom-up Analysis on LRFormer
#### Analysis on Lipschitzness of GeLU Activation
GeLU [1] is the most commonly used activation function in Transformers, especially the GPT series of models and Vision Transformers. GeLU's activation function form is \(\mathrm{GeLU}(x)=x\Phi(x)\), where \(\Phi(x)\) is the standard Gaussian cumulative distribution function. The derivative of GeLU is given as:
\[\mathrm{GeLU}^{\prime}(x)=\frac{xe^{-\frac{x^{2}}{2}}}{\sqrt{2 \pi}}+\frac{\mathrm{erf}(\frac{x}{\sqrt{2}})}{2}+\frac{1}{2} \tag{9}\]
where \(\mathrm{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}\,dt\). The second derivative of GeLU is:
\[\mathrm{GeLU}^{\prime\prime}(x)=\frac{1}{\sqrt{2\pi}}\left(2-x^{2}\right)e^{-\frac{x^{2}}{2}} \tag{10}\]
Compared with ReLU, GeLU is differentiable at zero and remains Lipschitz continuous. The Lipschitz constant of GeLU is \(\max_{x}\mathrm{GeLU}^{\prime}(x)\). Setting \(\mathrm{GeLU}^{\prime\prime}(x)=0\), we can verify \(\max_{x}\mathrm{GeLU}^{\prime}(x)\approx 1.129\).
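The constant 1.129 can be reproduced numerically from the closed-form derivative \(\mathrm{GeLU}^{\prime}(x)=\Phi(x)+x\varphi(x)\); a quick sketch (the grid range and resolution are arbitrary choices):

```python
import math
import torch

x = torch.linspace(-6.0, 6.0, 200001)
phi = torch.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)   # standard normal pdf
Phi = 0.5 * (1 + torch.erf(x / math.sqrt(2)))           # standard normal cdf
gelu_grad = Phi + x * phi                               # d/dx [x * Phi(x)]
print(gelu_grad.max().item())                           # ~1.1289, attained near x ~ 1.41
```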
#### Analysis on Lipschitzness of LayerNorm
The raw LayerNorm operation (\(\mathrm{LN}(\mathbf{x})=\frac{\mathbf{x}-\mu(\mathbf{x})}{\sqrt{\sigma^{2}(\mathbf{x})}}*\boldsymbol{\gamma}+\boldsymbol{\beta}\)) [1] is not Lipschitz continuous, because an input with zero variance is ill-defined and leads to a Jacobian matrix with unbounded entries.
However, the LayerNorm operation can be changed to a Lipschitz continuous form, which is the LayerNorm used in our models. The form can be expressed as:
\[\mathrm{LayerNorm}(\mathbf{x})=\frac{\mathbf{x}-\mu(\mathbf{x})}{ \sqrt{\sigma^{2}(\mathbf{x})+\epsilon}}*\boldsymbol{\gamma}+\boldsymbol{\beta} \tag{11}\]
where \(\mathbf{x},\boldsymbol{\beta},\boldsymbol{\gamma}\in\mathbb{R}^{N}\), \(\mu(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}x_{i}\), \(\sigma^{2}(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}(x_{i}-\mu(\mathbf{x}))^{2}\).
From Supplementary Material, we can conclude that LayerNorm is Lipschitz with the constant \(\eta=\epsilon^{-\frac{1}{2}}\max_{i}|\gamma_{i}|N\).
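A small numerical sanity check of this bound is sketched below (the vector size, \(\epsilon\), and the random inputs are arbitrary choices for illustration; the stated constant \(\eta=\epsilon^{-1/2}\max_{i}|\gamma_{i}|N\) is taken from the text above):

```python
import torch
from torch.autograd.functional import jacobian

N, eps = 8, 1e-5
gamma, beta = torch.randn(N), torch.randn(N)

def layer_norm(x):
    mu = x.mean()
    var = ((x - mu) ** 2).mean()
    return (x - mu) / torch.sqrt(var + eps) * gamma + beta

bound = eps ** -0.5 * float(gamma.abs().max()) * N
worst = max(
    float(torch.linalg.matrix_norm(jacobian(layer_norm, torch.randn(N)), ord=2))
    for _ in range(100)
)
print(worst, bound)  # the observed spectral norms of the Jacobian stay below the bound
```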
#### Analysis on Lipschitzness of MLP Layer
In each Transformer block, the Attention Module is typically followed by the Multi-Layer Perceptron (MLP) Layer. The MLP layer consists of two Fully Connected (FC) Layers, a Dropout Layer, and a GeLU activation function. Since the Dropout Layer has no impact on the Lipschitz constant, it can be simplified as:
\[\mathrm{MLP}(x)=\mathrm{FC}_{2}\circ\mathrm{GeLU}\circ\mathrm{FC}_{1}(x) \tag{12}\]
Figure 3: GeLU \(g(x)\) and the derivative of GeLU \(g^{\prime}(x)\).
An upper bound on the Lipschitz constant of the MLP can be derived by analyzing the effect of each layer independently and taking the product of the resulting spectral norms \(\sigma(\cdot)\) [14].
\[L_{\mathrm{MLP}}=1.129\cdot\sigma(W_{1})\cdot\sigma(W_{2}) \tag{13}\]
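Given (13), this bound can be read off directly from the layer weights; a short sketch (the layer widths are arbitrary):

```python
import torch

fc1, fc2 = torch.nn.Linear(64, 256), torch.nn.Linear(256, 64)
lip_gelu = 1.129  # Lipschitz constant of GeLU from the analysis above
L_mlp = lip_gelu * torch.linalg.matrix_norm(fc1.weight, ord=2) \
                 * torch.linalg.matrix_norm(fc2.weight, ord=2)
print(float(L_mlp))  # upper bound on the Lipschitz constant of fc2(gelu(fc1(x)))
```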
#### Lipschitz Constant of LRFormer Layer
In the LRFormer, we analyze the LRFormer Layer to demonstrate our method follows the Lipschitzness. Each LRFormer Layer can be expressed as:
\[\begin{split}\mathrm{LRFormer}_{i}(x)&=\mathrm{ LayerNorm}(\mathrm{LayerNorm}(x+\mathrm{LRSA}(x))\\ &+\mathrm{MLP}(\mathrm{LayerNorm}(x+\mathrm{LRSA}(x))))\end{split} \tag{14}\]
Combined with the previous analysis, we conclude the following Lipschitz bound of a single LRFormer layer:
\[\begin{split} L_{\mathrm{Layer}}&=\eta_{1}\cdot \eta_{2}\cdot((1+L_{\mathrm{LRSA}})\\ &+1.129\cdot\sigma(W_{1})\cdot\sigma(W_{2})\cdot(1+L_{\mathrm{LRSA }}))\end{split} \tag{15}\]
where \(\eta_{1}\) and \(\eta_{2}\) are the Lipschitz constants of the two LayerNorm layers.
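For concreteness, (15) can be evaluated once the component constants are available; a tiny sketch with made-up values:

```python
def lrformer_layer_bound(eta1, eta2, L_lrsa, sigma_w1, sigma_w2):
    """Lipschitz bound of one LRFormer layer, following Eq. (15)."""
    residual = 1.0 + L_lrsa
    return eta1 * eta2 * (residual + 1.129 * sigma_w1 * sigma_w2 * residual)

# hypothetical component constants, for illustration only
print(lrformer_layer_bound(eta1=2.0, eta2=2.0, L_lrsa=0.9, sigma_w1=1.5, sigma_w2=1.2))
```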
## 4 Experiments
In this section, we verify the effectiveness of LRFormer in OOD detection with several benchmark datasets. We also design ablation experiments including attention module comparison, searching for a proper scalar factor \(\alpha\), and validating the reliability of pretrained models.
### Setup
#### 4.1.1 Benchmarks
We evaluate the performance of the proposed LRFormer model on the OOD benchmark [14] using SVHN [20] as the OOD dataset for the model trained on CIFAR-10/-100 [15]. OOD data is never seen during training, whereas ID samples are semantically similar to training samples. We also show LRFormer's performance on the Two Moons dataset in Figure 1.
#### 4.1.2 Baselines
Our baselines included the deterministic model and two ensemble models: MC Dropout (with 10 dropout samples) and deep ensembles (with 10 models) [13]. All models were trained with a dense output layer and no spectral regularization. Besides, we also compared three single-model approaches: MCD-GP (with 10 samples), DUQ [21], DUE [22], and SNGP series [16] including DNN-SN and DNN-GP. For models that use a GP layer, we kept DL = 1024 and computed the predictive distribution using Monte Carlo averaging with 10 samples. For a fair comparison, we set the backbone with the same parameter magnitude (19.9M parameters for LRFormer, and 36.5M for the SNGP series).
#### 4.1.3 Evaluation Metrics
Expected Calibration Error (ECE) [17] quantifies the difference between a model's expected confidence (e.g., the maximum probability score) and its actual accuracy. It achieves this by partitioning all the samples, with \(n\) representing the total number of samples, into \(M\) equally sized bins based on their confidence scores, then calculating the expected difference between accuracy and the average confidence in each bin. In our task, ECE can indicate the effectiveness of the model in dealing with overconfidence.
In addition to ECE, we employ Negative Log Likelihood (NLL), OOD Area Under the Receiver Operating Characteristic Curve (AUROC), and OOD Area Under the Precision-Recall Curve (AUPR) to evaluate the model's performance in overconfidence and uncertainty estimation ability.
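A minimal sketch of the binned ECE computation described above (using equal-width confidence bins; the bin count and the random toy inputs are our choices, not the paper's evaluation code):

```python
import torch

def expected_calibration_error(probs, labels, n_bins=15):
    """probs: (n, K) softmax outputs; labels: (n,) ground-truth class indices."""
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels).float()
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # bin weight times |accuracy - average confidence| within the bin
            ece += in_bin.float().mean() * (correct[in_bin].mean() - conf[in_bin].mean()).abs()
    return ece.item()

# toy usage with random predictions
probs = torch.softmax(torch.randn(1000, 10), dim=1)
labels = torch.randint(0, 10, (1000,))
print(expected_calibration_error(probs, labels))
```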
#### 4.1.4 Implementation Details
In the following experiments, we resize the input image to \(224\times 224\) pixels and set the patch size of LRFormer to 16. We employ AdamW [13] as the optimizer with a weight decay of 0.05. We use a cosine learning rate scheduler [13] with the base learning rate set to \(5\times 10^{-5}\). All models are trained for 100 epochs with 10 different random seeds on NVIDIA A100 GPUs.
### Comparison with state-of-the-art models
Following Touvron et al. [20], we adopt an existing training setup, namely the A3 procedure of Wightman et al. [20]. We adjust the learning rate of the A3 procedure when training LRFormer: in our experiments, we set the learning rate to 0.006 when pretraining LRFormer and 0.004 when finetuning on CIFAR-10/-100. Besides, different from previous methods with unfair comparison settings [23, 24, 25], pretrained models from extra datasets and few-shot outlier exposure settings are not used during training.
To evaluate the model's OOD detection performance, we adopt the two OOD tasks suggested by SNGP: (1) using
SVHN as the OOD dataset for a model trained on CIFAR-10/-100; (2) using CIFAR-100/-10 as the OOD dataset for a model trained on CIFAR-10/-100, respectively. Table 1 and Table 2 show the main comparison results. LRFormer outperforms the other single forward pass approaches in all the metrics of CIFAR-10 and most of the metrics of CIFAR-100. Moreover, LRFormer also achieves similar results to Deep Ensemble, which contains 10 models and requires around \(10\times\) as much time to execute as LRFormer and other single forward pass approaches.
### Ablation Study
#### 4.3.1 Attention Blocks
To validate its ability to address the overconfidence issue of Transformers, we compare LRSA with the scaled dot-product attention, L2 attention, and the scaled cosine similarity attention (SCSA) using overconfidence evaluation
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Method & Accuracy (\(\uparrow\)) & ECE (\(\downarrow\)) & NLL (\(\downarrow\)) &
\begin{tabular}{c} OOD AUPR (\(\uparrow\)) \\ SVHN \\ \end{tabular} \\ \hline Deterministic\({}^{*}\) & 96.0 \(\pm\) 0.01 & 0.023 \(\pm\) 0.002 & 0.158 \(\pm\) 0.01 & 0.781 \(\pm\) 0.01 & 0.835 \(\pm\) 0.01 \\ MC Dropout\({}^{*}\) & 96.0 \(\pm\) 0.01 & 0.021 \(\pm\) 0.002 & 0.173 \(\pm\) 0.01 & 0.971 \(\pm\) 0.01 & 0.832 \(\pm\) 0.01 \\ MCD-GP\({}^{*}\) & 95.5 \(\pm\) 0.02 & 0.024 \(\pm\) 0.004 & 0.172 \(\pm\) 0.01 & 0.960 \(\pm\) 0.01 & 0.863 \(\pm\) 0.01 \\ DNN-SN\({}^{*}\) & 96.0 \(\pm\) 0.01 & 0.025 \(\pm\) 0.004 & 0.171 \(\pm\) 0.01 & 0.974 \(\pm\) 0.01 & 0.859 \(\pm\) 0.01 \\ DNN-GP\({}^{*}\) & 95.9 \(\pm\) 0.02 & 0.029 \(\pm\) 0.002 & 0.221 \(\pm\) 0.02 & 0.976 \(\pm\) 0.01 & 0.887 \(\pm\) 0.01 \\ DUQ\({}^{*}\) & 94.7 \(\pm\) 0.02 & 0.034 \(\pm\) 0.002 & 0.239 \(\pm\) 0.02 & 0.973 \(\pm\) 0.01 & 0.854 \(\pm\) 0.01 \\ DUE\({}^{*}\) & 95.6 \(\pm\) 0.04 & 0.018 \(\pm\) 0.002 & 0.187 \(\pm\) 0.01 & - & - \\ SNGP\({}^{*}\) & 95.9 \(\pm\) 0.01 & 0.018 \(\pm\) 0.001 & 0.138 \(\pm\) 0.01 & 0.990 \(\pm\) 0.01 & 0.905 \(\pm\) 0.01 \\ Deep Ensemble\({}^{*\dagger}\) & 96.6 \(\pm\) 0.01 & 0.010 \(\pm\) 0.001 & 0.114 \(\pm\) 0.01 & 0.964 \(\pm\) 0.01 & 0.888 \(\pm\) 0.01 \\ \hline
**LRFormer** & **97.2 \(\pm\) 0.01** & **0.012 \(\pm\) 0.001** & **0.100 \(\pm\) 0.01** & **0.993 \(\pm\) 0.01** & **0.911 \(\pm\) 0.01** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between proposed LRFormer and SOTA methods on CIFAR-10 vs SVHN/CIFAR-100 benchmarks, averaged over 10 seeds. The best method among single-network approaches is highlighted in **bold**. \({}^{*}\)Results from the original papers. \({}^{\dagger}\) with 10 models.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Method & ECE (\(\downarrow\)) & NLL (\(\downarrow\)) \\ \hline DP Attention [20] & 0.066 & 0.580 \\ L2 Attention [15] & 0.048 & 0.582 \\ SCSA [21] & 0.028 & 0.626 \\
**LRSA** & **0.018** & **0.538** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Overconfident evaluation comparison among different attention modules. The backbones are the same except for the kernel. We compare the checkpoint when the accuracy achieves \(0.85\pm 0.01\) with CIFAR-100. Note, we select the best hyper-parameter for SCSA (\(\nu=1.0,\tau=12,\epsilon=1e-8\))
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Method & Accuracy (\(\uparrow\)) & ECE (\(\downarrow\)) & NLL (\(\downarrow\)) &
\begin{tabular}{c} OOD AUPR (\(\uparrow\)) \\ SVHN \\ \end{tabular} \\ \hline Deterministic\({}^{*}\) & 79.8 \(\pm\) 0.02 & 0.085 \(\pm\) 0.004 & 0.872 \(\pm\) 0.01 & 0.882 \(\pm\) 0.01 & 0.745 \(\pm\) 0.01 \\ MC Dropout\({}^{*}\) & 79.6 \(\pm\) 0.02 & 0.050 \(\pm\) 0.003 & 0.825 \(\pm\) 0.01 & 0.832 \(\pm\) 0.01 & 0.757 \(\pm\) 0.01 \\ MCD-GP\({}^{*}\) & 79.5 \(\pm\) 0.04 & 0.085 \(\pm\) 0.005 & 0.937 \(\pm\) 0.01 & 0.873 \(\pm\) 0.01 & 0.754 \(\pm\) 0.01 \\ DNN-SN\({}^{*}\) & 79.9 \(\pm\) 0.02 & 0.098 \(\pm\) 0.004 & 0.918 \(\pm\) 0.01 & 0.879 \(\pm\) 0.03 & 0.745 \(\pm\) 0.01 \\ DNN-GP\({}^{*}\) & 79.2 \(\pm\) 0.03 & 0.064 \(\pm\) 0.005 & 0.885 \(\pm\) 0.01 & 0.876 \(\pm\) 0.01 & 0.746 \(\pm\) 0.02 \\ DUQ\({}^{*}\) & 78.5 \(\pm\) 0.03 & 0.119 \(\pm\) 0.001 & 0.980 \(\pm\) 0.02 & 0.878 \(\pm\) 0.01 & 0.732 \(\pm\) 0.01 \\ SNGP\({}^{*}\) & 79.9 \(\pm\) 0.03 & 0.025 \(\pm\) 0.012 & 0.847 \(\pm\) 0.01 & 0.923 \(\pm\) 0.01 & **0.801 \(\pm\) 0.01** \\ Deep Ensemble\({}^{*\dagger}\) & 80.2 \(\pm\) 0.01 & 0.021 \(\pm\) 0.004 & 0.666 \(\pm\) 0.02 & 0.888 \(\pm\) 0.01 & 0.780 \(\pm\) 0.01 \\ \hline
**LRFormer** & **85.2 \(\pm\) 0.03** & **0.018 \(\pm\) 0.005** & **0.538 \(\pm\) 0.01** & **0.955 \(\pm\) 0.01** & 0.777 \(\pm\) 0.01 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between proposed LRFormer and the SOTA methods on CIFAR-100 vs SVHN and CIFAR-100 vs CIFAR-10 benchmark, averaged over 10 seeds. The best method among single-network approaches is highlighted in **bold**. \({}^{*}\)Results from the original papers. \({}^{\dagger}\) with 10 models.
metrics. In Table 3, we show the best test ECE and NLL across training for each of the Transformer models. The generalization performance of the best model for each self-attention setting is similar.
We find that L2 attention, SCSA, and LRSA work well under the Lipschitz guarantee. Meanwhile, LRSA works better than SCSA and L2 attention.
#### 4.3.2 Hyperparameter Analysis
The Scalar Factor \(\alpha\) in Equation (6) controls the scale of the Lipschitz constant of the Transformer blocks. In general, we propose running a grid search for \(\alpha\in\{...,100,500,1000,...\}\) to find the highest possible value of \(\alpha\) while retaining the predictive performance of LRFormer. In our experiments (Table 5), we set the scalar factor \(\alpha=1000\) on CIFAR-10 and \(\alpha=500\) on CIFAR-100. The model's performance is not very sensitive to the parameters in the GP output layer; we follow Liu et al. (2020)'s suggestion and set the number of random features to \(1024\), the length-scale for the RBF kernel to \(2\), and the \(L_{2}\) regularization to 0. A proper \(\alpha\) value, _i.e._ 100, can preserve both the Lipschitzness and contraction properties of the model. A small \(\alpha\) causes a loss of information, while a large \(\alpha\) pushes the model towards being non-Lipschitz, leading to degraded performance.
#### 4.3.3 Module Comparison
In this section, we compare LRFormer and other OOD detection methods (using a Transformer backbone) under the uncertainty estimation setting. We use a shallow Transformer with a depth of 6 layers for this experiment. For the Transformer baseline model, we take the predictive entropy as the uncertainty. For SNGP + Transformer, the entropy of the average of the Monte Carlo softmax samples is used as the uncertainty. We do not compare with DUE on the CIFAR-100 dataset, as its training does not converge. SGD is used as the optimizer with the initial learning rate set to 0.01. All models are trained with batch size 128.
The accuracy, NLL, AUROC, and AUPR results are shown in Table 6. The AUROC metric indicates the quality of uncertainty since it measures the probability that in-distribution (ID) and OOD samples can be separated [14]. From the results, we have the following observations:
(1) For OOD detection, _The proposed LRFormer model outperforms all other methods with Transformer backbone on both CIFAR-10 vs SVHN and CIFAR-100 vs SVHN benchmarks_. This superior OOD detection performance benefits from the proposed LRSA regularization method, which solves both Lipschitzness and contraction problems in dot-product self-attention layers, and enables distance-preserving mapping in Transformer blocks.
(2) Notably, superior performance in OOD is achieved without sacrificing LRFormer's predictive performance. On the contrary, LRFormer even outperforms the standard Trans
\begin{table}
\begin{tabular}{l|c|c c c c} \hline \hline Dataset & Method & Accuracy (\(\uparrow\)) & NLL (\(\downarrow\)) & OOD AUROC (\(\uparrow\)) & OOD AUPR (\(\uparrow\)) \\ \hline \multirow{4}{*}{CIFAR-10} & Transformer & **0.8592 \(\pm\) 0.01** & 0.6972 \(\pm\) 0.02 & 0.7552 \(\pm\) 0.05 & 0.8521 \(\pm\) 0.02 \\ & DUE + Transformer & 0.8556 \(\pm\) 0.03 & 0.5337 \(\pm\) 0.02 & 0.8348 \(\pm\) 0.04 & 0.8921 \(\pm\) 0.01 \\ & SNGP + Transformer & 0.8542 \(\pm\) 0.02 & 0.4933 \(\pm\) 0.01 & 0.8275 \(\pm\) 0.03 & 0.8960 \(\pm\) 0.01 \\ \cline{2-6} & **LRFormer** & 0.8528 \(\pm\) 0.02 & **0.4447 \(\pm\) 0.01** & **0.8500 \(\pm\) 0.05** & **0.9078 \(\pm\) 0.01** \\ \hline \multirow{4}{*}{CIFAR-100} & Transformer & 0.6304 \(\pm\) 0.02 & 1.7862 \(\pm\) 0.01 & 0.7831 \(\pm\) 0.02 & 0.8701 \(\pm\) 0.02 \\ & SNGP + Transformer & 0.6298 \(\pm\) 0.03 & 1.5413 \(\pm\) 0.01 & 0.8134 \(\pm\) 0.01 & 0.8929 \(\pm\) 0.01 \\ \cline{1-1} \cline{2-6} & **LRFormer** & **0.6404 \(\pm\) 0.02** & **1.4041 \(\pm\) 0.02** & **0.8421 \(\pm\) 0.01** & **0.9165 \(\pm\) 0.01** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study between the proposed LRFormer and existing uncertainty quantification methods with the same training backbone on the CIFAR-10/100 vs SVHN benchmark. The best method among single-network approaches is highlighted in **bold**. \(\downarrow\) means lower is better. \(\uparrow\) means higher is better.
\begin{table}
\begin{tabular}{l|c|c c c c} \hline \hline Dataset & Pretained & Accuracy (\(\uparrow\)) & NLL (\(\downarrow\)) & OOD AUROC (\(\uparrow\)) & OOD AUPR (\(\uparrow\)) \\ \hline \multirow{2}{*}{CIFAR-10} & W/O & 0.8528 \(\pm\) 0.01 & 0.4447 \(\pm\) 0.02 & 0.8500 \(\pm\) 0.01 & 0.9078 \(\pm\) 0.02 \\ & W/ & **0.8616 \(\pm\) 0.01** & **0.4193 \(\pm\) 0.01** & **0.9125 \(\pm\) 0.02** & **0.9499 \(\pm\) 0.01** \\ \hline \multirow{2}{*}{CIFAR-100} & W/O & 0.6404 \(\pm\) 0.01 & 1.4041 \(\pm\) 0.03 & 0.8421 \(\pm\) 0.01 & 0.9165 \(\pm\) 0.01 \\ & W/ & **0.6679 \(\pm\) 0.01** & **1.2122 \(\pm\) 0.02** & **0.8689 \(\pm\) 0.01** & **0.9319 \(\pm\) 0.01** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study between with (W/) and without (W/O) pre-trained weights from ImageNet-1K dataset. The best method among single-network approaches is highlighted in **bold**. \(\downarrow\) means lower is better. \(\uparrow\) means higher is better.
former baseline in terms of classification accuracy on the CIFAR-10 dataset, making LRFormer achieve the best performance in terms of all the metrics compared with all other single-network methods.
(3) Furthermore, the proposed LRSA self-attention can be computed efficiently using matrix operations, with minimal overhead compared to the original dot-product self-attention. This ensures LRFormer's performance gains come without compromising computation costs.
#### 4.3.4 Pre-training
The recent work Plex [17] comprehensively validated the reliability of large pretrained models. The high performance of the Transformer results from pre-training on large-scale datasets such as ImageNet-21K and LAION-5B [14]; it performs worse than CNNs if trained from scratch on small datasets. We use a shallow Transformer with a depth of 6 layers for the pretrained-weight experiments. The pretrained weights are loaded from the standard Transformer, sharing the same weight schemes for the MLP layers and the position embedding layer. This is possible because these layers in LRFormer have the same structure as in the standard Transformer, so pre-trained weights can be directly applied to them. Our experiments in Table 5 show that LRFormer can also benefit from these pre-trained weights.
In summary, pre-trained weights of models can be directly transferred to LRFormer, which is very convenient for real-world applications.
### Visualization
In order to show the interpretability of our model, we visualize the uncertainty heat map generated by LRFormer together with the baseline methods on the two moons 2D classification benchmark, which consists of two moon-shaped data clusters separable by a non-linear decision boundary. We employ a tiny Transformer architecture for this task, in which the depth is set to 9, the hidden dimension is set to 24 and the number of heads is set to 8.
The uncertainty heat map comparisons are shown in Figure 1. Background color visualizes the predictive uncertainty of each model, where yellow stands for confidence and blue indicates uncertainty. All methods achieve 100% test accuracy. From the results, we can observe that Deep Ensemble (Figure 1(a)) estimates its uncertainty based on how far away test samples are from the decision boundary, without considering the data distribution. In Figure 1(b), we can see that DUE without restrictions in the feature extractor, produces similar predictive uncertainty to Deep Ensemble which is heavily influenced by the distance from the decision boundary. Although SNGP can make allowances for data distribution, the decision boundary still has an impact on uncertainty estimation. The proposed LRFormer model, on the other hand, achieves near-ideal uncertainty quantification of this benchmark thanks to its bi-Lipschitz constraint in the LRSA self-attention layers, which allows it to maintain better distance awareness.
## 5 Conclusion
In this paper, we present LRSA, a regularization method designed to address overconfidence issues in Transformer structure models. By enforcing Bi-Lipschitz constraints and self-attention mapping contractions with theoretical guarantees, LRSA encourages the model to generate conservative predictions for out-of-distribution (OOD) inputs, thereby improving its ability to separate in-distribution (ID) data.
Our approach primarily focuses on the attention mechanism of the Transformer architecture, which is a powerful and widely-used component in various natural language processing and vision tasks. Moving forward, it would be beneficial to extend our approach to incorporate these other modules within the Transformer architecture. Exploring how different combinations of modules can be leveraged to enhance performance across various tasks represents a promising avenue for future research. Additionally, investigating the relationship between Lipschitz regularity and other regularization techniques, including weight decay, dropout, and label smoothing, would provide valuable insights. Although these techniques have demonstrated effectiveness in preventing overfitting and improving generalization, their connection to Lipschitz regularity is not yet well-understood. Gaining a deeper understanding of this relationship could unlock insights into the inner workings of deep learning models and potentially lead to further performance improvements.
In conclusion, our proposed LRSA method addresses overconfidence issues in Transformer structure models by encouraging conservative predictions for OOD inputs. While our focus has been on the attention mechanism, future research directions involve incorporating other modules, exploring the relationship between Lipschitz regularity and other regularization techniques, and expanding LRFormer's applicability to diverse models and domains. These efforts contribute to advancing the field of deep learning and improving the robustness and performance of state-of-the-art models.
## Acknowledgements
We thank all the anonymous reviewers for their insightful and thoughtful comments. |
2301.03996 | Collaborative Semantic Communication for Edge Inference | We study the collaborative image retrieval problem at the wireless edge,
where multiple edge devices capture images of the same object from different
angles and locations, which are then used jointly to retrieve similar images at
the edge server over a shared multiple access channel (MAC). We propose two
novel deep learning-based joint source and channel coding (JSCC) schemes for
the task over both additive white Gaussian noise (AWGN) and Rayleigh slow
fading channels, with the aim of maximizing the retrieval accuracy under a
total bandwidth constraint. The proposed schemes are evaluated on a wide range
of channel signal-to-noise ratios (SNRs), and shown to outperform the
single-device JSCC and the separation-based multiple-access benchmarks. We also
propose two novel SNR-aware JSCC schemes with attention modules to improve the
performance in the case of channel mismatch between training and test
instances. | Wing Fei Lo, Nitish Mital, Haotian Wu, Deniz Gündüz | 2023-01-10T14:42:13Z | http://arxiv.org/abs/2301.03996v2 | # Collaborative Semantic Communication for Edge Inference
###### Abstract
We study the collaborative image retrieval problem at the wireless edge, where multiple edge devices capture images of the same object from different angles and locations, which are then used jointly to retrieve similar images at the edge server over a shared multiple access channel (MAC). We propose two novel deep learning-based joint source and channel coding (JSCC) schemes for the task over both additive white Gaussian noise (AWGN) and Rayleigh slow fading channels, with the aim of maximizing the retrieval accuracy under a total bandwidth constraint. The proposed schemes are evaluated on a wide range of channel signal-to-noise ratios (SNRs), and shown to outperform the single-device JSCC and the separation-based multiple-access benchmarks. We also propose a channel state information-aware JSCC scheme with attention modules to enable our method to adapt to varying channel conditions.
Semantic communication, Internet of Things, person re-identification, deep joint source and channel coding, collaborative image retrieval
## I Introduction
In recent years, machine learning tasks at the wireless edge have been studied extensively in the literature, including distributed and remote inference problems over wireless channels [1, 2, 3]. In distributed and remote inference problems, it is often assumed that centrally trained models, e.g. deep neural networks (DNNs) are employed across multiple distributed nodes, which have limited communication resources. Particularly in image retrieval, images of an object or a person taken by edge devices are used to identify the images of the same object or person, taken by different cameras, from different angles, and at different times, in a gallery database. Note that for image retrieval, unlike most conventional classification or inference problems, which can be carried out locally at the edge device, remote inference is essential even if the edge devices have unlimited computational power, as the gallery database is only available at the edge server. On the other hand, due to latency and bandwidth constraints, sending the whole image over a noisy wireless channel is not feasible. Instead, learning-based feature extraction is done at the edge, and only the most relevant features of the source image, representing the semantic content of the image, are sent to the edge server over the wireless channel. This calls for semantic communication, since the inference classes are not pre-defined, and the edge server must infer the similarity of the semantics of the image captured by the edge device with those of the images in the gallery database. This approach is also called _goal-oriented communication_ in the _semantic communication_ literature [4].
In [5], both separation-based and joint source channel coding (JSCC) approaches have been studied for feature transmission in remote image retrieval. While Shannon's separation theorem [6] states that separating source and channel coding can achieve asymptotic optimality, this theorem breaks down at finite blocklengths. We typically have much more stringent latency constraints on edge inference applications compared to the delivery of images or videos; hence, our interest is in very short blocklengths, where separation typically performs very poorly. An autoencoder-based JSCC (JSCC-AE) scheme is proposed in [5], and it is shown to outperform its digital counterpart under all channel conditions.
In this paper, we study the collaborative re-identification (ReID) problem, where two edge devices capture images of the same scene and communicate with the edge server, in a distributed manner, to predict the image identity based on similar images in a gallery database. The distributed nature of the problem poses unique challenges, where the edge devices must "collaborate" implicitly to derive the relevant semantic information from their respective images of the scene, in a manner which complements the other and therefore improves the communication or inference accuracy at the receiver. We highlight that such collaboration is implicit, and not explicit where the edge devices would share messages with each other.
The goal of this paper is to develop a deep learning-based JSCC scheme for the two-device scenario, which maximizes the accuracy of the retrieval task while communicating over a shared multiple access channel (MAC). To explore different transmission schemes for the multi-source collaborative edge inference, we first consider an orthogonal multiple access (OMA) scheme employing time division multiple access (TDMA) with distributed JSCC, and show that it outperforms the schemes in [5], as well as a conventional separate source-channel coding scheme, where each device transmits a quantized version of its features to the receiver using capacity
Fig. 1: Illustration of the two-device collaborative image retrieval problem at the wireless edge.
achieving channel codes. In addition, we study an alternative non-orthogonal multiple access (NOMA) approach. Benefits of NOMA transmission in various distributed inference and training problems have recently received significant interest [7, 8, 9]. In the NOMA approach, our goal is to exploit the superposition property of the wireless medium, and the features transmitted as analog values over the shared wireless channel get aggregated "over-the-air", thus boosting the signal associated with the common semantic information in the two transmitted signals, correlated with the common identity viewed by the edge devices. We evaluate these schemes on the additive white Gaussian noise (AWGN) and Rayleigh slow fading channels. Inspired by the attention mechanism in adaptive JSCC [10, 11, 12], we also propose an SNR-aware scheme for the AWGN channel to adjust the networks depending on the SNRs. Our main contributions can be summarized as follows:
* To the best of our knowledge, this is the first paper to study collaborative inference among edge devices for joint retrieval. We propose two new collaborative JSCC schemes for OMA and NOMA transmissions, and show the superiority of the latter.
* We construct and analyze DNN architectures for a channel state information (CSI)-aware JSCC scheme (SNR-aware and channel fading-aware), where a single network is trained to exploit the channel state information for channel equalization and SNR-adaptation.
## II Related work
### _Image retrieval_
The image retrieval task aims to improve the quality of identity recognition. Given a query image, an image retrieval model assesses its similarities with gallery images, and matches it to the 'nearest' ones. Performance can be evaluated through the top-1 retrieval accuracy [13]. The image retrieval task has received significant attention in recent years thanks to the tremendous success of deep learning technologies [14].
### _Remote inference at the wireless edge_
With the rapid growth of machine intelligence and the associated machine-to-machine communications, the goal of emergent communication systems is shifting towards making accurate inferences about a remote signal rather than reconstructing it [1], unlike conventional communication systems which are designed to serve data packets without regard to the content of the packets or the task at the receiver. Therefore, remote inference problems are attracting significant interest in the context of the emerging semantic communication paradigm [4]. The literature on joint edge-device inference mostly focuses on a rate-limited scenario [15, 16], while ignoring channel effects. Jankowski et al. [5] proposed a JSCC transmission scheme for image retrieval, showing a marked improvement over previous works based on digital schemes.
### _Multi-device collaborative learning_
Existing multi-device collaborative algorithms mainly focus on signal transmission [17], classification tasks [18], visual question answering [19], and multi-agent coordination [20]. Shao et al. [18] propose a deterministic distributed information bottleneck (DDIB) principle for distributed feature encoding. Different from previous work, our paper studies collaborative inference over the wireless edge, in which the effects of a wireless channel are considered.
## III System model
We consider two transmitters, each having access to images of the same object taken by a different camera. We denote the image observed by transmitter \(i\) by \(\mathbf{s}_{i}\in\mathbb{R}^{p}\), \(i=1,2\). Transmitter \(i\) employs an encoding function \(\mathcal{E}_{i}:\mathbb{R}^{p}\rightarrow\mathbb{C}^{q}\), where \(\mathbf{x}_{i}=\mathcal{E}_{i}(\mathbf{s}_{i})\in\mathbb{C}^{q}\) and \(\mathbf{x}_{i}\) is subject to the power constraint \(\frac{1}{2}\|\mathbf{x}_{i}\|_{2}^{2}\leq 1\). Here, \(q\) represents the available channel bandwidth. The decoder function \(\mathcal{D}:\mathbb{C}^{q}\rightarrow\mathbb{D}\), where \(\mathbb{D}\equiv\{1,2,\ldots,D\}\) and \(D\) is the size of the gallery database, is employed at the receiver and maps the received signal \(\mathbf{y}\) to the result of the retrieval task.
**Channel model:** Devices transmit their signals over a MAC. The received signal is given by \(\mathbf{y}=h_{1}\mathbf{x}_{1}+h_{2}\mathbf{x}_{2}+\mathbf{z}\), where \(\mathbf{z}\in\mathbb{C}^{q}\) is the additive noise vector, whose entries are assumed to be independent and identically distributed (i.i.d.) according to the complex normal distribution \(\mathcal{CN}(0,\sigma_{z}^{2})\). For the AWGN channel, we set \(h_{1}=h_{2}=1\). We also consider a slow fading MAC, where the fading coefficients \(h_{1},h_{2}\in\mathbb{C}\) are assumed to remain constant during each retrieval task, but change across tasks in an i.i.d. fashion, sampled from \(\mathcal{CN}(0,\sigma_{h}^{2})\).
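For concreteness, the MAC described above can be simulated as follows (a sketch only; the per-symbol power normalization and taking \(\sigma_{h}=1\) are our assumptions about the setup, not the paper's exact configuration):

```python
import torch

def crandn(*shape):
    """Circularly-symmetric complex Gaussian samples, CN(0, 1)."""
    return (torch.randn(*shape) + 1j * torch.randn(*shape)) / 2 ** 0.5

def mac_channel(x1, x2, snr_db, fading=True, sigma_h=1.0):
    """y = h1*x1 + h2*x2 + z over q complex channel uses (AWGN or slow fading)."""
    # enforce an average transmit power of 1 per device (our reading of the constraint)
    x1 = x1 / x1.abs().pow(2).mean().sqrt()
    x2 = x2 / x2.abs().pow(2).mean().sqrt()
    sigma_z = 10.0 ** (-snr_db / 20.0)
    z = sigma_z * crandn(x1.numel())
    if fading:
        h1, h2 = sigma_h * crandn(1), sigma_h * crandn(1)   # constant within a task
    else:
        h1 = h2 = torch.ones(1, dtype=torch.cfloat)         # AWGN case
    return h1 * x1 + h2 * x2 + z

y = mac_channel(crandn(128), crandn(128), snr_db=10.0, fading=False)
print(y.shape)  # torch.Size([128])
```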
We will consider and compare three alternative transmission schemes, separation-based transmission, JSCC with OMA, and JSCC with NOMA, as well as the single-user benchmark [5].
### _Separate Digital Transmission_
In the digital scheme, transmitter \(\mathcal{E}_{i}\) extracts a semantic feature vector \(\mathbf{v}_{i}\in\mathbb{R}^{r}\) from the source \(\mathbf{s}_{i}\), which is quantized to \(\tilde{\mathbf{v}}_{i}\in\mathbb{Z}^{r}\), and then mapped to a channel codeword \(\mathbf{x}_{i}\in\mathbb{C}^{q}\). The two transmitters transmit their codewords over the MAC.
The receiver first decodes the two channel codewords to recover the quantized semantic features \(\tilde{\mathbf{v}}_{1}\) and \(\tilde{\mathbf{v}}_{2}\). In the asymptotic limit of infinite blocklength, the transmitted codewords can be decoded with a vanishing error probability if the transmission rates are within the capacity of the corresponding channels. In that case, the only source of error in the computation of the desired function is quantization. The receiver then performs the retrieval task on the recovered source signals.
### _JSCC_
In this scheme, source signals \(\mathbf{s}_{i}\in\mathbb{R}^{p},i=1,2,\) are first mapped to semantic feature vectors \(\mathbf{v}_{i}\in\mathbb{R}^{r},i=1,2\), which are then mapped to the channel codewords \(\mathbf{x}_{i}\in\mathbb{C}^{q}\). We consider two JSCC schemes:
**JSCC with OMA:** Each transmitter is allocated half the available channel bandwidth, i.e., \(\frac{q}{2}\) channel uses.
**JSCC with NOMA:** In this scheme, each transmitter occupies the full channel bandwidth of \(q\).
In both cases, the receiver first decodes the received signal, using two JSCC decoders \(\mathcal{D}_{i}:\mathbb{C}^{q}\rightarrow\mathbb{R}^{r},i=1,2\), to
recover estimates \(\hat{\mathbf{v}}_{1}\) and \(\hat{\mathbf{v}}_{2}\) of the semantic features, and then performs the retrieval task using the recovered semantic features.
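As a rough illustration of the difference in channel usage between the two access schemes (a sketch with placeholder helper names, not the paper's code): under OMA the two length-\(q/2\) codewords occupy disjoint halves of the band, whereas under NOMA two length-\(q\) codewords are superimposed on the same channel uses.

```python
import numpy as np

def oma_inputs(c1_half, c2_half):
    """OMA: pad each q/2-symbol codeword into disjoint halves of the q channel
    uses, so their superposition on the MAC contains no mutual interference."""
    zeros = np.zeros_like(c1_half)
    return np.concatenate([c1_half, zeros]), np.concatenate([zeros, c2_half])

def noma_inputs(c1_full, c2_full):
    """NOMA: both q-symbol codewords occupy the full band and interfere at the
    receiver, which must separate them with its learned JSCC decoders."""
    return c1_full, c2_full
```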
## IV Distributed image retrieval
In this section, we focus on the image retrieval task, which is evaluated by the top-1 retrieval accuracy [13].
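For reference, one simple way to compute this metric is sketched below; the Euclidean nearest-neighbor rule and the feature shapes are our assumptions rather than the exact protocol of [13].

```python
import numpy as np

def top1_retrieval_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    """Top-1 accuracy: a query counts as correct if its nearest gallery feature
    (Euclidean distance) carries the same identity label.

    query_feats: (Q, d) array, gallery_feats: (G, d) array,
    query_ids: (Q,) labels, gallery_ids: (G,) labels.
    """
    d = ((query_feats[:, None, :] - gallery_feats[None, :, :]) ** 2).sum(axis=-1)
    nearest = d.argmin(axis=1)
    return float((gallery_ids[nearest] == query_ids).mean())
```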
### _Separate Digital Transmission_
Each transmitter consists of a semantic feature encoder, modeled as a ResNet50 [21] network, followed by a feature compressor, employing quantization and arithmetic coding modules, which are the same as the state-of-the-art pipeline in [5]. The compressed bits are then channel coded and transmitted on the wireless channel. The receiver decodes the received signal to obtain estimates of the quantized semantic features, which are then passed to the image retrieval module.
**Training strategy:** We perform end-to-end training for the digital scheme, with the following loss function: \(l=\frac{1}{3}(l_{ce_{aux1}}+l_{ce_{main}}+l_{ce_{aux2}})+\lambda\cdot(\log_{2}p(\hat{\mathbf{v}}_{1})+\log_{2}p(\hat{\mathbf{v}}_{2}))\), where \(l_{ce_{aux1}},l_{ce_{main}},l_{ce_{aux2}}\) are the cross-entropy losses between the identity prediction result from each classifier (two auxiliary and a main classifier, see Fig. 2) and the ground truth, same as [5, 14]. \(\log_{2}p(\hat{\mathbf{v}}_{1})\) and \(\log_{2}p(\hat{\mathbf{v}}_{2})\) are entropies of the quantized semantic features, same as in [5].
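A compact PyTorch-style sketch of this loss is given below; the helper names and the use of the per-example estimated bit cost \(-\log_{2}p(\hat{\mathbf{v}}_{i})\) (averaged over the batch) as the rate term are illustrative assumptions on our part.

```python
import torch
import torch.nn.functional as F

def digital_training_loss(logits_aux1, logits_main, logits_aux2, labels,
                          bits_v1, bits_v2, lam=0.01):
    """Average identity cross-entropy of the three classifiers plus a
    lambda-weighted rate term for the two quantized semantic features.
    bits_v1, bits_v2: per-example estimated bit cost of the quantized features."""
    ce = (F.cross_entropy(logits_aux1, labels)
          + F.cross_entropy(logits_main, labels)
          + F.cross_entropy(logits_aux2, labels)) / 3.0
    rate = bits_v1.mean() + bits_v2.mean()
    return ce + lam * rate
```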
### _JSCC_
In this scheme (illustrated in Fig. 2), the feature compressor, quantizer, arithmetic coder, and channel coder at the transmitter, and the channel decoder and arithmetic decoder at the receiver, are replaced by a single autoencoder architecture. The received signal is fed to two joint semantic-JSCC decoders, which decode estimates of the semantic features sent by the two transmitters. Once the semantic features are recovered, they are used for the image retrieval task.
**Training strategy:** A three-step training strategy is adopted, which consists of pre-training of the semantic feature encoders (T\({}_{1}\)), pre-training of the JSCC autoencoders (T\({}_{2}\)), and end-to-end training (T\({}_{3}\)). In T\({}_{1}\), the semantic feature encoders are pre-trained using the average cross-entropy loss function: \(l_{cls}=\frac{1}{3}(l_{ce_{aux1}}+l_{ce_{main}}+l_{ce_{aux2}})\). In T\({}_{2}\), the pre-trained semantic feature encoders are frozen, and only the JSCC autoencoders are trained, using the average mean squared error (MSE) loss between the transmitted and reconstructed semantic features: \(l_{jscc}=\frac{1}{2}(l_{MSE_{1}}+l_{MSE_{2}})\), where \(l_{MSE_{i}},i=1,2,\) is the MSE between the transmitted features \(\mathbf{v}_{i}\) and the reconstructed semantic features \(\hat{\mathbf{v}}_{i}\) of the \(i\)-th transmitter. In T\({}_{3}\), the whole network is trained jointly with the loss function of T\({}_{1}\).
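A schematic version of this schedule is sketched below; all module and argument names (`enc1`, `ae1`, `channel`, a classifier returning three sets of identity logits) are placeholders introduced for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def train_three_step(enc1, enc2, ae1, ae2, classifier, channel, loader, lr=1e-4):
    """T1: pre-train semantic encoders with the average cross-entropy loss.
    T2: freeze the encoders and train the JSCC autoencoders with MSE.
    T3: unfreeze and train everything jointly with the T1 loss."""

    def ce_loss(v1, v2, labels):
        # classifier is assumed to return (aux1, main, aux2) identity logits
        aux1, main, aux2 = classifier(v1, v2)
        return (F.cross_entropy(aux1, labels)
                + F.cross_entropy(main, labels)
                + F.cross_entropy(aux2, labels)) / 3.0

    # T1: semantic feature encoders and classifier only
    opt = torch.optim.Adam(list(enc1.parameters()) + list(enc2.parameters())
                           + list(classifier.parameters()), lr=lr)
    for s1, s2, labels in loader:
        loss = ce_loss(enc1(s1), enc2(s2), labels)
        opt.zero_grad(); loss.backward(); opt.step()

    # T2: JSCC autoencoders only, encoders frozen
    for p in list(enc1.parameters()) + list(enc2.parameters()):
        p.requires_grad_(False)
    opt = torch.optim.Adam(list(ae1.parameters()) + list(ae2.parameters()), lr=lr)
    for s1, s2, _ in loader:
        v1, v2 = enc1(s1), enc2(s2)
        v1_hat, v2_hat = channel(ae1, ae2, v1, v2)   # encode, pass through MAC, decode
        loss = 0.5 * (F.mse_loss(v1_hat, v1) + F.mse_loss(v2_hat, v2))
        opt.zero_grad(); loss.backward(); opt.step()

    # T3: joint end-to-end fine-tuning with the T1 loss on reconstructed features
    for p in list(enc1.parameters()) + list(enc2.parameters()):
        p.requires_grad_(True)
    params = (list(enc1.parameters()) + list(enc2.parameters())
              + list(ae1.parameters()) + list(ae2.parameters())
              + list(classifier.parameters()))
    opt = torch.optim.Adam(params, lr=lr)
    for s1, s2, labels in loader:
        v1, v2 = enc1(s1), enc2(s2)
        v1_hat, v2_hat = channel(ae1, ae2, v1, v2)
        loss = ce_loss(v1_hat, v2_hat, labels)
        opt.zero_grad(); loss.backward(); opt.step()
```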
We also propose a CSI-aware architecture variation for AWGN and slow fading channel with CSI at the receiver only (CSIR), where the available CSI (SNR or channel gain) is fed to the model via attention feature (AF) modules [10, 12] inserted before, after and between each layer of the autoencoder. For the AWGN channel, the AF modules at the encoder and decoder scale the intermediate feature maps to adapt to the channel SNR. For slow fading with CSIR, the AF modules scale the received signal and the intermediate feature maps by a channel-dependent constant, intuitively playing the role of channel equalization.
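The AF module can be thought of as a squeeze-and-excitation style gate conditioned on the CSI; the sketch below (channel-wise scaling of a 1-D feature map, with the CSI concatenated to a pooled context vector) is our reading of [10, 12], and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class AFModule(nn.Module):
    """Attention-feature module (a sketch): scales each channel of an
    intermediate feature map by a factor predicted from the current CSI
    (the SNR for AWGN, or the channel gain magnitude for slow fading)."""
    def __init__(self, num_channels, hidden=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_channels + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, num_channels), nn.Sigmoid())

    def forward(self, feat, csi):
        # feat: (B, C, L) feature map; csi: (B, 1) scalar CSI per example
        context = feat.mean(dim=-1)                   # global average pool over length
        scale = self.mlp(torch.cat([context, csi], dim=1))
        return feat * scale.unsqueeze(-1)             # channel-wise rescaling

# usage: out = AFModule(num_channels=64)(x, snr_db)   # x: (B, 64, L), snr_db: (B, 1)
```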
## V Experimental Results
### _Performance against channel SNR_
The proposed schemes for JSCC with OMA and NOMA are trained and tested on a pre-processed Market-1501 [22] dataset over a wide range of channel SNRs from -6dB to 15dB, and compared with the separation-based scheme and the single-device JSCC scheme in [5].
In Fig. 3(a), we plot the top-1 accuracy in an AWGN channel. In Fig. 3(b), we plot the top-1 accuracy in a slow fading channel without CSI at the receiver. The digital scheme is not plotted in Fig. 3(b) because it cannot be decoded without CSI at the receiver, whereas JSCC allows communication even when CSI is unavailable at the receiver. In Fig. 3(c), we plot the top-1 accuracy in a slow fading channel with CSI available at the receiver. As expected, CSIR provides better accuracy than when CSI is absent at the receiver.
In Figs. 3(a), 3(b) and 3(c), the proposed JSCC schemes outperform the separate digital scheme at almost all SNRs, falling short only at high SNRs. However, note that we assume MAC capacity-achieving codes with equal rate allocation for each transmitter in this separate digital scheme, and therefore the reported performance of the digital scheme is not achievable in practice, particularly for the very low channel bandwidth of \(q=32\) per user considered here. The two-device JSCC schemes outperform the single-device JSCC scheme for a wide range of channel SNRs, especially at higher SNRs, showing that incorporating two views of the same identity to make a collaborative decision at the edge server improves the retrieval performance. It is also observed in Figs. 3(a), 3(b) and 3(c) that JSCC with NOMA outperforms its orthogonal counterpart. Fig. 3(a) shows that while the OMA JSCC scheme outperforms the single-device JSCC benchmark at most SNRs, it is surpassed by the benchmark at very low SNRs. This is because, in the low SNR regime, it is more beneficial to allocate all the channel resources to one transmitter to acquire the features from that one with sufficient quality for retrieval, rather than receiving very low quality features from two queries. However,
Fig. 2: DNN architecture for the JSCC transmission schemes.
the NOMA JSCC scheme brings the benefits of both schemes together, and outperforms both schemes at all SNRs. In Fig. 3(c), the single-device JSCC as well as the proposed two-device JSCC schemes (both OMA and NOMA) outperform the separation-based scheme. These observations match our expectations. The suboptimality of separate source and channel coding used in the digital transmission scheme stems from two reasons. The first is the usual suboptimality of separation in the finite blocklength regime. This was already observed in [5] for a point-to-point scenario. On the other hand, even in the infinite blocklength regime, separation becomes suboptimal when the two sources transmitted over the MAC are correlated. It is known that exploiting the correlation between the sources to generate correlated codewords at the encoders can strictly increase the end-to-end performance [23, 24]. To allow partial cooperation between the distributed transmitters, we must allow the transmitted signals to depend statistically on the source outputs, thus inducing correlation between the transmitted signals. Separation-based schemes operate in the opposite manner: the dependence between the sources is destroyed by separate source and channel coding, making the transmitted signals independent.
We observe that the orthogonal JSCC architecture learns to transmit uncorrelated signals, as shown in Table I, where the correlation between \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) is computed using the squared cosine similarity, defined as \(\cos^{2}(\mathbf{x}_{1},\mathbf{x}_{2})\triangleq\frac{\langle\mathbf{x}_{1},\mathbf{x}_{2}\rangle^{2}}{\|\mathbf{x}_{1}\|^{2}\|\mathbf{x}_{2}\|^{2}}\). By sending independent symbols, the JSCC encoders capture non-overlapping information from the two views, thus avoiding redundancy and maximising the use of communication resources. However, this mechanism is unable to make the distributed transmitters cooperate through the dependence of the transmitted signals; hence the lower accuracy achieved compared to the NOMA scheme. In contrast, JSCC with NOMA learns to transmit correlated signals. Higher correlation between the transmitted signals for the NOMA scheme results in higher performance. In fact, in Fig. 4(a), we plot the effect of the amount of correlation between the transmitted signals on the performance of the NOMA JSCC scheme, which we control by introducing a cosine similarity regularization term in the loss function as follows: \(l=\frac{1}{3}(l_{ce_{aux1}}+l_{ce_{main}}+l_{ce_{aux2}})+\lambda\cos^{2}\left(\mathbf{x}_{1},\mathbf{x}_{2}\right)\). Higher values of \(\lambda\) force \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) to be less correlated. We observe in Fig. 4(a) that the accuracy drops as the correlation between the transmitted signals decreases. Interestingly, when the cosine similarity in the NOMA scheme is reduced to approach \(0\), its accuracy approaches that of the orthogonal JSCC scheme.
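The regularized objective is straightforward to implement; in the sketch below the transmitted signals are treated as real-valued tensors (e.g., stacked real and imaginary parts), which is an implementation assumption on our part.

```python
import torch

def squared_cosine_similarity(x1, x2, eps=1e-8):
    """cos^2(x1, x2) = <x1, x2>^2 / (||x1||^2 ||x2||^2), computed per example
    over the last dimension of the (batched) transmitted signals."""
    inner = (x1 * x2).sum(dim=-1)
    return inner ** 2 / ((x1 ** 2).sum(dim=-1) * (x2 ** 2).sum(dim=-1) + eps)

def regularized_loss(ce_aux1, ce_main, ce_aux2, x1, x2, lam):
    """Average cross-entropy plus lam * cos^2(x1, x2); larger lam pushes the two
    transmitted signals towards being uncorrelated."""
    ce = (ce_aux1 + ce_main + ce_aux2) / 3.0
    return ce + lam * squared_cosine_similarity(x1, x2).mean()
```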
### _SNR-aware JSCC_
The SNR-aware JSCC scheme, introduced in Section IV-B, is trained over a range of SNR\({}_{train}\) values and tested over a wide range of SNR\({}_{test}\) values, from -6 to 18dB. In Figs. 4(b) and 4(c), the performance of the SNR-aware schemes for the two JSCC schemes is compared with that of non-SNR-aware architectures trained at a single SNR\({}_{train}\) but tested on different SNR\({}_{test}\) values.
Note that the non-SNR-aware architectures exhibit graceful degradation when there is channel mismatch, that is, when the test channel conditions are worse than the training conditions. Thus, the JSCC scheme is able to avoid the cliff effect from which conventional digital communication suffers, where the performance of the digital schemes drops sharply when channel conditions are worse than those for which the encoder and decoder are designed. However, the SNR-aware architectures are observed to achieve strictly higher retrieval accuracies than the non-SNR-aware architectures (see Figs. 4(b) and 4(c)), providing a single DNN that performs at least as well at all SNRs as employing a distinct DNN optimised for each particular SNR value or range.
## VI Conclusion
We proposed two JSCC schemes for deep-learning based distributed retrieval at the wireless edge, with OMA and
\begin{table}
\begin{tabular}{c|c}
**Scheme** & **Squared cosine similarity** \\ \hline
OMA (AWGN) & 0.0151 \\
OMA (slow fading) & 0.0165 \\
NOMA (AWGN) & 0.7523 \\
NOMA (slow fading) & 0.8234 \\
\end{tabular}
\end{table} TABLE I: Squared cosine similarity between input symbols of the OMA and NOMA schemes.
Fig. 3: Top-1 retrieval accuracies of the proposed two-device schemes and the single-device scheme under different channel SNRs, with a total channel bandwidth of \(q=64\).
NOMA, respectively. These schemes are shown to outperform the conventional separation-based alternative with capacity-achieving channel codes, as well as the JSCC scheme with a single source [5]. We observed that the NOMA JSCC scheme outperforms its TDMA-based OMA counterpart. We also observed that the DNN architecture, when trained for NOMA, learns to transmit correlated signals to induce partial cooperation between the transmitters and to improve the final accuracy. The OMA JSCC scheme, in contrast, learns to transmit uncorrelated signals. With these observations in mind, in our future work, we will study how the correlation between the transmitted signals can be optimized to improve performance.
|
2305.11865 | Regularity for Minimizers of a Planar Partitioning Problem with Cusps | We study the regularity of minimizers for a variant of the soap bubble
cluster problem: \begin{align*}
\min \sum_{\ell=0}^N c_{\ell} P( S_\ell)\,, \end{align*} where $c_\ell>0$,
among partitions $\{S_0,\dots,S_N,G\}$ of $\mathbb{R}^2$ satisfying $|G|\leq
\delta$ and an area constraint on each $S_\ell$ for $1\leq \ell \leq N$. If
$\delta>0$, we prove that for any minimizer, each $\partial S_{\ell}$ is
$C^{1,1}$ and consists of finitely many curves of constant curvature. Any such
curve contained in $\partial S_{\ell} \cap \partial S_{m}$ or $\partial S_\ell
\cap \partial G$ can only terminate at a point in $\partial G \cap \partial
S_\ell \cap \partial S_{m}$ at which $G$ has a cusp. We also analyze a similar
problem on the unit ball $B$ with a trace constraint instead of an area
constraint and obtain analogous regularity up to $\partial B$. Finally, in the
case of equal coefficients $c_\ell$, we completely characterize minimizers on
the ball for small $\delta$: they are perturbations of minimizers for
$\delta=0$ in which the triple junction singularities, including those possibly
on $\partial B$, are ``wetted" by $G$. | Michael Novack | 2023-05-19T17:56:16Z | http://arxiv.org/abs/2305.11865v2 | # Regularity for minimizers of a planar partitioning problem with cusps
###### Abstract.
We study the regularity of minimizers for a variant of the soap bubble cluster problem:
\[\min\sum_{\ell=0}^{N}c_{\ell}P(S_{\ell})\,,\]
where \(c_{\ell}>0\), among partitions \(\{S_{0},\ldots,S_{N},G\}\) of \(\mathbb{R}^{2}\) satisfying \(|G|\leq\delta\) and an area constraint on each \(S_{\ell}\) for \(1\leq\ell\leq N\). If \(\delta>0\), we prove that for any minimizer, each \(\partial S_{\ell}\) is \(C^{1,1}\) and consists of finitely many curves of constant curvature. Any such curve contained in \(\partial S_{\ell}\cap\partial S_{m}\) or \(\partial S_{\ell}\cap\partial G\) can only terminate at a point in \(\partial G\cap\partial S_{\ell}\cap\partial S_{m}\) at which \(G\) has a cusp. We also analyze a similar problem on the unit ball \(B\) with a trace constraint instead of an area constraint and obtain analogous regularity up to \(\partial B\). Finally, in the case of equal coefficients \(c_{\ell}\), we completely characterize minimizers on the ball for small \(\delta\): they are perturbations of minimizers for \(\delta=0\) in which the triple junction singularities, including those possibly on \(\partial B\), are "wetted" by \(G\).
## 1. Introduction
### Overview
A classical problem in the calculus of variations is the soap bubble cluster problem, which entails finding the configuration, or cluster, of least area separating \(N\) regions with prescribed volumes, known as chambers. Various generalizations have been studied extensively as well and may involve different coefficients penalizing the interfaces between pairs of regions (the immiscible fluids problem) or anisotropic energies. The existence of minimal clusters and almost everywhere regularity for a wide class of problems of this type were obtained by Almgren in the foundational work [1]. The types of singularities present in minimizers in the physical dimensions are described by Plateau's laws, which were verified in \(\mathbb{R}^{3}\) by Taylor [20]. In the plane, regions in a minimizing cluster are bounded by finitely many arcs of constant curvature meeting at \(120^{\circ}\) angles [13]. We refer to the book [13] for further discussion on the literature for soap bubble clusters.
In this article we study the interaction of the regularity/singularities of 2D soap bubbles with other physical properties such as thickness. Soap bubbles are generally modeled as surfaces, or "dry" soap bubbles. This framework is quite natural for certain questions, e.g. singularity analysis as observed above, but it does not capture features related to thickness or volume of the soap. Issues such as which other types of singularities can be stabilized by "wetting" the film [14, 15] require the addition of a small volume parameter to the model corresponding to the enclosed liquid; see for example [1, 2]. In the context of least-area surfaces with fixed boundary (Plateau problem), the authors in [17, 18, 19] have formulated a soap film capillarity model that selects surface tension energy minimizers enclosing a small volume and spanning a given wire frame. The analysis of minimizers is challenging, for example due to the higher multiplicity surfaces that arise if the thin film "collapses."
Here we approach these issues through the regularity analysis of minimizers of a version of the planar minimal cluster problem. In the model, there are \(N\) chambers of fixed area (the soap bubbles) and an exterior chamber whose perimeters are penalized, and there is also an un-penalized region \(G\) of small area at most \(\delta>0\). This region may be thought of as the "wet" part of the soap film where soap accumulates (see Remarks 1.2-1.3 and 1.9). Our first main result, Theorem 1.1, is
a sharp regularity result for minimizers: each of the \(N\) chambers as well as the exterior chamber have \(C^{1,1}\) boundary, while \(\partial G\) is regular away from finitely many cusps. In particular, each bubble is regular despite the fact that the bubbles in the \(\delta\to 0\) limit may exhibit singularities. We also study a related problem on the ball in which the area constraints on the chambers are replaced by boundary conditions on the circle and prove a similar theorem up to the boundary (Theorem 1.4). As a consequence, in Theorem 1.8, we completely resolve minimizers on the ball for small \(\delta\) in terms of minimizers for the limiting "dry" problem: near each triple junction singularity of the limiting minimizer, there is a component of \(G\) "wetting" the singularity and bounded by three circular arcs meeting in cusps inside the ball and corners or cusps at the boundary; see Figure 1.1.
### Statement of the problem
For an \((N+2)\)-tuple \(\mathcal{S}=(S_{0},S_{1},\ldots,S_{N},G)\) of disjoint sets of finite perimeter partitioning \(\mathbb{R}^{2}\) (\(N\geq 2\)), called a cluster, we study minimizers of the energy
\[\mathcal{F}(\mathcal{S}):=\sum_{\ell=0}^{N}c_{\ell}P(S_{\ell})\,,\qquad c_{ \ell}>0\quad\forall 0\leq\ell\leq N\,,\]
among two admissible classes. First, we consider the problem on all of space
\[\inf_{\mathcal{S}\in\mathcal{A}^{\mathbf{m}}_{\delta}}\mathcal{F}(\mathcal{S })\,, \tag{1.1}\]
where the admissible class \(\mathcal{A}^{\mathbf{m}}_{\delta}\) consists of all clusters satisfying
\[|G|=|\mathbb{R}^{2}\setminus\cup_{\ell=0}^{N}S_{\ell}|\leq\delta \tag{1.2}\]
and, for some fixed \(\mathbf{m}\in(0,\infty)^{N}\), \((|S_{1}|,\ldots,|S_{N}|)=\mathbf{m}\). We also consider a related problem on the unit ball \(B=\{(x,y):x^{2}+y^{2}<1\}\). We study the minimizers of
\[\inf_{\mathcal{S}\in\mathcal{A}^{\mathbf{h}}_{\delta}}\mathcal{F}(\mathcal{S })\,, \tag{1.3}\]
where \(\mathcal{A}^{\mathbf{h}}_{\delta}\) consists of all clusters such that, for fixed \(h\in BV(\partial B;\{1,\ldots,N\})\),
\[S_{\ell}\cap\partial B=\{x\in\partial B:h(x)=\ell\}\text{ for }1\leq\ell\leq N \text{ in the sense of traces}\,, \tag{1.4}\]
\(S_{0}=\mathbb{R}^{2}\setminus B\) is the exterior chamber, and \(G\) satisfies (1.2). We remark that since \(\mathcal{A}^{\mathbf{m}}_{\delta}\subset\mathcal{A}^{\mathbf{m}}_{\delta^{ \prime}}\) and \(\mathcal{A}^{\mathbf{h}}_{\delta}\subset\mathcal{A}^{\mathbf{h}}_{\delta^{ \prime}}\) if \(\delta<\delta^{\prime}\), the minimum energy decreases in \(\delta\) for both (1.1) and (1.3).
The main energetic mechanism at work in (1.1) that distinguishes it from the classical minimal cluster problem is that the set \(G\) prohibits the creation of corners in the chambers \(S_{\ell}\). If \(r\ll 1\), the amount of perimeter saved by smoothing out a corner of \(S_{\ell}\) in \(B_{r}(x)\) using the set \(G\) scales like \(r\), and this can be accomplished while simultaneously preserving the area constraint by fixing areas elsewhere with cost \(\approx r^{2}\)[1, 10-12]. On the other hand, the regularizing effect of \(G\) only extends to the other chambers and not to its own boundary since its perimeter is not penalized.
Figure 1.1. On the left is a minimizing cluster \(\mathcal{S}^{0}\) for the \(\delta=0\) problem on the ball with chambers \(S^{0}_{\ell}\). On the right is a minimizer \(\mathcal{S}^{\delta}\) for small \(\delta\), with \(|G^{\delta}|=\delta\). Near the triple junctions of \(\mathcal{S}^{0}\), \(\partial G^{\delta}\) consists of three circular arcs meeting in cusps; see Theorem 1.8.
### Main results
We obtain optimal regularity results for minimizers of (1.1) and (1.3). In addition, for the problem with equal weights \(c_{\ell}\), we completely resolve minimizers of (1.3) for small \(\delta>0\) in terms of minimizers for \(\delta=0\). In the following theorems and throughout the paper, the term "arc of constant curvature" may refer to either a single circle arc or a straight line segment.
**Theorem 1.1** (Regularity on \(\mathbb{R}^{2}\) for \(\delta>0\)).: _If \(\mathcal{S}^{\delta}\) is a minimizer for \(\mathcal{F}\) among \(\mathcal{A}_{\delta}^{\mathbf{m}}\) for \(\delta>0\), then \(\partial S^{\delta}_{\ell}\) is \(C^{1,1}\) for each \(\ell\), and there exists \(\kappa^{\delta}_{\ell m}\) such that each \(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m}\) is a finite union of arcs of constant curvature \(\kappa^{\delta}_{\ell m}\) that can only terminate at a point in \(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m}\cap\partial G^{\delta}\). Referring to those points in \(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m}\cap\partial G^{\delta}\) as cusp points, there exist \(\kappa^{\delta}_{\ell}\) for \(0\leq\ell\leq N\) such that \(\partial S^{\delta}_{\ell}\cap\partial G^{\delta}\) is a finite union of arcs of constant curvature \(\kappa^{\delta}_{\ell}\), each of which can only terminate at a cusp point where \(\partial S^{\delta}_{\ell}\cap\partial G^{\delta}\) and \(\partial S^{\delta}_{m}\cap\partial G^{\delta}\) meet a component of \(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m}\) tangentially._
**Remark 1.2** (Interpretation of \(G^{\delta}\)).: For the case \(c_{\ell}=1\), a possible reformulation of (1.1) that views the interfaces as thin regions of liquid rather than surfaces is
\[\inf\{\mathcal{F}(\mathcal{S}):\mathcal{S}\in\mathcal{A}_{\delta}^{\mathbf{m}},\,S_{\ell}\text{ open }\forall\ell,\,\mathrm{cl}\,S_{\ell}\cap\mathrm{cl}\,S_{m}=\emptyset\,\, \forall\ell\neq m\}\,. \tag{1.5}\]
This is because if \(\mathcal{S}\) belongs to this class, then each bubble \(S_{\ell}\) for \(1\leq\ell\leq N\) must be separated from the others and the exterior chamber \(S_{0}\) by the soap \(G\), and \(\mathcal{F}(\mathcal{S})=P(G)\), which is the energy of the soap coming from surface tension. Theorem 1.1 allows for a straightforward construction showing that in fact, (1.1) and (1.5) are equivalent, in that a minimizer for (1.1) can be approximated in energy by clusters in the smaller class (1.5). Therefore, for a minimizer \(\mathcal{S}^{\delta}\) of (1.1), \(G^{\delta}\) can be understood as the "wet" part of the interfaces between bubbles where soap accumulates in the limit of a minimizing sequence for (1.5), as opposed to \(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m}\) which is the "dry" part; see Figure 1.2.
**Remark 1.3** (Constraint on \(G^{\delta}\)).: We have incorporated \(G^{\delta}\) with a soft constraint \(|G^{\delta}|\leq\delta\) rather than a hard constraint \(|G^{\delta}|=\delta\) to allow the minimizers to "select" the area of \(G^{\delta}\). A consequence of Theorem 1.1 is that if some minimizer \(\mathcal{S}^{0}\) of (1.1) for \(\delta=0\) has a singularity, then every minimizer \(\mathcal{S}^{\delta}\) for given \(\delta>0\) satisfies \(|G^{\delta}|>0\). Indeed, if \(|G^{\delta}|=0\), then \(\mathcal{F}(\mathcal{S}^{0})\leq\mathcal{F}(\mathcal{S}^{\delta})=\inf_{ \mathcal{A}_{\delta}^{\mathbf{m}}}\mathcal{F}\), so that \(\mathcal{S}^{0}\) is minimal among \(\mathcal{A}_{\delta}^{\mathbf{m}}\) and the regularity in Theorem 1.1 for \(\mathcal{S}^{0}\) yields a contradiction. As we prove in Theorem 1.8, the minimizer on the ball for small \(\delta\) and equal coefficients saturates the inequality \(|G^{\delta}|\leq\delta\), and we suspect this should hold in generality for (1.1) and (1.3) with small \(\delta\).
We turn now to our results regarding the problem (1.3) on the ball. Here regularity holds up to the boundary \(\partial B\), at which \(G^{\delta}\) may have corners, rather than cusps, at jump points of \(h\).
**Theorem 1.4** (Regularity on the Ball for \(\delta>0\)).: _If \(\mathcal{S}^{\delta}\) is a minimizer for \(\mathcal{F}\) among \(\mathcal{A}_{\delta}^{h}\) for \(\delta>0\), then for \(\ell,m>0\), \(\partial S^{\delta}_{\ell}\) is \(C^{1,1}\) except at jump points of \(h\), and \(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m}\cap B\) is a finite union of line segments terminating on \(\partial B\) at a jump point of \(h\) between \(\ell\) and \(m\) or at a point in \(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m}\cap\partial G^{\delta}\cap B\). Referring to those points in \(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m}\cap\partial G^{\delta}\cap B\) and \(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m}\cap\partial G^{\delta}\cap\partial B\) as cusp and corner points, respectively, there exist \(\kappa^{\delta}_{\ell}\) for \(1\leq\ell\leq N\) such that_
\[c_{1}\kappa^{\delta}_{1}=c_{2}\kappa^{\delta}_{2}=\cdots=c_{N}\kappa^{\delta}_{N} \tag{1.6}\]
_and \(\partial S^{\delta}_{\ell}\cap\partial G^{\delta}\) consists of a finite union of arcs of constant curvature \(\kappa^{\delta}_{\ell}\), each of whose two endpoints are either a cusp point in \(B\) or a corner point in \(\partial B\) at a jump point of \(h\). Furthermore, at cusp points, \(\partial S^{\delta}_{\ell}\cap\partial G^{\delta}\) and \(\partial S^{\delta}_{m}\cap\partial G^{\delta}\) meet a segment of \(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m}\) tangentially. Finally, any connected component of \(S^{\delta}_{\ell}\) for \(1\leq\ell\leq N\) is convex._
**Remark 1.5**.: In the case of equal weights \(c_{\ell}=1\), Theorems 1.1 and 1.4 can be found in [10]; see also the paper [11] for methods of existence and regularity.
To state our asymptotic resolution theorem on the ball, we require some knowledge of the regularity for minimizers of the \(\delta=0\) problem. In the general immiscible fluids problem, there may be singular points where more than three chambers meet; see [12, Figure 1.1], [13, Figure 7]. Since we are interested in triple junction singularities, below is a description of the behavior of minimizers on the ball in some cases where all singularities are triple junctions.
**Theorem 1.6** (Regularity on the Ball for \(\delta=0\)).: _If \(N=3\) or \(c_{\ell}=1\) for \(0\leq\ell\leq N\) and \(\mathcal{S}^{0}\) is a minimizer for \(\mathcal{F}\) among \(\mathcal{A}^{\hbar}_{0}\), then every connected component of \(\partial S^{0}_{\ell}\cap\partial S^{0}_{m}\cap B\) for non-zero \(\ell\) and \(m\) is a line segment terminating at an interior triple junction \(x\in\partial S^{0}_{\ell}\cap\partial S^{0}_{m}\cap\partial S^{0}_{n}\cap B\), at \(x\in\partial S^{0}_{\ell}\cap\partial S^{0}_{m}\cap\partial B\) which is a jump point of \(h\), or at a boundary triple junction \(x\in\partial S^{0}_{\ell}\cap\partial S^{0}_{m}\cap\partial S^{0}_{n}\cap\partial B\) which is a jump point of \(h\). Moreover, for each triple \(\{\ell,m,n\}\) of distinct non-zero indices there exist angles \(\theta_{\ell}\), \(\theta_{m}\), \(\theta_{n}\) satisfying_
\[\frac{\sin\theta_{\ell}}{c_{m}+c_{n}}=\frac{\sin\theta_{m}}{c_{\ell}+c_{n}}= \frac{\sin\theta_{n}}{c_{\ell}+c_{m}} \tag{1.7}\]
_such that if \(x\in B\) is an interior triple junction between \(S^{0}_{\ell}\), \(S^{0}_{m}\), and \(S^{0}_{n}\), then there exists \(r_{x}>0\) such that \(S^{0}_{\ell}\cap B_{r_{x}}(x)\) is a circular sector determined by \(\theta_{\ell}\), and similarly for \(m\), \(n\). Finally, any connected component of \(S^{0}_{\ell}\) for \(1\leq\ell\leq N\) is convex._
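For orientation, we record a quick check of (1.7) in the equal-weight case; this is a reader-added remark recovering the familiar \(120^{\circ}\) rule of Plateau's laws mentioned in the introduction, not part of the theorem.

```latex
% Equal weights c_\ell = c_m = c_n = 1 in (1.7) give
\sin\theta_\ell=\sin\theta_m=\sin\theta_n\,,\qquad \theta_\ell+\theta_m+\theta_n=2\pi\,.
% If some angle lay in (\pi,2\pi), its sine would be negative, forcing all three angles
% there and making the sum exceed 2\pi; if two angles were supplementary, the third
% would equal \pi with zero sine, a degenerate case. Hence the only non-degenerate
% solution is
\theta_\ell=\theta_m=\theta_n=\tfrac{2\pi}{3}\,,
% i.e., the three interfaces meet at 120 degree angles.
```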
**Remark 1.7**.: The proof of Theorem 1.6 also applies when \(N>3\) or \(c_{\ell}\) are merely positive to show that the interfaces of a minimizer are finitely many segments meeting at isolated points. For the immiscible fluids problem on the ball, this has been observed in [11, Corollary 4.6]; see also [14]. Therefore, one may prove Theorem 1.6 by classifying the possible tangent cones if \(N=3\) or \(c_{\ell}=1\) (Theorem 4.15). Since the proof of Theorem 1.4, which is in the language of sets of finite perimeter, can be easily modified to include a full proof of Theorem 1.6, we provide these arguments for completeness and as an alternative to the approach in [11] via rectifiable chains.
Our last main result is a complete resolution of minimizers on the ball for small \(\delta\) and equal weights.
**Theorem 1.8** (Resolution for Small \(\delta\) on the Ball).: _Suppose that \(c_{\ell}=1\) for \(0\leq\ell\leq N\) and \(h\in BV(\partial B;\{1,\dots,N\})\). Then there exists \(\delta_{0}>0\), a function \(f(\delta)\to 0\) as \(\delta\to 0\), and \(r>0\), all depending on \(h\), such that if \(0<\delta<\delta_{0}\) and \(\mathcal{S}^{\delta}\) is a minimizer in (1.3), then \(|G^{\delta}|=\delta\) and there exists a minimizer \(\mathcal{S}^{0}\) among \(\mathcal{A}^{\hbar}_{0}\) such that_
\[\max\big{\{}\sup_{x\in S^{\delta}_{\ell}}\operatorname{dist}(x,S^{0}_{\ell}) \,,\sup_{x\in S^{0}_{\ell}}\operatorname{dist}(x,S^{\delta}_{\ell})\big{\}} \leq f(\delta)\quad\text{for $1\leq\ell\leq N$} \tag{1.8}\]
_and, denoting by \(\Sigma\) the set of interior and boundary triple junctions of \(\mathcal{S}^{0}\),_
\[\max\big{\{}\sup_{x\in G^{\delta}}\operatorname{dist}(x,\Sigma)\,,\sup_{x\in \Sigma}\operatorname{dist}(x,G^{\delta})\big{\}}\leq f(\delta) \tag{1.9}\]
_and for each \(x\in\Sigma\), \(B_{r}(x)\cap\partial G^{\delta}\) consists of three circle arcs of curvature \(\kappa=\kappa(\mathcal{S}^{\delta})\)._
**Remark 1.9** (Wetting of Singularities).: For the soap bubble capillarity analogue of (1.5) on \(B\),
\[\inf\{\mathcal{F}(\mathcal{S}):\mathcal{S}\in\mathcal{A}^{\hbar}_{\delta},\,S_{ \ell}\text{ open, }\operatorname{cl}S_{\ell}\cap\operatorname{cl}S_{m}\subset\{x\in\partial B:h \text{ jumps between }\ell,m\}\}\,, \tag{1.10}\]
we may also use Theorem 1.4 to approximate a minimizer in (1.3) by a sequence satisfying the restrictions in (1.10). Therefore, if \(\delta>0\) is small, a minimizing sequence for (1.10) converges to a
minimizer \(\mathcal{S}^{\delta}\) of (1.3), which in turn is close to a minimizer \(\mathcal{S}^{0}\) for the \(\delta=0\) problem. Furthermore, by Theorem 1.8, if \(\delta<\delta_{0}\) and the weights \(c_{\ell}\) are equal, each singularity of \(\mathcal{S}^{0}\) is "wetted" by a component of \(G^{\delta}\) bounded by three circular arcs; see Figure 1.1. Also, (1.9) shows that \(\Sigma\) coincides with the set of accumulation points of the "wet" regions \(G^{\delta}\) as \(\delta\to 0\). In the context of the Plateau problem in \(\mathbb{R}^{2}\), this equivalence has been conjectured in [10, Remark 1.7].
**Remark 1.10** (Triple Junctions for Vector Allen-Cahn).: Theorem 1.4 is used in a construction by E. Sandier and P. Sternberg of an entire solution \(U:\mathbb{R}^{2}\to\mathbb{R}^{2}\) to the system \(\Delta U=\nabla_{u}W(U)\) for a triple-well potential \(W\) without symmetry assumptions on the potential [11].
### Idea of proof
The outline to prove Theorems 1.1 and 1.4 can be summarized in two main steps: first, classifying the possible blow-ups at any interfacial point of a minimizer \(\mathcal{S}^{\delta}\), and; second, using one of the (a priori non-unique) blow-ups at \(x\) to resolve \(\mathcal{S}^{\delta}\) in a small neighborhood of \(x\). To demonstrate the ideas, we describe these steps for a minimizer \(\mathcal{S}^{\delta}\) for the problem (1.1) on \(\mathbb{R}^{2}\) at \(x=0\). For the classification of blow-ups, we use a blow-up version of the observation below (1.4) to show that no blow-up of any chamber \(S^{\delta}_{\ell}\) can be anything other than a halfspace. This of course differs from the usual blow-ups in two-dimensional cluster problems, in which three or more chambers can meet at a point.
Armed now with a list of the possible blow-ups at \(0\), which we do not yet know are unique, we must use them to resolve the minimizer in a small neighborhood of \(0\). In the case that there exists a blow-up coming from \(G^{\delta}\) and a single chamber \(S^{\delta}_{\ell}\), lower area density estimates on the remaining chambers imply that in a small ball \(B_{r}(0)\), \(S^{\delta}_{\ell^{\prime}}\cap B_{r}(0)=\emptyset\) for \(\ell\neq\ell^{\prime}\), so that \(\partial S^{\delta}_{\ell}\cap B_{r}(0)\) is regular by the classical theory for volume-constrained perimeter minimizers. The main hurdle is when the blow-up at \(0\) is two halfspaces coming from \(S^{\delta}_{\ell_{i}}\) for \(i=1,2\). In the classical regularity theory for planar clusters (see [25, Section 11] or [12, Corollary 4.8]), this would imply that on \(B_{r}(0)\), the interface must be an arc of constant curvature separating each \(S^{\delta}_{\ell_{i}}\cap B_{r}(0)\). Here, there is the possibility that \(0\in\partial G^{\delta}\) but \(G^{\delta}\) has density \(0\) at \(0\). This behavior cannot be detected at the blow-up level, although one suspects the interfaces near \(0\) should be two ordered graphs over a common line which coincide at \(0\) and possibly elsewhere as well. To prove this and thus complete the local resolution, we use the convergence along a sequence of blow-ups to a pair of halfspaces and the density estimates on the other chambers to locate a small rectangle \(Q=[-r,r]\times[-r,r]\) such that \(Q\subset S^{\delta}_{\ell_{1}}\cup S^{\delta}_{\ell_{2}}\cup G^{\delta}\) and \(\partial Q\cap\partial S^{\delta}_{\ell_{i}}=\{(-r,a_{i}),(r,b_{i})\}\) for some \(a_{1}\leq a_{2}\) and \(b_{1}\leq b_{2}\). At this point, since we have the desired graphicality on \(\partial Q\), we can combine a symmetrization inequality for sets which are graphical on the boundary of a cube (Lemma 2.3), the minimality of \(\mathcal{S}^{\delta}\), and the necessary conditions for equality in Lemma 2.3 to conclude that \(\partial S^{\delta}_{\ell_{i}}\cap Q\) are two ordered graphs.
### Organization of the paper
In Section 2, we recall some preliminary facts. Next, we prove the existence of minimizers in Section 3. Section 4 contains the proof of the existence and classification of blow-up cones at any interfacial point. In Sections 5 and 6, we prove Theorems 1.1 and 1.4 and Theorem 1.6, respectively. Finally, in Section 7, we prove Theorem 1.8.
### Acknowledgments
This work was supported by the NSF grant RTG-DMS 1840314. I am grateful to Etienne Sandier and Peter Sternberg for several discussions during the completion of this work and to Frank Morgan for valuable comments on the literature for such problems.
## 2. Notation and Preliminaries
### Notation
Throughout the paper, \(B_{r}(x)=\{y\in\mathbb{R}^{2}:|y-x|<r\}\). When \(x=0\), we set \(B_{R}:=B_{R}(0)\) and \(B=B_{1}(0)\). Also, for any Borel measurable \(U\), we set
\[\mathcal{F}(\mathcal{S};U)=\sum_{\ell=0}^{N}c_{\ell}P(S_{\ell};U)\,.\]
We will use the notation \(E^{(t)}\) for the points of Lebesgue density \(t\in[0,1]\).
We remark that since \(h\in BV(\partial B;\{1,\dots,N\})\), there exists a partition of \(\partial B\) into \(N\) pairwise disjoint sets \(\{A_{1},\dots,A_{N}\}\) such that \(h=\sum_{\ell=1}^{N}\ell\,1_{A_{\ell}}\), and each \(A_{\ell}\) is a finite union of pairwise disjoint arcs:
\[A_{\ell}:=\cup_{i=1}^{I_{\ell}}a_{i}^{\ell}\,. \tag{2.1}\]
For each \(1\leq\ell\leq N\) and \(1\leq i\leq I_{\ell}\), we let
\[c_{i}^{\ell} \tag{2.2}\]
be the chord that shares endpoints with \(a_{i}^{\ell}\). Finally, we call
\[C_{i}^{\ell} \tag{2.3}\]
the open circular segments (regions bounded by an arc and its corresponding chord) corresponding to the pair \((a_{i}^{\ell},c_{i}^{\ell})\).
### Preliminaries
Regarding the functional \(\mathcal{F}\), we observe that when \(\delta=0\),
\[\mathcal{F}(\mathcal{S})=\sum_{0\leq\ell<m\leq N}c_{\ell m}\mathcal{H}^{1}( \partial^{*}S_{\ell}\cap\partial^{*}S_{m})\,,\]
where \(c_{\ell m}:=c_{\ell}+c_{m}\), and the positivity of \(c_{\ell}\) for \(1\leq\ell\leq N\) is equivalent to the strict triangle inequalities
\[c_{\ell m}<c_{\ell i}+c_{im}\quad\forall\ell\neq m\neq i\neq\ell\,. \tag{2.4}\]
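The algebra behind this equivalence is immediate; we record it here for the reader's convenience (an added remark, not part of the original argument).

```latex
% Since c_{\ell m} := c_\ell + c_m,
c_{\ell i}+c_{i m}-c_{\ell m}=(c_\ell+c_i)+(c_i+c_m)-(c_\ell+c_m)=2c_i\,,
% so the inequalities in (2.4) are strict exactly when every coefficient that can
% appear as the intermediate index i is positive.
```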
We also note that for any \(h\in BV(\partial B;\{1,\dots,N\})\), the energy of any cluster \(\mathcal{S}\) satisfying the boundary condition (1.4) can be decomposed as
\[\mathcal{F}(\mathcal{S})=2\pi c_{0}+\sum_{\ell=1}^{N}c_{\ell} \mathcal{H}^{1}(A_{\ell})+\sum_{\ell=1}^{N}c_{\ell}P(S_{\ell};B)=:C(h)+ \mathcal{F}(\mathcal{S};B)\,, \tag{2.5}\]
where \(C(h)\) is a constant independent of \(\mathcal{S}\). Therefore, minimizing \(\mathcal{F}\) among \(\mathcal{A}_{\delta}^{h}\) for any \(\delta>0\) is equivalent to minimizing \(\mathcal{F}(\cdot;B)\), so we will often ignore the boundary term for the problem on the ball.
We now recall some facts regarding sets of finite perimeter. Unless otherwise stated, we will always adhere to the convention that among the Lebesgue representatives of a given set of finite perimeter \(E\), we are considering one that satisfies [12, Proposition 12.19]
\[\operatorname{spt}\left(\mathcal{H}^{1}\llcorner\partial^{*}E\right)=\partial E \tag{2.6}\]
and
\[\partial E=\{x:0<|E\cap B_{r}(x)|<\pi r^{2}\ \forall r>0\}\,. \tag{2.7}\]
We will need some facts regarding slicing sets of finite perimeter by lines or circles.
**Lemma 2.1** (Slicing sets of finite perimeter).: _Let \(u(x)=x\cdot\nu\) for some \(\nu\in\mathbb{S}^{1}\) or \(u(x)=|x-y|\) for some \(y\in\mathbb{R}^{2}\), and, for any set \(A\), let \(A_{t}\) denote \(A\cap\{u=t\}\). Suppose that \(E\subset\mathbb{R}^{2}\) is a set of finite perimeter._
1. _For every_ \(t\in\mathbb{R}\)_, there exist traces_ \(E_{t}^{+}\)_,_ \(E_{t}^{-}\subset\{u=t\}\) _such that_ \[\int_{\{u=t\}}|\mathbf{1}_{E_{t}^{+}}-\mathbf{1}_{E_{t}^{-}}|\,d\mathcal{H}^{1 }=P(E;\{u=t\})\,.\] (2.8)
2. _Letting_ \(S=\{x:x\cdot\nu^{\perp}\in[a,b]\}\) _for compact_ \([a,b]\) _when_ \(u=x\cdot\nu\) _or_ \(S=\mathbb{R}^{2}\) _when_ \(u=|x-y|\)_,_ \[\lim_{s\downarrow t}\int_{\{u=s\}\cap S}\mathbf{1}_{E_{s}^{-}}\,d\mathcal{H}^{1}=\int_{\{u=t\}\cap S}\mathbf{1}_{E_{t}^{+}}\,d\mathcal{H}^{1}\,.\] (2.9)
_._
3. _For almost every_ \(t\in\mathbb{R}\)_,_ \(E_{t}^{+}=E_{t}^{-}=E_{t}\) _up to an_ \(\mathcal{H}^{1}\)_-null set,_ \(E_{t}\) _is a set of finite perimeter in_ \(\{u=t\}\)_, and_ \[\mathcal{H}^{0}((\partial^{*}E)_{t}\Delta\partial^{*}_{\{u=t\}}E_{t})=0\,.\] (2.10)
Proof.: The first item can be found in [10, (2.15)]. We prove the second item when \(u=\vec{e}_{1}\cdot x\); the proof with any other \(\nu\) or when \(u=|x-y|\) is similar. By the divergence theorem [10, Theorem 2.10],
\[0 =\int_{(t,s)\times(a,b)\cap E}\operatorname{div}\vec{e}_{1}\] \[=\int_{\{u=s\}\cap S}\mathbf{1}_{E_{t}^{-}}\,d\mathcal{H}^{1}- \int_{\{u=t\}\cap S}\mathbf{1}_{E_{t}^{+}}\,d\mathcal{H}^{1}+\int_{\partial^ {*}E\cap(t,s)\times(a,b)}\vec{e}_{1}\cdot\nu_{E}\,d\mathcal{H}^{1}\] \[\qquad+\int_{\partial^{*}(E\cap(t,s)\times(a,b))\cap(t,s)\times \{a,b\}}\vec{e}_{1}\cdot\nu_{E\cap(t,s)\times(a,b)}\,d\mathcal{H}^{1}\,.\]
Now the last term on the right hand side is bounded by \(2(s-t)\) and vanishes as \(s\to t\). Also, the third term on the right hand side is bounded by \(P(E;(t,s)\times(a,b))\), which vanishes as \(s\to t\) since \((t,s)\times(a,b)\) is a decreasing family of bounded open sets whose intersection is empty and \(B\to P(E;B)\) is a Radon measure. The limit (2.9) follows from letting \(s\) decrease to \(t\).
Moving on to \((iii)\), we recall that for \(\mathcal{H}^{1}\)-a.e. \(x\in\{u=t\}\cap E_{t}^{+}\),
\[1=\lim_{r\to 0}\frac{|B_{r}(x)\cap E\cap\{u>t\}|}{\pi r^{2}/2} \tag{2.11}\]
and similarly for \(E_{t}^{-}\)[10, 2.13]. Next, by (2.8),
\[\mathcal{H}^{1}(E_{t}^{+}\Delta E_{t}^{-})=0\quad\text{ if }P(E;\{u=t\})=0\,, \tag{2.12}\]
which holds for all but at most countably many \(t\). Now, for any \(x\in\{u=t\}\) that is also a Lebesgue point of \(E\),
\[1=\lim_{r\to 0}\frac{|B_{r}(x)\cap E|}{\pi r^{2}}=\lim_{r\to 0}\frac{|B_{r}(x) \cap E\cap\{u>t\}|}{\pi r^{2}/2}=\lim_{r\to 0}\frac{|B_{r}(x)\cap E\cap\{u<t\}|}{ \pi r^{2}/2}\,. \tag{2.13}\]
Since \(\mathcal{L}^{2}\)-a.e. \(x\in E\) is a Lebesgue point, we conclude from (2.11), (2.12), and (2.13) that \(\mathcal{H}^{1}(E_{t}\Delta E_{t}^{\pm})=0\) for \(\mathcal{H}^{1}\)-a.e. \(t\). Lastly, (2.10) when slicing by lines can be found in [11, Theorem 18.11] for example. The case of slicing by circles follows from the case of lines and the fact that smooth diffeomorphisms preserve reduced boundaries [10, Lemma A.1].
We will use the following fact regarding the intersection of a set of finite perimeter with a convex set.
**Lemma 2.2**.: _If \(E\) is a bounded set of finite perimeter and \(K\) is a convex set, then_
\[P(E\cap K)\leq P(E)\,,\]
_with equality if and only if \(|E\setminus K|=0\)._
Proof.: The argument is based on the facts that the intersection of such \(E\) with a halfspace \(H\) decreases perimeter (with equality if and only if \(|E\setminus H|=0\)) and that any convex set is an intersection of halfspaces. We omit the details.
Our last preliminary regarding sets of finite perimeter is a symmetrization inequality, which for convenience, we state in the setting it will be employed later.
**Lemma 2.3**.: _Let \(Q^{\prime}=[t_{1},t_{2}]\times[-1,1]\). Suppose that \(E\subset Q^{\prime}\) is a set of finite perimeter such that \((t_{1},t_{2})\times(-1,-1/4)\subset E^{(1)}\subset(t_{1},t_{2})\times(-1,1/4)\) and, for some \(a_{1},a_{2}\in[-1/4,1/4]\),_
\[E^{+}_{t_{1}}=[-1,a_{1}]\,,\quad E^{-}_{t_{2}}=[-1,a_{2}]\quad\text{up to $\mathcal{H}^{1}$-null sets}\,, \tag{2.14}\]
_where \(E^{+}_{t_{1}}\), \(E^{-}_{t_{2}}\), viewed as subsets of \(\mathbb{R}\), are the traces from the right and left, respectively, slicing by \(u(x)=x\cdot e_{1}\). Then the set \(E^{h}=\{(x_{1},x_{2}):-1\leq x_{2}\leq\mathcal{H}^{1}(E_{x_{1}})-1\}\) satisfies \(|E^{h}|=|E|\),_
\[(E^{h})^{+}_{t_{1}}=[-1,a_{1}]\,,\quad(E^{h})^{-}_{t_{2}}=[-1,a_{2}]\quad\text {up to $\mathcal{H}^{1}$-null sets} \tag{2.15}\]
_and_
\[P(E^{h};\operatorname{int}Q^{\prime})\leq P(E;\operatorname{int}Q^{\prime})\,. \tag{2.16}\]
_Moreover, if equality holds in (2.16), then for every \(t\in(t_{1},t_{2})\), \((E^{(1)})_{t}\) is an interval._
**Remark 2.4**.: The superscript \(h\) is for "hypograph."
Proof.: The preservation of area \(|E^{h}|=|E|\) is immediate by Fubini's theorem, so we begin with the first equality in (2.15), and the second is analogous. We recall from (2.11) that for \(\mathcal{H}^{1}\)-a.e. \(x\in\{t_{1}\}\times[-1,1]\cap(E^{h})^{+}_{t_{1}}\),
\[1=\lim_{r\to 0}\frac{|B_{r}(x)\cap E^{h}\cap Q^{\prime}|}{\pi r^{2}/2}\,. \tag{2.17}\]
From this property and the fact that the vertical slices of \(E^{h}\) are intervals of height at least \(3/4\), it follows that \((E^{h})^{+}_{t_{1}}\) is \(\mathcal{H}^{1}\)-equivalent to an interval \([-1,a]\) for some \(a\geq-1/4\). Furthermore, \(a=a_{1}\) is a consequence of (2.9) and the fact that the rearrangement \(E^{h}\) preserves the \(\mathcal{H}^{1}\)-measure of each vertical slice:
\[a_{1}=\int_{\{t_{1}\}\times[-1,1]}\mathbf{1}_{E^{+}_{t_{1}}}\, d\mathcal{H}^{1} =\lim_{s\downarrow t_{1}}\int_{\{s\}\times[-1,1]}\mathbf{1}_{E^{-} _{s}}\,d\mathcal{H}^{1}\] \[=\lim_{s\downarrow t_{1}}\int_{\{s\}\times[-1,1]}\mathbf{1}_{(E^{ h})^{-}_{s}}\,d\mathcal{H}^{1}=\int_{\{t_{1}\}\times[-1,1]}\mathbf{1}_{(E^{h})^{+}_{t_ {1}}}\,d\mathcal{H}^{1}=a\,.\]
Moving on to (2.16), let us consider the set \(E^{r}\), the reflection of \(E\) over \(\{x_{2}=-1\}\), and \(G=E\cup E^{r}\). We denote by the superscript \(s\) the Steiner symmetrization of a set over \(\{x_{2}=-1\}\). We note that
\[G^{s}\cap Q^{\prime}=E^{h}\,.\]
Since \((t_{1},t_{2})\times(-1,-1/4)\subset E^{(1)}\subset(t_{1},t_{2})\times(-1,1/4)\) and Steiner symmetrizing decreases perimeter, we therefore have
\[P(E;\operatorname{int}Q^{\prime})=\frac{P(G;\{x_{1}\in(t_{1},t_{2})\})}{2} \geq\frac{P(G^{s};\{x_{1}\in(t_{1},t_{2})\})}{2}=P(G^{s};\operatorname{int}Q^ {\prime})=P(E^{h};\operatorname{int}Q^{\prime})\,,\]
Figure 2.1. Both the sets \(E\) and \(E^{h}\) have the same trace on \(\partial Q^{\prime}\), and \(P(E^{h};\operatorname{int}Q^{\prime})<P(E;\operatorname{int}Q^{\prime})\) because \(E\) has vertical slices which are not intervals.
which is (2.16). Furthermore, equality can only hold if almost every vertical slice of \(G\) is an interval, which in turn implies that \(E_{t}\) is an interval for almost every \(t\in(t_{1},t_{2})\). By [14, Lemma 4.12], every slice \((E^{(1)})_{t}\) is an interval.
We conclude the preliminaries with a lemma regarding of the convergence of convex sets.
**Lemma 2.5**.: _If \(\{C_{n}\}\) is a sequence of equibounded, compact, and convex sets in \(\mathbb{R}^{n}\), then there exists compact and convex \(C\subset\mathbb{R}^{n}\) such that \(\mathbf{1}_{C_{n}}\to\mathbf{1}_{C}\) almost everywhere and_
\[\max\big{\{}\sup_{x\in C_{n}}\operatorname{dist}(x,C)\,,\sup_{x\in C} \operatorname{dist}(x,C_{n})\big{\}}\to 0\,. \tag{2.18}\]
Proof.: By the Arzela-Ascoli Theorem, there exists a compact set \(C\subset\mathbb{R}^{n}\) such that \(\operatorname{dist}(\cdot,C_{n})\to\operatorname{dist}(\cdot,C)\) uniformly. Therefore, \(C_{n}\to C\) in the Kuratowski sense [13, Section 2], \(C\) is convex, and \(\mathbf{1}_{C_{n}}\to\mathbf{1}_{C}\) almost everywhere [13, Remark 2.1]. Since \(C_{n}\) are equibounded and \(C\) is compact, the Kuratowski convergence is equivalent to Hausdorff convergence, which is (2.18).
## 3. Existence of Minimizers
First we establish the existence of minimizers for the problem (1.3) on the ball. A byproduct of the proof is a description of minimizers on each of the circular segments from (2.3); see Fig. 3.1.
**Theorem 3.1** (Existence on the ball).: _For any \(\delta\geq 0\) and \(h\in BV(\partial B;\{1,\ldots,N\})\), there exists a minimizer of \(\mathcal{F}\) among the class \(\mathcal{A}^{h}_{\delta}\). Moreover, any minimizer \(\mathcal{S}^{\delta}\) for \(\delta\geq 0\) satisfies_
\[\cup_{i=1}^{I_{\ell}}C_{i}^{\ell}\subset S_{\ell}^{(1)}\quad\text{for each }1\leq\ell\leq N\,. \tag{3.1}\]
Proof.: The proof is divided into two steps. The closed convex sets
\[K_{\ell}:=\operatorname{cl}\left(B\setminus(\cup_{i=1}^{I_{\ell}}C_{i}^{\ell })\right),\quad 1\leq\ell\leq N\]
will be used throughout.
_Step one_: First we show that given any \(\mathcal{S}\in\mathcal{A}^{h}_{\delta}\), the cluster \(\tilde{\mathcal{S}}\) defined via
\[\tilde{S}_{\ell}:=\Big{(}S_{\ell}\cap\bigcap_{j\neq\ell}K_{j}\Big{)}\cup \bigcup_{i=1}^{I_{\ell}}C_{i}^{\ell}\quad 1\leq\ell\leq N\,,\qquad\tilde{S}_{0}=B^{c} \,,\qquad\tilde{G}=(\tilde{S}_{0}\cup\cdots\cup\tilde{S}_{N})^{c}\]
satisfies \(\tilde{\mathcal{S}}\in\mathcal{A}^{h}_{\delta}\) and
\[\mathcal{F}(\tilde{\mathcal{S}})\leq\mathcal{F}(\mathcal{S})\,, \tag{3.2}\]
with equality if and only if
\[\cup_{i=1}^{I_{\ell}}C_{i}^{\ell}\subset S_{\ell}^{(1)}\qquad\forall 1\leq \ell\leq N\,. \tag{3.3}\]
The proof relies on Lemma 2.2, which states that if \(E\) is a set of finite perimeter, \(|E|<\infty\), and \(K\) is a closed convex set, then \(E\cap K\) is a set of finite perimeter and
\[P(E\cap K)\leq P(E)\,, \tag{3.4}\]
with equality if and only if \(|E\setminus K|=0\). For given \(\mathcal{S}\in\mathcal{A}_{\delta}^{h}\), let us first consider the cluster \(\mathcal{S}^{\prime}\), where
\[S_{1}^{\prime}:=S_{1}\cup\bigcup_{i=1}^{I_{1}}C_{i}^{1}\,,\ S_{\ell}^{\prime}: =S_{\ell}\cap K_{1}\,,\ 2\leq\ell\leq N\qquad S_{0}^{\prime}=B^{c}\,,\qquad G^{ \prime}=(S_{0}^{\prime}\cup\cdots\cup S_{N}^{\prime})^{c}\,.\]
By the trace condition (1.4) and the definition of \(S_{\ell}^{\prime}\),
\[S_{\ell}^{\prime}\cap\partial B=\{x\in\partial B:h(x)=\ell\}\text{ for }1\leq \ell\leq N\text{ in the sense of traces}\,. \tag{3.5}\]
Also, since \(G^{\prime}=B\setminus\cup_{\ell}S_{\ell}^{\prime}\) satisfies
\[|G^{\prime}|=|(B\cap K_{1})\setminus\cup_{\ell}S_{\ell}|\leq|B\setminus\cup_ {\ell}S_{\ell}|\leq\delta\,,\]
we have
\[\mathcal{S}^{\prime}\in\mathcal{A}_{\delta}^{h}\,. \tag{3.6}\]
Now for \(2\leq\ell\leq N\), we use (3.4) to estimate
\[c_{\ell}P(S_{\ell})\geq c_{\ell}P(S_{\ell}\cap K_{1})=c_{\ell}P(S_{\ell}^{ \prime})\,. \tag{3.7}\]
For \(\ell=1\), we first recall the fact for any set of finite perimeter \(E\),
\[P(E;B)=P(E^{c};B)\,. \tag{3.8}\]
Applying (3.8) with \(S_{1}\), then (1.4), (3.4), and (3.5), and finally (3.8) with \(S_{1}^{\prime}\), we find that
\[P(S_{1};B) =P((\cup_{\ell=2}^{N}S_{\ell}\cup G);B)\] \[=P(\cup_{\ell=2}^{N}S_{\ell}\cup G)-\mathcal{H}^{1}(\cup_{\ell=2 }^{N}A_{\ell})\] \[\geq P((\cup_{\ell=2}^{N}S_{\ell}\cup G)\cap K_{1})-\mathcal{H}^{1} (\cup_{\ell=2}^{N}A_{\ell})\] \[=P(\cup_{\ell=2}^{N}S_{\ell}^{\prime}\cup G^{\prime};B)\] \[=P(S_{1}^{\prime};B)\,. \tag{3.9}\]
Adding \(\mathcal{H}^{1}(A_{1})\) to (3.9), multiplying by \(c_{1}\), and combining with (3.7) gives
\[\mathcal{F}(\mathcal{S})=\sum_{\ell=0}^{N}c_{\ell}P(S_{\ell})\geq\sum_{\ell=0} ^{N}c_{\ell}P(S_{\ell}^{\prime})=\mathcal{F}(\mathcal{S}^{\prime})\,, \tag{3.10}\]
and so we have a new cluster \(\mathcal{S}^{\prime}\), belonging to \(\mathcal{A}_{\delta}^{h}\) by (3.6), that satisfies
\[\cup_{i=1}^{I_{1}}C_{i}^{1}\subset(S_{1}^{\prime})^{(1)}\,. \tag{3.11}\]
Repeating this argument for \(2\leq\ell\leq N\) yields \(\tilde{\mathcal{S}}\in\mathcal{A}_{\delta}^{h}\) satisfying (3.2) as desired. Turning now towards the proof that equality in (3.2) implies (3.3), we prove that the containment for \(\ell=1\) in (3.3) is entailed by equality; the other \(N-1\) implications are analogous. If (3.2) holds as an equality, then (3.7) and (3.9) must hold as equalities as well. But by the characterization of equality in (3.4), this can only hold if \((\cup_{\ell=2}^{N}S_{\ell}\cup G)\cap K_{1}=\cup_{\ell=2}^{N}S_{\ell}\cup G\), which yields the first containment in (3.3).
Finally, let us also remark that an immediate consequence of this step is that if a minimizer of \(\mathcal{F}\) exists among \(\mathcal{A}_{\delta}^{h}\), then (3.1) must hold. It remains then to prove the existence of a minimizer.
_Step two_: Let \(\{\mathcal{S}^{m}\}_{m}\) be a minimizing sequence of clusters for \(\mathcal{F}\) among \(\mathcal{A}_{\delta}^{h}\) (the infimum is finite). Due to the results of step one, we can modify our minimizing sequence so that
\[\cup_{i=1}^{I_{\ell}}C_{i}^{\ell}\subset(S_{\ell}^{m})^{(1)}\quad\forall m\,, \ \forall 1\leq\ell\leq N \tag{3.12}\]
while also preserving the asymptotic minimality of the sequence. By compactness in \(BV\) and (3.12), after taking a subsequence, we obtain a limiting cluster \(\mathcal{S}\) that satisfies the trace condition (1.4), and, by lower-semicontinuity in \(BV\), minimizes \(\mathcal{F}\) among \(\mathcal{A}_{\delta}^{h}\).
**Remark 3.2** (Existence of minimizer for a functional with boundary energy).: One might also consider the minimizing the energy
\[\mathcal{F}(\mathcal{S};B)+\sum_{m=1}^{N}\sum_{\ell\neq m}(c_{\ell}+c_{m}) \mathcal{H}^{1}(\partial^{*}S_{\ell}\cap\{h=m\})\,,\]
which penalizes deviations from \(h\) rather than enforcing a strict trace condition, among the class
\[\{(S_{0},\dots,S_{N},G):|S_{\ell}\cap S_{m}|=0\text{ if }\ell\neq m,\,|G|\leq \delta,\,B^{c}=S_{0}\}\,.\]
For this problem, the same convexity-based argument as in step one of the proof of Theorem 3.1 shows that, in fact, minimizers exist and attain the boundary values \(h\) \(\mathcal{H}^{1}\)-a.e. on \(\partial B\). When \(\delta=0\), this problem arises as the \(\Gamma\)-limit of a Modica-Mortola problem with a Dirichlet condition [1].
Next, we prove existence for the problem on all of space. Since we are in the plane, the proof utilizes the observation that perimeter and diameter scale the same in \(\mathbb{R}^{2}\). Existence should also hold in \(\mathbb{R}^{n}\) for \(n\geq 3\) using the techniques of [1].
**Theorem 3.3** (Existence on \(\mathbb{R}^{2}\)).: _For any \(\mathbf{m}\in(0,\infty)^{N}\), there exists \(R=R(\mathbf{m})\) such that for all \(\delta\geq 0\), there exists a minimizer of \(\mathcal{F}\) among the class \(\mathcal{A}_{\delta}^{\mathbf{m}}\) satisfying \(\mathbb{R}^{2}\setminus B_{R}\subset S_{0}\)._
Proof.: Let \(\{\mathcal{S}^{j}\}_{j}\subset\mathcal{A}_{\delta}^{\mathbf{m}}\) be a minimizing sequence with remnants \(G^{j}\). The existence of a minimizer is straightforward if we can find \(R>0\) such that, up to modifications preserving the asymptotic minimality, \(B^{c}_{R}\subset S_{0}^{j}\) for each \(j\). We introduce the sets of finite perimeter \(E_{j}=\cup_{\ell=1}^{N}S_{\ell}^{j}\cup G^{j}\), which satisfy \(P(E_{j})\leq\max\{c_{\ell}^{-1}\}\mathcal{F}(\mathcal{S}^{j})\) and \(\partial^{*}E_{j}\subset\cup_{\ell=1}^{N}\partial^{*}S_{\ell}^{j}\). Decomposing \(E_{j}\) into its indecomposable components \(\{E_{k}^{j}\}_{k=1}^{\infty}\)[1, Theorem 1], we have \(\mathcal{H}^{1}(\partial^{*}E_{k}^{j}\cap\partial^{*}E_{k^{\prime}}^{j})=0\) for \(k\neq k^{\prime}\). Therefore, for the clusters \(\mathcal{S}_{k}^{j}=((E_{k}^{j})^{c},S_{1}^{j}\cap E_{k}^{j},\dots,S_{N}^{j} \cap E_{k}^{j},G^{j}\cap E_{k}^{j})\),
\[\mathcal{F}(\mathcal{S}^{j})=\sum_{k=1}^{\infty}\mathcal{F}(\mathcal{S}_{k}^{j })\,.\]
Furthermore, by the indecomposability of any \(E_{k}^{j}\), there exists \(x_{k}^{j}\in\mathbb{R}^{2}\) such that
\[(G^{j}\cap E_{k}^{j})\cup\cup_{\ell=1}^{N}S_{\ell}^{j}\cap E_{k}^{j}\subset E _{k}^{j}\subset B_{P(E_{k}^{j})}(x_{k}^{j})\,.\]
By the uniform energy bound along the minimizing sequence and this containment, for any \(j\), we may translate each \(\mathcal{S}_{k}^{j}\) so that the resulting sequence of clusters satisfies \(B^{c}_{R}\subset S_{0}^{j}\). Finally, we note that \(R\leq 2\max\{c_{\ell}^{-1}\}\inf_{\mathcal{A}_{\delta}^{\mathbf{m}}}\mathcal{F}\), and since that infimum is bounded independently of \(\delta\), it depends only on \(\mathbf{m}\).
## 4. Existence and Classification of Blow-up Cones
In this section, we prove the existence of blow-up cones for minimizers and classify the possibilities. Since the proofs are mostly modified versions of standard arguments, we will often be brief in this section and describe the main ideas and adjustments. Also, we do not include any arguments for the case \(\mathcal{A}_{0}^{\mathbf{m}}\) as that regularity is known in \(\mathbb{R}^{2}\)[21, 10].
### Perimeter-almost minimizing clusters
Lemma 4.1 allows us to test minimality of \(\mathcal{S}^{\delta}\) against competitors that do not satisfy the constraint required for membership in \(\mathcal{A}^{h}_{\delta}\) or \(\mathcal{A}^{\mathbf{m}}_{\delta}\).
**Lemma 4.1**.: _If \(\mathcal{S}^{\delta}\) is a minimizer for \(\mathcal{F}\), then there exist \(r_{0}>0\) and \(0\leq\Lambda<\infty\), both depending on \(\mathcal{S}^{\delta}\), with the following property:_
1. _if_ \(\delta>0\)_,_ \(\mathcal{S}^{\delta}\) _minimizes_ \(\mathcal{F}\) _among_ \(\mathcal{A}^{h}_{\delta}\)_,_ \(\mathcal{S}^{\prime}\) _satisfies the trace condition (_1.4_), and_ \(S^{\delta}_{\ell}\Delta S^{\prime}_{\ell}\subset B_{r}(x)\) _for_ \(r<r_{0}\) _and_ \(1\leq\ell\leq N\)_, then, setting_ \(G^{\delta}=B\setminus\cup_{\ell}S_{\ell}\) _and_ \(G^{\prime}=B\setminus\cup_{\ell}S^{\prime}_{\ell}\)_,_ \[\mathcal{F}(\mathcal{S}^{\delta})\leq\mathcal{F}(\mathcal{S}^{\prime})+\Lambda \big{|}|G^{\delta}|-|G^{\prime}|\big{|}\,;\] (4.1)
2. _if_ \(\delta>0\)_,_ \(\mathcal{S}^{\delta}\) _minimizes_ \(\mathcal{F}\) _among_ \(\mathcal{A}^{\mathbf{m}}_{\delta}\) _and_ \(\mathcal{S}^{\prime}\) _satisfies_ \(S^{\delta}_{\ell}\Delta S^{\prime}_{\ell}\subset B_{r}(x)\) _for_ \(r<r_{0}\) _and_ \(1\leq\ell\leq N\)_, then_ \[\mathcal{F}(\mathcal{S}^{\delta})\leq\mathcal{F}(\mathcal{S}^{\prime})+\Lambda \sum_{\ell=1}^{N}\big{|}|S^{\delta}_{\ell}|-|S^{\prime}_{\ell}|\big{|}\,.\] (4.2)
Proof.: For \((i)\), since we do not have to fix the areas of each chamber but only the remnant set, the proof is an application of the standard volume-fixing variations construction for sets of finite perimeter along the lines of [15, Lemma 17.21 and Example 21.3]. For \((ii)\), we use the volume-fixing variations idea for clusters originating in [1, VI.10-12]. More specifically, by considering the \((N+1)\)-cluster \((S^{\delta}_{0},\ldots,S^{\delta}_{N},G^{\delta})\), (4.2) follows directly from using [15, Equations (29.80)-(29.82)] on this \((N+1)\)-cluster to modify \(\mathcal{S}^{\prime}\) so that its energy may be tested against \(\mathcal{S}^{\delta}\).
### Preliminary regularity when \(\delta>0\)
Density estimates and regularity along \((G^{\delta})^{1/2}\cap(S^{\delta}_{\ell})^{1/2}\) can be derived from Lemma 4.1.
**Lemma 4.2** (Infiltration Lemma for \(\delta>0\)).: _If \(\mathcal{S}^{\delta}\) is a minimizer for \(\mathcal{F}\) among \(\mathcal{A}^{h}_{\delta}\) or \(\mathcal{A}^{\mathbf{m}}_{\delta}\) for \(\delta>0\), then there exist constants \(\varepsilon_{0}>0\) and \(r_{*}>0\) with the following property:_
_if \(x\in\operatorname{cl}B\) when \(\mathcal{S}^{\delta}\in\mathcal{A}^{h}_{\delta}\) or \(x\in\mathbb{R}^{2}\) when \(\mathcal{S}^{\delta}\in\mathcal{A}^{\mathbf{m}}_{\delta}\), \(r<r_{*}\), \(0\leq\ell\leq N\), and_
\[|S^{\delta}_{\ell}\cap B_{r}(x)|\leq\varepsilon_{0}r^{2}\,, \tag{4.3}\]
_then_
\[|S^{\delta}_{\ell}\cap B_{r/4}(x)|=0\,. \tag{4.4}\]
Proof.: We prove the lemma for the \(\mathcal{A}^{h}_{\delta}\) case in steps one and two. The case for \(\mathcal{A}^{\mathbf{m}}_{\delta}\) is the same except that one uses (4.2) instead of (4.1) when testing minimality in (4.14) below.
_Step one_: In the first step, we show that there exists \(\varepsilon(h)>0\) such that if \(x\in\operatorname{cl}B\), \(r<1\), and
\[|S^{\delta}_{\ell}\cap B_{r}(x)|\leq\varepsilon r^{2}\quad\text{ for some }1\leq\ell\leq N\,, \tag{4.5}\]
for a minimizer among \(\mathcal{A}^{h}_{\delta}\), then
\[B_{r/2}(x)\cap\{h=\ell\}=\emptyset\,. \tag{4.6}\]
If \(B_{r}(x)\cap\partial B=\emptyset\), (4.6) is immediate, so we may as well assume in addition that
\[B_{r}(x)\cap\partial B\neq\emptyset\,. \tag{4.7}\]
In order to choose \(\varepsilon\), we recall the inclusion (3.1) from Theorem 3.1, which allows us to pick \(\varepsilon\) small enough (independent of \(\delta\) or the particular minimizer) so that if \(y\in\{h=\ell\}\), then
\[\inf_{0<r<1}\frac{|S^{\delta}_{\ell}\cap B_{r}(y)|}{r^{2}}>4\varepsilon\,. \tag{4.8}\]
Now if \(B_{r}(x)\) satisfies (4.5)-(4.7), we claim that
\[B_{r/2}(x)\cap\{h=\ell\}=\emptyset\,, \tag{4.9}\]
which is (4.6). Indeed, if (4.9) did not hold, then we could find \(y\in B_{r/2}(x)\) such that \(h(y)=\ell\), in which case by (4.8),
\[\frac{|S_{\ell}^{\delta}\cap B_{r/2}(y)|}{r^{2}/4}>4\varepsilon\,. \tag{4.10}\]
But \(B_{r/2}(y)\subset B_{r}(x)\), so that (4.10) implies \(|S_{\ell}^{\delta}\cap B_{r}(x)|>\varepsilon r^{2}\), which contradicts our assumption (4.5).
_Step two_: Let \(\varepsilon_{0}<\varepsilon\) and \(r_{*}<1\) be positive constants to be specified later, and suppose that (4.3) holds for some \(1\leq\ell\leq N\) and \(x\in\operatorname{cl}B\) with \(r<r_{*}\). We set \(m(r)=|S_{\ell}^{\delta}\cap B_{r}(x)|\), so that for almost every \(r\), the coarea formula gives
\[m^{\prime}(r)=\mathcal{H}^{1}((S_{\ell}^{\delta})^{(1)}\cap \partial B_{r}(x))=\mathcal{H}^{1}((S_{\ell}^{\delta})^{(1)}\cap\partial B_{r }(x)\cap B)\,. \tag{4.11}\]
By the conclusion (4.6) of step one,
\[B_{r/2}(x)\cap\{h=\ell\}=\emptyset\,. \tag{4.12}\]
Therefore, for \(s<r/2\),
\[(S_{\ell}^{\delta}\setminus B_{s}(x))\cap\partial B=\{y\in \partial B:h(y)=\ell\}\quad\text{in the sense of traces}\,. \tag{4.13}\]
In particular, removing \(B_{s}(x)\) from \(S_{\ell}^{\delta}\) does not disturb the trace condition (1.4). Then we may apply (4.1) from Lemma 4.1, yielding for almost every \(s<r/2\)
\[\mathcal{F}(\mathcal{S}^{\delta}) \leq\mathcal{F}(B^{c},S_{1}^{\delta},\ldots,S_{\ell}^{\delta} \setminus B_{s}(x),\ldots,S_{N}^{\delta},G^{\delta}\cup(S_{\ell}^{\delta} \cap B_{s}(x)))+\Lambda|S_{\ell}^{\delta}\cap B_{s}(x)|\] \[=\mathcal{F}(\mathcal{S}^{\delta})-c_{\ell}P(S_{\ell}^{\delta};B_{s}(x ))+c_{\ell}\mathcal{H}^{1}((S_{\ell}^{\delta})^{(1)}\cap\partial B_{s}(x))+\Lambda| S_{\ell}^{\delta}\cap B_{s}(x)|\,; \tag{4.14}\]
in the second line we have used the formula
\[P(S_{\ell}^{\delta}\setminus B_{s}(x);B)=P(S_{\ell}^{\delta};B \setminus\operatorname{cl}B_{s}(x))+\mathcal{H}^{1}((S_{\ell}^{\delta})^{(1) }\cap\partial B_{s}(x))\,,\]
which holds for all but those countably many \(s\) with \(\mathcal{H}^{1}(\partial^{*}S_{\ell}^{\delta}\cap\partial B_{s}(x))>0\). After rearranging (4.14) and using the isoperimetric inequality to obtain
\[2c_{\ell}\pi^{1/2}m(s)^{1/2}\leq 2c_{\ell}m^{\prime}(s)+\Lambda m(s)\,,\]
we may reabsorb the term \(\Lambda m(s)\) onto the left hand side and integrate to obtain the requisite decay on \(m\).
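For the reader's convenience, here is one way to carry out that integration; the particular smallness thresholds below are illustrative choices made for this sketch rather than part of the statement of the lemma. Since \(m(s)\leq m(r)\leq\varepsilon_{0}r^{2}\) for \(s<r\), we have \(\Lambda m(s)\leq\Lambda\varepsilon_{0}^{1/2}r_{*}\,m(s)^{1/2}\), so if \(\varepsilon_{0}\) and \(r_{*}\) are chosen so small that \(\Lambda\varepsilon_{0}^{1/2}r_{*}\leq\min_{\ell}c_{\ell}\,\pi^{1/2}\), the term \(\Lambda m(s)\) may be absorbed into the left hand side, leaving
\[\frac{m^{\prime}(s)}{2m(s)^{1/2}}\geq\frac{\pi^{1/2}}{4}\quad\text{for a.e. }s<r\text{ with }m(s)>0\,.\]
If \(m(r/4)>0\), then, since \(m\) is nondecreasing, integrating this inequality over \((r/4,r/2)\) gives \(m(r/2)^{1/2}\geq\pi^{1/2}r/16\), hence \(m(r)\geq\pi r^{2}/256\), which is incompatible with (4.3) as soon as \(\varepsilon_{0}<\pi/256\). Therefore \(m(r/4)=0\), which is (4.4).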
**Corollary 4.3** (Regularity along \((G^{\delta})^{1/2}\cap(S_{\ell}^{\delta})^{1/2}\)).: _If \(\delta>0\) and, for a minimizer \(\mathcal{S}^{\delta}\in\mathcal{A}_{\delta}^{h}\) or \(\mathcal{S}^{\delta}\in\mathcal{A}_{\delta}^{\mathbf{m}}\) and point \(x\in B\) or \(x\in\mathbb{R}^{2}\), respectively, there exist \(r_{j}\to 0\) and \(\ell\) such that_
\[1=\lim_{j\to\infty}\frac{|(G^{\delta}\cup S_{\ell}^{\delta}) \cap B_{r_{j}}(x)|}{\pi r_{j}^{2}}\,, \tag{4.15}\]
_then for large \(j\), \(\partial G^{\delta}\cap\partial S_{\ell}^{\delta}\cap B_{r_{j}}(x)\) is an arc of constant curvature and \(S_{\ell^{\prime}}^{\delta}\cap B_{r_{j}}(x)=\emptyset\) for \(\ell^{\prime}\neq\ell\)._
Proof.: By our assumption (4.15) and the infiltration lemma, for some \(j\) large enough, \(B_{r_{j}}(x)\subset S_{\ell}^{\delta}\cup G^{\delta}\), in which case the classical regularity theory for volume-constrained minimizers of perimeter gives the conclusion.
**Corollary 4.4** (Density Estimates).: _If \(\mathcal{S}^{\delta}\) minimizes \(\mathcal{F}\) among \(\mathcal{A}_{\delta}^{h}\) or \(\mathcal{A}_{\delta}^{\mathbf{m}}\) for some \(\delta>0\), then there exist \(0<\alpha_{1}\), \(\alpha_{2}<1\) and \(r_{**}>0\) such that if \(x\in\partial S_{\ell}^{\delta}\), then for all \(r<r_{**}\),_
\[\alpha_{1}\pi r^{2} \leq|S_{\ell}^{\delta}\cap B_{r}(x)| \leq(1-\alpha_{1})\pi r^{2} \tag{4.16}\] \[P(S_{\ell}^{\delta};B_{r}(x)) \leq\alpha_{2}r\,. \tag{4.17}\]
_Also, \(\mathcal{H}^{1}(\partial S_{\ell}^{\delta}\setminus\partial^{*}S_{\ell}^{ \delta})=0\) and each \((S_{\ell}^{\delta})^{(1)}\) and \((G^{\delta})^{(1)}\) is open and satisfies (2.6)-(2.7)._
Proof.: We consider the case \(\mathcal{S}^{\delta}\in\mathcal{A}^{h}_{\delta}\) and \(1\leq\ell\leq N\), and the other cases are similar. First we prove the lower bound in (4.16). Let \(x\in\partial S^{\delta}_{\ell}\). Then by our convention (2.6)-(2.7) regarding topological boundaries,
\[|S^{\delta}_{\ell}\cap B_{r}(x)|>0\quad\text{for all }r>0\,.\]
By the infiltration lemma, the lower area density bound follows with \(\alpha_{1}=\varepsilon_{0}\) and \(r_{**}=r_{*}\).
For the upper area bound, let us choose \(r_{**}\leq r_{*}\) such that
\[\Lambda r_{**}\leq 1\,. \tag{4.18}\]
We claim that for any \(x\in\partial S^{\delta}_{\ell}\) and \(r<r_{**}\),
\[|S^{\delta}_{\ell}\cap B_{r}(x)|\leq\max\left\{\pi-\varepsilon_{0},c_{*} \right\}r^{2} \tag{4.19}\]
for a dimensional constant \(c_{*}\) to be specified shortly. Suppose that this were not the case. Then by the smoothness of \(\partial B\) and the containment of \(S^{\delta}_{\ell}\) in \(B\),
\[\operatorname{dist}(x,\partial B)\geq c(B)r \tag{4.20}\]
for some constant \(c(B)<1/2\) depending on \(B\), so that \(B_{c(B)r/2}(x)\subset B\). Also, by the infiltration lemma, \(S^{\delta}_{\ell^{\prime}}\cap B_{r/4}(x)=\emptyset\) for \(\ell^{\prime}\neq\ell\). These two facts combined imply that \(B_{c(B)r/2}(x)\subset\subset S^{\delta}_{\ell}\cup G^{\delta}\). By Lemma 4.1, \(S^{\delta}_{\ell}\) is a \((\Lambda,r_{**})\)-minimizer of perimeter in \(B_{c(B)r/2}(x)\) with \(\Lambda r_{**}\leq 1\) by (4.18). Then the density estimates [15, Theorem 21.11] for these minimizers give
\[|S^{\delta}_{\ell}\cap B_{c(B)r/2}(x)|\leq\frac{15\pi}{64}c(B)^{2}r^{2}\,. \tag{4.21}\]
By choosing \(c_{*}\) close enough to \(\pi\), we have a contradiction. The perimeter bound (4.17) follows from a construction which we omit, and the mild regularity on \(\partial S^{\delta}_{\ell}\) follows from our normalization (2.6)-(2.7), the area bounds, and Federer's theorem [11, 4.5.11].
**Remark 4.5** (Lebesgue representatives).: For the rest of the paper, we will always assume that we are considering the open set \((S^{\delta}_{\ell})^{(1)}\) or \((G^{\delta})^{(1)}\) as the Lebesgue representative of \(S^{\delta}_{\ell}\) or \(G^{\delta}\).
### Preliminary regularity when \(\delta=0\)
The following infiltration (or "elimination") lemma for a minimizer among \(\mathcal{A}^{\mathbf{m}}_{0}\) is due to [10, Theorem 3.1] and can be adapted easily to the problem on the ball; the reader may also consult [14, Section 11] for a similar statement.
**Lemma 4.6** (Infiltration Lemma for \(\delta=0\)).: _If \(\mathcal{S}^{0}\) is a minimizer for \(\mathcal{F}\) among \(\mathcal{A}^{h}_{0}\), then there exist constants \(\varepsilon_{0}>0\) and \(r_{*}>0\) with the following property:_
_if \(x\in\mathbb{R}^{2}\), \(r<r_{*}\), and \(0\leq\ell_{0}<\ell_{1}\leq N\) are such that_
\[|B_{r}(x)\setminus(S^{0}_{\ell_{0}}\cup S^{0}_{\ell_{1}})|\leq \varepsilon_{0}r^{2}\,, \tag{4.22}\]
_then_
\[|(S^{0}_{\ell_{0}}\cup S^{0}_{\ell_{1}})\cap B_{r/4}(x)|=0\,. \tag{4.23}\]
Proof.: Repeating the argument from step one of Lemma 4.2, there exists \(\varepsilon(h)>0\) such that if \(x\in\operatorname{cl}B\), \(r<1\), and
\[|S^{0}_{\ell}\cap B_{r}(x)|\leq\varepsilon r^{2}\quad\text{ for some }\ell\in\{0,\ldots,N\}\setminus\{\ell_{0},\ell_{1}\}\,, \tag{4.24}\]
for a minimizer among \(\mathcal{A}^{h}_{0}\), then
\[B_{r/2}(x)\cap\{h=\ell\}=\emptyset\,. \tag{4.25}\]
In particular, by using Lemma 4.1, we may compare the minimality of \(\mathcal{S}^{0}\) against competitors constructed by donating \(B_{s}(x)\setminus(S^{0}_{\ell_{0}}\cup S^{0}_{\ell_{1}})\) to \(S^{0}_{\ell_{0}}\) or \(S^{0}_{\ell_{1}}\). The remainder of the argument is the same as in [10].
The next two results may be proved as in Corollary 4.3 and Corollary 4.4.
**Corollary 4.7** (Regularity along \((S^{0}_{\ell})^{1/2}\cap(S^{0}_{\ell^{\prime}})^{1/2}\)).: _If \(\mathcal{S}^{0}\) is a minimizer among \(\mathcal{A}^{h}_{0}\) and \(x\in(S^{0}_{\ell})^{1/2}\cap(S^{0}_{\ell^{\prime}})^{1/2}\) for \(\ell,\ell^{\prime}\in\{1,\ldots,N\}\), then in a neighborhood of \(x\), every other chamber is empty and \(\partial S^{0}_{\ell}\cap\partial S^{0}_{\ell^{\prime}}\) is a segment._
**Lemma 4.8** (Upper Area and Perimeter Bounds).: _If \(\mathcal{S}^{0}\) minimizes \(\mathcal{F}\) among \(\mathcal{A}^{h}_{0}\), then there exist \(\alpha_{3}>0\), \(\alpha_{4}<1\), and \(r_{3}>0\) such that_
\[\mathcal{F}(\mathcal{S}^{0};B_{r}(x))\leq\alpha_{3}r\quad\forall\,r>0\,,\quad x \in\mathbb{R}^{2}\,, \tag{4.26}\]
_and_
\[|B_{r}(x)\cap S^{0}_{\ell}|\leq\alpha_{4}\pi r^{2}\quad\forall x\in\partial S ^{0}_{\ell}\,,\quad r<r_{3}\,. \tag{4.27}\]
### Monotonicity formula
This is the last technical tool necessary for obtaining blow-up cones.
**Theorem 4.9** (Monotonicity Formula).: _If \(\mathcal{S}^{\delta}\) minimizes \(\mathcal{F}\) among \(\mathcal{A}^{h}_{\delta}\) for \(\delta\geq 0\) or \(\mathcal{A}^{\mathbf{m}}_{\delta}\) for \(\delta>0\), then there exists \(\Lambda_{0}\geq 0\) such that, for every \(x\in\mathbb{R}^{2}\), there is \(r_{x}>0\) with_
\[\sum_{\ell=0}^{N}\frac{c_{\ell}}{2}\int_{\partial^{*}S^{\delta}_{\ell}\cap(B_ {r}(x)\setminus B_{s}(x))}\frac{\left((y-x)\cdot\nu_{S^{\delta}_{\ell}}\right) ^{2}}{|y-x|^{3}}\,d\mathcal{H}^{1}(y)\leq\frac{\mathcal{F}(\mathcal{S}^{\delta };B_{r}(x))}{r}-\frac{\mathcal{F}(\mathcal{S}^{\delta};B_{s}(x))}{s}+\Lambda_{ 0}(r-s) \tag{4.28}\]
_for any \(0<s<r<r_{x}\)._
Proof.: We consider the case \(\mathcal{S}^{\delta}\) is minimal among \(\mathcal{A}^{h}_{\delta}\) and \(x\in\partial B\) is a jump point of \(h\); the other cases are simpler since the trace constraint may be avoided. First, we observe that by the smoothness of the circle, there exists \(\Lambda^{\prime}>0\) such that
\[\sum_{\ell=0}^{N}\frac{c_{\ell}}{2}\int_{\partial^{*}S^{\delta}_ {\ell}\cap(B_{r}(x)\setminus B_{s}(x))\cap\partial B}\frac{\left((y-x)\cdot \nu_{S^{\delta}_{\ell}}\right)^{2}}{|y-x|^{3}}\,d\mathcal{H}^{1}(y)\leq\frac{ \mathcal{F}(\mathcal{S}^{\delta};B_{r}(x)\cap\partial B)}{r}-\frac{\mathcal{F} (\mathcal{S}^{\delta};B_{s}(x)\cap\partial B)}{s}\\ +\Lambda^{\prime}\pi(r-s)\quad\forall 0<s<r<r_{x}\]
for some \(r_{x}\); that is, we have the desired monotonicity for energy along \(\partial B\). For the remainder of the proof, we therefore focus on the energy inside \(B\). We define the increasing function
\[p(r):=\sum_{\ell=1}^{N}c_{\ell}P(S^{\delta}_{\ell};B_{r}(x)\cap B)=\mathcal{F }(\mathcal{S}^{\delta};B_{r}(x)\cap B) \tag{4.29}\]
where, since it will be clear by context, we have suppressed the dependence of \(p\) on \(x\). The proof requires two steps: first, deriving a differential inequality for \(p\) using comparison with cones (see (4.30)), and second, integrating and employing a slicing argument. The computations in the second step are the same as those in the proof of the monotonicity formula for almost minimizing integer rectifiable currents [1, Proposition 2.1], so we omit them.
We prove that given \(x\in\partial B\) which is a jump point of \(h\), there exists \(r_{x}>0\) such that
\[\frac{p(r)}{r^{2}}\leq\frac{1}{r}\sum_{\ell=1}^{N}c_{\ell}\mathcal{H}^{0}( \partial^{*}S^{\delta}_{\ell}\cap\partial B_{r}(x)\cap B)+\Lambda\pi\quad \text{ for a.e. }r<r_{x}\,, \tag{4.30}\]
where \(\Lambda\) is from Lemma 4.1. As mentioned above, the monotonicity formula can then be derived from (4.30). For concreteness, suppose that \(h\) jumps between \(1\) and \(2\) at \(x\). Then, recalling (2.2), there are chords \(c_{i}^{1}\) and \(c_{j}^{2}\) connecting \(x\) to the nearest jump points on either side and corresponding circular segments \(C_{i}^{1}\) and \(C_{j}^{2}\). Let \(0<r_{x}<r_{0}\) be small enough that \(\operatorname{cl}B_{r_{x}}(x)\) intersects no circular segments from (2.3) other than those two. By the inclusion (3.1) for the minimizer and our choice of \(r_{x}\), for every \(r<r_{x}\),
\[C_{i}^{1}\cap\operatorname{cl}B_{r}(x)\subset S_{1}^{\delta}\,,\quad C_{j}^{2} \cap\operatorname{cl}B_{r}(x)\subset S_{2}^{\delta}\,,\quad\text{and}\quad \partial B_{r}(x)\cap\partial B\subset\partial C_{i}^{1}\cup\partial C_{j}^{2 }\,. \tag{4.31}\]
For \(r<r_{x}\) to be specified momentarily, we consider the cluster \(\tilde{\mathcal{S}}\) defined by
\[\tilde{S}_{1} =(S_{1}^{\delta}\setminus\operatorname{cl}B_{r}(x))\cup\{y\in B_{r} (x)\setminus C_{i}^{1}:x+r(y-x)/|y-x|\in S_{1}^{\delta}\}\cup C_{i}^{1}\,,\] \[\tilde{S}_{2} =(S_{2}^{\delta}\setminus\operatorname{cl}B_{r}(x))\cup\{y\in B _{r}(x)\setminus C_{j}^{2}:x+r(y-x)/|y-x|\in S_{2}^{\delta}\}\cup C_{j}^{2}\,,\] \[\tilde{S}_{\ell} =(S_{\ell}^{\delta}\setminus\operatorname{cl}B_{r}(x))\cup\{y\in B _{r}(x):x+r(y-x)/|y-x|\in S_{\ell}^{\delta}\}\,,\quad 3\leq\ell\leq N\,.\]
Note that by (4.31), each \(\partial\tilde{S}_{\ell}\cap B_{r}(x)\) consists of radii of \(B_{r}(x)\) contained in \(B_{r}(x)\setminus(C_{i}^{1}\cup C_{j}^{2})\). Then by Lemma 2.1, for almost every \(r<r_{x}\), each \(\tilde{S}_{\ell}\) is a set of finite perimeter and
\[\sum_{\ell=1}^{N}c_{\ell}P(\tilde{S}_{\ell};B) =\sum_{\ell=1}^{N}c_{\ell}P(\tilde{S}_{\ell};B_{r}(x)\cap B)+\sum _{\ell=1}^{N}c_{\ell}P(\tilde{S}_{\ell};B\setminus\operatorname{cl}B_{r}(x))\] \[=r\sum_{\ell=1}^{N}c_{\ell}\mathcal{H}^{0}(\partial^{*}S_{\ell}^{ \delta}\cap\partial B_{r}(x)\cap B)+\sum_{\ell=1}^{N}c_{\ell}P(S_{\ell}^{ \delta};B\setminus\operatorname{cl}B_{r}(x))\,. \tag{4.32}\]
Also, by (4.31) and our definition of the \(\tilde{\mathcal{S}}\), \(\tilde{\mathcal{S}}\) satisfies the trace condition (1.4). If we set \(\tilde{G}=B\setminus(\cup_{\ell}\tilde{S}_{\ell})\), then we can plug (4.32) into the comparison inequality (4.1) from Lemma 4.1 and cancel like terms, yielding
\[p(r)=\sum_{\ell=1}^{N}c_{\ell}P(S_{\ell}^{\delta};B\cap B_{r}(x))\leq r\sum_{ \ell=1}^{N}c_{\ell}\mathcal{H}^{0}(\partial^{*}S_{\ell}^{\delta}\cap\partial B _{r}(x)\cap B)+\Lambda\pi r^{2}\quad\text{for a.e. }r<r_{x}\,.\]
This is precisely (4.30) multiplied by \(r^{2}\).
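As a brief indication of how (4.30) is used (the computations we have omitted are carried out in [1, Proposition 2.1]), note that for almost every \(r\) a slicing argument also gives \(p^{\prime}(r)\geq\sum_{\ell=1}^{N}c_{\ell}\mathcal{H}^{0}(\partial^{*}S^{\delta}_{\ell}\cap\partial B_{r}(x)\cap B)\), so that (4.30) formally yields
\[\frac{d}{dr}\Big(\frac{p(r)}{r}+\Lambda\pi r\Big)=\frac{p^{\prime}(r)}{r}-\frac{p(r)}{r^{2}}+\Lambda\pi\geq 0\quad\text{for a.e. }r<r_{x}\,,\]
that is, \(p(r)/r+\Lambda\pi r\) is essentially nondecreasing. The integral appearing on the left hand side of (4.28) is obtained by additionally keeping track of the coarea factor \(\big(1-((y-x)\cdot\nu_{S^{\delta}_{\ell}}/|y-x|)^{2}\big)^{1/2}\) in the slicing step, exactly as in the cited argument.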
### Existence of blow-up cones
The monotonicity formula allows us to identify blow-up minimal cones at interfacial points of a minimizer. It will be convenient to identify interfacial points for minimizers among \(\mathcal{A}_{\delta}^{\mathbf{m}}\) with interfacial points in \(B\) for minimizers among \(\mathcal{A}_{\delta}^{h}\), since, at the level of blow-ups, the behavior is the same.
**Definition 4.10** (Interior and boundary interface points).: If \(\mathcal{S}^{\delta}\) is minimal among \(\mathcal{A}_{\delta}^{h}\) and \(x\in B\cap\partial S_{\ell}^{\delta}\) or \(\mathcal{S}^{\delta}\) is minimal among \(\mathcal{A}_{\delta}^{\mathbf{m}}\) and \(x\in\partial S_{\ell}^{\delta}\) for some \(\ell\), we say \(x\) is an **interior interface point**. If \(\mathcal{S}^{\delta}\) is minimal among \(\mathcal{A}_{\delta}^{h}\) and \(x\in\partial B\), we call \(x\) a **boundary interface point**.
The blow-ups at a boundary interface point will be minimal in a halfspace among competitors satisfying a constraint coming from the trace condition (1.4) and the inclusion (3.1) from Theorem 3.1.
**Definition 4.11** (Admissible blow-ups at jump points of \(h\)).: Let \(x\in\partial B\) be a jump point of \(h\), and let \(C_{i}^{\ell_{0}}\) and \(C_{j}^{\ell_{1}}\) be the circular segments from (2.3) meeting at \(x\). Let
\[C_{\infty}^{\ell_{0}}=\bigcup_{\lambda>0}\lambda(C_{i}^{\ell_{0}}-x)\,,\quad C _{\infty}^{\ell_{1}}=\bigcup_{\lambda>0}\lambda(C_{j}^{\ell_{1}}-x) \tag{4.33}\]
be the blow-ups of the convex sets \(C_{i}^{\ell_{0}}\) and \(C_{j}^{\ell_{1}}\) at their common boundary point \(x\). We define
\[\mathcal{A}_{x}:=\{\mathcal{S}:\forall\ell\neq 0,\,S_{\ell}\subset\{y\cdot x<0 \}=S_{0}^{c}\,,\;S_{\ell}\cap S_{\ell^{\prime}}=\emptyset\text{ if }\ell\neq\ell^{\prime}\,,\;C_{\infty}^{\ell_{k}}\subset S_{\ell_{k}}\text{ for }k=0,1\}\,. \tag{4.34}\]
**Theorem 4.12** (Existence of Blow-up Cones).: _If \(\mathcal{S}^{\delta}\) minimizes \(\mathcal{F}\) among \(\mathcal{A}_{\delta}^{h}\) for some \(\delta\geq 0\) or among \(\mathcal{A}_{\delta}^{\mathbf{m}}\) for \(\delta>0\), then for any sequence \(r_{j}\to 0\), there exists a subsequence \(r_{j_{k}}\to 0\) and cluster \(\mathcal{S}=(S_{0},\ldots,S_{N},G)\) partitioning \(\mathbb{R}^{2}\), satisfying the following properties:_
1. \((S_{\ell}^{\delta}-x)/r_{j_{k}}\overset{L^{1}_{\rm loc}}{\to}S_{\ell}\) _for each_ \(0\leq\ell\leq N\)_;_
2. \(\mathcal{H}^{1}\llcorner\,(\partial S_{\ell}^{\delta}-x)/r_{j_{k}} \overset{*}{\rightharpoonup}\mathcal{H}^{1}\llcorner\,\partial S_{\ell}\) _for each_ \(0\leq\ell\leq N\)_;_
3. \(S_{\ell}\) _is an open cone for each_ \(0\leq\ell\leq N\)_;_
* _if_ \(x\) _is an interior interface point and_ \(\tilde{\mathcal{S}}\) _is such that for_ \(0\leq\ell\leq N\)_,_ \(\tilde{S}_{\ell}\Delta S_{\ell}\subset\subset B_{R}\) _and, for the problem on the ball,_ \(S_{0}=\emptyset\)_, then_ \[\mathcal{F}(\mathcal{S};B_{R})\leq\mathcal{F}(\tilde{\mathcal{S}};B_{R});\] (4.35)
* _if_ \(x\in\partial S^{\delta}_{\ell_{0}}\cap\partial B\) _is not a jump point of_ \(h\)_, then_ \(S_{\ell_{0}}=\{y:y\cdot x<0\}\) _and_ \(S_{0}=\{y:y\cdot x>0\}\)_;_
* _if_ \(x\in\partial S^{\delta}_{\ell_{0}}\cap\partial B\) _is a jump point of_ \(h\)_, then_ \(\mathcal{S}\in\mathcal{A}_{x}\) _and if_ \(\tilde{\mathcal{S}}\in\mathcal{A}_{x}\) _is such that for_ \(0\leq\ell\leq N\)_,_ \(\tilde{S}_{\ell}\Delta S_{\ell}\subset\subset B_{R}\)_, then_ \[\mathcal{F}(\mathcal{S};B_{R})\leq\mathcal{F}(\tilde{\mathcal{S}};B_{R})\,.\] (4.36)
Proof.: When \(x\) is a boundary interface point and is not a jump point of \(h\), then \(S^{\delta}_{\ell_{0}}\cap B\cap B_{r_{x}}(x)=B\cap B_{r_{x}}(x)\) for some \(r_{x}>0\) by (3.1) from Theorem 3.1. In this case, items \((i)\)-\((iii)\) and \((v)\) are trivial. Also, the case of interior interface points is essentially a simpler version of the argument when \(x\in\partial S^{\delta}_{\ell_{0}}\cap\partial B\) is a jump point of \(h\). Therefore, for the rest of the proof, we focus on items \((i)\)-\((iii)\) and \((vi)\) when \(x\in\partial B\cap\partial S^{\delta}_{\ell_{0}}\) is a jump point of \(h\).
The upper perimeter bounds from Corollary 4.4 or Lemma 4.8 and compactness in \(BV\) give the existence of \(r_{j_{k}}\to 0\) and \(\mathcal{S}\) such that the convergence in \((i)\) holds. In addition, this compactness gives
\[\mu^{\ell}_{k}:=(\nu_{S^{\delta}_{\ell}}\mathcal{H}^{1}\llcorner\partial S^{ \delta}_{\ell}-x)/r_{j_{k}}\stackrel{*}{\rightharpoonup}\nu_{S_{\ell}}\mathcal{ H}^{1}\llcorner\partial^{*}S_{\ell}=:\mu_{\ell}\quad\forall 0\leq\ell\leq N\,. \tag{4.37}\]
It is easy to check from the inclusion (3.1) that \(\mathcal{S}\in\mathcal{A}_{x}\). We now discuss in order (4.36), \((ii)\), and \((iii)\). The proofs of (4.36) and \((ii)\) are standard compactness arguments that proceed mutatis mutandis as the proof of the compactness theorem for \((\Lambda,r_{0})\)-perimeter minimizers [13, Theorem 21.14]. Finally, \((iii)\) follows from the monotonicity formula (4.28), which implies that \(\mathcal{F}(\mathcal{S};B_{r})/r\) is constant in \(r\), and the characterization of cones [13, Proposition 28.8].
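To sketch the last point (this is only a heuristic outline; the details are as in the cited reference), note that by the weak-* convergence in \((ii)\) and the monotonicity (4.28) for \(\mathcal{S}^{\delta}\), whose error term at scale \(r_{j_{k}}\) is of order \(\Lambda_{0}r_{j_{k}}(r-s)\to 0\), we have
\[\frac{\mathcal{F}(\mathcal{S};B_{r})}{r}=\lim_{k\to\infty}\frac{\mathcal{F}(\mathcal{S}^{\delta};B_{r\,r_{j_{k}}}(x))}{r\,r_{j_{k}}}=\theta\quad\text{for every }r>0\,,\]
where \(\theta\) denotes the limit of the monotone quantity \(\mathcal{F}(\mathcal{S}^{\delta};B_{\rho}(x))/\rho+\Lambda_{0}\rho\) as \(\rho\to 0\). Feeding this back into the limiting form of (4.28) forces
\[\sum_{\ell=0}^{N}\frac{c_{\ell}}{2}\int_{\partial^{*}S_{\ell}\cap(B_{r}\setminus B_{s})}\frac{(y\cdot\nu_{S_{\ell}}(y))^{2}}{|y|^{3}}\,d\mathcal{H}^{1}(y)=0\quad\text{for all }0<s<r\,,\]
so \(y\cdot\nu_{S_{\ell}}(y)=0\) for \(\mathcal{H}^{1}\)-a.e. \(y\in\partial^{*}S_{\ell}\), which is exactly the cone condition invoked from [13, Proposition 28.8].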
### Classification of blow-up cones
We classify the possible blow-up cones for a minimizer using the terminology set forth in Definition 4.10.
**Theorem 4.13** (Classification of Blow-up Cones for \(\delta>0\)).: _If \(\mathcal{S}^{\delta}\) minimizes \(\mathcal{F}\) among \(\mathcal{A}^{h}_{\delta}\) or \(\mathcal{A}^{\mathbf{m}}_{\delta}\) for some \(\delta>0\), and \(\mathcal{S}\) is a blow-up cluster for \(x\in\partial S^{\delta}_{\ell_{0}}\) and some \(r_{j}\to 0\), then exactly one of the following is true:_
* \(x\in\partial S^{\delta}_{\ell_{0}}\) _is an interior interface point and_ \(S_{\ell_{0}}=\{y\cdot\nu_{S^{\delta}_{\ell_{0}}}(x)<0\}\)_,_ \(G=\mathbb{R}^{2}\setminus S_{\ell_{0}}\)_;_
* \(x\in\partial S^{\delta}_{\ell_{0}}\) _is an interior interface point,_ \(S_{\ell_{0}}=\{y\cdot\nu<0\}\) _for some_ \(\nu\in\mathbb{S}^{1}\)_, and_ \(S_{\ell_{1}}=\mathbb{R}^{2}\setminus S_{\ell_{0}}\) _for some_ \(\ell_{1}\neq\ell_{0}\)_;_
* \(x\in\partial S^{\delta}_{\ell_{0}}\cap\partial B\) _is a boundary interface point and jump point of_ \(h\)_,_ \(S_{\ell_{0}}=\{y\cdot\nu<0,\,y\cdot x<0\}\)_, and_ \(S_{\ell_{1}}=\{y\cdot\nu>0,\,y\cdot x<0\}\) _for some_ \(\nu\in\mathbb{S}^{1}\) _and_ \(\ell_{1}\neq\ell_{0}\)_;_
* \(x\in\partial S^{\delta}_{\ell_{0}}\cap\partial B\) _is a boundary interface point and jump point of_ \(h\)_,_ \(S_{\ell_{0}}=\{y\cdot\nu_{0}<0,\,y\cdot x<0\}\)_,_ \(S_{\ell_{1}}=\{y\cdot\nu_{1}>0,\,y\cdot x<0\}\)_, and_ \(G=(S_{0}\cup S_{\ell_{0}}\cup S_{\ell_{1}})^{c}\) _for some_ \(\nu_{0},\,\nu_{1}\in\mathbb{S}^{1}\) _and_ \(\ell_{1}\neq\ell_{0}\)_;_
* \(x\in\partial S^{\delta}_{\ell_{0}}\cap\partial B\) _is a boundary interface point, not a jump point of_ \(h\)_,_ \(S_{\ell_{0}}=\{y\cdot x<0\}=S^{c}_{0}\)_._
Proof of Theorem 4.13.: _Step one_: In this step we consider an interior interface point \(x\) and show that either \((i)\) or \((ii)\) holds. First, we note that since \(x\in\partial S^{\delta}_{\ell_{0}}\) and the density estimates (4.16) pass to the blow-up limit, \(S_{\ell_{0}}\neq\emptyset\) and \(S_{\ell_{0}}\neq\mathbb{R}^{2}\), so \(\mathcal{S}\) is non-trivial. We claim that no non-empty connected component of any chamber \(S_{\ell}\) of \(\mathcal{S}\), \(0\leq\ell\leq N\), can be anything other than a halfspace; from this claim it follows that \((i)\) or \((ii)\) holds. Indeed, suppose that there were such a component \(C\), say of \(S_{1}\), defined by an angle \(\theta\neq\pi\) with \(\partial C\cap\partial B=\{c_{1},c_{2}\}\). Let \(K\) be the convex hull of \(c_{1}\), \(c_{2}\), and \(0\). If \(\theta<\pi\), then the triangle inequality implies that the cluster \(\mathcal{S}^{\prime}=(S_{0},S_{1}\setminus K,S_{2},\ldots,S_{N},G\cup K)\) satisfies \(\mathcal{F}(\mathcal{S}^{\prime};B_{2})<\mathcal{F}(\mathcal{S};B_{2})\), contradicting the minimality property (4.35). On the other hand,
if \(\theta>\pi\), then the cluster \(\mathcal{S}^{\prime}=(S_{0}\setminus K,S_{1}\cup K,S_{2}\setminus K,\ldots,S_{N} \setminus K,G\setminus K)\) also contradicts (4.35) due to the triangle inequality.
_Step two_: Moving on to the case of a boundary interface point, we begin by observing that \((v)\) is trivial by (3.1) when \(x\) is not a jump point of \(h\). If \(x\) is a jump point of \(h\), say between \(h=1\) and \(h=2\), then \(S_{0}=\{y\cdot x>0\}\), and \(\{S_{1},\ldots,S_{N},G\}\) partition \(S_{0}^{c}\). Now the same argument as in the previous step using the triangle inequality shows that \(S_{\ell}=\emptyset\) for \(3\leq\ell\leq N\) and that \(S_{1}\) and \(S_{2}\) each have exactly one connected component bordering \(S_{0}\). It follows that either \((iii)\) or \((iv)\) holds.
**Corollary 4.14** (Regularity for \(\partial G^{\delta}\) away from \((G^{\delta})^{(0)}\)).: _If \(\mathcal{S}^{\delta}\) minimizes \(\mathcal{F}\) among \(\mathcal{A}^{h}_{\delta}\) or \(\mathcal{A}^{\mathbf{m}}_{\delta}\) for some \(\delta>0\) and \(x\) is an interior interface point such that_
\[\limsup_{r\to 0}\frac{|G^{\delta}\cap B_{r}(x)|}{\pi r^{2}}>0\,, \tag{4.38}\]
_then there exists \(r_{x}>0\) such that \(\partial G^{\delta}\cap B_{r_{x}}(x)\) is an arc of constant curvature dividing \(B_{r_{x}}(x)\) into \(G^{\delta}\cap B_{r_{x}}(x)\) and \(S_{\ell}^{\delta}\cap B_{r_{x}}(x)\)._
Proof.: If \(r_{j}\to 0\) is a sequence achieving the limit superior in (4.38), then any subsequential blow-up at \(x\) must be characterized by case \((i)\) of Theorem 4.13. The desired conclusion now follows from Corollary 4.3.
Lastly, we classify blow-up cones for \(\delta=0\) when either \(N=3\) or the weights are equal.
**Theorem 4.15** (Classification of Blow-up Cones for \(\delta=0\) on the Ball).: _If \(N=3\) or \(c_{\ell}=1\) for \(0\leq\ell\leq N\), \(\mathcal{S}^{0}\) minimizes \(\mathcal{F}\) among \(\mathcal{A}^{h}_{0}\), and \(\mathcal{S}\) is a blow-up cluster at an interface point \(x\), then exactly one of the following is true:_
1. \(x\in\partial^{*}S^{0}_{\ell_{0}}\cap\partial^{*}S^{0}_{\ell_{1}}\) _is an interior interface point and_ \(S_{\ell_{0}}=\{y:y\cdot\nu_{S^{0}_{\ell_{0}}}\,(x)<0\}\)_,_ \(S_{\ell_{1}}=\mathbb{R}^{2}\setminus S_{\ell_{0}}\)_;_
2. \(x\) _is an interior interface point, and the non-empty chambers of_ \(\mathcal{S}\) _are three connected cones_ \(S_{\ell_{i}}\)_,_ \(i=0,1,2\)_, with vertex at the origin satisfying_ \[\frac{\sin\theta_{\ell_{0}}}{c_{\ell_{1}}+c_{\ell_{2}}}=\frac{\sin\theta_{ \ell_{1}}}{c_{\ell_{0}}+c_{\ell_{2}}}=\frac{\sin\theta_{\ell_{2}}}{c_{\ell_{0} }+c_{\ell_{1}}}\] (4.39) _where_ \(\theta_{\ell_{i}}=\mathcal{H}^{1}(S_{\ell_{i}}\cap\partial B)\)_;_
3. \(x\in\partial S^{0}_{\ell_{0}}\cap\partial B\) _is not a jump point of_ \(h\)_, and_ \(S_{\ell_{0}}=\{y:y\cdot x<0\}=S^{c}_{0}\)_;_
4. \(x\in\partial S^{0}_{\ell_{0}}\cap\partial B\) _is a jump point of_ \(h\)_,_ \(S_{\ell_{0}}=\{y:y\cdot\nu<0,\,y\cdot x<0\}\)_, and_ \(S_{\ell_{1}}=\{y:y\cdot\nu>0,\,y\cdot x<0\}\) _for some_ \(\nu\in\mathbb{S}^{1}\) _and_ \(\ell_{1}\neq\ell_{0}\)_, and_ \(S_{0}=\{y:y\cdot x>0\}\)_;_
5. \(x\in\partial S^{0}_{\ell_{0}}\cap\partial B\) _is a jump point of_ \(h\)_, and the non-empty chambers of_ \(\mathcal{S}\in\mathcal{A}_{x}\) _are_ \(S_{0}=\{y:y\cdot x>0\}\) _and three connected cones_ \(S_{\ell_{i}}\)_,_ \(i=0,1,2\)_, partitioning_ \(S^{c}_{0}\)_._
Proof.: We begin with the observation that no blow-up at \(x\) can consist of a single chamber. To see this, note that since \(x\) is an interface point, it belongs to \(\partial S^{0}_{\ell}\) for some \(\ell\), and by our normalization (2.6)-(2.7) for reduced and topological boundaries, \(x\in\operatorname{spt}\mathcal{H}^{1}\llcorner\partial^{*}S^{0}_{\ell}\). If a blow-up limit at \(x\) consisted of a single chamber \(S_{\ell^{\prime}}\), then, due to the upper area bound (4.27), the \(L^{1}\) convergence and the infiltration lemma would imply that \(x\in\operatorname{int}S^{0}_{\ell^{\prime}}\), contradicting \(x\in\operatorname{spt}\mathcal{H}^{1}\llcorner\partial^{*}S^{0}_{\ell}\). Therefore, there are at least two chambers in the blow-up cluster at \(x\).
Next, we claim that when \(N=3\) or \(c_{\ell}=1\) for all \(\ell\), there cannot be four or more non-empty connected components of chambers of \(\mathcal{S}\) comprising \(\mathbb{R}^{2}\) if the blow-up is at an interior interface point or comprising \(\{y:y\cdot x<0\}\) at a boundary interface point. If \(N=3\) and this were the case, then there must be some \(S_{\ell}\), say \(S_{1}\), which has two connected components \(C_{1}\) and \(C_{2}\) separated by a circular sector \(C_{3}\) with \(\partial C_{3}\cap\partial B=\{c,c^{\prime}\}\) and \(\operatorname{dist}(c,c^{\prime})<2\). We set \(K\) to be the convex hull of \(0\), \(c\), and \(c^{\prime}\) and define the cluster \(\mathcal{S}^{\prime}=(S_{0},S_{1}\cup K,S_{2}\setminus K,S_{3}\setminus K,\emptyset)\). Note \(\mathcal{S}^{\prime}\in\mathcal{A}_{x}\) when \(x\) is a boundary interface point. Then the triangle inequality implies that \(\mathcal{F}(\mathcal{S}^{\prime};B_{2})<\mathcal{F}(\mathcal{S};B_{2})\), which contradicts the minimality condition (4.35) or (4.36). For the case when \(c_{\ell}=1\) for all \(\ell\), if
there were more than three connected components, then there must be some component \(C\subset S_{\ell}\) with \(\mathcal{H}^{1}(C\cap\partial B_{1})<2\pi/3\), and when \(x\) is a boundary interface point, \(\partial C\cap\{y\cdot x=0\}=\{0\}\). Then the construction in [10, Proposition 30.9], in which triangular portions of \(C\) near \(0\) are allotted to the neighboring chambers, allows us to construct a competitor (belonging to \(\mathcal{A}_{x}\) if required) that contradicts the minimality (4.35) or (4.36).
We may now conclude the proof. If \(x\) is an interior interface point, then there are either two or three distinct connected chambers in the blow-up at \(x\). As in the previous theorem, the triangle inequality implies that if there are two, they are both halfspaces. If there are three, the angle conditions (4.39) follow from a first variation argument. If \(x\) is a boundary interface point, then \((iii)\) holds by (3.1) if \(x\) is not a jump point of \(h\). If \(x\) is a jump point of \(h\), then \(\{y\cdot x<0\}\) is partitioned into either two or three connected cones. The former is case \((iv)\), and the latter is case \((v)\).
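For the reader's benefit, here is the formal first variation computation behind (4.39); it uses only the fact, built into the definition of \(\mathcal{F}\), that an interface between \(S_{\ell_{i}}\) and \(S_{\ell_{j}}\) is counted with weight \(c_{\ell_{i}}+c_{\ell_{j}}\). If \(t_{ij}\in\mathbb{S}^{1}\) denotes the direction of the interface between \(S_{\ell_{i}}\) and \(S_{\ell_{j}}\) emanating from the vertex, then stationarity of the cone under small translations of the vertex gives the force balance
\[(c_{\ell_{0}}+c_{\ell_{1}})\,t_{01}+(c_{\ell_{0}}+c_{\ell_{2}})\,t_{02}+(c_{\ell_{1}}+c_{\ell_{2}})\,t_{12}=0\,.\]
Since the angle between \(t_{01}\) and \(t_{02}\) is \(\theta_{\ell_{0}}\) (and cyclically), the law of sines for three vectors summing to zero yields
\[\frac{c_{\ell_{1}}+c_{\ell_{2}}}{\sin\theta_{\ell_{0}}}=\frac{c_{\ell_{0}}+c_{\ell_{2}}}{\sin\theta_{\ell_{1}}}=\frac{c_{\ell_{0}}+c_{\ell_{1}}}{\sin\theta_{\ell_{2}}}\,,\]
which is equivalent to (4.39).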
## 5. Proof of Theorem 1.4
To streamline the statement below, the terminology "arc of constant curvature" includes segments in addition to circle arcs.
**Theorem 5.1** (Interior Resolution for \(\delta>0\)).: _If \(\mathcal{S}^{\delta}\) minimizes \(\mathcal{F}\) among \(\mathcal{A}^{h}_{\delta}\) or \(\mathcal{A}^{\mathbf{m}}_{\delta}\) for some \(\delta>0\) and \(x\in\partial S^{\delta}_{\ell_{0}}\) is an interior interface point, then there exists \(r_{x}>0\) such that exactly one of the following is true:_
1. \(S^{\delta}_{\ell^{\prime}}\cap B_{r_{x}}(x)=\emptyset\) _for_ \(\ell^{\prime}\neq\ell_{0}\) _and_ \(\partial S^{\delta}_{\ell_{0}}\cap B_{r_{x}}(x)\) _is an arc of constant curvature separating_ \(B_{r_{x}}(x)\) _into_ \(S^{\delta}_{\ell_{0}}\cap B_{r_{x}}(x)\) _and_ \(G^{\delta}\cap B_{r_{x}}(x)\)_;_
2. \(\partial S^{\delta}_{\ell_{0}}\cap B_{r_{x}}(x)\) _is an arc of constant curvature separating_ \(B_{r_{x}}(x)\) _into_ \(S^{\delta}_{\ell_{0}}\cap B_{r_{x}}(x)\) _and_ \(S^{\delta}_{\ell^{\prime}}\cap B_{r_{x}}(x)\) _for some_ \(\ell^{\prime}\neq\ell_{0}\)_;_
3. _there exist circle arcs_ \(a_{1}\) _and_ \(a_{2}\) _meeting tangentially at_ \(x\) _such that_ \[\partial S^{\delta}_{\ell_{0}}\cap\partial G^{\delta}\cap B_{r_{x}}(x)=a_{1} \,,\quad\partial S^{\delta}_{\ell^{\prime}}\cap\partial G^{\delta}\cap B_{r_{ x}}(x)=a_{2}\,,\quad\partial S^{\delta}_{\ell_{0}}\cap\partial S^{\delta}_{\ell^{ \prime}}\cap B_{r_{x}}(x)=\{x\}\,;\]
4. _there exist circle arcs_ \(a_{1}\) _and_ \(a_{2}\) _meeting in a cusp at_ \(x\) _and an arc_ \(a_{3}\) _of constant curvature reaching the cusp tangentially at_ \(x\)_, and_ \[\partial S^{\delta}_{\ell_{0}}\cap\partial G^{\delta}\cap B_{r_{x}}(x)=a_{1} \,,\quad\partial S^{\delta}_{\ell^{\prime}}\cap\partial G^{\delta}\cap B_{r_{x}}(x)=a_ {2}\,,\quad\partial S^{\delta}_{\ell_{0}}\cap\partial S^{\delta}_{\ell^{ \prime}}\cap B_{r_{x}}(x)=a_{3}\,.\]
Proof.: Let us assume for simplicity that \(x\) is the origin; the proof at any other point is similar.
_Step zero_: If \(0\notin\partial S^{\delta}_{\ell^{\prime}}\) for all \(\ell^{\prime}\neq\ell_{0}\), then by the density estimates (4.16), \(B_{r_{0}}\cap S^{\delta}_{\ell^{\prime}}=\emptyset\) for some \(r_{0}\) and all \(\ell^{\prime}\neq\ell_{0}\). From the classification of blowups in Theorem 4.13, \((i)\) must hold at \(0\).
_Step one_: For the rest of the proof, we assume instead that for some \(\ell^{\prime}\neq\ell_{0}\), \(0\in\partial S^{\delta}_{\ell^{\prime}}\). By Theorem 4.13 and the fact that the density estimates (4.16) pass to all blow-up limits, we are in case \((ii)\) of that theorem: any possible blow-up limit at \(0\) is a pair of halfspaces coming from \(S^{\delta}_{\ell_{0}}\) and \(S^{\delta}_{\ell^{\prime}}\). In this step we identify a rectangle \(Q^{\prime}\) small enough such that \(S^{\delta}_{\ell_{0}}\cap Q^{\prime}\) and \(S^{\delta}_{\ell^{\prime}}\cap Q^{\prime}\) are a hypograph and epigraph, respectively, over a common axis.
Let us fix \(r_{j}\to 0\) such that applying Theorem 4.13 and rotating if necessary, we obtain
\[S^{\delta}_{\ell_{0}}/r_{j}\stackrel{{ L^{1}_{\rm loc }}}{{\rightarrow}}\mathbb{H}^{-}:=\{y:y\cdot e_{2}<0\}\,,\quad S^{\delta}_{ \ell^{\prime}}/r_{j}\stackrel{{ L^{1}_{\rm loc}}}{{\rightarrow}} \mathbb{H}^{+}:=\{y:y\cdot e_{2}>0\}\,, \tag{5.1}\] \[\mathcal{H}^{1}\llcorner\,\partial S^{\delta}_{\ell_{0}}/r_{j}\,,\ \mathcal{H}^{1}\llcorner\,\partial S^{\delta}_{\ell^{\prime}}/r_{j}\stackrel{*}{ \rightharpoonup}\mathcal{H}^{1}\llcorner\,\partial\mathbb{H}^{+}\,. \tag{5.2}\]
Set
\[Q=[-1,1]\times[-1,1]\,.\]
We note that for all \(r<r_{**}/r_{j}\),
\[\alpha_{1}\pi r^{2}\leq|S^{\delta}_{\ell}/r_{j}\cap B_{r}(y)|\leq(1-\alpha_{1}) \pi r^{2}\quad\text{if $y\in\partial S^{\delta}_{\ell}/r_{j}$ for $\ell=\ell_{0}$ or $\ell^{\prime}$} \tag{5.3}\]
due to (4.16). Also due to (4.16) and (5.1),
\[S_{\ell}^{\delta}\cap B_{r_{j}}=\emptyset\quad\forall\ell\notin\{\ell^{\prime}, \ell_{0}\}\,,\quad\text{for large $j$}; \tag{5.4}\]
we may assume by restricting to the tail that (5.4) holds for all \(j\). Next, a standard argument utilizing (5.1) and (5.3) implies that there exists \(J\in\mathbb{N}\) such that for all \(j\geq J\),
\[(\partial S_{\ell_{0}}^{\delta}/r_{j}\cup\partial S_{\ell^{\prime}}^{\delta}/r _{j})\cap Q\subset[-1,1]\times[-1/4,1/4]\,. \tag{5.5}\]
Now for almost every \(t\in[-1,1]\), by Lemma 2.1, the vertical slices (viewed as subsets of \(\mathbb{R}\))
\[(S_{\ell_{0}}^{\delta}/r_{j})_{t}:=S_{\ell_{0}}^{\delta}/r_{j}\cap Q\cap\{y:y \cdot e_{1}=t\}\,,\quad(S_{\ell^{\prime}}^{\delta}/r_{j})_{t}:=S_{\ell^{\prime}}^{\delta}/r_{j}\cap Q\cap\{y:y\cdot e_{1}=t\}\]
are one-dimensional sets of finite perimeter and, by (5.5) and [13, Proposition 14.5],
\[2c_{\ell_{0}}+2c_{\ell^{\prime}} \leq\int_{-1}^{1}c_{\ell_{0}}P((S_{\ell_{0}}^{\delta}/r_{j})_{t}; (-1,1))+c_{\ell^{\prime}}P((S_{\ell^{\prime}}^{\delta}/r_{j})_{t};(-1,1))\,dt\] \[\leq c_{\ell_{0}}P(S_{\ell_{0}}^{\delta}/r_{j};\operatorname{int} Q)+c_{\ell^{\prime}}P(S_{\ell^{\prime}}^{\delta}/r_{j};\operatorname{int}Q)\,. \tag{5.6}\]
Since \(\mathcal{H}^{1}(\partial\mathbb{H}^{+}\cap\partial Q)=0\), (5.2) implies that
\[\lim_{j\to\infty}c_{\ell_{0}}P(S_{\ell_{0}}^{\delta}/r_{j};\operatorname{int} Q)+c_{\ell^{\prime}}P(S_{\ell^{\prime}}^{\delta}/r_{j};\operatorname{int}Q)=2c_{ \ell_{0}}+2c_{\ell^{\prime}}\,. \tag{5.7}\]
Together, (5.5)-(5.7) and Lemma 2.1 allow us to identify \(j\) as large as we like (to be specified further shortly) and \(-1<t_{1}<t_{2}<1\) such that for \(i=1,2\),
\[P((S_{\ell_{0}}^{\delta}/r_{j})_{t_{i}};(-1,1)) =1=P((S_{\ell^{\prime}}^{\delta}/r_{j})_{t_{i}};(-1,1))\,, \tag{5.8}\] \[0 =\int_{(-1,1)}|\mathbf{1}_{(S_{\ell_{0}}^{\delta}/r_{j})_{t_{i}}^ {+}}-\mathbf{1}_{(S_{\ell_{0}}^{\delta}/r_{j})_{t_{i}}^{-}}|+|\mathbf{1}_{(S_{ \ell_{0}}^{\delta}/r_{j})_{t_{i}}^{+}}-\mathbf{1}_{(S_{\ell_{0}}^{\delta}/r_{ j})_{t_{i}}}|\,d\mathcal{H}^{1}\] \[=\int_{(-1,1)}|\mathbf{1}_{(S_{\ell^{\prime}}^{\delta}/r_{j})_{t_ {i}}^{+}}-\mathbf{1}_{(S_{\ell^{\prime}}^{\delta}/r_{j})_{t_{i}}^{-}}|+| \mathbf{1}_{(S_{\ell^{\prime}}^{\delta}/r_{j})_{t_{i}}^{+}}-\mathbf{1}_{(S_{ \ell^{\prime}}^{\delta}/r_{j})_{t_{i}}}|\,d\mathcal{H}^{1}\,, \tag{5.9}\]
where here and in the rest of the argument, the minus and plus superscripts denote left and right traces along \(\{y\cdot e_{1}=t_{i}\}\) (again viewed as subsets of \(\mathbb{R}\)). From (5.5) and (5.8)-(5.9), we deduce that there exist \(-1/4\leq a_{1}\leq b_{1}\leq 1/4\) and \(-1/4\leq a_{2}\leq b_{2}\leq 1/4\) such that
\[\mathcal{H}^{1}((S_{\ell_{0}}^{\delta}/r_{j})_{t_{i}}^{\pm}\Delta[-1,a_{i}])= 0=\mathcal{H}^{1}((S_{\ell^{\prime}}^{\delta}/r_{j})_{t_{i}}^{\pm}\Delta[b_{ i},1])\quad\text{ for $i=1,2$}\,. \tag{5.10}\]
Let us call \(Q^{\prime}=[t_{1},t_{2}]\times[-1,1]\). Since it will be useful later, we record the equality
\[\mathcal{F}(\mathcal{S}^{\delta})=\mathcal{F}(\mathcal{S}^{\delta};\mathbb{R}^{2} \setminus r_{j}Q^{\prime})+c_{\ell_{0}}P(S_{\ell_{0}}^{\delta};\operatorname{ int}r_{j}Q^{\prime})+c_{\ell^{\prime}}P(S_{\ell^{\prime}}^{\delta}; \operatorname{int}r_{j}Q^{\prime})\,, \tag{5.11}\]
which follows from (5.4), (5.9), and Lemma 2.1.
Using the explicit description given by (5.5) and (5.10), we now identify a variational problem on \(Q^{\prime}\) for which our minimal partition must be optimal. We consider the minimization problem
\[\inf_{\mathcal{A}_{Q^{\prime}}}c_{\ell_{0}}P(A;\operatorname{int}Q^{\prime})+c_ {\ell^{\prime}}P(B;\operatorname{int}Q^{\prime})\,,\]
where
\[\mathcal{A}_{Q^{\prime}}:=\{(A,B):A,B\subset Q^{\prime},\ A|_{\partial Q^{ \prime}}=S_{\ell_{0}}^{\delta}/r_{j},\ B|_{\partial Q^{\prime}}=S_{\ell^{ \prime}}^{\delta}/r_{j}\text{ in the trace sense,}\] \[\qquad\qquad|A\cap B|=0,\ |A\cap Q^{\prime}|=|(S_{\ell_{0}}^{\delta}/r_{j})\cap Q^{\prime}|,\ |B\cap Q^{\prime}|=|(S_{\ell^{\prime}}^{\delta}/r_{j})\cap Q^{\prime}|\}\,.\]
By the area constraint on elements in the class \(\mathcal{A}_{Q^{\prime}}\) and \(\mathcal{S}^{\delta}\in\mathcal{A}_{\delta}^{h}\) or \(\mathcal{S}^{\delta}\in\mathcal{A}_{\delta}^{\mathbf{m}}\), any \(\mathcal{S}\) given by
\[S_{\ell_{0}}=(S^{\delta}_{\ell_{0}}\setminus r_{j}Q^{\prime})\cup r_{j}(A\cap Q^{\prime} )\,,\quad S_{\ell^{\prime}}=(S^{\delta}_{\ell^{\prime}}\setminus r_{j}Q^{\prime})\cup r _{j}(B\cap Q^{\prime})\,,\quad S_{\ell}=S_{\ell}^{\delta}\quad\ell\notin\{\ell_{0}, \ell^{\prime}\}\,,\]
satisfies \(|\mathbb{R}^{2}\setminus\cup_{\ell}S_{\ell}|\leq\delta\) in the former case and \(|\mathbb{R}^{2}\setminus\cup_{\ell}S_{\ell}|\leq\delta\) and \((|S_{1}|,\ldots,|S_{N}|)=\mathbf{m}\) in the latter. Also, once \(r_{j}\) is small enough, if \(\mathcal{S}^{\delta}\in\mathcal{A}^{h}_{\delta}\), then \(\mathcal{S}\) satisfies the trace condition (1.4) as well. Therefore, \(\mathcal{S}\in\mathcal{A}^{h}_{\delta}\) or \(\mathcal{S}\in\mathcal{A}^{\mathbf{m}}_{\delta}\), so we can compare
\[\mathcal{F}(\mathcal{S}^{\delta}) \overset{(5.11)}{=}\mathcal{F}( \mathcal{S}^{\delta};\mathbb{R}^{2}\setminus r_{j}Q^{\prime})+c_{\ell_{0}}P(S^ {\delta}_{\ell_{0}};\operatorname{int}r_{j}Q^{\prime})+c_{\ell^{\prime}}P(S^ {\delta}_{\ell^{\prime}};\operatorname{int}r_{j}Q^{\prime})\] \[\leq\mathcal{F}(\mathcal{S})=\mathcal{F}(\mathcal{S}^{\delta}; \mathbb{R}^{2}\setminus r_{j}Q^{\prime})+r_{j}c_{\ell_{0}}P(A;\operatorname{int}Q^{ \prime})+r_{j}c_{\ell^{\prime}}P(B;\operatorname{int}Q^{\prime})\,,\]
where in the last equality we have used the trace condition on \(\mathcal{A}_{Q^{\prime}}\) and the formula (2.8) for computing \(\mathcal{F}(\cdot;\partial Q^{\prime})\). Discarding identical terms and rescaling, this inequality yields
\[c_{\ell_{0}}P(S^{\delta}_{\ell_{0}}/r_{j};\operatorname{int}Q^{\prime})+c_{ \ell^{\prime}}P(S^{\delta}_{\ell^{\prime}}/r_{j};\operatorname{int}Q^{\prime} )\leq c_{\ell_{0}}P(A;\operatorname{int}Q^{\prime})+c_{\ell^{\prime}}P(B; \operatorname{int}Q^{\prime})\,, \tag{5.12}\]
where \((A,B)\in\mathcal{A}_{Q^{\prime}}\) is arbitrary. Simply put, our minimal partition must be minimal on \(r_{j}Q^{\prime}\) among competitors with the same traces and equal areas of all chambers.
We now test (5.12) with a well-chosen competitor based on symmetrization. Let
\[A=\{(x_{1},x_{2}):-1\leq x_{2}\leq\mathcal{H}^{1}((S^{\delta}_{\ell_{0}}/r_{j })_{x_{1}})-1\}\,,\quad B=\{(x_{1},x_{2}):1\geq x_{2}\geq 1-\mathcal{H}^{1}((S^{ \delta}_{\ell^{\prime}}/r_{j})_{x_{1}})\}\,.\]
In the notation set forth in Lemma 2.3,
\[A=(S^{\delta}_{\ell_{0}}/r_{j})^{h}\,,\quad B=-(-S^{\delta}_{\ell^{\prime}}/r _{j})^{h}\,.\]
By (5.10) and (5.5), the assumptions of Lemma 2.3 are satisfied by \(S^{\delta}_{\ell_{0}}/r_{j}\) and \(-S^{\delta}_{\ell^{\prime}}/r_{j}\). Then the conclusions of that lemma imply that \((A,B)\in\mathcal{A}_{Q^{\prime}}\), so (5.12) holds. However, (2.16) also gives
\[c_{\ell_{0}}P(S^{\delta}_{\ell_{0}}/r_{j};\operatorname{int}Q^{\prime})+c_{ \ell^{\prime}}P(-S^{\delta}_{\ell^{\prime}}/r_{j};\operatorname{int}Q^{\prime })\geq c_{\ell_{0}}P(A;\operatorname{int}Q^{\prime})+c_{\ell^{\prime}}P(-B; \operatorname{int}Q^{\prime})\,, \tag{5.13}\]
so that in fact there is equality. But according to Lemma 2.3, every vertical slice of \((S^{\delta}_{\ell_{0}}/r_{j})\cap Q^{\prime}\) and \((-S^{\delta}_{\ell^{\prime}}/r_{j})\cap Q^{\prime}\) must therefore be an interval with one endpoint at \(-1\). This is precisely what we set out to prove in this step.
_Step two_: Here we prove that for the open set \(G^{\delta}\) (see Remark 4.5), the set
\[\mathcal{I}:=\{t\in[r_{j}t_{1}/2,r_{j}t_{2}/2]:(G^{\delta}\cap r_{j}Q^{\prime} )_{t}=\emptyset\}\]
is a closed interval. \(\mathcal{I}\) is closed since the projection of the open set \(G^{\delta}\cap r_{j}Q^{\prime}\) onto the \(x_{1}\) axis is open, so we only need to prove it is an interval. First, we claim that for any rectangle \(R^{\prime}=(T_{1},T_{2})\times[-r_{j},r_{j}]\) with \((T_{1},T_{2})\subset\mathcal{I}^{c}\),
\[\partial S^{\delta}_{\ell_{0}}\cap R^{\prime}\text{ and }\partial S^{\delta}_{\ell^{ \prime}}\cap R^{\prime}\text{ are graphs of functions }F_{0}\text{ and }F^{\prime} \tag{5.14}\]
with \(F_{0}<F^{\prime}\), over the \(x_{1}\)-axis, each of constant curvature and with no vertical tangent lines in \(R^{\prime}\). To see this, first note that for any \((a,b)\subset\subset(T_{1},T_{2})\), \(\partial S^{\delta}_{\ell_{0}}\cap((a,b)\times[-r_{j},r_{j}])\) and \(\partial S^{\delta}_{\ell^{\prime}}\cap((a,b)\times[-r_{j},r_{j}])\) must be at positive distance from each other by the definition of \(\mathcal{I}^{c}\). Then a first variation argument implies that each has constant mean curvature in the distributional sense, and a graph over \((a,b)\) with constant distributional mean curvature must be a single arc of constant curvature with no vertical tangent lines in the interior. Letting \((a,b)\) exhaust \((T_{1},T_{2})\), we have proven the claim.
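To spell out the last assertion (a short, standard computation included for completeness): if \(F\) denotes either graph function on an interval \((a,b)\) and its curvature equals a constant \(\kappa\) in the distributional sense, then
\[\Big(\frac{F^{\prime}}{\sqrt{1+(F^{\prime})^{2}}}\Big)^{\prime}=\kappa\quad\text{in }\mathcal{D}^{\prime}(a,b)\,,\qquad\text{so}\qquad\frac{F^{\prime}(x_{1})}{\sqrt{1+(F^{\prime}(x_{1}))^{2}}}=\kappa x_{1}+C\]
for some constant \(C\). The left hand side takes values in \((-1,1)\) and the right hand side is affine, so \(|\kappa x_{1}+C|<1\) on \((a,b)\); hence \(F^{\prime}\) is locally bounded, \(F\) is smooth, its graph is a segment (\(\kappa=0\)) or an arc of a circle of radius \(1/|\kappa|\), and there is no vertical tangent line in the interior.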
Suppose for contradiction that there exist \(T_{i}\in\mathcal{I}\), \(i=1,2\), such that \((T_{1},T_{2})\subset\mathcal{I}^{c}\). Set \((T_{1},T_{2})\times[-r_{j},r_{j}]=R\). Now \(F_{0}\) and \(F^{\prime}\) extend continuously to \(T_{1}\) and \(T_{2}\) with \(F_{0}(T_{i})\leq F^{\prime}(T_{i})\) for each \(i\). In fact \(F_{0}(T_{i})=F^{\prime}(T_{i})\). If instead we had for example \(F_{0}(T_{1})<F^{\prime}(T_{1})\), then \(G^{\delta}\) would contain a rectangle \((t,T_{1})\times(c,d)\) for some \(t<T_{1}\) and \(c<d\), which would imply that \(G^{\delta}\) has positive density at \((T_{1},F_{0}(T_{1}))\) and \((T_{1},F^{\prime}(T_{1}))\). By Corollary 4.14, \(\partial G^{\delta}\cap\partial S^{\delta}_{\ell_{0}}\) is a single arc of constant curvature in a neighborhood \(N\) of \((T_{1},F_{0}(T_{1}))\), which, by \(T_{1}\in\mathcal{I}\), has a vertical tangent line at \((T_{1},F_{0}(T_{1}))\). Therefore, \(\partial S^{\delta}_{\ell_{0}}\cap N\cap R\) is either a vertical segment or a circle arc with a vertical tangent line at \((T_{1},F_{0}(T_{1}))\), and both of these scenarios contradict (5.14). So we have \(F_{0}(T_{i})=F^{\prime}(T_{i})\), and thus \((T_{i},F_{0}(T_{i}))\in\partial S^{\delta}_{\ell_{0}}\cap\partial S^{\delta}_{\ell^{\prime}}\cap\partial G^{\delta}\). As a consequence, by Corollary 4.14, \(G^{\delta}\) must have density \(0\) at \((T_{i},F_{0}(T_{i}))\), which means that the graphs of \(F_{0}\) and \(F^{\prime}\) meet tangentially at \(T_{i}\). But the only way
for two circle arcs to meet tangentially at two common points is if they are the same arc, that is, \(F_{0}=F^{\prime}\), which contradicts \(F_{0}<F^{\prime}\). We have thus shown that \(\mathcal{I}\) is a closed interval.
_Step three_: Finally we may finish the proof. We note that by our assumption \(0\in\partial S^{\delta}_{\ell^{\prime}}\cap\partial S^{\delta}_{\ell_{0}}\) (see the beginning of step one), \(0\in\mathcal{I}\). Now if \(0\in\operatorname{int}\mathcal{I}\), then \(|G^{\delta}\cap B_{r}(0)|=0\) for some small \(r\), and we have \((ii)\). If \(\{0\}=\mathcal{I}\), then by the same argument as at the beginning of the previous step, we know that \(\partial S^{\delta}_{\ell_{0}}/r_{j}\cap(Q^{\prime}\setminus\{0\})\) and \(\partial S^{\delta}_{\ell^{\prime}}/r_{j}\cap(Q^{\prime}\setminus\{0\})\) are each two circle arcs of equal curvature meeting at the origin. Furthermore, since the blow-up of \(G^{\delta}\) is empty at \(0\), we see that all four of these arcs must meet tangentially at the origin, so that \((iii)\) holds. Lastly, if \(\operatorname{int}\mathcal{I}\neq\emptyset\) and \(0\in\partial\mathcal{I}\), the combined arguments of the previous two cases imply that \((iv)\) holds.
**Theorem 5.2** (Boundary Resolution for \(\delta>0\)).: _If \(\mathcal{S}^{\delta}\) minimizes \(\mathcal{F}\) among \(\mathcal{A}^{h}_{\delta}\) for some \(\delta>0\) and \(x\in\partial S^{\delta}_{\ell_{0}}\cap\partial B\), then there exists \(r_{x}>0\) such that exactly one of the following is true:_
1. \(x\) _is not a jump point of_ \(h\) _and_ \(B_{r_{x}}(x)\cap B=B_{r_{x}}(x)\cap S^{\delta}_{\ell_{0}}\)_;_
2. \(x\) _is a jump point of_ \(h\) _and_ \(\partial S^{\delta}_{\ell_{0}}\cap B_{r_{x}}(x)\) _is a line segment separating_ \(B_{r_{x}}(x)\cap B\) _into_ \(S^{\delta}_{\ell_{0}}\cap B_{r_{x}}(x)\cap B\) _and_ \(S^{\delta}_{\ell^{\prime}}\cap B_{r_{x}}(x)\cap B\) _for some_ \(\ell^{\prime}\neq\ell_{0}\)_;_
3. \(x\) _is a jump point of_ \(h\)_, and there exist circle arcs_ \(a_{1}\) _and_ \(a_{2}\) _meeting at_ \(x\) _such that_ \[\partial S^{\delta}_{\ell_{0}}\cap\partial G^{\delta}\cap B_{r_{x}}(x)=a_{1} \,,\quad\partial S^{\delta}_{\ell^{\prime}}\cap\partial G^{\delta}\cap B_{r_{ x}}(x)=a_{2}\,,\quad\partial S^{\delta}_{\ell_{0}}\cap\partial S^{\delta}_{\ell^{ \prime}}\cap B_{r_{x}}(x)=\{x\}\,.\]
Proof.: Let us assume for simplicity that \(x=\vec{e}_{1}\). The proof at any other point in \(\partial B\) is the same.
_Step zero_: If \(\vec{e}_{1}\in\partial B\) is not a jump point of \(h\), then by the inclusion (3.1) from Theorem 3.1, \((i)\) holds.
_Step one_: For the rest of the proof, we assume that \(\vec{e}_{1}\) is a jump point of \(h\). By Theorem 4.13, there exists \(\ell^{\prime}\neq\ell_{0}\) such that any blow-up at \(\vec{e}_{1}\) consists of the blow-up chambers \(S_{\ell_{0}}\), \(S_{\ell^{\prime}}\), each of which is the intersection of a halfspace with \(\{y:y\cdot\vec{e}_{1}<0\}\), \(S_{0}=\{y:y\cdot\vec{e}_{1}>0\}\), and \(G=\mathbb{R}^{2}\setminus(S_{0}\cup S_{\ell_{0}}\cup S_{\ell^{\prime}})\) is a possibly empty connected cone contained in \(\{y:y\cdot\vec{e}_{1}<0\}\). In this step we argue that on a small rectangle \(Q^{\prime}\) with \(0\in\partial Q^{\prime}\), \((S^{\delta}_{\ell_{0}}-\vec{e}_{1})/r_{j}\cap Q^{\prime}\) and \((S^{\delta}_{\ell^{\prime}}-\vec{e}_{1})/r_{j}\cap Q^{\prime}\) are the hypograph and epigraph, respectively, of two functions over \(\{y\cdot\vec{e}_{1}=0\}\).
Let us choose \(r_{j}\to 0\) such that by Theorem 4.13, we have a blow-up limit belonging to \(\mathcal{A}_{\vec{e}_{1}}\). By the density estimates (4.16), \(B_{r_{j}}(x)\subset S^{\delta}_{\ell_{0}}\cup S^{\delta}_{\ell^{\prime}}\cup G ^{\delta}\cup S_{0}\) for all large enough \(j\), so we can ignore the other chambers. Also, for convenience, by the containment (3.1) of the circular segments in \(S^{\delta}_{\ell_{0}}\) and \(S^{\delta}_{\ell^{\prime}}\) from Theorem 3.1, we extend \(S^{\delta}_{\ell_{0}}\) and \(S^{\delta}_{\ell^{\prime}}\) on \(\{y:y\cdot\vec{e}_{1}<1\}\) so that for all large \(j\),
\[\{y:y\cdot\vec{e}_{1}=1\}\cap B_{r_{j}}(\vec{e}_{1})\subset\partial S^{\delta}_ {\ell_{0}}\cup\partial S^{\delta}_{\ell^{\prime}}\]
rather than
\[\partial B\cap B_{r_{j}}(\vec{e}_{1})\subset\partial S^{\delta}_{\ell_{0}} \cup\partial S^{\delta}_{\ell^{\prime}}\,;\]
this allows us to work on a rectangle along the sequence of blow-ups rather than \((B-\vec{e}_{1})/r_{j}\). Now due to the inclusion (3.1) from Theorem 3.1, there exists a rectangle
\[Q=[T,0]\times[-1,1]\]
such that for all large \(j\), up to interchanging the labels \(\ell_{0}\) and \(\ell^{\prime}\), in the trace sense,
\[(\{T\}\times[-1,-1/2])\cup([T,0]\times\{-1\})\cup(\{0\}\times[-1,0])\subset(S^ {\delta}_{\ell_{0}}-\vec{e}_{1})/r_{j}\,,\]
\[(\{T\}\times[1/2,1])\cup([T,0]\times\{1\})\cup(\{0\}\times[0,1])\subset(S^{ \delta}_{\ell^{\prime}}-\vec{e}_{1})/r_{j}\,.\]
Then a slicing argument similar to the one leading to (5.10) implies that for some large \(j\), there exist \(-1/2\leq a_{1}\leq a_{2}\leq 1/2\) and \(t\in[T,0)\) such that, in the trace sense,
\[(\{t\}\times[-1,a_{1}])\cup([t,0]\times\{-1\})\cup(\{0\}\times[-1,0])=(S^{ \delta}_{\ell_{0}}-\vec{e}_{1})/r_{j}\]
\[(\{t\}\times[a_{2},1])\cup([t,0]\times\{1\})\cup(\{0\}\times[0,1])=(S^{\delta}_ {\ell^{\prime}}-\vec{e}_{1})/r_{j}\,.\]
Given this explicit description on the boundary of \(Q^{\prime}:=[t,0]\times[-1,1]\), the same argument as in the proof of Theorem 5.1 gives the claim of this step.
_Step two_: We may finally finish the proof of Theorem 5.2. By the same argument as in the previous theorem, the set
\[\mathcal{I}:=\{s\in[r_{j}t,0]:((G^{\delta}-\vec{e}_{1})\cap r_{j}Q^{\prime})_{s} =\emptyset\}\]
is a closed interval. Furthermore, since \(\vec{e}_{1}\) is a jump point of \(h\), \(\mathcal{I}\) contains \(0\). If \(\operatorname{int}\mathcal{I}\neq\emptyset\), we immediately see that \((ii)\) holds. On the other hand, if \(\mathcal{I}=\{0\}\), then the vertical slices of \((G^{\delta}-\vec{e}_{1})\cap r_{j}Q^{\prime}\) are non-empty for all \(s\in(r_{j}t,0)\). Again the same argument as in the previous theorem shows that \((iii)\) holds.
_Proof of Theorem 1.4._ At any \(x\in\operatorname{cl}B\), Theorems 5.1 and 5.2 yield the existence of \(r_{x}>0\) such that either \(x\) is an interior point of some \(S^{\delta}_{\ell}\) or of \(G^{\delta}\), or, on \(B_{r_{x}}(x)\), the minimizer is described by one of the options listed in those theorems. By the enumeration of possible local resolutions in those theorems, we see that \(\partial S^{\delta}_{\ell}\cap B\) is \(C^{1,1}\) as desired, since it is analytic except where two arcs of constant curvature intersect tangentially. Now if \(x\) and \(y\) are both in \(\partial G^{\delta}\cap\partial S^{\delta}_{\ell}\) for some \(\ell\geq 1\), then one of Theorem 5.1.\((i)\), \((iii)\), or \((iv)\) or Theorem 5.2.\((iii)\) holds on \(B_{r_{x}}(x)\) and \(B_{r_{y}}(y)\); in particular, each \(\partial G^{\delta}\cap\partial S^{\delta}_{\ell}\cap B_{r_{x}}(x)\) and \(\partial G^{\delta}\cap\partial S^{\delta}_{\ell}\cap B_{r_{y}}(y)\) is an arc of constant curvature. A first variation argument then gives (1.6) if \(G^{\delta}\neq\emptyset\). Also, by the compactness of \(\operatorname{cl}B\) and the interior resolution theorem, there are only finitely many arcs in \(\partial G^{\delta}\cap\partial S^{\delta}_{\ell}\). We note that \(H_{S^{\delta}_{\ell}}\) cannot be negative along \(\partial^{*}S^{\delta}_{\ell}\cap\partial^{*}G^{\delta}\), since local variations which decrease the area of \(G^{\delta}\) are admissible. A similar argument based on the interior local resolution result implies that if \(\mathcal{H}^{1}(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m})>0\) for \(\ell,m\geq 1\), then \(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m}\) is composed of finitely many straight line segments. We have thus decomposed each such \(\partial S^{\delta}_{\ell}\cap\partial S^{\delta}_{m}\) and \(\partial G^{\delta}\cap\partial S^{\delta}_{\ell}\) into finitely many line segments and arcs of constant curvature, respectively.
Moving on to showing that each connected component, say \(C\), of \(S^{\delta}_{\ell}\) for \(1\leq\ell\leq N\) is convex, consider any \(x\in\partial C\). \(C\cap B_{r_{x}}(x)\) must be convex by Theorems 5.1 and 5.2 and the fact that \(H_{S^{\delta}_{\ell}}\geq 0\) along \(\partial^{*}S^{\delta}_{\ell}\cap\partial^{*}G^{\delta}\). Since \(\partial C\) consists of a finite number of segments and circular arcs and \(C\) is connected, the convexity of \(C\) follows from this local convexity. To finish proving the theorem, it remains to determine the ways in which these line segments and arcs may terminate. We note that each component of \(\partial G^{\delta}\) must terminate. If one did not, then by Corollary 4.14, it would form a circle contained in \(\partial S^{\delta}_{\ell}\cap\partial G^{\delta}\). This configuration cannot be minimal, however, since that component of \(G^{\delta}\) may be added to \(S^{\delta}_{\ell}\) to decrease the energy. Suppose that one of these components terminates at \(x\). Then, applying the local resolution at \(x\), either Theorem 5.1.\((iv)\) holds if \(x\in B\), or item \((ii)\) or \((iii)\) from Theorem 5.2 holds if \(x\in\partial B\) is a jump point of \(h\). This yields the desired conclusion.
_Proof of Theorem 1.1._ The proof is similar to the proof of Theorem 1.4. Since every interface point is an interior interface point, determining the ways in which arcs may terminate proceeds as in the case \(x\in B\) in that theorem.
## 6. Proof of Theorem 1.6
_Proof of Theorem 1.6. Step one_: First, we show that the set \(\Sigma\) of interior triple junctions, or more precisely the set
\[\Sigma:=\{x\in B:\exists\text{ a blow-up at }x\text{ given by }(ii)\text{ from Theorem 4.15}\}\,,\]
is finite. Interior accumulation points of \(\Sigma\) are excluded exactly as in the classical planar cluster setting (cf. [13, Theorem 30.7]), so we consider only the case where \(x\in\partial B\). If \(\{x_{k}\}\subset\Sigma\) and \(x_{k}\to x\in\partial B\), then by (3.1), \(x\in\partial B\) is a jump point of \(h\). We claim that up to a subsequence which we do not notate,
\[\frac{S_{\ell}^{0}-x}{|x-x_{k}|}\to S_{\ell}\quad\text{ locally in $L^{1}$ for $\ell=1,2,3$} \tag{6.1}\]
for a blow-up cluster \(\mathcal{S}\) of the form from item \((v)\) in Theorem 4.15. To see this, we first note that by our assumption on \(x_{k}\),
\[x\in\partial S_{1}^{0}\cap\partial S_{2}^{0}\cap\partial S_{3}^{0}\,. \tag{6.2}\]
This inclusion rules out item \((iv)\) from Theorem 4.15, and so the blow-up cluster is three connected cones partitioning \(\{y:y\cdot x<0\}\). Up to a further subsequence, we may assume that
\[\frac{x_{k}-x}{|x_{k}-x|}\to\nu\in\{y:y\cdot x<0\}\,,\]
where we have used (3.1) to preclude the possibility that \(x_{k}\) approaches \(x\) tangentially. Now for some \(r>0\) and \(\ell_{0}\in\{1,2,3\}\), say \(\ell_{0}=1\), the description of the blow-up cluster implies that \(B_{r}(\nu)\subset S_{2}\cup S_{3}\). Combined with the \(L^{1}\) convergence (6.1) and the infiltration lemma, we conclude that \(B_{|x_{k}-x|r/4}(x_{k})\subset S_{2}^{0}\cup S_{3}^{0}\) for large enough \(k\), which is in direct conflict with \(x_{k}\in\Sigma\). We have thus proven that \(\Sigma\) has no accumulation points in \(\operatorname{cl}B\); in particular, it is finite.
_Step two_: We finally conclude the proof of Theorem 1.6. For any \(x\in(B\setminus\Sigma)\cap\partial S_{\ell_{0}}^{0}\), Theorem 4.15 and the infiltration lemma imply that \(x\in\partial^{*}S_{\ell_{0}}^{0}\cap\partial^{*}S_{\ell_{1}}^{0}\) for some \(\ell_{1}\neq\ell_{0}\). In turn, by Corollary 4.7, there exists \(r_{x}>0\) such that \(B_{r_{x}}(x)\cap\partial S_{\ell_{0}}^{0}\) is a diameter of \(B_{r_{x}}(x)\). Recalling from Corollary 4.4 that \(\mathcal{H}^{1}(\partial S_{\ell_{0}}^{0}\setminus\partial^{*}S_{\ell_{0}}^{0})=0\), we may thus decompose \(\partial S_{\ell_{0}}^{0}\) as a countable number of line segments, each of which must terminate at a point in the finite set \(\Sigma\) or a jump point of \(h\). Therefore, \(\partial S_{\ell_{0}}^{0}\) is a finite number of line segments. The remainder of Theorem 1.6 now follows directly from this fact and the classification of blow-ups in Theorem 4.15, items \((ii)\), \((iv)\), and \((v)\). Indeed, since the interfaces are a finite number of line segments, at \(x\in\Sigma\) or \(x\in\partial B\) which is a jump point of \(h\), the blow-up is unique, and the minimal partition \(\mathcal{S}^{0}\) must coincide with the blow-up on a neighborhood of \(x\). The convexity of connected components of \(S_{\ell}^{0}\) for \(1\leq\ell\leq N\) follows as in the \(\delta>0\) case.
## 7. Resolution for small \(\delta\) on the ball
Proof of Theorem 1.8.: _Step zero_: We begin by reducing the statement of the theorem to one phrased in terms of a sequence of minimizers \(\{\mathcal{S}^{\delta_{j}}\}\). More precisely, to prove Theorem 1.8, we claim it is enough to consider a sequence \(\{\mathcal{S}^{\delta_{j}}\}\) of minimizers for \(\delta_{j}\to 0\) and show that up to a subsequence, there exists a minimizer \(\mathcal{S}^{0}\) among \(\mathcal{A}_{0}^{h}\) with singular set \(\Sigma\) such that
\[\max\big{\{}\sup_{x\in\mathcal{S}^{\delta_{j}}_{\ell}}\operatorname {dist}(x,S^{0}_{\ell})\,,\,\sup_{x\in S^{0}_{\ell}}\operatorname{dist}(x,S^{ \delta_{j}}_{\ell})\big{\}}\to 0\quad\text{for $1\leq\ell\leq N$} \tag{7.1}\] \[\max\big{\{}\sup_{x\in G^{\delta_{j}}}\operatorname{dist}(x, \Sigma)\,,\,\sup_{x\in\Sigma}\operatorname{dist}(x,G^{\delta_{j}})\big{\}}\to 0\,, \tag{7.2}\]
and, for large enough \(j\) and each \(x\in\Sigma\), \(B_{r}(x)\cap\partial G^{\delta_{j}}\) consists of three circle arcs of curvature \(\kappa_{j}\), with total area \(|G^{\delta_{j}}|=\delta_{j}\). To see why this is sufficient, if Theorem 1.8 were false, then there would be some sequence \(\delta_{j}\to 0\) with minimizers \(\mathcal{S}^{\delta_{j}}\) among \(\mathcal{A}_{\delta_{j}}^{h}\) such that for any subsequence and choice of minimizer \(\mathcal{S}^{0}\) among \(\mathcal{A}_{0}^{h}\), at least one of (1.8)-(1.9) or the asymptotic resolution near singularities of \(\mathcal{S}^{0}\) did not hold. But this would contradict the subsequential claim above.
We point out that if we knew that \(\partial G^{\delta}\) is described near singularities by three circle arcs for small \(\delta\), the saturation of the area inequality \(|G^{\delta}|\leq\delta\) follows from the facts that \(\partial G^{\delta}\) has negative mean curvature away from its cusps and increasing the area of \(G^{\delta}\) is admissible if \(|G^{\delta}|<\delta\). Therefore, the rest of the proof is divided into steps proving (7.1)-(7.2) and the asymptotic resolution near
singular points. First we prove that due to \(c_{\ell}=1\) for \(1\leq\ell\leq N\), there are no "islands" inside \(B\). Second, we extract a minimizer \(\mathcal{S}^{0}\) for \(\mathcal{A}^{h}_{0}\) from a minimizing (sub-)sequence \(\mathcal{S}^{\delta_{j}}\) with \(\delta_{j}\to 0\) and prove (7.1). There are then two cases. In the first, we suppose that the set of triple junctions \(\Sigma\) is empty and show that \(G^{\delta_{j}}=\emptyset\) for large \(j\), so that (7.2) is trivial. In the other case, we assume that \(\Sigma\neq\emptyset\) and prove (7.2) and the final resolution near singularities of the limiting cluster.
_Step one_: Let \(\mathcal{S}^{\delta}\) be a minimizer for \(\delta>0\). We claim that for any connected component \(C\) of any chamber \(S^{\delta}_{\ell}\) with \(1\leq\ell\leq N\), \(\partial C\cap\{h=\ell\}\neq\emptyset\). Suppose that this were not the case for some \(C\subset S_{\ell}\). Then in fact, \(\operatorname{cl}C\subset B\), since by Theorem 5.2 and (3.1), the only components that can intersect \(\partial B\) are those bordering \(\partial B\) along an arc where \(h=\ell\). By Theorem 5.1, \(\partial C\) is \(C^{1,1}\) since its boundary is contained in \(B\). If \(\mathcal{H}^{1}(\partial C\cap\partial S^{\delta}_{\ell^{\prime}})>0\) for some \(\ell^{\prime}\), then since all \(c_{\ell}\) are equal, removing \(C\) from \(S^{\delta}_{\ell}\) and adding it to \(S^{\delta}_{\ell^{\prime}}\) contradicts the minimality of \(\mathcal{S}^{\delta}\). So it must be the case that \(\partial C\subset\partial G^{\delta}\) except for possibly finitely many points. We translate \(C\) if necessary until it intersects \(\partial G^{\delta}\cap\partial C^{\prime}\) for a connected component \(C^{\prime}\neq C\) of some \(S^{\delta}_{\ell^{\prime}}\) at \(y\in B\), which does not increase the energy. Creating a new minimal cluster \(\tilde{\mathcal{S}}\) by adding \(C\) to \(S^{\delta}_{\ell^{\prime}}\) and removing it from \(S^{\delta}_{\ell}\) gives a contradiction. This is because by Corollary 4.4, \(y\in(\tilde{S}^{\delta}_{\ell^{\prime}})^{(1)}\) implies that \(y\in\operatorname{int}\,(\tilde{S}^{\delta}_{\ell^{\prime}})^{(1)}\), and so \(\mathcal{F}(\tilde{\mathcal{S}};B_{r}(y))=0\) for some \(r>0\), against the minimality of \(\mathcal{S}^{\delta}\).
We note that as a consequence, the total number of connected components in \(\mathcal{S}^{\delta}\) is bounded in terms of the number of jumps of \(h\), and in addition the area of any connected component is bounded from below by the area of the smallest circular segment from (2.3).
_Step two_: Here we identify our subsequence, limiting minimizer among \(\mathcal{A}^{h}_{0}\), and prove (7.1). Let us decompose each \(S^{\delta_{j}}_{\ell}\) into its open connected components
\[S^{\delta_{j}}_{\ell}=\cup_{i=1}^{N_{\ell}^{j}}C^{\ell,j}_{i}\,, \tag{7.3}\]
where by the previous step, \(N_{\ell}^{j}\leq N_{\ell}(h)\) for all \(j\) and \(|C^{\ell,j}_{i}|\geq C(h)\) for all \(j\) and \(i\). Up to a subsequence which we do not notate, we may assume therefore that for each \(1\leq\ell\leq N\),
\[N_{\ell}^{j}=M_{\ell}\leq N_{\ell}(h)\quad\text{and}\quad|C^{\ell,j}_{i}|\geq C (h)\quad\forall j\quad\text{and}\quad i\in\{1,\ldots,M_{\ell}\}\,. \tag{7.4}\]
Since
\[\min_{\mathcal{A}^{h}_{\delta_{j}}}\mathcal{F}\leq\min_{\mathcal{A}^{h}_{0}} \mathcal{F}\quad\forall j\,, \tag{7.5}\]
up to a further subsequence, the compactness for sets of finite perimeter and (7.4) yield a partition \(\{C^{\ell}_{i}\}_{\ell,i}\) of \(B\), with no trivial elements thanks to (7.4), such that
\[\mathbf{1}_{C^{\ell,j}_{i}}\to\mathbf{1}_{C^{\ell}_{i}}\quad\text{a.e.}\quad\text{ and} \tag{7.6}\] \[\liminf_{j\to\infty}\mathcal{F}(\mathcal{S}^{\delta_{j}};B)=\liminf_{j\to\infty}\sum_{\ell=1}^{N}\sum_{i=1}^{M_{\ell}}P(C^{\ell,j}_{i};B)\geq\sum_{\ell=1}^{N}\sum_{i=1}^{M_{\ell}}P(C^{\ell}_{i};B)\quad\forall 1\leq\ell\leq N. \tag{7.7}\]
Actually, by Lemma 2.5, we may assume that each \(\operatorname{cl}\,C^{\ell}_{i}\) is compact and convex, \(C^{\ell}_{i}\) is open, and, for each \(1\leq\ell\leq N\),
\[\max\big{\{}\sup_{x\in C^{\ell}_{i}}\operatorname{dist}(x,C^{\ell,j}_{i})\,, \sup_{x\in C^{\ell,j}_{i}}\operatorname{dist}(x,C^{\ell}_{i})\big{\}}\to 0 \quad\forall 1\leq i\leq M_{\ell}\,. \tag{7.8}\]
We claim that the cluster
\[\mathcal{S}^{0}=(\mathbb{R}^{2}\setminus B,S^{0}_{1},\ldots,S^{0}_{N},\emptyset) =\Big{(}\mathbb{R}^{2}\setminus B,\bigcup_{i=1}^{M_{1}}C^{1}_{i},\ldots,\bigcup_ {i=1}^{M_{N}}C^{N}_{i},\emptyset\Big{)}\]
of \(B\) is minimal for \(\mathcal{F}\) on \(\mathcal{A}_{0}^{h}\). It belongs to \(\mathcal{A}_{0}^{h}\) by the inclusion (3.1) for each \(j\) and by \(\delta_{j}\to 0\). For minimality, we use (7.5) and (7.7) to write
\[\min_{\mathcal{S}\in\mathcal{A}_{0}^{h}}\mathcal{F}(\mathcal{S};B)\geq\sum_{\ell=1}^{N}\liminf_{j\to\infty}\sum_{i=1}^{M_{\ell}}P(C_{i}^{\ell,j};B)\geq\sum_{\ell=1}^{N}\sum_{i=1}^{M_{\ell}}P(C_{i}^{\ell};B)\geq\sum_{\ell=1}^{N}P(S_{\ell}^{0};B)\,. \tag{7.9}\]
This proves \(\mathcal{S}^{0}\) is minimal. The Hausdorff convergence (7.1) follows from (7.8).
We note that by the minimality of \(\mathcal{S}^{0}\), (7.9) must be an equality, so that in turn
\[\sum_{i=1}^{M_{\ell}}P(C_{i}^{\ell};B)=P(S_{\ell}^{0};B)\quad\forall 1\leq \ell\leq N\,. \tag{7.10}\]
Now each \(C_{i}^{\ell}\) is open and convex; in particular, they are all indecomposable sets of finite perimeter. This indecomposability and (7.10) allow us to conclude from [1, Theorem 1] that \(\{C_{i}^{\ell}\}_{i}\) is the unique decomposition of \(S_{\ell}^{0}\) into pairwise disjoint indecomposable sets such that (7.10) holds. Also, by Theorem 1.6, each \((S_{\ell}^{0})^{(1)}\) is an open set whose boundary is smooth away from finitely many points. By [1, Theorem 2], which states that for an open set with \(\mathcal{H}^{1}\)-equivalent topological and measure theoretic boundaries (e.g. \((S_{\ell}^{0})^{(1)}\)) the decompositions into open connected components and maximal indecomposable components coincide, we conclude that the connected components of \((S_{\ell}^{0})^{(1)}\) are \(\{C_{i}^{\ell}\}_{i=1}^{M_{\ell}}\), and \(S_{\ell}^{0}=(S_{\ell}^{0})^{(1)}\). We have in fact shown in (7.8) that the individual connected components of \(S_{\ell}^{\delta_{j}}\) converge in the Hausdorff sense to the connected components of \(S_{\ell}^{0}\) for each \(\ell\).
_Step three_: In this step, we suppose that \(\Sigma=\emptyset\) and show that \(G^{\delta_{j}}=\emptyset\) for large \(j\), which finishes the proof in this case. If \(\Sigma=\emptyset\), then every component of \(\partial S_{\ell}^{0}\cap\partial S_{\ell^{\prime}}^{0}\) is a segment which, by Theorem 1.6, can only terminate at a pair of jump points of \(h\) which are not boundary triple junctions. Therefore, every connected component of a chamber \(S_{\ell}^{0}\) is the convex hull of some finite number of arcs on \(\partial B\) contained in \(\{h=\ell\}\). Now for large \(j\), by the Hausdorff convergence in step two and the containment (3.1), given any connected component \(C\) of a chamber of \(\mathcal{S}^{\delta_{j}}\) there exists connected component \(C^{\prime}\) of a chamber of \(\mathcal{S}^{0}\) such that \(\partial C\cap\partial B=\partial C^{\prime}\cap\partial B\). Since every connected component of every chamber is convex for \(\delta\geq 0\), we see that in fact it must be \(C=C^{\prime}\). So the minimal partition \(\mathcal{S}^{\delta_{j}}\) coincides with \(\mathcal{S}^{0}\) for all large \(j\) when there are no triple junctions of \(\mathcal{S}^{0}\).
_Step four_: For the rest of the proof, we assume that \(\Sigma\neq\emptyset\). In this step, we show that
\[G^{\delta_{j}}\neq\emptyset\quad\text{for all }j\quad\text{and}\quad\kappa_{j} \to\infty\,. \tag{7.11}\]
Assume for contradiction that \(G^{\delta_{j}}=\emptyset\) for some \(j\). Then \(\mathcal{S}^{\delta_{j}}\) is minimal for \(\mathcal{F}\) among \(\mathcal{A}_{0}^{h}\), so \(\mathcal{F}(\mathcal{S}^{\delta_{j}})=\mathcal{F}(\mathcal{S}^{0})\) and \(\mathcal{S}^{0}\) is minimal among \(\mathcal{A}_{h}^{\delta_{j}}\), too. But this is impossible, since \(\Sigma\neq\emptyset\) and Theorem 1.4 precludes the presence of interior or boundary triple junctions for minimizers when \(\delta>0\). Moving on to showing that \(\kappa_{j}\to\infty\), we fix \(y\in\Sigma\). Let us assume that \(y\in\partial B\) is a jump point of \(h\) between \(h=1\) and \(h=2\) with \(S_{3}^{0}\) being the third chamber in the triple junction, since the case when \(y\in B\) is easier. For all \(j\), by the containment (3.1) of the neighboring circular segments in \(S_{1}^{\delta_{j}}\) and \(S_{2}^{\delta_{j}}\), there exists \(r>0\) such that for all \(j\) and \(3\leq\ell\leq N\), \(\partial S_{\ell}^{\delta_{j}}\cap B_{r}(y)\subset B\) for some small \(r\). In particular, \(\partial S_{3}^{\delta_{j}}\cap B_{r}(y)\) is \(C^{1,1}\) by Theorem 1.4. Furthermore, since \(S_{3}^{\delta_{j}}\) converges as \(j\to\infty\) to a set with a corner in \(B_{r}(y)\), the \(C^{1,1}\) norms of \(\partial S_{3}^{\delta_{j}}\) must be blowing up on that ball. These norms are controlled in terms of \(\kappa_{j}\), and so \(\kappa_{j}\to\infty\).
_Step five_: In the next two steps, we prove (7.2). Here we show that
\[\sup_{x\in G^{\delta_{j}}}\operatorname{dist}(x,\Sigma)\to 0\,. \tag{7.12}\]
Suppose for contradiction that (7.12) did not hold. Then, up to a subsequence, we could choose \(r>0\) and \(y_{j}\in\text{cl }G^{\delta_{j}}\) such that
\[y_{j}\to y\in\text{cl }B\setminus\cup_{z\in\Sigma}B_{r}(z)\,.\]
Let us assume that \(y=\vec{e}_{1}\in\partial B\); we will point out the difference in the \(y\in B\) argument when the moment arises. We note that \(y\) must be a jump point of \(h\), say between \(h=1\) and \(h=2\), due to (3.1). Furthermore, by Theorem 1.6 and \(y\notin\Sigma\), there exists \(r^{\prime}>0\) such that
\[B_{r^{\prime}}(y)\cap B\subset\text{cl }S_{1}^{0}\cup\text{cl }S_{2}^{0}\,.\]
In particular, \(\text{dist}(y,S_{\ell}^{0})>r^{\prime}/2\) for \(3\leq\ell\leq N\). Therefore, due to (7.1), \(\text{dist}(y,S_{\ell}^{\delta_{j}})\geq r^{\prime}/2\) for large enough \(j\). Also by (3.1) applied to \(S_{1}^{\delta_{j}}\) and \(S_{2}^{\delta_{j}}\) and the convexity of connected components of those sets, we may choose small \(\varepsilon_{1}\) and \(\varepsilon_{2}\) such that on the rectangle
\[R=[1-\varepsilon_{1},1]\times[-\varepsilon_{2},\varepsilon_{2}]\subset B_{r^ {\prime}/2}(y)\,,\]
\(\partial S_{1}^{\delta_{j}}\cap R\cap B\) and \(\partial S_{2}^{\delta_{j}}\cap R\cap B\) are graphs of functions \(f_{1}^{j}\) and \(f_{2}^{j}\) over the \(\vec{e}_{1}\)-axis for all \(j\). Relabeling if necessary, we may take
\[-\varepsilon_{2}\leq f_{1}^{j}\leq f_{2}^{j}\leq\varepsilon_{2}\quad\text{ and}\quad(f_{1}^{j})^{\prime\prime}\leq 0\,,\ (f_{2}^{j})^{\prime\prime}\geq 0\,.\]
It is at this point that in the case \(y\in B\), we instead appeal to the Hausdorff convergence (7.1) and the convexity of the components of \(S_{\ell}^{\delta_{j}}\) to conclude that graphicality holds. Now the set
\[\mathcal{I}_{j}=\{t\in[1-\varepsilon_{1},1]:f_{1}^{j}=f_{2}^{j}\}\]
is a non-empty interval by the convexity of connected components of the chambers and the fact that \(f_{1}^{j}(1)=0=f_{2}^{j}(1)\). In addition, for each \(i=1,2\) and large \(j\),
\[f_{i}^{j}([1-\varepsilon_{1},1]\setminus\mathcal{I}_{j})\text{ is a graph of constant curvature }\kappa_{j}\]
since \(f_{1}^{j}<f_{2}^{j}\) implies that \((t,f_{i}^{j}(t))\in\partial G^{\delta_{j}}\). Since a graph of constant curvature \(\kappa_{j}\) can be defined over an interval of length at most \(2\kappa_{j}^{-1}\) and \(\kappa_{j}\to\infty\), we deduce that \(\mathcal{H}^{1}(\mathcal{I}_{j})\to\varepsilon_{1}\). Since \(1\in\mathcal{I}_{j}\) for all \(j\) and \(G_{j}\cap\text{int}\,\mathcal{I}_{j}\times[-\varepsilon_{2},\varepsilon_{2}]=\emptyset\), we conclude that \(G^{\delta_{j}}\) stays at positive distance from \(y=\vec{e}_{1}\), which is a contradiction. We have thus proved (7.12).
_Step six_: In this step, we prove the other half of (7.2), namely
\[\sup_{x\in\Sigma}\text{dist}(x,G^{\delta_{j}})\to 0\,. \tag{7.13}\]
For such an \(x\), say which is a triple junction between \(S_{1}^{0}\), \(S_{2}^{0}\), and \(S_{3}^{0}\), by (7.1) and the definition of \(\Sigma\), there exists \(r_{0}>0\) such that given \(r<r_{0}\), there exists \(J(r)\) such that
\[B_{r}(x)\cap S_{\ell}^{\delta_{j}}\neq\emptyset\quad\text{ for }\ell=1,2,3\text{ and }j\geq J(r)\,. \tag{7.14}\]
Furthermore, by decreasing \(r_{0}\) if necessary when \(x\in\partial B\cap\Sigma\) is a jump point of \(h\), the boundary condition (1.4) and absence of triple junctions for \(\delta>0\) allow us to choose \(1\leq\ell\leq 3\) such that
\[\partial S_{\ell}^{\delta}\cap\partial B\cap B_{r_{0}}(x)=\emptyset\quad\text{ for all }j\,. \tag{7.15}\]
Now (7.14) and (7.15) imply that \(\partial S_{\ell}^{\delta_{j}}\cap B_{r}(x)\subset B\) and is also non-empty for \(j\geq J(r)\). Since Theorem 1.4 implies that line segments in \(\partial S_{\ell}^{\delta_{j}}\) can only terminate inside \(B\) at interior cusp points in \(\partial G^{\delta}\) and \(S_{\ell}^{\delta_{j}}\cap B_{r}(x)\) converges to a sector with angle strictly less than \(\pi\), we find that \(G^{\delta_{j}}\cap B_{r}(x)\neq\emptyset\) for all \(j\geq J(r)\). Letting \(r\to 0\) gives (7.13).
_Step seven_: Finally, under the assumption that \(\Sigma=\{x_{1},\ldots,x_{P}\}\neq\emptyset\), we show that for large enough \(j\), \(G^{\delta_{j}}\) consists of \(P\) connected components, each of which is determined by three circle arcs contained in \(\partial S_{\ell_{i}}^{\delta_{j}}\cap\partial G^{\delta_{j}}\) for the three indices \(\ell_{i}\), \(i=1,2,3\), in the triple junction at \(x\). We fix
\(x\in\Sigma\) which is a triple junction between the first three chambers, so there is some \(B_{2r}(x)\) such that for each \(\ell\), \(B_{2r}(x)\cap S^{0}_{\ell}\) consists of exactly one connected component \(C_{\ell}\) of \(S^{0}_{\ell}\) for \(1\leq\ell\leq 3\) (also \(S^{0}_{\ell}\cap B_{2r}(x)=\emptyset\) for \(\ell\geq 4\)). Up to decreasing \(r\), we may also assume that
\[(\Sigma\setminus\{x\})\cap\operatorname{cl}B_{2r}(x)=\emptyset\,. \tag{7.16}\]
Recalling from step two (see (7.8) and the last paragraph) that the connected components of \(S^{\delta_{j}}_{\ell}\) converge in the Hausdorff sense to those of \(S^{0}_{\ell}\), for \(j\) large enough, we must have
\[B_{r}(x)\cap S^{\delta_{j}}_{\ell}=B_{r}(x)\cap C^{j}_{\ell}\neq\emptyset\quad 1 \leq\ell\leq 3 \tag{7.17}\]
for a single connected component \(C^{j}_{\ell}\), and, due to (7.2) and (7.16),
\[\operatorname{cl}G^{\delta_{j}}\cap\operatorname{cl}B_{r}(x)\subset B_{r/4}( x)\,. \tag{7.18}\]
Now \(\partial G_{\delta_{j}}\cap B_{r}(x)\) consists of finitely many circle arcs and has negative mean curvature (with respect to the outward normal \(\nu_{C^{\delta_{j}}}\)) along these arcs away from cusps. We claim that for \(j\) large, there are precisely three such arcs, one bordering each \(S^{\delta_{j}}_{\ell}\) for \(1\leq\ell\leq 3\) and together bounding one connected component of \(G^{\delta_{j}}\). There must be at least three arcs, since an open set bounded by two circle arcs has corners rather than cusps. To finish the proof, it suffices to show that there cannot be more than two distinct arcs belonging to \(\partial G^{\delta_{j}}\cap\partial S^{\delta_{j}}_{\ell}\cap B_{r/4}(x)\) for a single \(\ell\in\{1,2,3\}\). If there were, then \(\partial S^{\delta_{j}}_{\ell}\cap B_{r}(x)\) would contain at least three distinct segments, because with only two, each of which has one endpoint outside of \(B_{r}(x)\) according to (7.17)-(7.18), one cannot resolve three cusp points as dictated by Theorem 1.4. As a consequence, there exists \(\ell^{\prime}\neq\ell\) such that up to a subsequence, for large \(j\), there are two distinct segments, \(L_{1}\) and \(L_{2}\), both belonging \(\partial S^{\delta_{j}}_{\ell}\cap\partial S^{\delta_{j}}_{\ell^{\prime}}\cap B _{r}(x)\) and separated by at least one circle arc. It is therefore the case that \(L_{1}\) and \(L_{2}\) are not collinear. Also by (7.17), there is only a single convex component \(C^{j}_{\ell^{\prime}}\) of \(S^{\delta_{j}}_{\ell^{\prime}}\) containing \(S^{\delta_{j}}_{\ell^{\prime}}\cap B_{r}(x)\). Therefore, \(L_{1}\cup L_{2}\subset\partial C^{j}_{\ell}\cap\partial C^{j}_{\ell^{\prime}}\). But this is impossible: since a planar convex set lies on one side of any tangent line, \(\partial C^{j}_{\ell}\) and \(\partial C^{j}_{\ell^{\prime}}\) cannot share two non-collinear segments.
|
2301.12643 | Adversarial Style Augmentation for Domain Generalization | It is well-known that the performance of well-trained deep neural networks
may degrade significantly when they are applied to data with even slightly
shifted distributions. Recent studies have shown that introducing certain
perturbation on feature statistics (\eg, mean and standard deviation) during
training can enhance the cross-domain generalization ability. Existing methods
typically conduct such perturbation by utilizing the feature statistics within
a mini-batch, limiting their representation capability. Inspired by the domain
generalization objective, we introduce a novel Adversarial Style Augmentation
(ASA) method, which explores broader style spaces by generating more effective
statistics perturbation via adversarial training. Specifically, we first search
for the most sensitive direction and intensity for statistics perturbation by
maximizing the task loss. By updating the model against the adversarial
statistics perturbation during training, we allow the model to explore the
worst-case domain and hence improve its generalization performance. To
facilitate the application of ASA, we design a simple yet effective module,
namely AdvStyle, which instantiates the ASA method in a plug-and-play manner.
We justify the efficacy of AdvStyle on tasks of cross-domain classification and
instance retrieval. It achieves higher mean accuracy and lower performance
fluctuation. Especially, our method significantly outperforms its competitors
on the PACS dataset under the single source generalization setting, \eg,
boosting the classification accuracy from 61.2\% to 67.1\% with a ResNet50
backbone. Our code will be available at \url{https://github.com/YBZh/AdvStyle}. | Yabin Zhang, Bin Deng, Ruihuang Li, Kui Jia, Lei Zhang | 2023-01-30T03:52:16Z | http://arxiv.org/abs/2301.12643v1 | # Adversarial Style Augmentation for Domain Generalization
###### Abstract
It is well-known that the performance of well-trained deep neural networks may degrade significantly when they are applied to data with even slightly shifted distributions. Recent studies have shown that introducing certain perturbation on feature statistics (_e.g._, mean and standard deviation) during training can enhance the cross-domain generalization ability. Existing methods typically conduct such perturbation by utilizing the feature statistics within a mini-batch, limiting their representation capability. Inspired by the domain generalization objective, we introduce a novel Adversarial Style Augmentation (ASA) method, which explores broader style spaces by generating more effective statistics perturbation via adversarial training. Specifically, we first search for the most sensitive direction and intensity for statistics perturbation by maximizing the task loss. By updating the model against the adversarial statistics perturbation during training, we allow the model to explore the worst-case domain and hence improve its generalization performance. To facilitate the application of ASA, we design a simple yet effective module, namely AdvStyle, which instantiates the ASA method in a plug-and-play manner. We justify the efficacy of AdvStyle on tasks of cross-domain classification and instance retrieval. It achieves higher mean accuracy and lower performance fluctuation. Especially, our method significantly outperforms its competitors on the PACS dataset under the single source generalization setting, _e.g._, boosting the classification accuracy from 61.2% to 67.1% with a ResNet50 backbone. Our code will be available at [https://github.com/YBZh/AdvStyle](https://github.com/YBZh/AdvStyle).
## 1 Introduction
Deep neural networks (DNN) have exhibited impressive performance on many vision tasks, especially when the training and test data follow the same distribution, _i.e._, the so-called assumption of independent and identical distribution (IID). Unfortunately, the IID assumption may not hold in practical applications. For instance, detection models trained on samples collected in sunny days may be applied to data collected in bad weather (_e.g._, rainy days). In such cross-domain application scenarios, the DNN models may suffer from significant performance degradation. To solve this issue, domain generalization (DG) has been introduced to improve the generalization performance of DNN models to unseen test domains, and many DG methods have been developed recently [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22].
Generally speaking, DG aims to learn a model from the source domain(s) while ensuring that the learned model could perform well to any unseen test domains. The DG problem can be conventionally solved by learning domain invariant features [11, 12, 13, 15], employing meta-learning strategies [11, 12, 13], performing data augmentations [12, 13, 14], and so on [15, 16]. Recently, performing style augmentation in feature space by conducting feature statistics perturbation has attracted increasing attention due to its simplicity and efficacy [13, 14, 15, 16, 17, 18, 19, 20]. Specifically, it has been observed in the application of style transfer [13, 14] that feature statistics (_e.g._, mean and standard deviation) characterize the style information, and changing such statistics of image features will result in style-changed but content-preserved output images. Inspired by this observation, researchers have proposed to perform feature statistics perturbation to introduce style-augmented samples [13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. By expanding the training data with these style-augmented samples, improved generalization performance of DNN models has been observed [13, 14, 15, 16, 17, 18, 19, 20, 21, 22].
In such a statistics perturbation-based DG framework, the perturbation strategy is of vital importance. [13] and Zhou _et al._[15] respectively introduced the statistics perturbation by randomly swapping or linearly interpolating statistics of two instances within a mini-batch. Li _et al._[11] sampled the feature statistics perturbation from Gaussian distributions, the means and standard deviations of which are estimated from the current mini-batch. Although improved performance has been achieved, existing methods, unfortunately, constrain the representation space of feature statistics perturbation within that of the current mini-batch, limiting
the diversity of style augmentations.
To explore a broader style space beyond that spanned by batch statistics, we propose to generate more diverse statistics perturbation. Similar to [11], we model the feature statistics (_i.e._, mean and standard deviation) as Gaussian distributions and utilize the vanilla feature statistics as the means of these Gaussians. In [11], the standard deviations of these Gaussians are estimated from the current mini-batch. In contrast, inspired by the DG objective (see Equ. (6)), we model these standard deviations as learnable parameters and optimize them via adversarial training, resulting in the Adversarial Style Augmentation (ASA) method. Specifically, by maximizing the task loss w.r.t. the learnable standard deviations, we approach the most sensitive perturbation direction and intensity (_i.e._, the worst-case domain). Meanwhile, by minimizing the task loss w.r.t. the vanilla task model, we update the model against the worst-case domain perturbation. The model is expected to generalize well to tough unseen test domains if it could generalize to the worst-case domain.
The proposed ASA method could be directly implemented by conducting the maximization and minimization steps iteratively (please refer to Fig. 2 for more details). As illustrated in Sec. 4.3, such an iterative optimization strategy could lead to outstanding performance. To facilitate the application of ASA in practice, we take a step further and propose a simple yet effective module, namely AdvStyle, to enable end-to-end training of ASA. Specifically, we input the learnable standard deviations into Gradient Reverse Layers (GRL) [10, 10] before utilizing them to generate statistics perturbations (see Equ. (8) and Fig. 3 for more information). By minimizing the task objective solely, the objective minimization w.r.t. the vanilla model parameters and the maximization w.r.t. the learnable standard deviations are achieved simultaneously, thanks to the gradient reverse function of GRL. The AdvStyle could be easily implemented and it works in a plug-and-play manner.
We apply the proposed method on tasks of cross-domain classification and instance retrieval. On standard DG benchmarks, ASA improves its competitors with higher mean accuracy and lower performance fluctuation. Especially, a significant performance boost is observed on the PACS dataset under the single source generalization setting (_e.g._, the performance is boosted from 61.2% to 67.1% with a ResNet50 backbone), validating the effectiveness of our proposed ASA method. We summarize our contributions as follows:
* We propose a novel Adversarial Style Augmentation (ASA) method, which could explore a broader style space by performing feature statistics perturbation with less constraints via adversarial training.
* To facilitate the application of ASA in practice, we introduce an AdvStyle module so that ASA can be used in a plug-and-play manner.
* We perform detailed analyses on standard benchmarks of cross-domain classification and instance retrieval. On top of improved mean accuracy, ASA presents lower performance fluctuation, justifying its effectiveness.
## 2 Related Work
**Domain generalization.** Domain generalization (DG) targets at developing robust DNN models that can perform well on unseen test domains. The representative DG methods learn domain-invariant feature representations [13, 11, 12, 14], or employ meta-learning [11, 12], or perform data augmentation [13, 14, 15, 16, 17, 18, 19, 20, 21]. Our method adopts the data augmentation strategy, more specifically, feature-based augmentation [15, 16, 17, 18, 19, 20, 21]. It has been empirically observed in the task of style transfer [14, 15] that feature statistics can characterize the image styles, and the perturbation on such statistics could yield style-changed but semantic-preserved output. Based on such observation, researchers started to introduce style/distribution augmented training samples by feature statistics perturbation into DG model training [15, 16, 17, 18, 19, 20, 21].
To achieve feature statistics perturbation, [15] proposed to swap feature statistics between instances within a mini-batch and, similarly, [17] linearly interpolated feature statistics between instances. Besides the first and second order statistics used in [15, 16, 17], Zhang _et al._[20] implicitly considered high-order statistics for more effective statistics perturbation. Although improved generalization performance has been observed, the augmented statistics highly rely on the observed feature statistics of training instances, limiting the diversity of statistics. To introduce more diverse statistics perturbations, [11] modeled the feature statistics as multi-variate Gaussian distributions and randomly sampled statistics variants from the Gaussians, as illustrated in Fig. 1. This expands the statistics space of instances, which can be described by the standard deviations of Gaussians. However, these Gaussian standard deviations are estimated from the mini-batch statistics, still limiting the statistics diversity.
In this work, we solve the above mentioned problem by acquiring the Gaussian standard deviations with adversarial training. Specifically, instead of estimating Gaussian standard deviations with mini-batch statistics as in [11], we model the Gaussian standard deviations as learnable parameters, leading to a less constrained statistics space. By maximizing the task objective w.r.t. these standard deviations, we explore the most sensitive direction and intensity for statistics perturbation so that the trained DNN model can perform more robustly on unseen test domains.
**Adversarial training.** Adversarial training was first introduced in [1], where a discriminator is used to distinguish whether a sample comes from the training data or the generative models. Once the discriminator is fully confused by samples from generators, the generative models successfully recover the training data distribution.
The adversarial training strategy was later adopted to align feature distributions in domain adaptation [1, 1, 10], generative photo-realistic super-resolution [1, 13], data augmentation [10, 11, 12], and so on. Different from most data augmentation methods that introduce augmented samples in the image space [10, 11], we generate augmented samples in the feature space, which is more computationally efficient. Moreover, different from the existing feature augmentation methods [10, 12] that conduct adversarial perturbation on raw features, we adversarially change the feature statistics, resulting in specific perturbations along the style dimension. Furthermore, we propose a simple yet effective module to implement our method in a plug-and-play manner, facilitating its usage.
## 3 Methods
Denote by \(\mathbf{x}\in\mathbb{R}^{B\times C\times HW}\) the features encoded by some stacked neural layers, and we respectively denote by \(\mu(\mathbf{x})\in\mathbb{R}^{B\times C}\) and \(\sigma(\mathbf{x})\in\mathbb{R}^{B\times C}\) the mean and standard deviation of features in each channel for an instance, where \(B,C,H\) and \(W\) represent the batch size, channel dimension, height and width, respectively. Specifically, \(\mu(\mathbf{x})\) and \(\sigma(\mathbf{x})\) are computed as:
\[\mu_{b,c}(\mathbf{x}) =\frac{1}{HW}\sum_{i=1}^{HW}\mathbf{x}_{b,c,i}, \tag{1}\] \[\sigma_{b,c}^{2}(\mathbf{x}) =\frac{1}{HW}\sum_{i=1}^{HW}\left(\mathbf{x}_{b,c,i}-\mu_{b,c}(\mathbf{x })\right)^{2} \tag{2}\]
where \(\mu_{b,c}(\mathbf{x})\) represents the mean in the \(c\)-th channel of the \(b\)-th instance, and \(\sigma_{b,c}^{2}(\mathbf{x})\) and \(\mathbf{x}_{b,c,i}\) are similarly defined.
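To connect Equ. (1)-(2) with an implementation, a minimal PyTorch helper (our own naming, not taken from any released code) could read:

```python
import torch

def instance_stats(x: torch.Tensor, eps: float = 1e-6):
    """Per-instance, per-channel mean and standard deviation of Equ. (1)-(2).

    x is a feature map of shape (B, C, H, W); both outputs have shape (B, C).
    """
    b, c = x.shape[:2]
    feat = x.reshape(b, c, -1)               # flatten the H*W spatial positions
    mu = feat.mean(dim=2)                    # Equ. (1)
    var = feat.var(dim=2, unbiased=False)    # Equ. (2)
    sigma = (var + eps).sqrt()               # eps keeps the square root numerically stable
    return mu, sigma
```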
It is found in [1, 10] that the feature statistics, _e.g._, the mean and standard deviation in Equ. (1) and Equ. (2), can characterize the style/distribution of input images, such as lighting conditions and textures. Therefore, performing statistics perturbation could generate style-changed but semantic-preserved augmented samples, providing an effective method for DG tasks [10, 11, 12]. Nonetheless, existing methods along this line [10, 11, 12] mostly utilize feature statistics within the mini-batch to compute statistics perturbation, limiting the statistics representation capacity.
Figure 1: A toy example on the comparison across different style augmentation-based DG methods. In MixStyle [10], new styles are introduced by linearly interpolating statistics between two styles. pAdaIN [13] is a special case of [10], where only edge points along the linear connection line are considered via statistics swapping. In [11], the uncertainty of statistics is modeled via multi-variant Gaussian, whose mean and standard deviation are instantiated with the vanilla statistics points and batch standard deviation, respectively. All these methods limit the representation space of statistics within the space spanned by the mini-batch. In contrast, we approach a broader style space by exploring statistics perturbation along the most sensitive direction and intensity that maximize the task objective.
In the following, we propose to perform statistics perturbation via adversarial training to overcome this limitation.
### Adversarial Style Augmentation
Following [13, 14, 15], we implement our style augmentation module (SAM) based on the framework of AdaIN [16], which performs style transformation by replacing the feature statistics as:
\[\mathbf{x}_{t}=\sigma_{t}\left(\frac{\mathbf{x}-\mu(\mathbf{x})}{\sigma(\mathbf{x})}\right)+\mu_ {t}, \tag{3}\]
where \(\mathbf{x}_{t}\) are the features with new styles decided by \(\mu_{t}\in\mathbb{R}^{B\times C}\) and \(\sigma_{t}\in\mathbb{R}^{B\times C}\). Existing methods typically introduce \(\mu_{t}\) and \(\sigma_{t}\) with feature statistics within a mini-batch, _e.g._, by randomly shuffling or interpolating feature statistics across instances [13, 14] or randomly sampling from a Gaussian distribution estimated from the current mini-batch [12]. Such strategies limit the representation capacity of \(\mu_{t}\) and \(\sigma_{t}\) within a small statistics space. In this paper, we aim to explore larger style spaces by introducing more diverse \(\mu_{t}\) and \(\sigma_{t}\). Specifically, following [12], we first model the underlying distribution of \(\mu_{t}\) and \(\sigma_{t}\) as the popular Gaussian with the re-parameterization trick [15]:
\[\mu_{t}=\mu(\mathbf{x})+\epsilon_{\mu}\Sigma_{\mu},\quad\epsilon_{ \mu}\sim\mathcal{N}(\mathbf{0},\mathbf{1}), \tag{4}\] \[\sigma_{t}=\sigma(\mathbf{x})+\epsilon_{\sigma}\Sigma_{\sigma},\quad \epsilon_{\sigma}\sim\mathcal{N}(\mathbf{0},\mathbf{1}), \tag{5}\]
where \(\Sigma_{\mu}\in\mathbb{R}^{C}\) and \(\Sigma_{\sigma}\in\mathbb{R}^{C}\) control the direction and intensity of the statistics perturbation, _i.e._, the representation space of style augmentation. Therefore, the exploration of style space could be achieved by exploring \(\Sigma_{\mu}\) and \(\Sigma_{\sigma}\).
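Continuing the sketch above, Equ. (3)-(5) can be written as the following hypothetical helper, where `Sigma_mu` and `Sigma_sigma` stand for whatever perturbation scales a concrete method supplies (batch-estimated statistics in [12], learnable parameters in our ASA):

```python
def perturb_style(x, Sigma_mu, Sigma_sigma):
    """Sample perturbed statistics (Equ. (4)-(5)) and re-normalize the features (Equ. (3)).

    x: features of shape (B, C, H, W); Sigma_mu, Sigma_sigma: tensors of shape (C,).
    """
    b, c = x.shape[:2]
    mu, sigma = instance_stats(x)                       # (B, C), from the helper above
    eps_mu = torch.randn(b, c, device=x.device)         # epsilon_mu    ~ N(0, 1)
    eps_sigma = torch.randn(b, c, device=x.device)      # epsilon_sigma ~ N(0, 1)
    mu_t = mu + eps_mu * Sigma_mu                       # Equ. (4)
    sigma_t = sigma + eps_sigma * Sigma_sigma           # Equ. (5)
    # Broadcast the (B, C) statistics over the spatial dimensions.
    mu, sigma = mu[..., None, None], sigma[..., None, None]
    mu_t, sigma_t = mu_t[..., None, None], sigma_t[..., None, None]
    return sigma_t * (x - mu) / sigma + mu_t            # Equ. (3)
```

The only design question left open by Equ. (3)-(5) is how to choose \(\Sigma_{\mu}\) and \(\Sigma_{\sigma}\), which is exactly where the methods discussed next differ.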
Li _et al._[12] constructed \(\Sigma_{\mu}\) and \(\Sigma_{\sigma}\) as the standard deviations of \(\mu(\mathbf{x})\) and \(\sigma(\mathbf{x})\) along the batch dimension. Although effective, this method limits the style augmentation space to that of the current mini-batch. To overcome this limitation, we propose to explore a broader style space by imposing less constraints on \(\Sigma_{\mu}\) and \(\Sigma_{\sigma}\).
In [1], the DG objective is defined as:
\[R^{DG}(f)=\min_{f}\max_{e\in\varepsilon_{all}}R^{e}(f), \tag{6}\]
where \(R^{e}(f)\) refers to the risk of model \(f\) within the domain \(e\). Given the training domain(s) \(\varepsilon_{tr}\), DG aims to learn a model \(f\) that performs well across a set of unseen but related domains \(\varepsilon_{all}\supset\varepsilon_{tr}\). Assuming that the unknown test domain also belongs to \(\varepsilon_{all}\), the objective of Equ. (6) suggests us to minimize the risk of the worst-case domain among \(\varepsilon_{all}\).
Inspired by the DG objective in Equ. (6), we propose to model \(\Sigma_{\mu}\) and \(\Sigma_{\sigma}\) as learnable parameters and optimize them via adversarial training, resulting in the following Adversarial Style Augmentation (ASA) method:
\[\min_{\theta}\max_{\Sigma}\mathcal{L}_{task}(\mathbf{x},\mathbf{y},\theta,\Sigma), \tag{7}\]
where \(\mathbf{x}\) and \(\mathbf{y}\) are the input data and their corresponding labels, \(\theta\) and \(\Sigma\) are the parameters of the vanilla task model and the collection of all \(\Sigma_{\mu}\) and \(\Sigma_{\sigma}\), respectively. \(\mathcal{L}_{task}(\cdot)\) is the overall objective defined by the considered tasks. For example, \(\mathcal{L}_{task}(\cdot)\) is typically instantiated as the cross-entropy loss in category classification. By learning \(\Sigma\) with Equ. (7), we simultaneously explore the perturbation direction (_e.g._, the principal orientation direction of \(\Sigma\)) and intensity (_e.g._, the Euclidean norm of \(\Sigma\)), which are individually investigated in Sec. 4.3.
Let us further clarify the relationship between the DG objective in Equ. (6) and our proposed ASA objective in Equ. (7). Since \(\Sigma\) controls the representation space of the style perturbation, exploring the worst-case domain of \(\varepsilon_{all}\) in Equ. (6) could be achieved by maximizing the task loss with respect to \(\Sigma\) in Equ. (7). By iteratively exploring the most sensitive \(\Sigma\) (_i.e._, the worst-case domain) for the current model and optimizing the model against such worst-case perturbation, the model is expected to generalize to the worst-case domain, and therefore to any unseen test domains.
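Read in this way, Equ. (7) amounts to the alternating update sketched below (schematic PyTorch only; `model` is assumed to apply the perturbation of Equ. (3)-(5) internally through the learnable scales optimized by `opt_sigma`, and the function and optimizer names are ours rather than the exact training recipe used in Sec. 4):

```python
def asa_minimax_step(model, batch, labels, criterion, opt_theta, opt_sigma):
    """One round of the minimax objective in Equ. (7).

    opt_sigma optimizes only the perturbation scales (Sigma);
    opt_theta optimizes only the vanilla task parameters (theta).
    """
    # Maximization step: find the most harmful statistics perturbation (worst-case style).
    loss_adv = criterion(model(batch), labels)
    opt_sigma.zero_grad()
    (-loss_adv).backward()        # gradient ascent on the task loss w.r.t. Sigma
    opt_sigma.step()

    # Minimization step: update the task model against the worst-case perturbation.
    loss_task = criterion(model(batch), labels)
    opt_theta.zero_grad()
    loss_task.backward()
    opt_theta.step()
    return loss_task.item()
```

The AdvStyle module introduced next folds both steps into a single backward pass by means of a gradient reverse layer.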
### AdvStyle
One may opt to implement the ASA method in Equ. (7) by optimizing \(\theta\) and \(\Sigma\) iteratively, as presented in Fig. 2. Here, we propose a simple yet effective module, namely AdvStyle, to instantiate ASA in a plug-and-play manner. Motivated by the seminal adversarial domain adaptation methods [15, 16], we propose the AdvStyle module as:
\[\mathbf{x}_{t}=\sigma_{adv}\left(\frac{\mathbf{x}-\mu(\mathbf{x})}{\sigma(\mathbf{x})}\right)+ \mu_{adv}, \tag{8}\]
Figure 3: An illustration of the AdvStyle module, where ’GRL’ is the gradient reverse layer [15, 16].
Figure 2: An illustration of the Adversarial Style Augmentation (ASA) method with iterative minimax optimization. Note that we promote to conduct ASA by inserting AdvStyle modules (cf. Sec. 3.2) in a plug-and-play manner.
where
\[\mu_{adv} =\mu(\mathbf{x})+\epsilon_{\mu}GRL(\Sigma_{\mu},\lambda),\quad\epsilon_{ \mu}\sim\mathcal{N}(\mathbf{0},\mathbf{1}), \tag{9}\] \[\sigma_{adv} =\sigma(\mathbf{x})+\epsilon_{\sigma}GRL(\Sigma_{\sigma},\lambda), \quad\epsilon_{\sigma}\sim\mathcal{N}(\mathbf{0},\mathbf{1}). \tag{10}\]
where \(GRL(\Sigma_{\star},\lambda)\) is the gradient reverse layer [1], which outputs the vanilla \(\Sigma_{\star}\) in the forward pass and multiplies the gradients with \(-\lambda\) in the gradient back-propagation. \(\lambda\) is a hyper-parameter and validated in Sec. 4.3. The gradient reverse layer is widely adopted to perform adversarial training between feature extractor and domain discriminator in domain adaptation [1, 1].
To our best knowledge, we are the first to employ gradient reverse layer to perform adversarial training between style statistics (_i.e._, \(\Sigma\)) and the vanilla model (_i.e._, \(\theta\)) in DG. Similar to [12, 13], we only activate the AdvStyle module for training and deactivate it in the test stage. As discussed in Sec. 4.3, conducting ASA by simply inserting AdvStyle into the DNN models gives comparable performance to the iterative optimization-based variant (cf. Fig. 2). Therefore, we promote to conduct ASA with AdvStyle.
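For concreteness, the following self-contained PyTorch sketch implements a gradient reverse layer and the AdvStyle module of Equ. (8)-(10); the class and attribute names are ours, and the zero initialization of the learnable scales is an illustrative choice rather than something prescribed above:

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class AdvStyle(nn.Module):
    """Adversarial style augmentation with learnable statistics perturbation (Equ. (8)-(10))."""

    def __init__(self, num_channels, lam=5.0, eps=1e-6):
        super().__init__()
        self.lam, self.eps = lam, eps
        # Learnable perturbation scales Sigma_mu and Sigma_sigma, one entry per channel
        # (zero initialization here is a free choice made for illustration).
        self.Sigma_mu = nn.Parameter(torch.zeros(num_channels))
        self.Sigma_sigma = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x):
        if not self.training:                 # active only during training
            return x
        b, c = x.shape[:2]
        mu = x.mean(dim=(2, 3), keepdim=True)                                        # Equ. (1)
        sigma = (x.var(dim=(2, 3), keepdim=True, unbiased=False) + self.eps).sqrt()  # Equ. (2)
        # GRL: minimizing the task loss then maximizes it w.r.t. the Sigma parameters.
        s_mu = GradReverse.apply(self.Sigma_mu, self.lam).view(1, c, 1, 1)
        s_sigma = GradReverse.apply(self.Sigma_sigma, self.lam).view(1, c, 1, 1)
        mu_adv = mu + torch.randn(b, c, 1, 1, device=x.device) * s_mu                # Equ. (9)
        sigma_adv = sigma + torch.randn(b, c, 1, 1, device=x.device) * s_sigma       # Equ. (10)
        return sigma_adv * (x - mu) / sigma + mu_adv                                 # Equ. (8)
```

With such a module inserted into a backbone, a single minimization of the task loss updates the vanilla parameters \(\theta\) while, through the reversed gradients, simultaneously performing the maximization over \(\Sigma\); the argument `lam` plays the role of the hyper-parameter \(\lambda\) analyzed in Sec. 4.3.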
## 4 Experiments
In this section, we first conduct experiments on tasks of cross-domain classification and instance retrieval to justify the effectiveness of our proposed ASA method, especially the AdvStyle module. Then, ablation studies are provided to analyze the use of our method. Besides, we also justify the effectiveness of our method on the _cross-domain segmentation task_ and generalization performance to _images with corruptions_ in the **supplementary material**. All experiments are performed under the PyTorch framework on GeForce RTX \(2080\)Ti GPUs.
### Generalization on Classification
**Implementation details.** We conduct the classification experiments on the benchmark PACS dataset [10]. There are \(9,991\) samples from \(7\) classes and \(4\) domains, _i.e._, Art, Cartoon, Photo, and Sketch. We closely follow [12] to prepare the training and test data, set up the optimization strategy, and conduct model selection. Particularly, we perform existing style augmentation-based DG methods [13, 14, 15, 16] under the same setting for fair comparison. Experiments are conducted under the single source generalization setting, where we train models on samples of one domain and test them on the remaining three domains. We also validate the effectiveness of our method in the leave-one-domain-out setting, which is detailed in the **supplementary material**.
The ResNet-18, ResNet-50 [17] and VGG16 [18] models pre-trained on the ImageNet dataset [12] are adopted as the backbones. We also apply our method to existing DG algorithms [13, 14] by inserting the plug-and-play module into their backbones. Besides the widely-used mean accuracy across different tasks, we additionally report the standard deviation of classification accuracy across tasks. A smaller standard deviation of accuracy represents smaller performance fluctuation across different tasks, indicating more robust generalization ability.
**Results.** All results are shown in Tab. 1. Modelling the uncertainty of statistics via multi-variant Gaussian distribution [10] typically outperforms those methods based on statistics interpolation [13, 14] because the statistics space can be more effectively expanded by nonlinear distribution modelling, as we illustrated in Fig. 1 with the toy example. By introducing statistics perturbation via adversarial training, we further expand the potential style space towards the worst-case domain, leading to notable performance improvement. For example, by using ResNet-50 as backbone, our method boosts its closest competitor, _i.e._, DSU, from \(57.3\%\) to \(67.1\%\) on single source domain generalization, resulting in a \(144\%\) relative accuracy improvement over the ResNet-50 baseline (_i.e._, from \(6.8\%\) to \(16.6\%\)). Our method also outperforms the recent work [15] that explores broader style spaces by utilizing high-order batch statistics, revealing the advantage of exploring style spaces beyond batch statistics. Additionally, our method achieves the lowest accuracy standard deviation across different tasks with different backbones. This smaller performance fluctuation demonstrates the robust generalization ability of our proposed method.
What's more, it is also found that our method is complementary to existing DG approaches that adopt self-supervised regularization and data augmentation in the image space. For example, Carlucci _et al._[16] introduced the self-supervised regularization signals by solving a jigsaw puzzle, while Xu _et al._[13] proposed a Fourier-based image augmentation strategy to enhance the cross-domain generalization ability. As shown in Tab. 1, by coupling with our proposed AdvStyle, these two methods could be significantly boosted, justifying the nice plug-and-play property of our method.
### Generalization on Instance Retrieval
We closely follow [12, 15] to perform the cross-domain instance retrieval task on person re-identification (re-ID) datasets of Market1501 [15] and Duke [14, 15, 16]. Specifically, we conduct experiments with the OSNet [12] and report the results of ranking accuracy and mean average precision (mAP). As illustrated in Tab. 2, our AdvStyle boosts the vanilla baseline by a large margin (_e.g._, the mAP is boosted from 25.0 to 32.0 on the Duke\(\rightarrow\)MarKet1501 task). Compared to the common augmentation strategies [15, 16], the style augmentation methods [12, 13, 14] present clear advantages. More importantly, our AdvStyle significantly outperforms other style augmentation competitors [13, 14, 15], validating the effectiveness of expanding the style space via adversarial training.
### Ablation and Analyses
**Implementations of the ASA method.** We compare the two implementations of ASA, _i.e._, the iterative optimization strategy as shown in Fig. 2 and plugging the AdvStyle modules into DNN models. As illustrated in Tab. 3, the two implementations achieve comparable performance. Though the iterative optimization-based variant presents slightly higher accuracy, we promote the AdvStyle-based variant in practice since it instantiates the ASA in a plug-and-play manner. We adopt the AdvStyle-based variant as the default implementation of ASA in this paper.
**Where to apply AdvStyle.** We insert AdvStyle at different positions in the ResNet backbone and present the results in Tab. 4. Consistent performance improvements over the vanilla ResNet are observed no matter where the AdvStyle is inserted. The best performance is achieved when applying the AdvStyle to all the \(6\) analyzed positions, which is adopted as the default setting in this paper.
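To make the six insertion points of Tab. 4 concrete, one possible wiring around torchvision's ResNet-50 is sketched below (it assumes the `AdvStyle` module sketched in Sec. 3.2 above; the channel widths are those of the standard torchvision model, and this is an illustration rather than the released code):

```python
import torch.nn as nn
from torchvision.models import resnet50


class AdvStyleResNet50(nn.Module):
    """ResNet-50 with AdvStyle after Conv-1, Pool-1 and the four residual blocks."""

    def __init__(self, num_classes, lam=5.0):
        super().__init__()
        self.net = resnet50(pretrained=True)
        self.net.fc = nn.Linear(self.net.fc.in_features, num_classes)
        # Channel widths at the six analyzed positions of a ResNet-50.
        self.adv = nn.ModuleList(AdvStyle(c, lam) for c in (64, 64, 256, 512, 1024, 2048))

    def forward(self, x):
        n, a = self.net, self.adv
        x = a[0](n.relu(n.bn1(n.conv1(x))))   # after Conv-1
        x = a[1](n.maxpool(x))                # after Pool-1
        x = a[2](n.layer1(x))                 # after Res-1
        x = a[3](n.layer2(x))                 # after Res-2
        x = a[4](n.layer3(x))                 # after Res-3
        x = a[5](n.layer4(x))                 # after Res-4
        x = n.avgpool(x).flatten(1)
        return n.fc(x)
```

Since AdvStyle is an identity at test time, the network reduces to a plain ResNet-50 during evaluation.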
**The direction and intensity of statistics perturbation.** Compared to its closest competitor, DSU [11], our AdvStyle introduces different directions (_e.g._, the principal orientation directions of \(\Sigma_{\mu}\) and \(\Sigma_{\sigma}\)) and intensities (_e.g._, the Euclidean norms of \(\Sigma_{\mu}\) and \(\Sigma_{\sigma}\)) for statistics perturbation.
\begin{table}
\begin{tabular}{l|c c c c c c} \hline Method & Art & Cartoon & Photo & Sketch & Mean \(\uparrow\) & Std. \(\downarrow\) \\ \hline ResNet-18 & 58.6\(\pm\)2.4 & 66.4\(\pm\)0.7 & 34.0\(\pm\)1.8 & 27.5\(\pm\)4.3 & 46.6 & 18.8 \\ + pAdIN [20] & 61.5\(\pm\)2.1 & 71.2\(\pm\)0.8 & 41.1\(\pm\)1.9 & 33.1\(\pm\)3.5 & 51.7 & 17.7 \\ + MixStyle [21] & 61.9\(\pm\)2.2 & 71.5\(\pm\)0.8 & 41.2\(\pm\)1.8 & 32.2\(\pm\)4.1 & 51.7 & 18.1 \\ + DSU [11] & 63.8\(\pm\)2.0 & 73.6\(\pm\)0.5 & 39.1\(\pm\)0.8 & 38.2\(\pm\)1.2 & 53.7 & 17.8 \\ + EFDMix [22] & 63.2\(\pm\)2.3 & 73.9\(\pm\)0.7 & 42.5\(\pm\)1.8 & 38.1\(\pm\)3.7 & 54.4 & 17.0 \\ + AdvStyle (Ours) & **67.8\(\pm\)**0.6 & **74.5\(\pm\)**0.4 & **45.5\(\pm\)**1.9 & **47.2\(\pm\)**1.6 & **58.7** & **14.6** \\ \hline \hline ResNet-50 & 63.5\(\pm\)1.3 & 69.2\(\pm\)1.6 & 38.0\(\pm\)0.9 & 31.4\(\pm\)1.5 & 50.5 & 18.3 \\ + pAdAlN [20] & 67.2\(\pm\)1.7 & 74.9\(\pm\)1.4 & 43.3\(\pm\)0.7 & 36.5\(\pm\)1.7 & 55.5 & 18.5 \\ + MixStyle [21] & 67.5\(\pm\)1.5 & 75.2\(\pm\)1.3 & 42.8\(\pm\)0.8 & 36.4\(\pm\)1.2 & 55.5 & 18.8 \\ + DSU [11] & 71.4\(\pm\)0.2 & 76.9\(\pm\)1.3 & 42.8\(\pm\)0.3 & 38.2\(\pm\)1.1 & 57.3 & 19.6 \\ + EFDMix [22] & 75.3\(\pm\)0.9 & 77.4\(\pm\)0.8 & 48.0\(\pm\)0.9 & 44.2\(\pm\)2.4 & 61.2 & 17.6 \\ + AdvStyle (Ours) & **77.3\(\pm\)**0.4 & **78.8\(\pm\)**0.7 & **50.3\(\pm\)**1.5 & **61.8\(\pm\)**1.2 & **67.1** & **13.6** \\ \hline \hline VGG16 & 56.2\(\pm\)0.5 & 62.7\(\pm\)2.2 & 35.3\(\pm\)0.7 & 47.5\(\pm\)1.7 & 50.4 & 11.9 \\ + pAdAlN [20] & 57.1\(\pm\)1.1 & 63.7\(\pm\)1.9 & 36.7\(\pm\)0.8 & 48.7\(\pm\)1.6 & 51.6 & 11.7 \\ + MixStyle [21] & 57.3\(\pm\)0.9 & 64.1\(\pm\)1.6 & 37.0\(\pm\)0.6 & 48.6\(\pm\)1.8 & 51.8 & 11.7 \\ + DSU [11] & 58.3\(\pm\)1.0 & 65.8\(\pm\)1.3 & 38.0\(\pm\)0.4 & 49.7\(\pm\)2.8 & 53.0 & 11.9 \\ + EFDMix [22] & 58.9\(\pm\)1.1 & 66.2\(\pm\)0.9 & 38.6\(\pm\)0.5 & 50.6\(\pm\)2.3 & 53.6 & 11.8 \\ + AdvStyle (Ours) & **61.9\(\pm\)**1.0 & **67.3\(\pm\)**0.6 & **40.8\(\pm\)**0.6 & **52.9\(\pm\)**2.5 & **55.7** & **11.6** \\ \hline \hline JiGen [1] & 59.7\(\pm\)1.7 & 67.8\(\pm\)0.5 & 38.7\(\pm\)1.7 & 29.0\(\pm\)3.2 & 48.8 & 18.0 \\ + DSU [11] & 62.6\(\pm\)1.5 & 72.8\(\pm\)0.6 & 42.0\(\pm\)1.4 & 38.3\(\pm\)2.6 & 53.9 & 16.5 \\ + AdvStyle (Ours) & **68.2\(\pm\)**0.9 & **76.1\(\pm\)**0.5 & **48.4\(\pm\)**1.5 & **50.8\(\pm\)**1.3 & **60.9** & **13.4** \\ \hline \hline FACT [20] & 69.7\(\pm\)1.2 & 75.2\(\pm\)0.4 & 42.7\(\pm\)1.3 & 48.9\(\pm\)2.1 & 59.1 & 15.8 \\ + DSU [11] & 72.7\(\pm\)1.1 & 78.3\(\pm\)0.3 & 52.9\(\pm\)1.5 & 62.1\(\pm\)1.9 & 66.5 & 11.3 \\ + AdvStyle (Ours) & **74.9\(\pm\)**0.7 & **79.1\(\pm\)**0.4 & **57.3\(\pm\)**1.4 & **67.4\(\pm\)**1.2 & **69.7** & **9.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Domain generalization results of classification on the PACS dataset under the single source generalization setting, where the listed domain is adopted for training and results are reported on the remaining three domains.
\begin{table}
\begin{tabular}{l|c} \hline Method & Acc. (\%) \\ \hline
**Leave-one-domain-out generalization results** \\ \hline AdvStyle-based ASA & **87.0** \\ Iterative optimization-based ASA & **87.0** \\ \hline \hline
**Single source generalization results** \\ \hline AdvStyle-based ASA & 67.1 \\ Iterative optimization-based ASA & **68.1** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison between the AdvStyle-based and the iterative optimization-based implementations of ASA on the PACS dataset.
To investigate the impact of direction and intensity individually, we introduce the AdvStyle-Direction-Only and AdvStyle-Intensity-Only variants. In the former case, the perturbation direction is learned via our proposed adversarial training strategy, while the perturbation intensity is set with the batch statistics as in DSU [11]. In the latter case, in contrast, we set the perturbation direction with the batch statistics as in DSU [11] and learn the perturbation intensity via adversarial training. As illustrated in Tab. 5, AdvStyle-Direction-Only and AdvStyle-Intensity-Only outperform DSU by 7.6% and 4.3%, respectively, indicating that exploring perturbation direction is more effective than exploring the perturbation intensity. The best result is achieved with AdvStyle by exploring both perturbation direction and intensity simultaneously.
**Visualization.** As illustrated in Fig. 4, a broader style space is successfully achieved by using AdvStyle, which is qualitatively validated by the enlarged overlap regions across domains and quantitatively justified by the reduced \(\mathcal{A}\)-distance [1].
**Analyses on the hyper-parameter \(\lambda\).** The hyper-parameter \(\lambda\) controls the strength of statistics perturbation. As illustrated in Fig. 5, our method performs stably and significantly outperforms baselines within a large range of \(\lambda\) (_e.g._, \(0.5\leq\lambda\leq 20\)). We empirically find that \(\lambda=5\) achieves good results across a large range of tasks, and we adopt it as the default setting in this paper.
## 5 Conclusions
To expand the potential statistics space for more diverse style augmentations, we proposed a novel style augmentation method, namely Adversarial Style Augmentation (ASA), by performing feature statistics perturbation via adversarial training. To facilitate the application of ASA in practice, we further introduced a novel module, namely AdvStyle, to instantiate ASA in a plug-and-play manner. ASA outperformed existing style augmentation methods on tasks such as classification and instance retrieval. It was also found that expanding style spaces along the direction dimension is more effective than the intensity dimension, which may inspire more studies on the style space exploration.
\begin{table}
\begin{tabular}{c c c c c c|c} \hline Conv-1 & Pool-1 & Res-1 & Res-2 & Res-3 & Res-4 & Acc. (\%) \\ \hline ✓ & & & & & & 56.6 \\ ✓ & ✓ & & & & & 61.9 \\ ✓ & ✓ & ✓ & & & & 64.4 \\ ✓ & ✓ & ✓ & ✓ & & & 66.3 \\ ✓ & ✓ & ✓ & ✓ & ✓ & & 66.4 \\ ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & **67.1** \\ \hline \hline \multicolumn{6}{c|}{Vanilla ResNet-50 without AdvStyle} & \multicolumn{1}{c}{50.5} \\ \hline \end{tabular}
\end{table}
Table 4: Ablation studies on the inserting position of AdvStyle module on the PACS dataset, where all experiments are based on ResNet-50 backbone and follow the single source generalization setting. ‘Conv-1’, ‘Pool-1’, ‘Res-1’, ‘Res-2’, ‘Res-3’, ‘Res-4’ denote whether we apply the AdvStyle after the first convolution layer, the first max pooling layer, the first residual block, the second residual block, the third residual block, the fourth residual block, respectively.
Figure 4: T-SNE [23] and \(\mathcal{A}\)-distance (_i.e._, \(Dist_{\mathcal{A}}\)) [1] of the feature representations on PACS dataset, where a smaller \(Dist_{\mathcal{A}}\) represents smaller distribution divergence. We adopt the Art and Sketch as the source and target domains, respectively. More results are provided in the **supplementary material**.
Figure 5: Results with different values of \(\lambda\) on the PACS dataset under the single source generalization setting.
\begin{table}
\begin{tabular}{l|c} \hline Method & Acc. (\%) \\ \hline DSU [11] & 57.3 \\ AdvStyle-Intensity-Only & 61.6 \\ AdvStyle-Direction-Only & 64.9 \\ AdvStyle & **67.1** \\ \hline \end{tabular}
\end{table}
Table 5: Domain generalization performance on the PACS dataset. Please refer to the main text for the definitions of AdvStyle-Intensity-Only and AdvStyle-Direction-Only. |
2304.00499 | On the Pseudonullity of Fine Selmer groups over function fields | The $p^\infty$-fine Selmer group of an elliptic curve $E$ over a global field
is a subgroup of the classical $p^\infty$-Selmer group. Coates and Sujatha
discovered that the structure of the fine Selmer group of $E$ over certain
$p$-adic Lie extensions of a number field is intricately related to some deep
questions in classical Iwasawa theory. Inspired by a conjecture of Greenberg,
they made prediction about the structure of the fine Selmer group over certain
$p$-adic Lie extensions of a number field, which they called Conjecture B. In
this article, we discuss some new cases of Conjecture B and its analogues over
some $p$-adic Lie extensions of function fields of characteristic $p$. | Sohan Ghosh | 2023-04-02T10:23:28Z | http://arxiv.org/abs/2304.00499v2 | # On the pseudonullity of fine Selmer groups over function fields.
###### Abstract.
The \(p^{\infty}\)-fine Selmer group of an elliptic curve \(E\) over a global field is a subgroup of the classical \(p^{\infty}\)-Selmer group. Coates and Sujatha discovered that the structure of the fine Selmer group of \(E\) over certain \(p\)-adic Lie extensions of a number field is intricately related to some deep questions in classical Iwasawa theory. Inspired by a conjecture of Greenberg, [CS] made prediction about the structure of the fine Selmer group over certain \(p\)-adic Lie extensions of a number field, which they called Conjecture B. In this article, we discuss some new cases of Conjecture B and its analogues over some \(p\)-adic Lie extensions of function fields of characteristic \(p\).
AMS Subject Classification: 11R23, 11G05, 11S25, 11R60
Keywords and phrases: Iwasawa theory, fine Selmer groups, function fields
**Conjecture B**.: _[_CS_]_ _Assume that the Conjecture A holds for \(E\) over \(F_{\rm cyc}\). Let \(F_{\infty}\) be an admissible \(p\)-adic Lie extension of \(F\) such that \(G=\operatorname{Gal}(F_{\infty}/F)\) has dimension at least 2 as a p-adic Lie group. Then \(R(E/F_{\infty})^{\vee}\) is a pseudonull \(\mathbb{Z}_{p}[[G]]\)-module._
Following Conjecture B, various authors [LP, Jh, Sh, Oc] have investigated the properties of the fine Selmer group over \(p\)-adic Lie extensions of a number field.
In [GJS], the authors initiated the study of fine Selmer groups over function fields (both the \(\ell\neq p\) and \(\ell=p\) cases). More precisely, they investigated analogues of Conjecture A and Conjecture B over function fields. While the answer to an analogue of Conjecture A was affirmative, it was shown that the analogue of Conjecture B fails in the \(\ell\neq p\) case. For the \(\ell=p\) case, evidence was given in support of Conjecture B [GJS, Proposition 3.8, Theorem 3.14]. More precisely, in [GJS, Theorem 3.14], the authors show that under suitable hypotheses, Conjecture B holds for certain \(\mathbb{Z}_{p}^{2}\) extensions of a function field. In the other main results of this article, we generalise [GJS, Theorem 3.14] to give new classes of examples over two different kinds of \(\mathbb{Z}_{p}^{d}\) extensions of \(K=\mathbb{F}(t)\) for which Conjecture B holds. More precisely, we prove the following results:
**Theorem 0.2** (Theorem 1.7).: _Let \(K_{\infty}\) be the arithmetic \(\mathbb{Z}_{p}\) extension of \(K\) and \(K_{d}^{\mathfrak{P}}\) be the geometric \(\mathbb{Z}_{p}^{d}\) extension of \(K\), where \(d\geq 1\) (see §1). Let \(F_{\infty}=K_{\infty}K_{d}^{\mathfrak{P}}\). Let \(\nu_{ram}\) be the unique prime of \(K\) that ramifies in \(F_{\infty}\)._

_Assume that \(E/K\) is an ordinary elliptic curve that has good reduction outside \(\nu_{ram}\). Then \(R^{S}(E/F_{\infty})^{\vee}\) is a pseudonull \(\mathbb{Z}_{p}[[G]]\) module, where \(G=\operatorname{Gal}(F_{\infty}/F)\cong\mathbb{Z}_{p}^{d+1}\) and \(S\) is any set of primes of \(K\) containing the prime \(\nu_{ram}\)._
**Proposition 0.3** (Proposition 1.8).: _Let \(K_{d}^{\mathfrak{P}}\) be the geometric \(\mathbb{Z}_{p}^{d}\) extension of \(K\) (see §1). Let \(v_{r}\) be the unique prime of \(K\) that ramifies in \(K_{d}^{\mathfrak{P}}\) and set \(S^{\prime}=\{v_{r}\}\). Suppose that \(E/K\) is an ordinary elliptic curve that has good reduction outside the set \(S^{\prime}\)._
_Then \(R^{S}(E/K_{d}^{\mathfrak{P}})^{\vee}\) is a pseudonull \(\mathbb{Z}_{p}[[\mathcal{G}]]\) module, where \(\mathcal{G}=\operatorname{Gal}(K_{d}^{\mathfrak{P}}/F)\) and \(S\) is any set of primes of \(K\) with \(S^{\prime}\subset S\)._
**Remark 0.4**:
1. In §1.1, we will see that, unlike the number field case, there are infinitely many \(\mathbb{Z}_{p}^{d}\) extensions of a function field.
2. The condition that the elliptic curve \(E\) has good reduction at all places outside \(\nu_{ram}\) is trivially satisfied by all the elliptic curves defined over the finite field \(\mathbb{F}\).
3. Other than \(\nu_{ram}\), every prime of \(K\) splits infinitely in \(F_{\infty}\). In fact, this is the case for every \(\mathbb{Z}_{p}^{d}\) extension constructed from the Carlitz module, where \(d\geq 1\) (see [Ro, Theorem 12.10, Theorem 12.14 and the discussion on Page 213]). Therefore, using the techniques in the proof of [GJS, Theorem 3.14], Theorem 1.7 cannot be generalised to include elliptic curves with bad reduction outside the ramified prime \(v_{r}\).
## 1. Fine Selmer Groups over function fields
Fix an integer prime \(p\). Let \(K\) be a function field in one variable over a finite field \(\mathbb{F}\) of characteristic \(p\). Let \(E\) be an elliptic curve defined over \(K\). Consider
an open dense subset \(U\) of \(C_{K}=\mathbb{P}^{1}_{\mathbb{F}}\) such that \(E/K\) has good reductions at every place of \(U\). Let \(\mathcal{E}\) denote the Neron model of \(E\) over \(C_{K}\). Let \(\Sigma_{K}\) be the set of all the primes of \(K\) and \(S\) denote the set of primes of \(K\) outside \(U\) i.e., the places of \(C_{K}\setminus U\). Therefore, \(S\) is a finite set of primes of \(K\) that contains the set of all primes of bad reduction of \(E/K\). Let \(K_{S}\) denote the maximal algebraic extension of \(K\) unramified outside \(S\). Consider a finite extension of \(K\), \(L\subset K_{S}\). Let \(v\) be any prime of \(K\) and \(w\) denote a prime of \(L\). Define
\[J^{1}_{v}(E/L):=\prod_{w|v}\frac{H^{1}_{fl}(L_{w},E_{p^{\infty}})}{im(\kappa_{ w})}\text{ and }K^{1}_{v}(E/L):=\underset{w|v}{\prod}H^{1}_{fl}(L_{w},E_{p^{\infty}}). \tag{1.1}\]
Here \(H^{i}_{fl}(-,-)\) denotes flat cohomology [M2, Chapters II, III] and \(\kappa_{w}:E(L_{w})\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p}\hookrightarrow H^{1}_{fl}(L_{w},E_{p^{\infty}})\) is induced by the Kummer map.
**Definition 1.1**.: _[_KT_, Prop. 2.4]_ _Let \(\Sigma_{K},S\) and \(K\subset L\subset K_{S}\) be as above. Then the Selmer group \(S(E/L)\) is defined as:_
\[S(E/L):=\ker\big{(}H^{1}_{fl}(L,E_{p^{\infty}})\longrightarrow\bigoplus_{v \in\Sigma_{K}}J^{1}_{v}(E/L)\big{)}. \tag{1.2}\]
_We define the \(S\)-fine Selmer group as:_
\[\begin{split} R^{S}(E/L):=\ker\big{(}H^{1}_{fl}(L,E_{p^{\infty}} )\longrightarrow\bigoplus_{v\in S}& K^{1}_{v}(E/L)\bigoplus_{v \in\Sigma_{K}\setminus S}J^{1}_{v}(E/L)\big{)}\\ &\cong\ker\big{(}S(E/L)\longrightarrow\bigoplus_{w|v,v\in S}& E(L_{w})\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p}\big{)}.\end{split} \tag{1.3}\]
For an infinite algebraic extension \(\mathcal{L}\) of \(K\), the above definitions extend, as usual, by taking inductive limit over finite subextensions of \(\mathcal{L}\) over \(K\).
### \(\mathbb{Z}_{p}^{d}\) extensions
Let \(\mathbb{F}^{(p)}\) be the unique subfield of \(\bar{\mathbb{F}}\) such that \(\operatorname{Gal}(\mathbb{F}^{(p)}/\mathbb{F})\cong\mathbb{Z}_{p}\). Set \(K_{\infty}:=K\mathbb{F}^{(p)}\). Note that \(K_{\infty}/K\) is unramified everywhere.
The second type of \(\mathbb{Z}_{p}\) extensions that bears a close analogy with the cyclotomic \(\mathbb{Z}_{p}\) extension of a number field, is the "cyclotomic extension at the prime ideal \(\mathfrak{P}\)". Consider a field extension \(F\) of \(K\). Then \(F\) can be thought of as a \(\mathbb{F}[t]\) module, where the action of \(\mathbb{F}[t]\) is given by the Carlitz polynomials. Choose \(\mathfrak{P}\) to be a prime of \(\mathbb{F}[t]\). For \(n>0\), let
\[\Lambda_{\mathfrak{P}^{n}}:=\{\lambda\in\overline{\mathbb{F}(t)}|[\mathfrak{P} ^{n}](\lambda)=0\}.\]
Here \(K(\Lambda_{\mathfrak{P}^{n}})/K\) is Galois with \(\operatorname{Gal}(K(\Lambda_{\mathfrak{P}^{n}})/K)\cong(\mathbb{F}[t]/ \mathfrak{P}^{n})^{\times}\). Put \(\widetilde{K}:=\underset{n\geq 1}{\bigcup}K(\Lambda_{\mathfrak{P}^{n}})\), then \(\operatorname{Gal}(\widetilde{K}/K)\cong\mathbb{Z}_{p}^{\mathbb{N}}\times( \mathbb{F}[t]/\mathfrak{P})^{\times}\). The \(\mathbb{Z}_{p}^{d}\) extension obtained from \(\widetilde{K}\), for \(d\geq 1\), denoted by \(K_{d}^{\mathfrak{P}}\), is ramified only at one prime and it is totally ramified at that prime [Ro, Proposition 12.7]. Set \(K^{\mathfrak{P}}\) as the \(\mathbb{Z}_{p}\) extension \(K_{1}^{\mathfrak{P}}\).
Let \(F_{\infty}=K_{d}^{\mathfrak{P}}K_{\infty}\). Therefore, \(F_{\infty}\) is a \(\mathbb{Z}_{p}^{d+1}\) extension of \(K\), that is ramified at exactly one prime, which splits finitely over \(F_{\infty}\).
### Pseudonullity
Let \(Cl(L)\) denote the divisor class group of \(L\), where \(L\) is an algebraic extension of \(K\).
_From now on, we will assume that \(E/K\) is an ordinary elliptic curve._
**Definition 1.2**.: _Let \(E,K,S\) be as before and let \(L\subset K_{S}\) be a finite extension of \(K\). For \(\bullet\in\{\mu_{p^{\infty}},\ \mathbb{Q}_{p}/\mathbb{Z}_{p},\ \mu_{p},\ \mathbb{Z}/p\mathbb{Z}\}\), set \(K_{v}^{1}(\bullet/L):=\underset{w|v}{\bigoplus}H^{1}_{fl}(L_{w},\bullet)\) and \(J_{v}^{1}(\bullet/L):=\underset{w|v}{\bigoplus}H_{fl}^{1}(L_{w},\bullet)/H_{fl}^{1}(O_{w},\bullet)\). We define the groups \(S^{\prime}(\bullet/L)\) and \(R^{S}(\bullet/L)\) as follows:_
\[S^{\prime}(\bullet/L):=\ker(H_{fl}^{1}(L,\bullet)\longrightarrow\underset{v \in\Sigma_{K}}{\bigoplus}J_{v}^{1}(\bullet/L)). \tag{1.4}\]
\[R^{S}(\bullet/L):=\ker\big{(}H_{fl}^{1}(L,\bullet)\longrightarrow\bigoplus_{v\in S}K_{v}^{1}(\bullet/L)\bigoplus_{v\in\Sigma_{K}\setminus S}J_{v}^{1}(\bullet/L)\big{)}. \tag{1.5}\]
_Similarly, the group \(R^{S}(E[p]/L)\) of \(E[p]\) over \(L\) is defined by:_
\[R^{S}(E[p]/L):=\ker\big{(}H_{fl}^{1}(L,E[p])\longrightarrow\bigoplus_{v\in S}K_{v}^{1}(E[p]/L)\bigoplus_{v\in\Sigma_{K}\setminus S}J_{v}^{1}(E[p]/L)\big{)}, \tag{1.6}\]
_where \(K_{v}^{1}(E[p]/L):=\underset{w|v}{\bigoplus}H_{fl}^{1}(L_{w},E[p])\) and \(J_{v}^{1}(E[p]/L):=\underset{w|v}{\bigoplus}H_{fl}^{1}(L_{w},E[p])/H_{fl}^{1} (O_{w},\mathcal{E}[p])\)._
Before proving our main theorems, let us discuss some preliminary results that are needed to prove Theorem 1.7.
**Theorem 1.3**.: _Let \(E/K\) be an ordinary elliptic curve and \(S\) be as defined in SS1. Then, \(R^{S}(E/K_{\infty})^{\vee}\) is a finitely generated \(\mathbb{Z}_{p}\) module. _
The proof of the above theorem can be found in [1, Theorem 3.7]. Here, we give a brief sketch of it.
Proof.: It is easy to see that the kernel and the cokernel of the natural map \(R^{S}(E[p]/K_{\infty})\to R^{S}(E/K_{\infty})[p]\) are finite. Hence, we obtain that \(\mu(R^{S}(E/K_{\infty})^{\vee})=0\) if and only if \(R^{S}(E[p]/K_{\infty})\) is finite.
Now, let \(K_{\infty}^{p}:=K\bar{\mathbb{F}}_{p}\). We claim that \(R^{S}(E[p]/K_{\infty}^{p})\) is finite. Consider the connected-étale sequence (see, for example, [1, §3.2])
\[0\longrightarrow E[p]^{0}\longrightarrow E[p]\longrightarrow\pi_{0}(E[p])\longrightarrow 0 \tag{1.7}\]
where \(E[p]^{0}\) and \(\pi_{0}(E[p])\) are Cartier dual to each other. As \(E/K\) is ordinary, we know that \(\pi_{0}(E[p])\cong\mathbb{Z}/p\mathbb{Z}\) and \(E[p]^{0}\cong\mu_{p}\), where \(\mu_{p}\) is the Cartier dual to \(\mathbb{Z}/p\mathbb{Z}\).
Now, using the definitions in (1.6) and equation (1.7), we get the following commutative diagram of complexes, which is not necessarily exact:
\[\begin{CD}0@>{}>{}>R^{S}\big{(}(\mathbb{Z}/p\mathbb{Z})/K_{\infty}^{p}\big{)}@>{}>{}>H_{fl}^{1}(K_{\infty}^{p},\mathbb{Z}/p\mathbb{Z})@>{}>{}>\underset{v\in S}{\prod}H_{fl}^{1}(K_{\infty,w}^{p},\mathbb{Z}/p\mathbb{Z})@>{}>{}>\underset{w|v,\,v\notin S}{\prod}\frac{H_{fl}^{1}(K_{\infty,w}^{p},\mathbb{Z}/p\mathbb{Z})}{H_{fl}^{1}(O_{\infty,w},\mathbb{Z}/p\mathbb{Z})}\\ @V{}V{}V@V{}V{}V\\ 0@>{}>{}>R^{S}(E[p]/K_{\infty}^{p})@>{}>{}>H_{fl}^{1}(K_{\infty}^{p},E[p])@>{}>{}>\underset{v\in S}{\prod}H_{fl}^{1}(K_{\infty,w}^{p},E[p])@>{}>{}>\underset{w|v,\,v\notin S}{\prod}\frac{H_{fl}^{1}(K_{\infty,w}^{p},E[p])}{H_{fl}^{1}(O_{\infty,w},\mathcal{E}[p])}\\ @V{}V{\beta}V@V{}V{}V\\ 0@>{}>{}>R^{S}(\mu_{p}/K_{\infty}^{p})@>{}>{}>H_{fl}^{1}(K_{\infty}^{p},\mu_{p})@>{}>{}>\underset{v\in S}{\prod}H_{fl}^{1}(K_{\infty,w}^{p},\mu_{p})@>{}>{}>\underset{w|v,\,v\notin S}{\prod}\frac{H_{fl}^{1}(K_{\infty,w}^{p},\mu_{p})}{H_{fl}^{1}(O_{\infty,w},\mu_{p})}\\ \end{CD} \tag{1.8}\]
Using the proof of [1, Lemma 3.4] and the fact that \(Cl(K_{\infty}^{p})[p]\), the \(p\)-part of the divisor class group, is finite [1, Proposition 11.16], we deduce that \(R^{S}(\mu_{p}/K_{\infty}^{p})\) and \(R^{S}\big{(}(\mathbb{Z}/p\mathbb{Z})/K_{\infty}^{p}\big{)}\) are finite. From [1, §III.7], we know that for \(w\mid v\), \(v\notin S\), \(\ker(\gamma_{w})=0\), which implies that \(\ker(\gamma)\) is finite. By applying
a snake lemma to the lower complex, we deduce that \(R^{S}(E[p]/K_{\infty}^{p})\) is finite. As \(G=\operatorname{Gal}(K_{\infty}^{p}/K_{\infty})\cong\prod\limits_{l\neq p}\mathbb{ Z}_{l}\), we get \(R^{S}(E[p]/K_{\infty}^{p})^{G}\cong R^{S}(E[p]/K_{\infty})\). This completes our proof.
**Proposition 1.4**.: _Let \(F_{\infty}=K_{\infty}K_{d}^{\mathfrak{P}}\) be a \(\mathbb{Z}_{p}^{d+1}\) extension, where \(d\geq 1\). Let \(\mathcal{H}=\operatorname{Gal}(F_{\infty}/K_{\infty}K_{d-1}^{\mathfrak{P}}) \cong\mathbb{Z}_{p}^{d}\). Then, \(Cl(F_{\infty})[p^{\infty}]^{\vee}\) is a finitely generated torsion \(\mathbb{Z}_{p}[[\mathcal{H}]]\) module._
Proof.: By a result of [2, Page-229], it suffices to show that \((Cl(F_{\infty})[p^{\infty}])^{\mathcal{H}}\) is finite.
Using the commutative diagram in [1, Page-38], we obtain the following commutative diagram:
(1.9)
Note that \(\frac{(\mathbb{F}^{(p)})^{\times}}{((\mathbb{F}^{(p)})^{\times})^{p^{m}}}=0\) for all \(m\). Therefore, \(\beta\) is an isomorphism. Further, by [1, Proposition 2], \(Cl(K^{\mathfrak{P}})[p^{\infty}]\) is finite. We claim that \(\operatorname{coker}(\alpha)\) is finite. Then the lemma follows by using a snake lemma in diagram 1.9. We now establish the claim. Consider the commutative diagram:
(1.10)
Note that \(H_{\operatorname{fl}}^{1}(\operatorname{Spec}(R),\mu_{p^{n}})=R^{\times}/(R^{ \times})^{p^{n}}\), for a local ring \(R\)[1, Page-37]. Hence, we get that \(\ker(\eta)=\operatorname{coker}(\delta)=0\). Then \(\operatorname{coker}(\alpha)=0\) follows from the diagram (1.10).
**Lemma 1.5**.: \(R^{S}\big{(}(\mathbb{Z}/p\mathbb{Z})/F_{\infty}\big{)}\) _is finite._
Proof.: We have \(F_{\infty}=\bigcup\limits_{n_{1},\cdots,n_{d}}F_{n_{1},\cdots,n_{d+1}}\), where \(\operatorname{Gal}(F_{n_{1},\cdots,n_{d+1}}/F)\cong\mathbb{Z}/p^{n_{1}}\mathbb{ Z}\times\cdots\times\mathbb{Z}/p^{n_{d+1}}\mathbb{Z}\). Using [2, Theorem 27.6], we observe that \(R^{S}\big{(}(\mathbb{Z}/p\mathbb{Z})/F_{n_{1},\cdots,n_{d+1}})\cong\operatorname {Hom}(G_{S}^{\operatorname{ab}}(F_{n_{1},\cdots,n_{d+1}})(p),\mathbb{Z}/p \mathbb{Z})\hookrightarrow\operatorname{Hom}(G_{\phi}^{\operatorname{ab}}(F_{ n_{1},\cdots,n_{d+1}})(p),\mathbb{Z}/p\mathbb{Z})\), where \(G_{\phi}(F_{n_{1},\cdots,n_{d+1}})^{\operatorname{ab}}(p)\) is the Galois group of the maximal abelian everywhere unramified pro-\(p\) extension of \(F_{n_{1},\cdots,n_{d+1}}\). Also, \(\operatorname{Hom}(G_{\phi}^{\operatorname{ab}}(F_{n_{1},\cdots,n_{d+1}})(p), \mathbb{Z}/p\mathbb{Z})\cong\operatorname{Hom}(Cl(F_{n_{1},\cdots,n_{d+1}}) \otimes\mathbb{Z}_{p},\mathbb{Z}/p\mathbb{Z})\cong\operatorname{Hom}(Cl(F_{ n_{1},\cdots,n_{d+1}})/p,\mathbb{Z}/p\mathbb{Z})\). By applying a control theorem, similar to the proof of Proposition 1.4, we get that the kernel and the cokernel of the map \(Cl(F_{n_{1},\cdots,n_{d+1}})[p]\longrightarrow Cl(F_{\infty})[p]^{\operatorname {Gal}(F_{\infty}/F_{n_{1},\cdots,n_{d+1}})}\) are finite and bounded independently of \(n_{i}\)'s. As a result, \(Cl(F_{n_{1},\cdots,n_{d+1}})[p]\) is finite and bounded independently of \(n_{i}\)'s. Moreover, as \(Cl(F_{n_{1},\cdots,n_{d+1}})(p)\) is finite, the same is true for
\(Cl(F_{n_{1},\cdots,n_{d+1}})/(p)\). Thus, \(R^{S}\big{(}(\mathbb{Z}/p\mathbb{Z})/F_{n_{1},\cdots,n_{d+1}}\big{)}\) is finite and bounded independently of \(n_{i}^{\prime}s\). Hence, we conclude that \(R^{S}\big{(}(\mathbb{Z}/p\mathbb{Z})/F_{\infty}\big{)}\) is finite.
**Lemma 1.6**.: \(S^{\prime}\big{(}\mu_{p}/F_{\infty}\big{)}\) _is finite. It follows that \(R^{S}(\mu_{p}/F_{\infty})\) is also finite._
Proof.: We have an exact sequence:
\(0\longrightarrow\frac{(\mathbb{F}^{(p)})^{\times}}{((\mathbb{F}^{(p)})^{ \times})^{p}}\longrightarrow S^{\prime}(\mu_{p}/F_{\infty})\longrightarrow Cl (F_{\infty})[p]\longrightarrow 0\).
Note that \(\frac{(\mathbb{F}^{(p)})^{\times}}{((\mathbb{F}^{(p)})^{\times})^{p^{m}}}=0\), hence \(S^{\prime}\big{(}\mu_{p}/F_{\infty}\big{)}\cong Cl(F_{\infty})[p]\). We have \(K_{d}^{\mathfrak{P}}=\underset{n_{1},\cdots,n_{d}}{\bigcup}K_{n_{1}\cdots n_{d}}\), where \(K\subset K_{n_{1}\cdots n_{d}}\subset K_{d}^{\mathfrak{P}}\) with \(\mathrm{Gal}(K_{n_{1}\cdots n_{d}}/K)\cong\mathbb{Z}/p^{n_{1}}\mathbb{Z}\times\cdots\times\mathbb{Z}/p^{n_{d}}\mathbb{Z}\). Set \(K_{n_{1}\cdots n_{d}}^{\infty}=K_{n_{1}\cdots n_{d}}K_{\infty}\) and \(\mathcal{G}_{n}=\mathrm{Gal}(K_{n_{1}\cdots n_{d}}^{\infty,p}/K_{n_{1}\cdots n_{d}}^{\infty})\) with \(K_{n_{1}\cdots n_{d}}^{\infty,p}=K_{n_{1}\cdots n_{d}}^{\infty}\bar{\mathbb{F}}_{p}(t)\). As the profinite order of \(\mathcal{G}_{n}\) is prime to \(p\), a standard diagram chase using the definition of \(S^{\prime}(\mu_{p^{\infty}}/-)\) gives \(S^{\prime}(\mu_{p^{\infty}}/K_{n_{1}\cdots n_{d}}^{\infty,p})^{\mathcal{G}_{n}}\cong S^{\prime}(\mu_{p^{\infty}}/K_{n_{1}\cdots n_{d}}^{\infty})\). Similarly, we obtain that \(Cl(K_{n_{1}\cdots n_{d}}^{\infty})[p^{\infty}]\cong Cl(K_{n_{1}\cdots n_{d}}^{\infty,p})[p^{\infty}]^{\mathcal{G}_{n}}\). Let \(C_{K_{n_{1}\cdots n_{d}}^{\infty,p}}\) be the proper smooth geometrically connected curve which is the model of the function field \(K_{n_{1}\cdots n_{d}}^{\infty,p}\). On the other hand, by [NSW, Proposition 10.1.1], we observe that \(Cl(K_{n_{1}\cdots n_{d}}^{\infty,p})[p^{\infty}]\cong(\mathbb{Q}_{p}/\mathbb{Z}_{p})^{r_{n_{1}\cdots n_{d}}}\), where \(0\leq r_{n_{1}\cdots n_{d}}\leq\mathrm{genus}(C_{K_{n_{1}\cdots n_{d}}^{\infty,p}})\). Therefore \(Cl(F_{\infty})[p^{\infty}]\cong\underset{n_{1},\cdots,n_{d}}{\lim}Cl(K_{n_{1}\cdots n_{d}}^{\infty,p})[p^{\infty}]\) is \(p\)-divisible. By Proposition 1.4, \(Cl(F_{\infty})[p^{\infty}]^{\vee}\) is also a finitely generated torsion \(\mathbb{Z}_{p}[[\mathcal{H}]]\) module. Consequently, by [Lim1, Lemma 2.4.1], we easily get that \(S^{\prime}\big{(}\mu_{p}/F_{\infty}\big{)}\) is finite. As \(R^{S}(\mu_{p}/F_{\infty})\hookrightarrow S^{\prime}\big{(}\mu_{p}/F_{\infty}\big{)}\) (see Definition 1.2), \(R^{S}(\mu_{p}/F_{\infty})\) is finite as well.
**Theorem 1.7**.: _Let \(K_{\infty}\) be the arithmetic \(\mathbb{Z}_{p}\) extension of \(K\) and \(K_{d}^{\mathfrak{P}}\) be the geometric \(\mathbb{Z}_{p}^{d}\) extension of \(K\), where \(d\geq 1\) (see SS1). Let \(F_{\infty}=K_{\infty}K_{d}^{\mathfrak{P}}\). Let \(\nu_{ram}\) be the unique prime of \(K\) that ramifies in \(F_{\infty}\)._
_Assume \(E/K\) be an ordinary elliptic curve that has good reduction outside \(\nu_{ram}\). Then \(R^{S}(E/F_{\infty})^{\vee}\) is a pseudonull \(\mathbb{Z}_{p}[[G]]\) module, where \(G=\mathrm{Gal}(F_{\infty}/F)\cong\mathbb{Z}_{p}^{d+1}\) and \(S\) is any set of primes of \(K\) containing the prime \(\nu_{ram}\)._
Proof.: Let \(H=\mathrm{Gal}(F_{\infty}/K_{\infty})\). By Theorem 1.3, \(R^{S}(E/K_{\infty})^{\vee}\) is a finitely generated \(\mathbb{Z}_{p}\) module. It is easy to see that the kernel and the cokernel of the map \((R^{S}(E/F_{\infty})^{\vee})_{H}\longrightarrow R^{S}(E/K_{\infty})^{\vee}\) are finitely generated \(\mathbb{Z}_{p}\)-modules. Hence by Nakayama lemma, \(R^{S}(E/F_{\infty})^{\vee}\) is a finitely generated \(\mathbb{Z}_{p}[[H]]\) module.
Let \(S^{\prime}=\{\nu_{ram}\}\). Using ordinarity of \(E/K\), from (1.7), we obtain a complex (not necessarily exact):
\[R^{S^{\prime}}(\mu_{p}/F_{\infty})\longrightarrow R^{S^{\prime}}(E[p]/F_{ \infty})\longrightarrow R^{S^{\prime}}\big{(}(\mathbb{Z}/p\mathbb{Z})/F_{\infty} \big{)}. \tag{1.11}\]
Then, from the definition of \(R^{S^{\prime}}(\_/F_{\infty})\), there is a commutative diagram (not necessarily exact):
(1.12)
Here \(w\) denotes a place of \(F_{\infty}\). Note that \(R^{S^{\prime}}(\mu_{p}/F_{\infty})\) and \(R^{S^{\prime}}\big{(}(\mathbb{Z}/p\mathbb{Z})/F_{\infty}\big{)}\) are finite by Lemmas 1.6 and 1.5, respectively. Note also that there are only finitely many primes of \(F_{\infty}\) lying above \(v_{r}\). Now, from the proof of Theorem 1.3, we get that \(\ker(\gamma)\) is finite. Again using a diagram chase, as in the proof of Theorem 1.3, we get that \(R^{S^{\prime}}(E[p]/F_{\infty})\) is finite.
Next, it is easy to see that the kernel and the cokernel of the natural map \(R^{S^{\prime}}(E[p]/F_{\infty})\to R^{S^{\prime}}(E/F_{\infty})[p]\) are finite. Thus \(R^{S^{\prime}}(E/F_{\infty})^{\vee}/(pR^{S^{\prime}}(E/F_{\infty})^{\vee})\) is finite. Now, it follows that \(R^{S^{\prime}}(E/F_{\infty})^{\vee}\) is a finitely generated torsion \(\mathbb{Z}_{p}[[H]]\) module by [11, Lemma 2.4.1]. Therefore, \(R^{S^{\prime}}(E/F_{\infty})^{\vee}\) is a pseudonull \(\mathbb{Z}_{p}[[G]]\) module [Ve]. Finally, for any finite set \(S\) of primes of \(K\) containing \(S^{\prime}=\{v_{r}\}\), since \(E/K\) has good reduction outside \(v_{r}\), \(R^{S}(E/F_{\infty})^{\vee}\) is a pseudonull \(\mathbb{Z}_{p}[[G]]\) module.
Next, we give another new class of \(\mathbb{Z}_{p}^{d}\) extensions which satisfies Conjecture B.
**Proposition 1.8**.: _Let \(K_{d}^{\mathfrak{P}}\) be the geometric \(\mathbb{Z}_{p}^{d}\) extension of \(K\) (see §1). Let \(v_{r}\) be the unique prime of \(K\) that ramifies in \(K_{d}^{\mathfrak{P}}\) and set \(S^{\prime}=\{v_{r}\}\). Suppose that \(E/K\) is an ordinary elliptic curve that has good reduction outside the set \(S^{\prime}\)._
_Then \(R^{S}(E/K_{d}^{\mathfrak{P}})^{\vee}\) is a pseudonull \(\mathbb{Z}_{p}[[\mathcal{G}]]\) module, where \(\mathcal{G}=\operatorname{Gal}(K_{d}^{\mathfrak{P}}/F)\) and \(S\) is any set of primes of \(K\) with \(S^{\prime}\subset S\)._
Proof.: By [10, Proposition 2], we know that \(Cl(K^{\mathfrak{P}})[p^{\infty}]\) is finite. The proof now follows from arguments similar to the proof of Theorem 1.7.
**Example 1**.: _Let \(p=2\) and \(K=\mathbb{F}_{2}(t)\). Let \(K_{d}^{\mathfrak{P}}\) be the \(\mathbb{Z}_{p}^{d}\) extension, where \(d\geq 1\), constructed using Carlitz module that is ramified only at the prime \(\mathfrak{P}=(t)\). Let \(E/K\) be given by the Weierstrass equation:_
\[y^{2}+xy=x^{3}+(1/t)x^{2}+1, \tag{1.13}\]
_Then \(E\) has bad reduction only at the prime \((t)\) of \(K\) and is an ordinary elliptic curve. Hence, this example of \(E/K\) satisfies the assumptions of Theorem 1.7 and Proposition 1.8. This in turn shows that \(R^{S}(E/F_{\infty})^{\vee}\) is a pseudonull \(\mathbb{Z}_{p}[[\mathbb{Z}_{p}^{d+1}]]\) module and \(R^{S}(E/K_{d}^{\mathfrak{P}})^{\vee}\) is a pseudonull \(\mathbb{Z}_{p}[[\mathbb{Z}_{p}^{d}]]\) module._
**Acknowledgements** We would like to thank Aprameyo Pal, Somnath Jha and Sudhanshu Shekhar for many helpful discussions and comments. The author gratefully acknowledges the support from HRI postdoctoral fellowship. |
2306.16482 | DenseBAM-GI: Attention Augmented DeneseNet with momentum aided GRU for
HMER | The task of recognising Handwritten Mathematical Expressions (HMER) is
crucial in the fields of digital education and scholarly research. However, it
is difficult to accurately determine the length and complex spatial
relationships among symbols in handwritten mathematical expressions. In this
study, we present a novel encoder-decoder architecture (DenseBAM-GI) for HMER,
where the encoder has a Bottleneck Attention Module (BAM) to improve feature
representation and the decoder has a Gated Input-GRU (GI-GRU) unit with an
extra gate to make decoding long and complex expressions easier. The proposed
model is an efficient and lightweight architecture with performance equivalent
to state-of-the-art models in terms of Expression Recognition Rate (exprate).
It also performs better in terms of top 1, 2, and 3 error accuracy across the
CROHME 2014, 2016, and 2019 datasets. DenseBAM-GI achieves the best exprate
among all models on the CROHME 2019 dataset. Importantly, these successes are
accomplished with a drop in the complexity of the calculation and a reduction
in the need for GPU memory. | Aniket Pal, Krishna Pratap Singh | 2023-06-28T18:12:23Z | http://arxiv.org/abs/2306.16482v1 | # DenseBAM-GI: Attention Augmented DenseNet with momentum aided GRU for HMER
###### Abstract
The task of recognising Handwritten Mathematical Expressions (HMER) is crucial in the fields of digital education and scholarly research. However, it is difficult to accurately determine the length and complex spatial relationships among symbols in handwritten mathematical expressions. In this study, we present a novel encoder-decoder architecture (DenseBAM-GI) for HMER, where the encoder has a Bottleneck Attention Module (BAM) to improve feature representation and the decoder has a Gated Input-GRU (GI-GRU) unit with an extra gate to make decoding long and complex expressions easier. The proposed model is an efficient and lightweight architecture with performance equivalent to state-of-the-art models in terms of Expression Recognition Rate (exprate). It also performs better in terms of top 1, 2, and 3 error accuracy (\(\leq 1\)(%), \(\leq 2\)(%) and \(\leq 3\)(%)) across the CROHME 2014, 2016, and 2019 datasets. DenseBAM-GI achieves the best exprate among all models on the CROHME 2019 dataset. Importantly, these successes are accomplished with a drop in the complexity of the calculation and a reduction in the need for GPU memory.
keywords: HMER, DenseNet, Gated Recurrent Unit, BAM, Pacs: 0000, 1111
_2000 MSC:_ 0000, 1111
## 1 Introduction
HMER has recently attracted a lot of attention since it has potential uses in a number of industries, including conferencing systems, office automation, and education. Although machine learning and deep learning technologies have advanced quickly, the intricate spatial linkages and two-dimensional arrangements present in input pictures continue to provide a major challenge to developing efficient HMER solutions.
The process of HMER fundamentally involves symbol segmentation, symbol identification, and structural analysis, as discussed in various studies such as those by Zanibbi, Garain, and Alvaro et al. ([4], [5], [8]). Earlier strategies centred around pre-defined rules and parsing techniques, as examined in the study by Anderson et al. ([1]). Nonetheless, the advancements in HMER have been primarily driven by deep learning models, as evidenced by the Transformer and DenseWAP-TD approaches ([37], [35]). Viewing HMER as an image-to-markup problem has proved successful, implemented with deep learning's encoder-decoder structure by Deng et al. ([18]). Following this, a number of encoder-decoder variants have been developed, such as coverage-based attention models, multi-scale attention, and multi-modal approaches ([26], [28], [30]). Furthermore, state-of-the-art performance has been achieved by Transformer-based and dual-loss-based encoder-decoder models ([32], [37]).
Despite achieving state-of-the-art performance (exprate up to 52%), these methods exhibit limitations, including over-translation and under-translation issues ([26]), an inability to capture intricate spatial relationships ([34]), substantial GPU memory requirements ([34], [37]), limited capacity to represent long expressions ([35]), and a need for improved generalization ([35]).
Most deep learning research for HMER focuses on developing individual components of the encoder-decoder architecture, i.e., the encoder or the decoder. Architectures such as Fully Convolutional Networks (FCN) ([26]) and DenseNet ([28], [35]) are adopted in the encoder segment. However, these large-scale CNN models often encounter challenges related to gradient dynamics, such as exploding and vanishing gradients ([15], [20], [21]). To mitigate these issues, attention mechanisms are integrated, having originally proved their efficacy in areas such as machine translation ([13], [16]) and image classification ([25]). In mathematical images, attention is crucial for capturing the 2D structure and complex spatial relations ([26]). However, there is a noticeable deficiency in the literature regarding the integration of attention mechanisms within the encoder segment of encoder-decoder architectures applied to HMER. To address this research gap, our present study introduces DenseBAM, a model intended to amplify the representational capacity of the encoder. The proposed architecture melds the initial three blocks of DenseNet with the Bottleneck Attention Module (BAM), a fusion we designate as DenseBAM, to facilitate a more effective encoding process. BAM is a small, integrable, lightweight attention module proposed by Park et al. ([27]) that efficiently enhances the network's representation power by introducing two attention mechanisms, channel and spatial attention, which guide the network on "what" and "where" features should be emphasized. The channel attention mechanism captures the interdependencies among the channels of the feature map, and spatial attention captures the same among spatial positions. Results show that it works best when applied in the high-level layers of DenseNet.
The evolution of decoders in HMER models is marked by notable advancements, with an initial application of Long Short Term Memory (LSTM) in combination with a CNN-based encoder proposed by Deng et al. ([18]). The LSTM decoder was later substituted with a more streamlined, two-layered stacked Gated Recurrent Unit (GRU), which provided similar performance with fewer parameters. Subsequent innovation led to the development of a multimodal GRU-based decoder pioneered by Zhang et al. ([30]). Most recently, R-GRU ([38]), a refined version of the GRU, demonstrated superior performance compared to the conventional GRU. Despite this progress, these models still grapple with challenges such as over- and under-parsing and less than satisfactory performance on long, complex mathematical expressions. In response to this necessity, we propose the GI-GRU, a novel model inspired by the momentum RNN
([33]). This design integrates an auxiliary input into the naive GRU architecture, providing additional information that enhances the model's learning capacity. The auxiliary input plays a crucial role in modulating how much information from the current input is written into the present hidden state. This integration not only fortifies the long-term memory retention capability of the GRU but also facilitates faster convergence, thereby enhancing the overall efficiency of the model. The combination of the proposed encoder and decoder, named DenseBAM-GI, achieves state-of-the-art top 1, 2, and 3 error accuracy among HMER models on the CROHME 2014, 2016, and 2019 datasets. Similarly, it establishes a new benchmark for exprate on the CROHME 2019 dataset. Our novel contributions in this work are:
1. We propose a novel encoder, DenseBAM, equipped with channel and spatial attention mechanisms.

2. We develop a novel decoder, GI-GRU, inspired by momentum-based optimization techniques.
3. The integrated encoder-decoder framework, named DenseBAM-GI, surpasses numerous leading models while reducing memory utilization and training duration.
## 2 Literature review
Research in HMER primarily bifurcates into two methodologies: those that leverage rule-based approaches and those that employ encoder-decoder-based models.
Rule-based approaches were prevalent in deciphering mathematical expressions' two-dimensional structures, including the first research ([1]) in HMER. Zanibbi et al. ([4]) introduced the concept of expression grammar, an advanced form of Context-Free Grammar (CFG), to develop expression trees. It utilizes three stages: Layout, Lexical, and Expression Analysis. These stages methodically convert input images into LaTeX strings, optimizing the conversion process. Further research has focused on enriching this conversion by integrating classifiers and combining online and offline features, improving accuracy and overall performance. Furthermore, Yamamoto et al. ([6]) developed a Probabilistic CFG (PCFG), which was extended to a stochastic CFG combined with an HMM ([8]). In addition, Alvaro et al. ([12]) put forth a consolidated grammar-based approach to tackle the HMER challenge, and this method took first place in the CROHME 2014 contest.
Progress in deep learning-based models has offered an alternative to grammar-based methods, enabling enhanced performance and the integration of automated segmentation and parsing. Deng et al. ([18]) proposed an encoder-decoder model incorporating a coarse attention mechanism, consisting of a multi-layer CNN and an LSTM decoder. Zhang et al. ([26], [28]) further refined the model by integrating coverage-based attention, multi-scale attention, and stacked GRU layers. Wang et al. ([30]) unveiled a multimodal approach that strategically utilizes a multimodal encoder-decoder model. This innovative combination of online and offline modalities culminated in a substantial improvement in benchmark performance. Wu et al. ([31]) further enhanced this strategy by formulating the Paired Adversarial Learning-v2 (PAL-v2) model. This system incorporated a Dense Convolutional Recurrent Neural Network (Conv-RNN) block functioning as an encoder, and replaced the traditional RNN decoder with an attention-based convolutional decoder ([34]). In a recent study, Zhao et al. ([37]) unveiled an innovative bidirectionally trained transformer tailored for this particular domain. The uniqueness of this methodology is built on self-attention principles coupled with positional encodings.
Previous research has demonstrated the efficacy of Fully Convolutional Networks (FCNs) in extracting features from mathematical images ([26]), with DenseNet later introduced as the encoder for efficient feature propagation ([35]). Increasing the number of layers, as seen in VGGNet ([17]) and ResNet ([15]), can enhance performance. However, deeper CNNs face challenges like exploding and vanishing gradients and vast parameter spaces. In response, the attention mechanism was introduced and gained prominence in various domains, offering minimal computational burden and significant performance improvement. Initially applied in Neural Machine Translation, attention was later integrated into CNNs for tasks like image
classification. Channel-wise attention and adaptive strategies, such as the Bottleneck Attention Module, have also been implemented to modulate features dynamically. In this study, we propose the DenseBAM encoder, which incorporates BAM into the first three blocks of DenseNet and enhances the baseline model performance with very little computational overhead.
Recurrent Neural Networks (RNNs) have long been a cornerstone for sequence modelling tasks, with their initial application in encoder-decoder architectures attributed to Cho et al. ([9]) in 2014. Even with their widespread use, RNNs intrinsically suffer from the vanishing and exploding gradient phenomena ([2], [7]), which impede their capacity to learn long-range dependencies within sequences. To mitigate these challenges, gated RNN variants, such as LSTM ([3]) and GRU ([10]), were introduced, providing enhanced capabilities in preserving information from elongated sequences. However, despite these advances, LSTMs and GRUs grapple with residual vanishing gradient issues, underscoring the necessity for ongoing research and refinement of RNN-based models for sequence learning endeavours. In this study, we propose the GI-GRU method to address the vanishing gradient issue and effectively capture long and complex spatial relationships in handwritten mathematical images. This technique outperforms the naive GRU while converging more rapidly.
We amalgamate the two proposed encoder and decoder components and evaluate the resulting model on the CROHME benchmark datasets, where it exhibits performance commensurate with state-of-the-art models.
## 3 Architecture of the model
In our research, we employ a base encoder-decoder structure, enriched with attention mechanisms, as comprehensively expounded by Zhang et al. ([26]). We refer to this particular architectural design as the base-model throughout this study. We introduce a novel encoder called DenseBAM, and the decoder module features a new variant of the GRU, namely GI-GRU. The proposed architecture aims to ensure quick convergence while capturing
long-term dependencies. As depicted in Figure 1, the model's architecture encompasses an encoder, a decoder, and an attention module. It's important to note that this model undergoes end-to-end training in unison with all other components.
### Encoder
The architecture we utilize as encoder incorporates the initial three Dense blocks from DenseNet-121. With its Fully Convolutional Network (FCN) type, it can adapt to various sizes of input images. This adaptability is paramount in HMER due to the variable length of equations. The encoder's role is to transform the provided input image into a form of feature representation, which is then subjected to further processing by an attention model-driven
Figure 1: In the architecture of our proposed model, the initial input constitutes an image of a handwritten mathematical equation. The embedded feature matrix, \(\mathbf{A}^{\prime}\), is subsequently produced by the DenseBAM encoder we propose. Thereafter, the attention mechanism within the model identifies and emphasizes significant portions of the data, which it then conveys to the decoder unit. Within the decoder, the final output is generated by our newly proposed GI-GRU.
decoder. A robust feature representation is essential in HMER tasks, given the intricate relationships among mathematical symbols in the equation. Inadequate feature representation could lead to the subsequent module failing to generate an accurate Latex sequence corresponding to the image. We propose the DenseBAM encoder to bolster the power of feature representation by integrating the first three dense blocks with the BAM, generating a robust feature representation and substantially enhancing the model's performance.
The architecture of the DenseBAM encoder is shown in Figure 1. It comprises the initial three dense blocks, each exhibiting a unique characteristic where the feature maps produced by a particular layer are concatenated with the features derived from all preceding layers. This dense connectivity ensures that the information flows more efficiently and allows for better feature reuse. Consider an image passed through the convolution network, which comprises \(L^{1}\) layers. A designated non-linear operation, termed \(CF\), is executed within each layer, amalgamating Batch normalisation, ReLU transformation, pooling, and a convolution operation of \(3\times 3\). Thus, the output generated by the \(l^{th}\) layer, symbolized as \(q_{l}\), can be defined in the following manner:
\[q_{l}=CF_{l}([q_{0},q_{1},...,q_{l-1}]) \tag{1}\]
Here, the entity \([q_{0},q_{1},...,q_{l-1}]\) represents the concatenation of the features emanating from layers 0, 1, through \(l-1\). The number of feature maps, and hence the channel dimension, grows as the feature maps are concatenated. After the \(L^{1}\) layers of a dense block, a bottleneck layer (a \(1\times 1\) convolutional layer) and a transition layer (a \(1\times 1\) convolutional layer followed by \(2\times 2\) average pooling) are further applied. The output of the transition layer can be represented as the following equation:
\[Q^{1}=\text{Denseblock}_{1}(q_{l}) \tag{2}\]
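To make the dense connectivity of Eq. (1) concrete, the following is a minimal PyTorch sketch of a dense block; the growth rate, number of layers, and input sizes are illustrative assumptions rather than the exact DenseNet-121 configuration.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One CF_l operation: BN -> ReLU -> 3x3 conv applied to the concatenated inputs."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(torch.relu(self.bn(x)))

class DenseBlock(nn.Module):
    """q_l = CF_l([q_0, ..., q_{l-1}]): every layer sees all previous feature maps."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            [DenseLayer(in_channels + i * growth_rate, growth_rate) for i in range(num_layers)]
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# Example: a small dense block applied to a toy feature map of an expression image.
block = DenseBlock(in_channels=16, growth_rate=12, num_layers=6)
out = block(torch.randn(1, 16, 64, 256))
print(out.shape)  # channels grow to 16 + 6 * 12 = 88
```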
This output \(Q^{1}\) is then passed through the Bottleneck Attention Module (BAM), which has two attention layers:
(i) Channel Attention Layer (ii) Spatial Attention Layer
The channel and spatial attention layers are represented by \(M_{ch}(\cdot)\) and \(M_{sp}(\cdot)\), respectively.

The channel attention \(M_{ch}(\cdot)\) is composed of an average pooling layer, a Multi-Layer Perceptron (MLP), and batch normalization, and can be represented as
\[M_{ch}(I)=BN(MLP(AvgPooling(I))) \tag{3}\]
where \(I\) is the input to the channel attention layer.

Similarly, the spatial attention layer \(M_{sp}(\cdot)\) consists of convolutional layers and batch normalization and can be represented as
\[M_{sp}(I)=BN(f_{1\times 1}^{3}(f_{3\times 3}^{2}(f_{3\times 3}^{1}(f_{1\times 1 }^{0}(I))))) \tag{4}\]
where \(f_{1\times 1}^{m}\) and \(f_{3\times 3}^{m}\) denote \(1\times 1\) and \(3\times 3\) convolutions, respectively, and \(m\) is the index of the convolution.

\(Q^{1}\) is passed through \(M_{ch}(\cdot)\) and \(M_{sp}(\cdot)\), and the entire operation can be described as
\[M(Q^{1})=\sigma(M_{ch}(Q^{1})+M_{sp}(Q^{1})) \tag{5}\]
The attention map \(M(Q^{1})\) is then combined with \(Q^{1}\) by element-wise multiplication followed by a residual addition, described as
\[(Q^{1})^{{}^{\prime}}=Q^{1}+Q^{1}\otimes M(Q^{1}) \tag{6}\]
This is the output of the \(1^{st}\) dense block with BAM and the input to the \(2^{nd}\) dense block. The final output of the DenseBAM network is \(\mathbf{A^{\prime}}\), which has \(L^{1}\) (\(H\times W\)) grids of features; in our study, \(L^{1}\) is 1024. Each component \(a_{i}\) is an \(N^{\prime}\)-dimensional vector, providing a detailed representation of a localized region within the image.
\[a=\{a_{1},...,a_{L^{1}}\},a_{i}\in\mathbb{R}^{N^{\prime}} \tag{7}\]
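The BAM computation of Eqs. (3)-(6) can be sketched in PyTorch as follows; the reduction ratio, dilation, and intermediate ReLU activations are assumptions of this sketch rather than the exact hyper-parameters of the module.

```python
import torch
import torch.nn as nn

class BAM(nn.Module):
    """Bottleneck Attention Module: sigma(M_ch(Q) + M_sp(Q)) applied as a residual gate."""
    def __init__(self, channels, reduction=16, dilation=4):
        super().__init__()
        mid = channels // reduction
        # Channel attention: global average pooling -> MLP -> BN (Eq. 3).
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, mid), nn.ReLU(inplace=True),
            nn.Linear(mid, channels),
            nn.BatchNorm1d(channels),
        )
        # Spatial attention: 1x1 -> two dilated 3x3 -> 1x1 convolutions -> BN (Eq. 4).
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, mid, 1),
            nn.Conv2d(mid, mid, 3, padding=dilation, dilation=dilation), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=dilation, dilation=dilation), nn.ReLU(inplace=True),
            nn.Conv2d(mid, 1, 1),
            nn.BatchNorm2d(1),
        )

    def forward(self, q):
        b, c, _, _ = q.shape
        m_ch = self.channel_att(q).view(b, c, 1, 1)   # broadcast over H x W
        m_sp = self.spatial_att(q)                    # broadcast over channels
        m = torch.sigmoid(m_ch + m_sp)                # Eq. 5
        return q + q * m                              # Eq. 6

bam = BAM(channels=256)
feat = torch.randn(2, 256, 16, 64)
print(bam(feat).shape)  # same shape as the input feature map
```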
### Attention Mechanism
The use of attention enables the encoder-decoder structure to precisely align the input and output sequences. Specifically, additive attention combined with convolution, as described in [14], is employed. The approach has proven its efficiency across diverse applications requiring complex sequence transformations, such as language processing and neural machine translation. The attention weights prioritize important regions of the input for generating the next token in the output sequence. The attention component generates the context vector \(c_{t}\) from the inputs \(\mathbf{A^{\prime}}\) and \(h_{t}^{{}^{\prime}}\). It is calculated as:
\[\begin{split}\beta_{t}&=\sum_{l=1}^{t-1}\alpha_{l};\quad F=Q*\beta_{t}\\ e_{ti}&=\nu_{a}^{T}\mathbf{tanh}(W_{h^{\prime}}h_{t}^{\prime}+W_{a}a_{i}+W_{f}f_{i}+b_{i});\\ \alpha_{ti}&=\frac{exp(e_{ti})}{\sum_{k=1}^{L}exp(e_{tk})};\quad c_{t}=\sum_{i}\alpha_{ti}a_{i}\end{split} \tag{8}\]
where \(i\) represents the position in the matrix \(\mathbf{A^{\prime}}\) and \(t\) stands for the current timestamp. \(\beta_{t}\) is a feature of the attention aggregation. The \(i^{th}\) component of the coverage vector \(F\) is symbolised by \(f_{i}\), and the \(Q\) denotes the convolution layer equipped with \(q\) output channels. The attention sum vector is introduced into the convolution layer \(Q\) to produce the coverage vector \(F\). Additionally, \(e_{ti}\) measures the energy of \(a_{i}\) at the particular timestamp \(t\). The first RNN cell's and encoder's outputs are \(h_{t}^{{}^{\prime}}\) and \(a_{i}\), respectively. The attention weight of
the feature map \(a_{i}\) at \(t\) is \(\alpha_{ti}\). The second RNN cell then receives the context vector \(c_{t}\) to compute \(h_{t}\).
Additionally, we have \(\nu\in\mathbb{R}^{d^{\prime}}\), \(W_{h^{\prime}}\in\mathbb{R}^{d^{\prime}\times n}\), \(W_{a}\in\mathbb{R}^{d^{\prime}\times N}\), and \(W_{f}\in\mathbb{R}^{d^{\prime}\times q}\), where \(d^{\prime}\) denotes attention dimension.
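As a rough illustration of Eq. (8), the coverage-based additive attention might be implemented as in the sketch below; the dimensions, the kernel size of the coverage convolution \(Q\), and the flattened-grid interface are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CoverageAttention(nn.Module):
    """Additive attention with a coverage vector built from past attention maps (Eq. 8)."""
    def __init__(self, feat_dim, hidden_dim, att_dim, cov_channels=32):
        super().__init__()
        self.W_h = nn.Linear(hidden_dim, att_dim)
        self.W_a = nn.Linear(feat_dim, att_dim)
        self.W_f = nn.Linear(cov_channels, att_dim)
        self.v = nn.Linear(att_dim, 1)
        # Q: convolution over the summed past attention maps (B, 1, H, W).
        self.Q = nn.Conv2d(1, cov_channels, kernel_size=11, padding=5)

    def forward(self, a, h_prime, alpha_sum, hw):
        # a: (B, L, feat_dim) flattened encoder grid, h_prime: (B, hidden_dim)
        # alpha_sum: (B, L) sum of previous attention maps, hw: (H, W) of the grid
        B, L, _ = a.shape
        cov = self.Q(alpha_sum.view(B, 1, *hw))                 # coverage features F
        cov = cov.flatten(2).transpose(1, 2)                    # (B, L, cov_channels)
        e = self.v(torch.tanh(self.W_h(h_prime).unsqueeze(1)
                              + self.W_a(a) + self.W_f(cov))).squeeze(-1)
        alpha = torch.softmax(e, dim=1)                         # attention weights alpha_t
        context = torch.bmm(alpha.unsqueeze(1), a).squeeze(1)   # c_t = sum_i alpha_ti * a_i
        return context, alpha

att = CoverageAttention(feat_dim=1024, hidden_dim=256, att_dim=512)
a = torch.randn(2, 8 * 32, 1024)
ctx, alpha = att(a, torch.randn(2, 256), torch.zeros(2, 8 * 32), hw=(8, 32))
print(ctx.shape, alpha.shape)  # (2, 1024) (2, 256)
```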
### Decoder
Two layers of stacked RNNs make up the decoder unit. An intermediate hidden state is produced by the first layer, which is then used as the input for the subsequent recurrent neural network (RNN) and the attention unit. The final hidden state, the context vector, and the current output word are all produced by the second layer. We employ the GRU as the RNN unit for decoding, as it has fewer parameters and is effective at addressing the vanishing gradient problem. In this paper, a novel decoding unit called GI-GRU is developed to produce the LaTeX sequence by making use of the interaction between the context vector and the feature representation matrix.
Figure 2(a) illustrates the structure of a GRUCell. The mathematical representation of \(h_{t}\) in GRU is:
Figure 2: Architecture of proposed GI-GRU Cell.
\[z_{t} =\sigma(W_{yz}y_{t-1}^{E}+U_{hz}h_{t-1}+C_{cz}c_{t}+b_{z}) \tag{9}\] \[r_{t} =\sigma(W_{yr}y_{t-1}^{E}+U_{hr}h_{t-1}+C_{cr}c_{t}+b_{r})\] \[\tilde{h}_{t} =tanh(W_{yh}y_{t-1}^{E}\] \[+r_{t}\otimes(U_{rh}h_{t-1})+C_{ch}c_{t}+b_{h})\] \[h_{t} =(1-z_{t})\otimes h_{t-1}+z_{t}\otimes\tilde{h}_{t}\]
where the symbol \(\sigma\) symbolizes the sigmoid function, while \(\otimes\) denotes element-wise multiplication. \(y_{t-1}^{E}\) refers to the previously timestamped predicted word or label incorporated with its embedding. The variables \(z_{t}\) and \(r_{t}\) represent the update and reset gates, respectively. Moreover, \(\tilde{h}_{t}\) indicates the candidate activation, while \(h_{t}\) designates the hidden state at the \(t^{th}\) instance.
**GI-GRU.** We introduce an auxiliary state \(v_{t}=s\otimes W_{yr}y_{t-1}^{E}\), which is added to three elements, namely \(z_{t}\) (update gate), \(r_{t}\) (reset gate), and \(\tilde{h}_{t}\) (candidate hidden state), drawing on the principles of the classical momentum technique, which is renowned for its capacity to accelerate model convergence and mitigate the vanishing gradient issue. The model we propose is christened GI-GRU, with its definition as follows:
\[v_{t} =s\otimes W_{yr}y_{t-1}^{E} \tag{10}\] \[z_{t} =\sigma(W_{yz}(y_{t-1}^{E})+U_{hz}h_{t-1}+v_{t}+C_{cz}c_{t}+b_{z})\] \[r_{t} =\sigma(W_{yr}(y_{t-1}^{E})+U_{hr}h_{t-1}+v_{t}+C_{cr}c_{t}+b_{r})\] \[\tilde{h}_{t} =tanh(W_{yh}(y_{t-1}^{E})\] \[+r_{t}\otimes(U_{hh}h_{t-1}+v_{t})+C_{ch}c_{t}+b_{h})\] \[h_{t} =(1-z_{t})\otimes h_{t-1}+z_{t}\otimes\tilde{h}_{t}\]
where \(s\) is the step-size parameter. The variable \(v_{t}\) modulates the information updated to the gates, thus expediting the convergence of the GRU and enhancing its capability to retain longer sequences. It achieves this by fostering a more stable dynamical system and alleviating the vanishing gradient problem. The Gated Input GRU (GI-GRU) is so named because the extra gate only introduces the input into the GRU. Algorithm 1 describes the computational steps.
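A minimal PyTorch sketch of the GI-GRU cell defined in Eq. (10) is given below; treating the step size \(s\) as a learnable scalar and stacking the gate projections into single linear layers are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class GIGRUCell(nn.Module):
    """GRU cell with an auxiliary input v_t = s * (W_yv y) added to z_t, r_t and h~_t (Eq. 10)."""
    def __init__(self, input_size, context_size, hidden_size):
        super().__init__()
        self.W_y = nn.Linear(input_size, 3 * hidden_size)    # W_yz, W_yr, W_yh stacked
        self.U_h = nn.Linear(hidden_size, 3 * hidden_size)   # U_hz, U_hr, U_hh stacked
        self.C_c = nn.Linear(context_size, 3 * hidden_size)  # C_cz, C_cr, C_ch stacked
        self.W_yv = nn.Linear(input_size, hidden_size, bias=False)
        self.s = nn.Parameter(torch.tensor(0.1))             # step-size parameter

    def forward(self, y, c, h_prev):
        v = self.s * self.W_yv(y)                             # auxiliary state v_t
        Wy = self.W_y(y).chunk(3, dim=-1)
        Uh = self.U_h(h_prev).chunk(3, dim=-1)
        Cc = self.C_c(c).chunk(3, dim=-1)
        z = torch.sigmoid(Wy[0] + Uh[0] + Cc[0] + v)          # update gate
        r = torch.sigmoid(Wy[1] + Uh[1] + Cc[1] + v)          # reset gate
        h_tilde = torch.tanh(Wy[2] + r * (Uh[2] + v) + Cc[2]) # candidate state
        h = (1 - z) * h_prev + z * h_tilde
        return h, v

cell = GIGRUCell(input_size=256, context_size=1024, hidden_size=256)
h, v = cell(torch.randn(2, 256), torch.randn(2, 1024), torch.zeros(2, 256))
print(h.shape, v.shape)  # (2, 256) (2, 256)
```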
### Loss Function of DenseBAM-GI
The probability of the target LaTeX symbol at a given timestep, denoted as \(t\), can be expressed as follows:
\[\begin{split}\mathrm{P}(y_{t}|y_{1},..,y_{t-1},x)& =g(W_{o}o_{t}),\\ &=g\left(W_{o}\left(\mathrm{V_{h}h_{t}+W_{c}c_{t}+W_{y}y_{t-1}^{ E}}\right)\right),\end{split} \tag{11}\]
where \(\mathrm{W_{o}}\in\mathbb{R}^{\mathrm{K}\times\mathrm{w^{\prime}}},\mathrm{V_{h }}\in\mathbb{R}^{\mathrm{w^{\prime}}\times\mathrm{n}},\mathrm{W_{c}}\in \mathbb{R}^{\mathrm{w^{\prime}}\times\mathrm{N}}\) and \(W_{y}\in\mathbb{R}^{\mathrm{w^{\prime}}\times E}\), \(g\) signifies the softmax activation function, \(K\) is indicative of the aggregate count of words present within the vocabulary, while \(n\) embodies the dimensionality of the hidden state. Additionally, \(\mathrm{w^{\prime}}\) signifies the dimension of the model's linear layer. To minimize overfitting and ensure generalizability, the model employs a loss function based on cross entropy and imparts L2 regularization to the model's weights. Hence, the optimization goal can be described by the following objective function:
\[\mathrm{F}=-\sum_{t=1}^{S}\log\mathrm{P}(\mathrm{y_{t}}|\mathrm{y_{t-1}}, \mathrm{x})+\lambda||\mathbf{W}||^{2} \tag{12}\]
where \(S\) denotes the length of the output sequence, \(x\) refers to the training data comprised of images of handwritten mathematical expressions, \(\mathbf{W}\) represents the model's parameters, and \(\lambda\) is indicative of the hyperparameter.
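A minimal sketch of the objective in Eq. (12), i.e., token-level cross entropy plus an L2 penalty on the weights \(\mathbf{W}\), might look as follows; the padding index and the stand-in model are illustrative assumptions, and in practice the L2 term can equivalently be realised through the optimiser's weight decay.

```python
import torch
import torch.nn.functional as F

def objective(logits, targets, model, lam=0.01, pad_id=0):
    """logits: (B, S, K) scores over the K-symbol vocabulary; targets: (B, S) LaTeX token ids."""
    ce = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,   # skip padded positions of the target sequence
    )
    l2 = sum(p.pow(2).sum() for p in model.parameters())
    return ce + lam * l2

# Toy usage with a stand-in linear model instead of the full network.
model = torch.nn.Linear(8, 111)
logits = model(torch.randn(2, 5, 8))                 # (B, S, K)
loss = objective(logits, torch.randint(1, 111, (2, 5)), model)
loss.backward()
```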
```
1:Input: HMER Image
2:Encoder: The encoder processes a batch of images representing handwritten mathematical expressions and subsequently generates a corresponding matrix of features, denoted as \(\mathbf{A}^{{}^{\prime}}\).
3:Attention unit: It creates the context vector \(c_{t}\) using the inputs \(\mathbf{A}^{{}^{\prime}}\), the current attention sum \(\beta_{t}\), and the hidden state of the previous timestamp.
4:Decoding: Two level stacked GI-GRUCells are used for decoding purpose.
5:First level GRU is: \[v^{{}^{\prime}}_{t},h^{{}^{\prime}}_{t}=\mathbf{GI-GRU}(y^{E}_{t-1},h_{t-1},v_{t})\] where the inputs to first are true label or predicted word (\(y^{E}_{t-1}\)), hidden state (\(h_{t-1}\)), auxiliary state (\(v_{t}\)) created at the timestamp \(t\) by this expression \(s\otimes W_{yv}y^{E}_{t-1}\).
6:Functional form of Second level GRU can be written as \[v_{t},h_{t}=\mathbf{GI-GRU}(c_{t},h^{{}^{\prime}}_{t},v^{{}^{\prime}}_{t})\] In this level GRU, inputs are context vector(\(c_{t}\)), hidden and momentum auxiliary states (\(h^{{}^{\prime}}_{t}\),\(v^{{}^{\prime}}_{t}\)) generated by first level GRU.
7:Output: The output at a given timestamp, denoted as \(\mathrm{y_{t}}\), is derived from the parameters \(h_{t}\), \(y^{E}t-1\), and \(c_{t}\) in \(\mathrm{P(y_{t}|y_{1},..,y_{t-1},x)=g(W_{o}o_{t})}\) (eq-12).
```
**Algorithm 1** Computational steps of DenseBAM-GI
## 4 Experimental Design
The proposed DenseBAM-GI model's performance is assessed in this study using the CROHME 2014, 2016, and 2019 datasets. Each inkml file in these datasets contains x and y coordinates of the online data traces, and each file is converted into a binary image as part of the data preparation. The complete training and testing datasets are created by pairing these images with the corresponding labels. There are 101 classes in the training data, which comprises 8,836 expressions from the CROHME 2014 and 2016 datasets as well as 1,157 expressions from the CROHME 2019 dataset. The CROHME 2014, 2016, and 2019 test sets contain 986, 1,136, and 1,199 novel expressions, respectively. Only the official CROHME 2014 training set is used to train the proposed model. Performance evaluations were carried out using an 11 GB Nvidia RTX 2080 Ti graphics card.
The Stochastic Gradient Descent (SGD) optimizer with a momentum coefficient of 0.9 was used to train the models, and training was continued for 300 iterations. The learning rate was initially set to 0.0001 and was divided by 10 whenever the performance did not improve for ten consecutive epochs. For L2 regularisation, the hyper-parameter lambda (\(\lambda\)) was fixed at 0.01. We used pre-trained weights from the naive GRU to initialise the GI-GRU model in order to promote quick convergence. We used the exprate and the Word Error Rate (WER) metrics to evaluate the performance of the proposed models.
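The training recipe above corresponds roughly to the following PyTorch setup (assuming the 300 iterations refer to training epochs and that the validation exprate is the monitored metric); the model here is only a placeholder.

```python
import torch

model = torch.nn.Linear(10, 10)          # placeholder for the full DenseBAM-GI network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
# Divide the learning rate by 10 when the monitored metric (e.g. validation exprate)
# has not improved for 10 consecutive epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.1, patience=10)

for epoch in range(300):
    val_exprate = 0.0                    # placeholder: run validation here
    scheduler.step(val_exprate)
```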
Overall, this study provides a rigorous evaluation of the proposed models using standardized datasets, providing valuable insights into their effectiveness in recognizing handwritten mathematical expressions.
## 5 Results and Discussion
This discourse is organized into three segments. First, the proposed model is juxtaposed with state-of-the-art models on the benchmark datasets from CROHME 2014, 2016, and 2019 to evaluate comparative performance. Following this, an ablation study is performed within our proposed model to scrutinize the individual contributions of the BAM and GI-GRU. The final segments of this section are dedicated to analyzing the exprate relative to different sequence lengths and comparing our proposed model's computational complexity with that of other prominent models in the field.
### An assessment of the proposed model in comparison to the most recent state-of-the-art techniques.
An extensive comparison analysis is done on the CROHME 2014, 2016, and 2019 datasets to validate the efficacy of the proposed models (DenseBAM-GI). Only those systems that used official CROHME training data and did not make use of ensemble techniques were taken into consideration for a fair comparison. The top 1, 2, and 3 error accuracies as well as the
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline \multicolumn{1}{c|}{Models} & \multicolumn{1}{c|}{Exprate (\%)} & \(\leq 1\)(\%) & \(\leq 2\)(\%) & \(\leq 3\)(\%) \\ \hline I [11] & 37.22 & 44.22 & 47.26 & 50.20 \\ II [11] & 15.01 & 22.31 & 26.57 & 27.69 \\ IV [11] & 18.97 & 28.19 & 32.35 & 33.37 \\ V [11] & 18.97 & 26.37 & 30.83 & 32.96 \\ VI [11] & 25.66 & 33.16 & 35.90 & 37.22 \\ VII [11] & 26.06 & 33.87 & 38.54 & 39.96 \\ WYGIWYS [18] & 28.70 & - & - & - \\ WAP [26] & 40.04 & 56.1 & 59.9 & - \\ WAP(with ensemble) [26] & 44.40 & 58.40 & 62.20 & 63.10 \\ End-to-end [23] & 25.09 & - & - & - \\ PAL [28] & 39.66 & - & - & - \\ PAL-v2 [34] & 43.81 & - & - & - \\ PAL-v2(with printed data) [34] & 48.88 & 64.50 & 69.78 & 73.83 \\ DenseWAP-TD [35] & 49.1 & 64.2 & 67.8 & - \\ Transformer(uni) [37] & 48.17 & 59.63 & 63.29 & - \\ Transformer(bi) [37] & **53.96** & 66.02 & 70.28 & - \\ DenseWAP-CTC [36] & 50.96 & - & - & - \\ R-GRU[38] & 43.72 & - & - & - \\ AdamR-GRU([39]) & 50.32 & 68.39 & 75.97 & 82.25 \\
**base-model** & 43.11 & 62.31 & 70.77 & 77.03 \\
**with DenseBAM** & 53.50 & 70.17 & 78.23 & 83.54 \\
**DenseBAM-GI** & 51.69 & **70.27** & **78.87** & **83.75** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of the proposed model with the cutting-edge models on the CROHME 2014 test sets.
exprate are also documented. The most recent comparison on the CROHME 2014 dataset is shown in Table 1. The performance of the participating models, I through VII, was assessed based on the results of the CROHME 2014 competition. Our proposed model, DenseBAM-GI, outperforms all models except the transformer model, with a recognition rate of 51.19%. However, with scores of 70.17%, 78.23%, and 83.54% for the top 1, 2, and 3 error accuracies, the proposed models set a new benchmark.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline \multicolumn{1}{c|}{Models} & \multicolumn{1}{c|}{Exprate (\%)} & \(\leq 1\)(\%) & \(\leq 2\)(\%) & \(\leq 3\)(\%) \\ \hline Tokyo [19] & 43.90 & 50.91 & 53.70 & - \\ sao paolo [19] & 33.39 & 43.50 & 49.17 & - \\ Nantes [19] & 13.34 & 21.02 & 28.33 & - \\ WAP [26] & 37.1 & - & - & - \\ WAP(with ensemble) [26] & 44.55 & 57.10 & 61.55 & 62.34 \\ PAL-v2 [34] & 43.77 & - & - & - \\ PAL-v2(with printed data) [34] & 49.61 & 64.08 & 70.27 & 73.50 \\ DenseWAP-TD [35] & 48.5 & 62.3 & 65.3 & - \\ DenseWAP-CTC [36] & 49.96 & - & - & - \\ Transformer(uni) [37] & 44.95 & 56.13 & 60.47 & - \\ Transformer(bi) [37] & **52.31** & 63.90 & 68.61 & - \\ R-GRU[38] & 41.29 & - & - & - \\ AdamR-GRU([39]) & 47.68 & 64.48 & 75.46 & 81.36 \\
**base-model** & 42.28 & 58.62 & 71.05 & 78.22 \\
**with DenseBAM** & 49.54 & 65.42 & 76.91 & 83.21 \\
**DenseBAM-GI** & 51.18 & **67.33** & **77.28** & **83.66** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of proposed model with the state-of-the art models on CROHME 2016 test sets.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline \multicolumn{1}{c|}{Models} & \multicolumn{1}{c|}{Exprate (\%)} & \(\leq 1\)(\%) & \(\leq 2\)(\%) & \(\leq 3\)(\%) \\ \hline Univ.Linz[29] & 41.49 & 54.13 & 58.88 & - \\ TUAT[29] & 24.10 & 35.53 & 43.12 & - \\ WAP [26] & 41.7 & 55.5 & 59.3 & - \\ DenseWAP-TD [35] & 51.4 & 66.1 & 69.1 & - \\ Transformer(uni) [37] & 44.95 & 56.13 & 60.47 & - \\ Transformer(bi) [37] & 52.96 & 65.97 & 69.14 & - \\ AdamR-GRU([39]) & 50.00 & 65.82 & 73.27 & 80.35 \\
**base-model** & 39.59 & 58.78 & 68.85 & 73.89 \\
**with DenseBAM** & 50.88 & 68.34 & 77.42 & 82.09 \\
**DenseBAM-GI** & **52.99** & **71.34** & **79.36** & **84.21** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of proposed model with the state-of-the art models on CROHME 2019 test sets.
On the CROHME 2016 dataset, the DenseBAM-GI model attains an expression rate of 51.18%. The most recent comparison of models on the CROHME 2016 dataset is shown in Table 2. Top 1, 2, and 3 error accuracy scores for the DenseBAM-GI model were 67.33%, 77.28%, and 83.66%, respectively. This performance sets a new standard in the field by outperforming all previous models in terms of top 1, 2, and 3 error accuracies.
On the CROHME 2019 dataset, the DenseBAM-GI outperforms all competing systems with an expression rate of 52.99%. Table 3 shows the results with current state-of-the-art models. DenseBAM-GI achieves top 1, 2, and 3 error accuracies of 71.34%, 79.36%, and 84.21%, respectively.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline Architecture & Exprate & \(\leq 1\)(\%) & \(\leq 2\)(\%) & \(\leq 3\)(\%) \\ \hline
1. DenseNet + Naive GRU (base-model) & 43.11 & 62.31 & 70.77 & 77.03 \\
2. DenseNet + BAM + Naive GRU (DenseBAM with naive GRU ) & 53.50 & 70.17 & 78.23 & 83.54 \\
3. DenseNet + BAM + GI-GRU (DenseBAM-GI) & 51.69 & 70.27 & 78.87 & 83.75 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results of applying BAM without and with momentum in CROHME 2014 test dataset.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline \multicolumn{1}{c|}{BAM in different positions} & Exprate & \(\leq 1\)(\%) & \(\leq 2\)(\%) & \(\leq 3\)(\%) \\ \hline
1. After denseblock1 & 47.55 & 64.65 & 74.09 & 79.72 \\
2. After denseblock2 & diverges & — & — & — \\
3. After denseblock3 & 51.06 & 67.51 & 76.85 & 81.63 \\
4. After denseblock1 and denseblock3 & 47.22 & 62.82 & 72.65 & 78.52 \\
5. After denseblock2 and denseblock3 & 53.50 & 70.17 & 78.23 & 83.54 \\
6. After denseblock1 and denseblock2 & 42.99 & 59.44 & 70.17 & 77.07 \\
7. After each denseblock & 46.39 & 65.67 & 73.77 & 79.51 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of applying BAM at different positions in DenseNet (6, 12, and 24 layers in each denseblock) with the naive GRU decoder.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline \multicolumn{1}{c|}{Encoder Architecture (Number of convolutional layer in each DenseBlock)} & Exprate & \(\leq 1\)(\%) & \(\leq 2\)(\%) & \(\leq 3\)(\%) \\ \hline
1. DenseBAM (12,12,12) & Diverges & - & - & - \\
2. DenseBAM (16,16,16) & 47.32 & 66.56 & 75.10 & 81.30 \\
3. DenseBAM (6,12,24) & 53.50 & 70.17 & 78.23 & 83.54 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of applying constant and variable numbers of convolution layers in the DenseBAM encoder (with naive GRU) on the CROHME 2014 test dataset.
### Ablation studies
In this section, we perform ablation studies to thoroughly examine the proposed model, focusing on the effects of applying the BAM and incorporating an additional auxiliary gate in the GRU. Moreover, we investigate the performance of BAM under varying configurations of convolutional layers within the DenseBlock, aiming to identify the optimal settings. In addition to these studies, attention visualization is provided to facilitate a comprehensive analysis of BAM's influence on the attention-learning process, shedding light on its role in enhancing the model's performance.
Effect of applying BAM in different bottleneck positions of the DenseNet encoder. To attain optimal performance, it is crucial to appropriately position the Bottleneck Attention Module (BAM) within the dense blocks of DenseNet. In the study conducted by Park et al. ([27]), the BAM was positioned after each residual block. In our investigation, we examine the performance of BAM when placed after individual dense blocks and in combination with others. The most favourable result was obtained when BAM was placed after Denseblocks 2 and 3 with the naive GRU decoder, achieving an exprate of 53.50%. Table 4 demonstrates these findings. Additionally, when positioned after Denseblock 3 exclusively, the exprate reached 51.06%. Based on these observations, it can be deduced that the optimal placement of BAM is within the high-end layers responsible for capturing high-level features. These layers grasp the critical discriminative characteristics, necessitating attention to select the most pertinent information.
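The BAM block itself is not spelled out in this section; as a point of reference, the snippet below is a minimal PyTorch sketch of a BAM-style module in the spirit of Park et al. ([27]), combining a channel branch and a dilated-convolution spatial branch as \(F^{\prime}=F\otimes(1+\sigma(M_{c}(F)+M_{s}(F)))\). The reduction ratio, dilation value, and the omission of batch normalization are simplifying assumptions rather than the paper's exact configuration; in the best setting above, such a block would sit after dense blocks 2 and 3 of the encoder.

```python
import torch
import torch.nn as nn

class BAM(nn.Module):
    """Simplified bottleneck attention: F' = F * (1 + sigmoid(Mc(F) + Ms(F)))."""

    def __init__(self, channels, reduction=16, dilation=4):
        super().__init__()
        # Channel branch: global average pooling followed by a small bottleneck MLP.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        # Spatial branch: channel reduction, a dilated 3x3 conv, and a 1-channel map.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.Conv2d(channels // reduction, channels // reduction, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, kernel_size=1),
        )

    def forward(self, x):
        # (B, C, 1, 1) + (B, 1, H, W) broadcasts to a full (B, C, H, W) attention map.
        attention = torch.sigmoid(self.channel(x) + self.spatial(x))
        return x * (1.0 + attention)

if __name__ == "__main__":
    feature_map = torch.randn(2, 128, 16, 64)   # e.g., the output of dense block 2
    print(BAM(128)(feature_map).shape)          # torch.Size([2, 128, 16, 64])
```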
Effect of constant and variable size conv layers in DenseNet with BAM. The DenseNet architecture comprises a varying number of convolutional layers within each DenseBlock. In our model, we utilize the first three DenseBlocks from DenseNet-121. Some studies ([37]) have applied a constant number of convolutional layers in each DenseBlock, prompting us to conduct experiments to ascertain the optimal settings for BAM. Table 5 shows the results, with the positions of BAM in DenseNet remaining consistent with the previous section.
Figure 3: Recognition result comparison between the proposed DenseBAM encoder (with naive GRU decoder) and the base-model on different expression images.
Initially, we experiment by setting 12 convolutional layers (each layer contains \(1\times 1\) and \(3\times 3\) conv layers as in Table 1 of [22]) uniformly across each dense block, followed by increasing this number to 16. Finally, we revert to the original structure of DenseNet. In the first scenario, the model diverged, while in the second case, the model achieved a 47.32% exprate. However, enhanced results were obtained when we returned to the original configuration, applying 6, 12, and 24 convolutional layers in the DenseBlocks.
These observations suggest that, for optimal BAM performance within DenseNet, it is advantageous to use fewer convolutional layers at the beginning and more towards the end, as high-level features are predominantly located in the latter layers of the DenseNet architecture.
Effect of combining the GI-GRU decoder with the DenseBAM encoder. In the conducted experiments, the DenseBAM encoder was paired with both a naive GRU decoder and a GI-GRU decoder. The baseline model only attained an exprate of 43.11%. However, integrating the BAM boosted the exprate to 53.50%. Table 6 depicts these results. Despite this improvement, the performance remained subpar on the CROHME 2016 and 2019 datasets (Tables 2 & 3), and the top 1, 2, and 3 error accuracies remained less than ideal. The incorporation of the GI-GRU decoder remedied these shortcomings, with the model achieving state-of-the-art results in top 1, 2, and 3 accuracies, and also obtaining a state-of-the-art exprate on the CROHME 2019 dataset. This improvement can be attributed to the auxiliary gate of the GI-GRU decoder, which assists in the retention of longer sequences, mitigates over-parsing and under-parsing issues, and fortifies the robustness of the language model.
Attention visualization with BAM and without BAM. To elucidate how the BAM improves the encoder's feature representation and facilitates recognition, we generate attention maps for both the DenseBAM encoder (with naive GRU decoder) and the base-model, as illustrated in Figure 1. We examine attention maps at crucial timesteps, where BAM plays a vital role in predicting the LaTeX sequence. In the first row, the first seven timesteps demonstrate that DenseBAM accurately recognizes the symbol 'M', while the base model fails. Timesteps 2
Figure 4: Recognition result comparison between the proposed DenseBAM encoder and the DenseNet encoder of the base model (both with naive GRU decoder) on different expression images.
Figure 5: Recognition result comparison between the proposed DenseBAM encoder (with naive GRU decoder) and the proposed DenseBAM-GI model on different expression images.
and 4 are particularly critical for DenseBAM in identifying the symbol 'M'. The attention areas for DenseBAM are notably larger and more distinct than the base model. Furthermore, DenseBAM focuses on both 'frac' tokens in timestep three and again between timesteps 14 and 19. In contrast, the base model does not, resulting in its failure to recognize the second 'frac' token. These observations show that integrating the BAM attention technique in the encoder substantially alters the base model's attention mechanism, concentrating on the image's significant areas at the appropriate timesteps. Consequently, we deduce that BAM enhances the encoder's feature representation capabilities.
Performance comparison at expression level between DenseNet and DenseBAM. In a comprehensive analysis, we compare two encoders: DenseNet (the base encoder) and DenseBAM (the proposed encoder). The results are illustrated in Figure 4. The base model could not identify the relationship associated with the 'frac' token and failed to recognize the 'M' symbol. Moreover, in the second expression, the base model did not capture the 'X' symbol and its superscript relations, while DenseBAM successfully did so. In the subsequent two expressions, the base model failed to detect subscript and superscript relationships, leading to an incorrect LaTeX sequence. Although DenseBAM preserved all spatial relationships, it did not recognize the '7' symbol, and the '-' symbol was over-translated. The final mathematical expression is also misidentified due to the complex spatial relationships between superscripts and subscripts.
These findings suggest that the proposed DenseBAM model effectively captures most expressions with intricate spatial relationships among symbols, despite occasional over-translation. We attribute this performance to the additional attention mechanism within the encoder, which facilitates learning complex spatial relationships for the overall architecture. Furthermore, the results indicate an improvement in feature representation, as nearly all symbols are accurately identified.
Performance comparison at expression level between DenseBAM and DenseBAM-GI. In this section, a comparison is conducted between DenseBAM and DenseBAM-GI at the expression level, focusing on the effect of integrating an additional gate into the GRU while BAM is appended to the encoder. Figure 5 illustrates the results. In the case of the first expression, DenseBAM did not successfully capture the intricate spatial relationships between the 'frac' and 'sqrt' components, while DenseBAM-GI managed to preserve these relationships. DenseBAM over-translates the 'sin' and 'frac' tokens in the second expression, but DenseBAM-GI corrects this. In the third and fourth expressions, the 'sin alpha' component was over-translated by DenseBAM, but DenseBAM-GI mitigated this issue. The last expression, which is lengthy and features a limited number of complex spatial relationships among superscripts, presented an under-translation challenge for DenseBAM, but DenseBAM-GI effectively addressed this problem. The findings from this comparison indicate that most errors associated with DenseBAM stem from over-translation and under-translation. However, incorporating an additional gate in DenseBAM-GI alleviates these issues. It can be inferred that the extra gate in DenseBAM-GI strengthens the language model, facilitating the decoder in generating precise LaTeX representations for intricate and lengthy mathematical expressions.
Figure 6: Comparative Analysis of DenseBAM-GI and Base-Model Performance Across Varied Expression Lengths Within CROHME 2014, 2016, and 2019 Datasets: (a) Assessment of DenseBAM-GI performance across disparate expression lengths, (b) Evaluation of base-model performance across varying expression lengths.
### The influence of input sequence length on the exprate of the proposed model
For a more comprehensive understanding of the exprate, a detailed comparison between DenseBAM-GI and the base model is conducted, focusing on the influence of sequence length on different mathematical expressions. As shown in Table 7 and Figure 6, which includes separate plots for the DenseBAM-GI and base models, the proposed DenseBAM-GI model demonstrates superior performance compared to the base model (naive GRU-based) across all sequence length categories. The figure also compares the exprate on the CROHME 2016 and 2019 test datasets with CROHME 2014. Notably, the DenseBAM-GI model consistently registered an Exprate 3-10% higher than the base model. In the CROHME 2014 training dataset, consisting of 8836 expressions, 985 are short sequences (1-5). As the sequence length increases, the quantity of data decreases correspondingly, with long mathematical expressions (26-30) comprising roughly half the quantity of the shortest sequences. Even though only 465 such lengthy expressions were available, the DenseBAM-GI model achieved an Exprate of 21.67%, in contrast to the base model's 15%. The complexity of symbol relationships in lengthy expressions poses significant recognition challenges for models, yet the proposed model significantly outperforms the base model (by more than 3% in Exprate) in this area. In addition, we observe that the proposed DenseBAM-GI maintains an accuracy of up to 40% up to the
\begin{table}
\begin{tabular}{c c c c} \hline \hline length & Exprate with DenseBAM-GI & Exprate with base-model & Training data \\ \hline
1-5 & 72 & 62.8 & 985 \\
6-10 & 59.5 & 52 & 754 \\
11-15 & 49.26 & 41.91 & 939 \\
16-20 & 41.26 & 39.16 & 675 \\
21-25 & 40.44 & 34.61 & 761 \\
26-30 & 21.67 & 15 & 465 \\
31-35 & 16.21 & 13 & 534 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Performance of DenseBAM-GI & base-model for different sequence lengths on the CROHME 2014 test data
\begin{table}
\begin{tabular}{c c} \hline \hline System name & Computational Complexity \\ \hline PAL-V2 ([34]) & \(O(kmd^{2}*nd^{2}+knd^{2})\) \\ Transformer ([37]) & \(O(knd^{2})+O(L(2n^{2}d+2ndd_{ff}))\) \\ WAP ([26]) & \(O(5(nd^{2}+knd^{2}))\) \\ DenseBAM-GI (Proposed model) & \(O(n(d^{2}+c^{\prime})+n(C^{2}+(h*w))+knd^{2})\) \\ \hline \hline \end{tabular}
\end{table}
Table 8: Comparison of the computational complexity of DenseBAM-GI.
sequence length range of 21-25. Beyond this, as the quantity of training data declines, so does the model's performance. These findings suggest that the DenseBAM-GI model excels in recognizing both complex long-range expressions and short-sequence expressions. The model could perform even better with more data for sequence lengths exceeding 25.
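As a side note, the per-bucket exprate reported in Table 7 can be reproduced with a few lines of bookkeeping. The sketch below assumes predictions and references are lists of LaTeX tokens and that exprate means an exact match of the full token sequence; the bucket boundaries follow the table, and the toy sequences are placeholders rather than CROHME data.

```python
from collections import defaultdict

def exprate_by_length(predictions, references,
                      bins=((1, 5), (6, 10), (11, 15), (16, 20),
                            (21, 25), (26, 30), (31, 35))):
    """Exprate (exact-match rate of full LaTeX sequences) per ground-truth length bucket."""
    hits, totals = defaultdict(int), defaultdict(int)
    for pred, ref in zip(predictions, references):
        n = len(ref)  # ref is a list of LaTeX tokens
        bucket = next((b for b in bins if b[0] <= n <= b[1]), None)
        if bucket is None:
            continue
        totals[bucket] += 1
        hits[bucket] += int(pred == ref)
    return {b: 100.0 * hits[b] / totals[b] for b in bins if totals[b] > 0}

# Toy example with two short token sequences.
preds = [["\\frac", "{", "x", "}", "{", "2", "}"], ["x", "^", "2"]]
refs  = [["\\frac", "{", "x", "}", "{", "2", "}"], ["x", "^", "3"]]
print(exprate_by_length(preds, refs))
```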
### Comparative analysis of Computational Complexity of DenseBAM-GI
Compared to other current state-of-the-art models in the field, our proposed model, as shown in Table 8, has a lower computational complexity (CC). The CC of the proposed DenseBAM-GI can be represented as \(O(n(d^{2}+c^{\prime})+n(C^{2}+(h*w))+knd^{2})\), using the framework established by Vaswani et al. ([24]). Here, \(d\) denotes the dimension of the representation, \(k\) denotes the size of the convolution, and \(n\) is the length of the sequence. In a dense block-generated feature map, \(C\) stands for the number of channels, while \(h\) and \(w\) stand for the height and width, respectively. The constant \(c^{\prime}\) accounts for the additional gate in the GRU.
DenseBAM-GI uses a \(d\) of 256 and an \(n\) range of 0 to 48. The proposed DenseBAM-GI model incurs only linear and constant additional computation time from the auxiliary state \(v_{t}\). With regard to PAL-v2, the CC is roughly \(O(kmd^{2}*nd^{2}+knd^{2})\). The WAP model has a CC of \(O(5(nd^{2}+knd^{2}))\) since it consists of five different models. The complexity of the Transformer model is around \(O(knd^{2}+2n^{2}d)\). While the variable \(d\) is not overtly defined, it can be noted that in the work of Vaswani et al. ([24]), this parameter is established as 1000. Our proposed approach only adds a linear polynomial computational time over the base model. In terms of memory allocation, DenseBAM-GI and the base model share similar characteristics. However, DenseBAM-GI exhibits a faster convergence rate, reaching optimal performance within 150 epochs, while only utilizing 6 GB of memory on a 2080 Ti graphics card. In stark contrast, models like Transformer and PAL-V2 necessitate four Nvidia GPUs (either 2080 Ti or 1080 Ti), each contributing 11 GB of memory. Evaluating these data, it is apparent that our proposed DenseBAM-GI model achieves a performance metric equivalent to the state-of-the-art models, yet it does so with a significantly reduced memory footprint and computational
requirements, and within a shorter time span.
## 6 Conclusion
This study proposes a novel encoder-decoder architecture (DenseBAM-GI) that addresses the HMER challenge. The DenseBAM architecture, which includes both channel and spatial attention mechanisms, is used in our proposed encoder. In addition, we propose the GI-GRU, a novel GRU unit designed to capture, improve, and manage lengthy and complicated expressions, as the decoder unit. According to the experimental results, the proposed DenseBAM-GI model performs on par with current state-of-the-art models, setting new benchmarks for top 1, 2, and 3 error accuracy while using less processing and memory power. Furthermore, it achieves a state-of-the-art result on the CROHME 2019 dataset in terms of exprate. Future research can extend this model to other fields, such as document recognition and handwritten text recognition.
## Acknowledgement
We wish to extend our gratitude to the Indian Institute of Information Technology, Allahabad, for supplying the essential research infrastructure that facilitated the execution of this study. |
2308.00994 | SYNAuG: Exploiting Synthetic Data for Data Imbalance Problems | Data imbalance in training data often leads to biased predictions from
trained models, which in turn causes ethical and social issues. A
straightforward solution is to carefully curate training data, but given the
enormous scale of modern neural networks, this is prohibitively labor-intensive
and thus impractical. Inspired by recent developments in generative models,
this paper explores the potential of synthetic data to address the data
imbalance problem. To be specific, our method, dubbed SYNAuG, leverages
synthetic data to equalize the unbalanced distribution of training data. Our
experiments demonstrate that, although a domain gap between real and synthetic
data exists, training with SYNAuG followed by fine-tuning with a few real
samples allows to achieve impressive performance on diverse tasks with
different data imbalance issues, surpassing existing task-specific methods for
the same purpose. | Moon Ye-Bin, Nam Hyeon-Woo, Wonseok Choi, Nayeong Kim, Suha Kwak, Tae-Hyun Oh | 2023-08-02T07:59:25Z | http://arxiv.org/abs/2308.00994v3 | # SYNAuG: Exploiting Synthetic Data for Data Imbalance Problems
###### Abstract
We live in an era of data floods, and deep neural networks play a pivotal role in this moment. Natural data inherently exhibits several challenges such as long-tailed distribution and model fairness, where data imbalance is at the center of fundamental issues. This imbalance poses a risk of deep neural networks producing biased predictions, leading to potentially severe ethical and social problems. To address these problems, we leverage recent generative models that have advanced in generating high-quality images. In this work, we propose SYNAuG, which utilizes synthetic data to uniformize the given imbalanced distribution, followed by a simple post-calibration step considering the domain gap between real and synthetic data. This straightforward approach yields impressive performance on datasets for distinctive data imbalance problems such as CIFAR100-LT, ImageNet100-LT, UTKFace, and Waterbirds, surpassing the performance of existing task-specific methods. While we do not claim that our approach serves as a complete solution to the problem of data imbalance, we argue that supplementing the existing data with synthetic data proves to be an effective and crucial step in addressing data imbalance concerns.
## 1 Introduction
Deep neural networks (DNNs) have achieved strong performance on visual tasks. This outstanding performance has been demonstrated by training models with abundant and diverse labeled data [14, 38]. Despite the importance of data, machine learning researchers have focused mainly on models and algorithms [61]. We should care about the data used for training DNNs because flaws in the data can have unexpected influences on the trained model.
We commonly encounter data imbalance problems categorized into _class_ or _group_ imbalance problems. Class imbalance means different data amounts across classes. Suppose that we collect animal images on the internet. Images of rare animals may be found on search engines less often than those of cats or dogs because of human bias in uploading photographs 1. Group imbalance, on the other hand, stands for different data amounts across groups. We may collect data depending on our environments, including preferences, country, and cultural backgrounds. Suppose we collect pictures of human hands; the skin tones can then be biased. If we do not care about these biases, the collected dataset becomes imbalanced in terms of classes [13, 82], groups [75], or both. With such a dataset and standard supervised learning algorithms based on the empirical risk minimization (ERM) principle [72], the classifier will be trained to be biased toward majority classes [17]. Since these problems yield not only substantial performance degradation but also social or ethical issues with biases, researchers have independently developed various algorithms [3, 5, 7, 23, 30, 31, 32, 34, 39, 41, 43, 50, 53, 62, 65, 67, 68, 76, 81] to overcome these respective problems.
Footnote 1: For example, the number of search results for Tarsier is about 102 times smaller than for Maltese in the Google search engine.
In this work, we first uniformize the number of samples in each class using recent text-to-image generative models before applying off-the-shelf task-specific algorithms. Prior studies work with the limited, fixed, and bounded original dataset without adding additional data and mainly focus on algorithmic approaches, such as reweighting [62, 28, 22, 53, 58, 7, 29], resampling [24, 29, 30, 41, 50, 54, 46], or augmentation [50, 11, 30]. In contrast to the prior arts, we go beyond the fixed original dataset by exploiting generative diffusion models to synthesize data, which have recently shown potential as a source of synthetic training data [51, 52, 70, 4, 21]. This allows us to tackle the fundamental bottleneck of data imbalance, _i.e_., the data itself, rather than indirect ways of tackling learning algorithms or architectures. It is a more natural way than restricting training data to the fixed dataset as in the prior arts.
As shown in Fig. 1, we propose SYNAuG, exploiting the
generative diffusion model to augment the original data and make its distribution uniform, _i.e_., uniformization. After training on the uniformized data composed of the original and synthetic data, we find that it is effective to simply fine-tune the last layer with uniformly sub-sampled original data. This outperforms the other strong baselines, including the baseline using additional external web data, as well as the competing methods on the long-tailed recognition benchmark, CIFAR100-LT, and the fairness benchmark, UTKFace. In addition, we demonstrate the effectiveness of our method for improving the robustness of the classifier to spurious correlation. We summarize our contributions as follows:
* Proposing SYNAuG that uniformizes the given data distribution with synthetic data, beyond the given datasets;
* Demonstrating the effectiveness of SYNAuG on three distinctive data imbalance tasks: long-tailed recognition, model fairness, and robustness to spurious correlation;
* Reporting the observation that a few original samples remain important when we use synthetic data together with them.
## 2 Related Work
Data imbalance can lead to suboptimal generalization and many challenges in practical application scenarios, _e.g_., finance, healthcare, and autonomous driving. The data imbalance problem is a common source of different imbalance sub-problems: long-tailed recognition, model fairness, and model robustness to spurious correlation. We briefly review the related work on the associated sub-problems and on using synthetic data for machine learning tasks.
**Long-tailed recognition.** The long-tailed distribution is inherent to the real world [13, 82]. There are two main streams in the realm of re-balancing classes: re-sampling [41, 50, 65, 30] and re-weighting [57, 62, 20, 13, 53, 7]. The re-weighting methods share a similar mechanism of weighting minority classes inverse-proportionally to their number of instances. The re-sampling methods weight the samples in minority classes by sampling them more frequently with replacement, so that the training model sees a uniform number of samples across classes.
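As a generic illustration of the re-weighting mechanism described above (not the exact scheme of any specific method cited), inverse-frequency class weights can simply be passed to a standard cross-entropy loss; the normalization used here is one common choice and the class counts are toy values.

```python
import torch
import torch.nn as nn

def inverse_frequency_weights(class_counts):
    """Per-class weights inversely proportional to class frequency,
    normalized so that the weights average to 1."""
    counts = torch.as_tensor(class_counts, dtype=torch.float32)
    weights = 1.0 / counts
    return weights * len(counts) / weights.sum()

# Toy long-tailed class counts (head -> tail).
counts = [500, 200, 50, 10]
criterion = nn.CrossEntropyLoss(weight=inverse_frequency_weights(counts))

logits = torch.randn(8, 4)            # (batch, num_classes)
labels = torch.randint(0, 4, (8,))
loss = criterion(logits, labels)      # minority-class errors are up-weighted
```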
There are other approaches based on designing loss functions. Ryou et al. [57] and Lin et al. [37] induce adaptive re-weighting effects during training. Others take into account the margin [7] or the balance of the softmax [53] in the loss design. Wang et al. [73] take a completely different approach: model selection given diversely pre-trained classifiers. In addition, Ye-Bin et al. [77] propose TextManiA, a visual feature augmentation for sparse samples, which shows improved performance on long-tailed distributions.
**Model fairness.** In fairness [16, 19, 45], researchers have tackled the issue of model bias, where accuracy varies based on sensitive attributes such as race, age, and ethnicity. Model fairness is also related to data imbalance because the number of samples in some sensitive groups is lower than that in the major groups. Fairness has predominantly been tackled using loss weighting and batch sampling. A loss weighting algorithm [28] proposes fairness optimization, where the worst-case group loss is minimized by adaptively weighting losses. Batch sampling approaches [54, 29] take an adaptive sampling strategy by considering sensitive information rather than uniform sampling. Zeng et al. [79] take a post-calibration approach after training to calibrate the classifiers.
**Spurious correlation.** The spurious correlation problem is related to the robustness of models against misleading correlations. DNNs are susceptible to falling into shortcuts that capture the most frequently observed patterns in a class regardless of true causality; this is called the spurious correlation or shortcut problem [64, 17, 32]. It is never desirable to rely on spurious features that degrade the generalizability of DNNs [58, 40]. The spurious correlation problem is also dealt with using approaches similar to the above two tasks: weighting [43, 58, 31], sampling [59, 24], augmentation [23, 34, 76], and post-calibration [32, 35, 39].
**Summary of data imbalance problems.** While researchers have developed algorithms for each task separately, the three different tasks sourced from data imbalance have mainly been tackled from a shared perspective, _i.e_., up-weighting the loss values or sampling probabilities of minor groups using group or sensitive information. However, they have focused only on algorithmic parts by limiting their methods to the given imbalanced dataset, where the inherent imbalance still remains.
In this work, we shed light on this overlooked convention and go beyond the given bounded dataset. We exploit
Figure 1: **Overview of SYNAuG process. Given the imbalanced real-world data with the class labels, we first uniformize the imbalanced real data distribution by generating the synthetic samples that are conditioned on the class label. Second, we train a model with the uniformized training data. Finally, we fine-tune the last layer with the uniformly subsampled real-world data.**
the synthetic data from generative foundation models [46, 55, 60] to take advantage of their flexibility and controllability, so that we can populate the long-tailed training data distribution until it becomes a uniform distribution, which mitigates the imbalance problem itself. We observe that this simple correction of the class distribution with synthetic data can significantly improve the worst-case accuracy and fairness of DNNs. To the best of our knowledge, our work is the first to demonstrate improved or competitive performance with generated synthetic data for both class imbalance and fairness tasks.
**Using synthetic data in machine learning tasks.** To overcome the lack of data or sensitive issues with data, _e.g_., licensing and privacy concerns, recent approaches have started to leverage synthetic data for their tasks of interest: classification [2, 71], segmentation [63, 83], re-identification [85], motion estimation [15, 18, 42, 47, 66], computational photography [49], and representation learning [25]. Recently, deep generative models [60, 55, 46] have shown promising results in generating realistic and high-quality samples, stemming from the goal of modeling the real data distribution. In particular, image generation conditioned on text provides great controllability and flexibility, which has the potential to be used for a variety of tasks, such as 3D reconstruction [51, 52, 8] and image recognition [4, 21, 70]. In this work, we explore the use of a pre-trained foundation diffusion model to mitigate data imbalance problems.
## 3 Method
We first present our motivation for using synthetic data to address data imbalance problems based on experimental findings (Sec. 3.1). Building on these empirical insights, we propose to exploit the synthetic data (SYNAuG) as a means to uniformize the given training data distribution (Sec. 3.2).
### Motivations
During training, we consider how to curate the data, train the model, and evaluate it. As mentioned above, prior methods addressing data imbalance problems have been explored in various ways, including data re-sampling, loss function design, and model architecture. Instead, we emphasize the importance of data curation and the controllability of data, as data curation significantly affects the training and the subsequent evaluation despite its position as the first step.
Before incorporating synthetic data into our proposed method, we delve into the influence of training with synthetic and original data together. We establish two settings by controlling the ratio of original and synthetic data. We use images generated by Stable Diffusion [55] as synthetic data. In the **first setting**, we take an extreme approach by replacing all original data belonging to specific classes with synthetic data. This means that certain classes have no real samples but only synthetic samples. In the **second setting**, we uniformly replace the original data with synthetic data, which means all classes have the same ratio of original and synthetic data. This approach ensures that every class retains at least a few original samples. The significance of original samples becomes apparent through observing the performance change.
The results of the two settings are in Fig. 2. The first setting shows a linear performance degradation as the number of classes with no original data increases (See Fig. 2(a)). However, the second setting shows a log-like performance degradation as more original data are replaced with synthetic data uniformly (See Fig. 2(b)). We achieve 41.11% when using 1% of real data in the second setting, which is similar to the result of 43.96% when using 50% of real data in the first setting. The results suggest that at least a few original samples are necessary as an anchor, as the domain gap may still exist even with high-quality synthetic data.
To check the presence of a domain gap, we conduct domain classification and visualization of the features from both real and synthetic data (See Fig. 3). As shown in Fig. 3(a), the classification performance is 74.16%. This indicates the existence of a domain gap, considering that 50% would mean no domain gap. As shown in Fig. 3(b), the features of Syn C2 are closer to Syn C1 than to Real C2. This observation provides empirical evidence of a domain gap existing between real and synthetic data.
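For reference, the binary domain probe described above can be sketched as follows. The feature dimensionality, the random stand-in features, and the training schedule are placeholders (the paper extracts features with a model pre-trained on CIFAR100), and a proper measurement would also evaluate on a held-out split rather than on the training features.

```python
import torch
import torch.nn as nn

# Hypothetical pre-extracted features: (N, D) tensors from a frozen backbone,
# e.g., 2.5k real and 2.5k synthetic samples as in Fig. 3(a).
real_feats = torch.randn(2500, 64)
syn_feats = torch.randn(2500, 64) + 0.5          # crude stand-in for a domain shift

feats = torch.cat([real_feats, syn_feats])
domains = torch.cat([torch.zeros(2500), torch.ones(2500)]).long()

probe = nn.Linear(64, 2)                          # a single fully connected layer
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(probe(feats), domains)
    loss.backward()
    opt.step()

acc = (probe(feats).argmax(1) == domains).float().mean()
print(f"domain-probe accuracy: {acc:.2%}")        # ~50% would indicate no detectable gap
```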
In summary, (1) at least a few real samples are important when we supplement the real samples with the syn
Figure 2: **Replacement test.** To investigate the effect on model performance when using original and synthetic data together, we replace the original data with synthetic ones in two ways: (a) class-wise and (b) the same ratio of instances across all classes. We use CIFAR100, which has 500 samples per class and 100 classes.
thetic samples, (2) synthetic samples are still insufficient to fully replace the original samples, although deep generative models show impressive performance, and thus (3) there might be additional room for improvement due to the domain gap between the original and the synthetic data. It is desirable that the remaining original samples serve as an anchor, while the synthetic data support and populate the insufficient samples.
### SYNAuG
Given the preliminary experiments, we propose SYNAuG, which leverages synthetic data to mitigate the imbalance and domain gap from the data perspective. Our approach is applied to three distinct tasks: long-tailed recognition, model fairness, and robustness to spurious correlation. While these tasks differ in their ultimate objectives and evaluation metrics, the common underlying factor is the presence of data imbalance. SYNAuG is an integrated approach designed to mitigate data imbalance across diverse tasks.
As illustrated in Fig. 1, we first uniformize the imbalanced data by generating synthetic data, train the model on the uniformized data, and finally fine-tune the last layer with a few original data uniformly subsampled from each class. We exploit recent powerful generative models, _e.g_., Stable Diffusion [55], to generate synthetic data for the corresponding classes or attributes with a controllable prompt. Since these models are trained on a large amount of web data, they can be considered to cover and model the wide distribution of the real world. Exploiting these favorable properties, we generate supporting data to alleviate the imbalance of the data distribution. We generate the samples with diverse prompts like "a photo of {modifier} {class}". We obtain a list of proper modifiers from ChatGPT [48] to make our pipeline automatic. We train the model on the uniformized data with the Cross Entropy (CE) loss.
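A minimal sketch of this uniformization step with the diffusers library is shown below. The checkpoint name, the modifier list, the toy class counts, and the sampling hyper-parameters are placeholders rather than the paper's actual settings (the paper's modifiers come from ChatGPT).

```python
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

modifiers = ["close-up", "in the wild", "on a sunny day"]   # hypothetical modifier list
class_counts = {"tarsier": 12, "maltese": 480}              # toy long-tailed counts
target = max(class_counts.values())                         # uniformization target
os.makedirs("synthetic", exist_ok=True)

for cls, count in class_counts.items():
    for i in range(target - count):                         # fill each class up to the target
        prompt = f"a photo of {modifiers[i % len(modifiers)]} {cls}"
        image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
        image.save(f"synthetic/{cls}_{i:05d}.png")
```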
While SYNAuG is simple and effective, there is still room to improve its performance because of the domain gap identified in Sec. 3.1. To bring further improvement by mitigating this gap, we propose to utilize two simple methods. First, we leverage Mixup [80] during training to create interpolated samples between real and synthetic samples, _i.e_., domain Mixup. Second, we fine-tune the classifier on uniform original data subsampled from the original training data after the first training stage. The fine-tuned classifier leads to more accurate recognition of the target data by alleviating the domain gap.
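The domain Mixup step can be sketched as below; pairing one real batch with one synthetic batch and the Beta(\(\alpha,\alpha\)) parameter are illustrative assumptions, and the toy model is only a stand-in.

```python
import torch
import torch.nn.functional as F

def domain_mixup_loss(model, x_real, y_real, x_syn, y_syn, alpha=1.0):
    """Mixup between a real and a synthetic batch: interpolate inputs and losses."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    logits = model(lam * x_real + (1.0 - lam) * x_syn)
    return lam * F.cross_entropy(logits, y_real) + (1.0 - lam) * F.cross_entropy(logits, y_syn)

# Toy usage with random tensors standing in for CIFAR-sized batches.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 100))
x_real, x_syn = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
y_real, y_syn = torch.randint(0, 100, (8,)), torch.randint(0, 100, (8,))
loss = domain_mixup_loss(model, x_real, y_real, x_syn, y_syn)
loss.backward()
```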
In summary, the process of SYNAuG is as follows: (1) uniformize the original data distribution with synthetic data from the generative model, (2) train the model with uniformized data using Mixup, and (3) fine-tune the last layer with the uniformly subsampled real data.
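Step (3) amounts to freezing the backbone and re-fitting only the classifier head on a class-balanced real subsample. The snippet below is a sketch under assumed choices (a ResNet-18 stand-in, SGD, and a toy loader); the actual backbone, optimizer, and schedule used in the paper may differ.

```python
import torch
import torchvision

num_classes = 100
model = torchvision.models.resnet18(num_classes=num_classes)   # stand-in backbone

# Freeze everything, then re-initialize and train only the final linear layer.
for p in model.parameters():
    p.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

opt = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)

# Toy stand-in for a class-balanced real subsample (e.g., the per-class minimum count).
balanced_real_loader = [(torch.randn(4, 3, 32, 32), torch.randint(0, num_classes, (4,)))]

model.train()
for epoch in range(3):
    for x, y in balanced_real_loader:
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
```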
## 4 Experiments
In this section, we evaluate our method for three sub-tasks: long-tailed recognition task (Sec. 4.1), model fairness (Sec. 4.2), and model robustness to spurious correlation (Sec. 4.3). Through these results, we demonstrate the effectiveness of SYNAuG for data imbalance problems.
### Long-tailed Recognition
**Experimental setting.** We employ two long-tail datasets: CIFAR100-LT [7] and ImageNet100-LT [26]. CIFAR100-LT and ImageNet100-LT have train sets that are artificially curated to introduce class imbalance into the original datasets, CIFAR100 [33] and ImageNet100 [69]. Their test sets are the same as the original ones. The classes in the long-tailed datasets are divided into three groups: Many-shot (more than 100 samples), Medium-shot (20-100 samples), and Few-shot (less than 20 samples). For CIFAR100-LT, the imbalance factor (IF) can be controlled by computing the ratio of samples in the head class to the tail class, \(N_{1}/N_{K}\), where \(N_{k}=|\mathcal{D}_{k}|\), and \(\mathcal{D}_{k}\) is the set of samples belonging to class \(k\in\{1,\cdots,K\}\). As the IF value increases, the skewness of the training data becomes more severe, which makes it more challenging. We evaluate under the standard IFs of 100, 50, and 10, following [1]. We use ResNet32 for CIFAR100-LT and ResNet50 for ImageNet100-LT. Further details can be found in the supplementary material.
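For concreteness, per-class sample counts for a given imbalance factor are usually generated with an exponential profile, as in the standard CIFAR-LT construction; whether the exact same profile is used for the datasets here is an assumption.

```python
def long_tailed_counts(n_max=500, num_classes=100, imbalance_factor=100):
    """Per-class counts following the commonly used exponential profile,
    so that the head/tail ratio equals the imbalance factor N_1 / N_K."""
    return [int(n_max * (1.0 / imbalance_factor) ** (k / (num_classes - 1)))
            for k in range(num_classes)]

counts = long_tailed_counts()                     # CIFAR100-like: 500 samples in the head class
print(counts[0], counts[-1], counts[0] / counts[-1])   # 500 5 100.0
```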
**Competing methods and baselines.** We compare with recent prior arts: SSD [36] and PaCo [12] for self-supervised learning, RISDA [10] and CMO [50] for data augmentation, and Weight Balancing [1] for classifier re-balancing. They are state-of-the-art in their respective perspectives and propose methods using only the original long-tailed data without external data sources.
We present other variants of generation methods as baselines: 1) Motivated by the recent work [21] using the few
Figure 3: **Domain gap between real and synthetic data.** We test the domain gap empirically with (a) binary domain classification and (b) feature visualization. For binary classification, we use 2.5k samples for each real and synthetic domain and train only one fully-connected layer with the extracted features. For visualization, the features are extracted from the pre-trained model on CIFAR100. C1 and C2 denote different classes.
shot original samples as guidance during the generation process, we first introduce _Intra-class Image Translation_, where we use the original samples from the original training data as a class-wise guidance image for generation, 2) Inspired by the M2m [30] translating an image of the major class to the minor class for leveraging the diversity of the majority information, we introduce _Inter-class Image Translation_, where we utilize random samples in the dataset as guidance regardless of the class, 3) As an advanced version motivated by DreamBooth [56], we fine-tune the diffusion model with the samples in each class to model the class-wise distribution, named _Class Distribution Fitting_, and 4) As a strong baseline, we collect the real data from the internet instead of generating synthetic images, _i.e._, _Web crawled images_. Details are in the supplementary material.
**Comparison results.** We compare SYNAuG with the prior arts in Table 1. Compared to the CE method [13] trained with the Cross Entropy loss on the original data, we achieve large improvements when exploiting the generated samples, regardless of the skewness of the training data. Our method also outperforms most of the competing methods. These are striking results in that they suggest that relieving the imbalance from the data point of view is simple yet more effective than the conventional, more complex algorithmic methods.
In Table 2, we compare our method with our proposed baselines. The comparison with the case that uses real-world web data2 shows that the generated images are of sufficient quality to mitigate the class imbalance problem. We also evaluate additional baselines, which apply the variant methods during the generation process. While they are better than training only with the original long-tailed data (the CE method [13]), their performance is lower than that of SYNAuG. The results imply that the domain gap between the original and synthetic data is hard to narrow during the generation process. Thus, we propose to leverage Mixup during training and to fine-tune the classifier as more straightforward remedies. Note that naively applying Mixup to imbalanced data is known to be detrimental [77]; thus, we distinctively apply Mixup after uniformizing the data distribution, which makes a noticeable difference.
Footnote 2: We collected images from Google image search. Google image search returns images very favorable to DNNs, because Google has used CNN-based image search since March 2013 [9]. Thus, using web data is analogous to distilling a Google internal model, _i.e._, a very strong baseline.
**Ablation study.** In Table 3, we conduct an ablation study to investigate the influence of each component of our SYNAuG. When we use modifiers in the prompt, we can
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{5}{c}{**IF=100**} & \multirow{2}{*}{**50**} & \multirow{2}{*}{**10**} \\ \cline{2-2} \cline{5-7} & **Many** & **Medium** & **Few** & **All** & \\ \hline CE [13] & 68.31 & 36.88 & 4.87 & 37.96 & 43.54 & 59.50 \\ \hline SSD [36] & - & - & - & 46.0 & 50.5 & 62.3 \\ PaCo [12] & - & - & - & 52.0 & 56.0 & 64.2 \\ RISDA [10] & - & - & - & 50.16 & 53.84 & 62.38 \\ CE + CMO [50] & 70.4 & 42.5 & 14.4 & 43.9 & 48.3 & 59.5 \\ LDAM + CMO [50] & 61.5 & 48.6 & 28.8 & 47.2 & 51.7 & 58.4 \\ RIIDE (3 experts) + CMO [50] & - & - & - & 50.0 & 53.0 & 60.2 \\ Weight Balancing [1] & 72.60 & 51.86 & 32.63 & 53.35 & 57.71 & 68.67 \\ \hline SYNAuG & **74.06** & **56.63** & **42.83** & **58.59** & **61.36** & **69.01** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Long-tailed recognition performance on CIFAR100-LT. We compare our SYNAuG with recent works in long-tailed recognition. We report the Top-1 accuracy (%) with different imbalance factors, _i.e._, IF={100, 50, 10}.**
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Additional**} & \multicolumn{3}{c}{**IF**} \\ \cline{3-6} & & **Data Type** & **100** & **50** & **10** \\ \hline CE [13] & N/A & 37.96 & 43.54 & 59.50 \\ \hline Web crawled images & Real & 54.06 & 56.40 & 63.86 \\ \hline Intra-class Image Translation & Syn. & 47.87 & 53.33 & 64.95 \\ Inter-class Image Translation & Syn. & 47.17 & 51.33 & 64.11 \\ Class Distribution Fitting & Syn. & 51.53 & 55.60 & 65.60 \\ \hline SYNAuG & Syn. & **58.59** & **61.36** & **69.01** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Comparison with the baselines. We use CIFAR100-LT. The second column denotes the data type used in uniformization.**
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Modifier**} & \multirow{2}{*}{**Mixup**} & \multirow{2}{*}{**Re-train**} & \multirow{2}{*}{**Finetune**} & \multicolumn{3}{c}{**IF**} \\ \cline{3-6} & & & & **100** & **50** & **10** \\ \hline (a) & & & & & 52.41 & 56.99 & 66.34 \\ (b) & ✓ & & & & 53.54 & 57.09 & 66.66 \\ (c) & ✓ & ✓ & & & 55.45 & 58.69 & 66.84 \\ (d) & ✓ & ✓ & ✓ & & 57.31 & 60.34 & 67.90 \\ (e) & ✓ & ✓ & & ✓ & **58.59** & **61.36** & **69.01** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Ablation study of SYNAuG. We use CIFAR100-LT. Each component, Modifier, Mixup, Re-train, and Finetune, means we use the class-related modifiers in the prompt, use Mixup augmentation during training, and re-train or finetune the last layer after training, respectively. (e) stands for our SYNAuG.**
get diverse generated samples, which results in the gain between (a) and (b). We can achieve further improvement by utilizing Mixup (c) to interpolate between original and synthetic data, whereby the domain gap is mitigated by bridging the two domains. Despite the domain Mixup, the classifier still has room to be adjusted further toward the target data. To do so, we can re-train (d) or fine-tune (e) the last layer on uniformly distributed data sampled from the original training data, _i.e_., we set the number of samples in each class to the smallest number of samples among the classes of the original long-tail training data. As shown in Table 3-(d,e), we can achieve an additional improvement by adjusting the classifier towards the targeted real data, and we found that fine-tuning is more effective than re-training.
**Performance according to the number of synthetic data.** We explore the performance according to a varying number of synthetic data (See Fig. 4). We use CIFAR100-LT with the imbalance factor IF=100, _i.e_., the total number of original samples is 10,847. For LT+\(\alpha\), we uniformly allocate synthetic data across all classes, disregarding the distinction between Many, Medium, and Few classes. In this case, the absolute difference in the number of samples between classes is kept unchanged. For U\(\beta\), we ensure an equal number of samples in each class by either adding synthetic data or trimming some of the original samples.
In Fig. 4, the performance improves as the number of samples increases, regardless of the data distribution. As the quantity of synthetic data increases, the accuracies of LT+\(\alpha\) and U\(\beta\) become quite similar. We think that LT+\(\alpha\) tends to deviate from the long-tailed distribution as the number of synthetic data increases, _i.e_., the Few classes are no longer Few. Although the disparities in data quantities across classes still exist in LT+\(\alpha\), this effect diminishes the difference between LT+\(\alpha\) and U\(\beta\) with more synthetic data.
**Performance according to the quality of synthetic data.** We evaluate SYNAuG on ImageNet100-LT. We conduct an ablation study to investigate the impact of data quality on SYNAuG by controlling the diffusion step parameter of Stable Diffusion [55], which is known to affect the quality of generated images. As shown in Fig. 5-(Top), the generation quality is low when the number of steps is very small, but there is no big difference to the naked eye beyond a certain number of steps. Figure 5-(Bottom) shows the quantitative results. Compared to the CE method [13] trained on the original long-tailed data, while the accuracy of the Many class is degraded, we achieve large improvements in the Medium, Few, and even All cases regardless of the synthetic image quality. However, there is a certain level of quality that exhibits a surge point in performance. The difference becomes negligible when the step value, _i.e_., the quality, exceeds a certain threshold.
### Model Fairness
Group imbalance stands for the data imbalance between groups, such as ethnicity. We empirically observe that group imbalance combined with class imbalance amplifies classifier unfairness, as shown in Fig. 6. Class imbalance affects classifier unfairness more than group imbalance. Both class and group imbalance contribute to an unfair classifier.
Model fairness, one of the problems caused by group imbalance, is essential to prevent unexpected social confusion. Fairness metrics have been proposed to measure the fairness performance of models: Demographic Parity (DP) \(=\max_{z}|P(y_{p}=1|z)-P(y_{p}=1)|\)[16], Equal Opportunity (EO) \(=\max_{z_{i},z_{j},y_{p}}|P_{z_{i}}(y_{p}|y)-P_{z_{j}}(y_{p}|y)|\)[19, 27], and Equalized Odds (ED) \(=\max_{z,y,y_{p}}|P(y_{p}|z,y)-P(y_{p}|y)|\)[19], where \(y_{p}\) is the prediction, \(y\) is the class label, and \(z\) is the sensitive attribute. These metrics are based on the difference in the performance of the learned classifiers depending on groups, _i.e_., the sensitive attributes. Lower values of fair
Figure 4: **Accuracy [%] (y-axis) vs. number of samples [K] (x-axis).** As expected, the performance improves as more synthetic samples are added. Additionally, it is improved significantly when the Few class disappears as the number of samples per class increases.
Figure 5: **Ablation study according to sample quality. (Top) quality of the generated samples according to the number of steps, (Bottom) long-tailed recognition performance (%) according to the different times of steps for generating synthetic data, which affects sample quality. We use ImageNet100-LT with ResNet50.**
ness metrics indicate that the model is fairer.
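A small sketch of how such group-gap metrics can be computed from binary predictions is given below; it follows the DP and ED definitions above (EO is analogous, restricted to particular label/prediction pairs), and the toy arrays are placeholders rather than UTKFace results.

```python
import numpy as np

def fairness_gaps(y_pred, y_true, z):
    """DP and ED style gaps for binary predictions y_pred, labels y_true, groups z."""
    groups = np.unique(z)
    # DP: max deviation of the per-group positive rate from the overall positive rate.
    dp = max(abs(y_pred[z == g].mean() - y_pred.mean()) for g in groups)
    # ED: max deviation of P(y_pred | z, y) from P(y_pred | y) over groups, labels, predictions.
    ed = 0.0
    for y in np.unique(y_true):
        for yp in (0, 1):
            overall = (y_pred[y_true == y] == yp).mean()
            for g in groups:
                mask = (y_true == y) & (z == g)
                if mask.any():
                    ed = max(ed, abs((y_pred[mask] == yp).mean() - overall))
    return dp, ed

y_pred = np.array([1, 0, 1, 1, 0, 1])
y_true = np.array([1, 0, 1, 0, 0, 1])
z      = np.array([0, 0, 0, 1, 1, 1])   # sensitive attribute (group id)
print(fairness_gaps(y_pred, y_true, z))
```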
**Experiments on UTKFace.** We employ UTKFace [84], composed of 23,708 images with age, gender, and race labels. We use the race annotation as the sensitive attribute (group label) and gender as the class label. For SYNAuG, we augment the data to mitigate the class imbalance across the sensitive attributes; the female-to-male ratio becomes equal for each sensitive attribute. We evaluate the accuracy and fairness metrics of the model at the last epoch. First, we validate the effectiveness of SYNAuG in Table 4(a). The result shows that SYNAuG outperforms ERM in accuracy and fairness metrics on both ResNet18 and ResNet50.
**Ablation with other algorithms.** In Table 4(b), we evaluate the performance of two algorithms, Group-DRO [58] and Re-Sampling (RS). Note that we do not apply Mixup and fine-tuning in this experiment. Group-DRO and RS improve the fairness metrics of ERM at the same time. SYNAuG without Group-DRO outperforms Group-DRO in accuracy and in two fairness metrics, ED and EO. Developing a fairness algorithm with synthetic data might be a promising direction toward a fair model.
**Augmentation ablation.** In Table 4(c), we compare the effect of the data augmentations Mixup [80] and CutMix [78]. In this ablation study, we do not apply fine-tuning for a clear comparison. Both augmentations improve the accuracy of ERM; Mixup also helps on the fairness metrics. Compared to ERM with Mixup, SYNAuG shows higher accuracy and better fairness metrics. SYNAuG with Mixup outperforms ERM with Mixup by an even larger margin in accuracy and fairness metrics.
**No prior of sensitive attribute.** Labeling sensitive attributes might be expensive. In this ablation study, we augment the synthetic data to mitigate the class imbalance regardless of sensitive attributes. We denote this setting as SYNAuG\({}^{*}\). As shown in Table 4(d), SYNAuG\({}^{*}\) shows better fairness metrics compared to ERM. However, exploiting the knowledge of sensitive attributes is more effective.
**Summary.** Class imbalance in fairness can easily cause an unfair model. This motivates us to balance the class imbalance using synthetic data before tackling fairness directly. We observe that the synthetic data improves both model accuracy and fairness simultaneously. The experimental results also demonstrate that SYNAuG is compati
\begin{table}
\end{table}
Table 4: **Fairness performance.** (a) accuracy and model fairness results of our SYNAuG, (b) compatibility with other fairness algorithms, Group-DRO and Re-Sampling (RS), (c) ablation study with data augmentation, Mixup and CutMix, and (d) ablation study using the prior about the sensitive attribute. **Bold** means the highest accuracy and the best fairness performance in a table. Higher is better in accuracy, and lower is better in fairness metrics.
Figure 6: **Influence of the class and group imbalance on classifier during training. The 2D data are sampled from the normal distributions with four different means and the same covariance. We simulate 4 different experiments with the latent group imbalance (sensitive attributes) by adjusting the number of data in each group. The total number of samples is the same. We train classifiers for the classes on different imbalance settings and visualize the learned classifiers (bold black lines). The fairer the classifiers, the more vertically aligned. The classifier trained on the class imbalance is more unfair than the one on the group imbalance.**
ble with other training algorithms, data augmentation, and network architecture.
### Model Robustness to Spurious Correlation
A class may include dominant patterns, _i.e_., spurious correlations. For example, waterbirds are usually on water rather than land. This leads DNNs to rely heavily on these spurious features rather than reasoning; thereby, DNNs classify water images as waterbirds regardless of the existence of waterbirds. This spurious correlation is also caused by data imbalance because fewer waterbirds are located on land. While spurious correlations occur naturally, we can mitigate their impact by resolving data imbalance, similarly to model fairness. We examine whether SYNAuG can mitigate the data imbalance problem of spurious correlations.
Experiments. We use the Waterbirds dataset [58], which is a synthetic dataset created by combining images of birds from the CUB dataset [74] with backgrounds. The birds are grouped into two categories: waterbirds, which include seabirds and waterfowl, and landbirds. Land and water backgrounds are spurious features. Let \(G^{\text{class}}_{\text{background}}\) be the class with the background, _e.g_., \(G^{\text{landbird}}_{\text{water}}\) is the landbird with water background. In the Waterbirds dataset, \(G^{\text{landbird}}_{\text{land}}\) has more samples than \(G^{\text{landbird}}_{\text{water}}\), and \(G^{\text{waterbird}}_{\text{water}}\) has more samples than \(G^{\text{waterbird}}_{\text{land}}\). We generate samples to match the numbers of samples such that \(|G^{\text{landbird}}_{\text{land}}|=|G^{\text{landbird}}_{\text{water}}|\) and \(|G^{\text{waterbird}}_{\text{water}}|=|G^{\text{waterbird}}_{\text{land}}|\). We report the result over 5 independent runs using the code from DFR [32]. We reproduce the BaseModel and DFR and report the performance at the last epoch3.
Footnote 3: [https://github.com/PolinaKirichenko/deep_feature_reweighting](https://github.com/PolinaKirichenko/deep_feature_reweighting)
In Table 5, SYNAuG generates samples that are not correlated with the spurious features, which improves the performance of the BaseModel on both worst-group and mean accuracies. When applying DFR, the synthetic data consistently increase the worst-group and mean accuracy. We also observe that fine-tuning is more effective than re-training, which is consistent with Table 3. The overall results demonstrate that synthetic data from the generative model can be exploited to mitigate the spurious correlation.
## 5 Conclusion and Discussion
We propose SYNAuG, which deals with long-tailed recognition, model fairness, and robustness to spurious correlations as data imbalance problems. The development process of a machine learning model can be roughly divided into data curation, model training, and model management. Since the data comes first in this process, a flaw in the dataset affects the subsequent phases; thus, it is crucial. Our study suggests the importance of controlling imbalance from the data perspective. We believe that taking control of the data is a promising research direction to resolve this early bottleneck in machine learning model development. While we focus on the data perspective, improving the model from multiple views is necessary for effective solutions to data imbalance. We conclude our work with the following discussions.
Other perspectives. We have suggested the usage of synthetic data from pre-trained generative models as a new data-perspective baseline for the data imbalance problem, but there may be other perspectives. We observed a gradual performance decline when substituting real data with synthetic data, suggesting the potential need for domain adaptation. There could be future research directions, _e.g_., more sophisticated data augmentation, automated data curation, transfer learning, the usage of the differentiability of the generative models, and comprehending taxonomies across classes. While we emphasize that our work suggests a promising way to redraw the direction to overcome the data imbalance problems from the data perspective, more interesting future work will come with integrating multiple levels.
Limitations of using generative models. The generation of synthetic data demands additional time and computational resources. While the curation of a real dataset requires enormous time, human, and financial resources, the process of generating synthetic data also becomes increasingly challenging as the volume of data needed increases. Also, the quality of the synthesized data varies depending on factors such as the prompt, guidance level, and step value of the diffusion model, impacting the overall performance of the model. However, since generative models have been continuously developed in terms of sample quality, time efficiency, and controllability, we believe that exploiting generative models as a data source is a promising research direction as their performance keeps improving.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**Waterbirds**} \\ \cline{2-3} & **Worst** & **Mean** \\ \hline ERM & 72.6 & 97.3 \\ JTTIOLr21[39] & 86.7 & 93.3 \\ Group-DRO |
2301.02450 | New insights on the near-infrared veiling of young stars using
CFHT/SPIRou data | Veiling is ubiquitous at different wavelength ranges in accreting stars.
However, the origin of the veiling in the IR domain is not well understood. The
accretion spot alone is not enough to explain the shallow photospheric IR lines
in accreting systems, suggesting that another source is contributing to the
veiling in the NIR. The inner disk is often quoted as the additional emitting
source meant to explain the IR veiling. In this work, we aim to measure and
discuss the NIR veiling to understand its origins and variability timescale,
using a sample of 14 accreting stars observed with the CFHT/SPIRou
spectrograph, within the framework of the SPIRou Legacy Survey. We compared the
veiling measurements with accretion and inner disk diagnostics. The measured
veiling grows from the Y to the K band for most of the targets in our sample.
The IR veiling agrees with NIR emission excess obtained using photometric data.
However, we also find a linear correlation between the veiling and the
accretion properties of the system, showing that accretion contributes to the
inner disk heating and, consequently, to the inner disk emission excess. We
also show a connection between the NIR veiling and the system's inclination
with respect to our line of sight. This is probably due to the reduction of the
visible part of the inner disk edge, where the NIR emission excess is expected
to arise, as the inclination of the system increases. The NIR veiling appears
variable on a timescale of a day, showing the night-by-night dynamics of the
optical veiling variability. In the long term, the mean NIR veiling seems to be
stable for most of the targets on timescales of a month to a few years.
However, during occasional episodes of high accretion, which affect the
system's dynamic, the veiling also seems to be much more prominent at such
times, as we found in the case of the target RU Lup. | A. P. Sousa, J. Bouvier, S. H. P. Alencar, J. -F. Donati, C. Dougados, E. Alecian, A. Carmona, L. Rebull, N. Cook, E. Artigau, P. Fouqué, R. Doyon, the SLS consortium | 2023-01-06T10:14:33Z | http://arxiv.org/abs/2301.02450v1 | # New insights on the near-infrared veiling of young stars using CFHT/SPIRou data+
###### Abstract
Context:Veiling is ubiquitous at different wavelength ranges in classical T Tauri stars. However, the origin of the veiling in the infrared (IR) domain is not well understood at present. The accretion spot alone is not enough to explain the shallow photospheric IR lines in accreting systems, suggesting that another source is contributing to the veiling in the near-infrared (NIR). The inner disk is often quoted as the additional emitting source meant to explain the IR veiling.
Aims:In this work, we aim to measure and discuss the NIR veiling to understand its origins and variability timescale.
Methods:We used a sample of 14 accreting stars observed with the CFHT/SPIRou spectrograph, within the framework of the SPIRou Legacy Survey, to measure the NIR veiling along the \(YJHK\) bands. We compared the veiling measurements with accretion and inner disk diagnostics. We also analyzed circumstellar emission lines and photometric observations from the literature.
Results:The measured veiling grows from the \(Y\) to the \(K\) band for most of the targets in our sample. The IR veiling agrees with NIR emission excess obtained using photometric data. However, we also find a linear correlation between the veiling and the accretion properties of the system, showing that accretion contributes to the inner disk heating and, consequently, to the inner disk emission excess. We also show a connection between the NIR veiling and the system's inclination with respect to our line of sight. This is probably due to the reduction of the visible part of the inner disk edge, where the NIR emission excess is expected to arise, as the inclination of the system increases. Our search for periods on the veiling variability showed that the IR veiling is not clearly periodic in the typical timescale of stellar rotation - which, again, is broadly consistent with the idea that the veiling comes from the inner disk region. The NIR veiling appears variable on a timescale of a day, showing the night-by-night dynamics of the optical veiling variability. In the long term, the mean NIR veiling seems to be stable for most of the targets on timescales of a month to a few years. However, during occasional episodes of high accretion in classical T Tauri stars, which affect the system's dynamic, the veiling also seems to be much more prominent at such times, as we found in the case of the target RU Lup.
Conclusions:We provide further evidence that for most targets in our sample, the veiling that mainly occurs in the \(JHK\) bands arises from dust in the inner disk.
## 1 Introduction
The photospheric lines of young low-mass accreting systems, commonly referred to as Classical T Tauri stars (CTTS), are shallower and present smaller equivalent widths than those of non-accreting stars with a similar spectral type. This phenomenon is known as the veiling of the photospheric lines (e.g., Hartigan et al. 1991; Valenti et al. 1993; Folha & Emerson 1999; Fischer et al. 2011). The presence of veiling suggests an additional emitting source, beyond the stellar photosphere contributing to the spectra of the targets, that is responsible for filling in the photospheric lines (e.g., Hartmann & Kenyon 1990; Gullbring et al. 1998; Calvet & Gullbring 1998; Johns-Krull & Valenti 2001).
The veiling in the optical and in the IR wavelengths has been studied using different approaches with the aim of understanding its origins and variability (e.g., Basri & Batalha 1990; Edwards et al. 2006; Fischer et al. 2011; Antoniucci et al. 2017; Gullbring et al. 2017; Ingleby et al. 2013; McClure et al. 2013; Kidder et al. 2021). The veiling variability along the stellar spectra depends on wavelength (e.g., Fischer et al. 2011; Faesi et al. 2012; McClure et al. 2013; Rei et al. 2018). Optical veiling is often associated with the accretion process, as the accretion shock is thought to be at the origin of an additional continuum emitting source added to the stellar spectrum (e.g., Gullbring et al. 1998). Usually, the accretion spot presents a maximum emission contribution around the ultraviolet domain, which decreases as the wavelength increases (e.g., Calvet & Gullbring 1998). Nevertheless, we do not expect a significant contribution from the accretion
spot continuum emission at IR wavelengths; therefore, the accretion spot alone cannot explain the veiling in the IR domain.
In the IR region, the veiling increases with wavelength and in some cases, it becomes greater than the veiling in the optical domain (e.g., McClure et al. 2013). The central star illuminates the inner disk and this region absorbs photons from the star, the accretion spot, and even the accretion funnel, then re-emitting them in the infrared (IR) as the system rotates (e.g., Chiang & Goldreich 1997, 1999). Therefore, the inner disk is suggested as the origin of the additional continuum emission that is essential to explaining the near-infrared (NIR) veiling, although the measured veiling is often too great to be explained as coming merely from the disk emission, based on model predictions (e.g., Folha & Emerson 1999; Johns-Krull & Valenti 2001). Many authors have used different techniques to connect the observed veiling with inner disk emission, such as measuring the temperature of the region where the veiling comes from and using a black body fit to the veiling. For most of these systems, they found temperatures compatible with the dust temperature in the inner disk (e.g., Fischer et al. 2011; Antoniucci et al. 2017; Alcala et al. 2021). For a few targets, the blackbody temperature measured using veiling is too high for dust to survive in the inner disk; this would indicate that the veiling should arise from the gas in the inner disk inside the star-disk co-rotation radius (e.g., Antoniucci et al. 2017; Alcala et al. 2021). However, McClure et al. (2013) found no evidence of hot gas inside the inner disk, which would be responsible for the NIR veiling. Instead, they explained the IR veiling as the combined emission from the accretion shock on the stellar surface and dust around the sublimation rim of the inner disk.
The veiling around 1\(\mu\)m and the veiling in the \(K\) band and beyond can also have different origins. While the accretion spot emission contribution is not very substantial around 1\(\mu\)m, we do not expect a significant contribution from the inner disk either. The origin of the veiling in this spectral domain is thus poorly understood. However, significant veiling has been measured around 1\(\mu\)m, primarily for high accretion rate systems (e.g., Edwards et al. 2006; Fischer et al. 2011; Ingleby et al. 2013). In the literature, only a few plausible explanations are given for this veiling, such as a contribution from emission lines that fill in the photospheric lines, or an origin in the accretion shock (e.g., Sicilia-Aguilar et al. 2015; Dodin & Lamzin 2013). Even if we do not detect these extra emission lines directly, they can contribute to making the photospheric lines of the stellar spectra shallower, which increases the measured veiling.
We cannot exclude other possible explanations for the IR veiling, such as an envelope around the star, which can also add emission to the photospheric continuum at a level compatible with the NIR veiling (e.g., Calvet et al. 1997). However, CTTSs are usually Class II stars, and we would not expect a significant contribution from a dusty envelope. Furthermore, the veiling in the \(K\) band is higher than dust envelope emission can explain (Folha & Emerson 1999).
In this work, we study the veiling and its variability in the NIR, using a sample of young accreting stars observed with the Canada-France-Hawaii Telescope SPectropolarimetre InfraROUge (CFHT/SPIRou). Our sample comprises stars with different properties, such as the mass accretion rate, spectral type, and inclination with respect to our line of sight. We computed the veiling for the \(YJHK\) bands and compared the results with accretion and inner disk diagnostics.
We organized the paper as follows. In Sect. 2, we present the sample of stars that we used in this work, and we describe the data used. In Sect. 3 we show the procedures to measure the veiling. We describe the results obtained from the veiling measurements in Sect. 4. In Sect. 5, we discuss the possible origin of the veiling, and we compare our results with those of previous works. In Sect. 6, we present our conclusions.
## 2 Observations and targets selection
The sample of stars used in this work is composed of well-known young stars that are part of the SPIRou Legacy Survey-SLS science program: "Magnetic PMS star/planet survey" of some 50 class I, II, and III stars. The SPectropolarimetre InfraROUge (SPIRou) is a high-resolution velocimeter and polarimeter spectrograph (\(R\sim 75\,000\)) covering the NIR wavelength range \(\sim 0.98-2.35\,\mu\)m, corresponding to the spectral domain of the \(YJHK\) bands (Donati et al. 2018). The main science goals of CFHT/SPIRou Legacy Survey are the search for and characterization of planets around low-mass stars, and investigating the impact of the stellar magnetic field on planet and star formation in young systems (Donati et al. 2020b).
We aim to investigate the veiling of accreting young stars. Therefore, we selected, among the sample of stars observed by the SLS, 13 stars reported as accreting systems in the literature and for which we had a reasonable number of observations in time in comparison to the stellar rotation period. Most of these targets are classified as Class II and CTTS systems, and only V347 Aur is a Class I target. In addition, we added the T Tauri star J1604 (RX J1604.3-2130A) to the sample, which is not part of the SLS program, however, its CFHT/SPIRou observations are available.
In Table 1, we show the list of young stars analyzed in this work and the number of observations that we have for each target and each observational period. We also list three non-accreting T Tauri stars that cover our sample's spectral types and are also slow rotators (\(v\sin i<15\) km/s). We used these stars as templates to compute veiling and the residual profiles, as described in the following sections.
Each CFHT/SPIRou observation consists of four sub-exposures to measure Stokes V, taken at different orientations of the polarimeter and used to compute the non-polarized and the circularly polarized profiles. As the focus of this work is to analyze only the non-polarized component of the spectrum, we averaged the four sub-exposures to increase the signal-to-noise ratio (S/N) of the spectra obtained each night. For the non-accreting systems, we averaged all the observations, obtaining a mean spectrum for each star that was used as a template. The CFHT/SPIRou data were reduced, and the telluric-corrected spectra were obtained, using the data reduction system APERO, versions 0.6.131, 0.6.132, and 0.7.232 (Cook et al. 2022). The spectra were corrected for the barycentric velocity and locally normalized to the continuum level, using a polynomial function to fit the continuum.
## 3 Procedures to measure the veiling
We computed the IR veiling of the targets following the method described by Hartigan et al. (1989), where we compare the spectrum of the target with the spectrum of a non-accreting T Tauri star of a similar spectral type. The Zeeman broadening of the photospheric lines can affect the veiling measurements. Therefore, we used weak-line T Tauri stars (WTTSs) as templates, since they should present a magnetic activity similar to that of the CTTSs (e.g., Johns-Krull et al. 2000). The WTTSs also present physical properties comparable
to those of the CTTSs, such as chromospheric activity and surface gravity, which makes WTTSs suitable for measuring the veiling in accreting systems. We list the templates applied to each star in Table 2.
Before comparing the target and template spectra, we shifted and broadened the template spectra to match the target spectra, using the radial velocities1 and the literature \(v\sin i\) values of the targets. Due to the wavelength dependence of the IR veiling (e.g., Alcala et al. 2021), we measured the veiling in four different spectral regions, 10710 Å-10810 Å, 12840 Å-12910 Å, 16120 Å-16220 Å, and 22600 Å-22690 Å, which we call \(r_{Y}\), \(r_{J}\), \(r_{H}\), and \(r_{K}\), representing the \(YJHK\) veilings, respectively. The stars RU Lup, DO Tau, and DG Tau present many emission lines along their spectra, probably originating from the accretion shock, a characteristic of high-mass accretion rate systems, which prevented us from using the same \(Y\) and \(J\) spectral regions for the veiling calculations. For these targets, we instead used the spectral regions 10760 Å-10785 Å and 12400 Å-12550 Å to measure the \(r_{Y}\) and \(r_{J}\) veilings, respectively. On some nights, the 10710 Å-10810 Å region of the TW Hya and V2247 Oph spectra presented features that made veiling measurements impossible, and we had to use the 10864 Å-10920 Å region to measure the \(r_{Y}\) veiling instead.
Footnote 1: For most targets, the radial velocities were computed using the CCF profiles generated by the SPIRou pipeline, implementing a numerical mask corresponding to the target spectral type. For TW Hya, we computed the radial velocity cross-correlating the target spectra with a WTTS with a similar spectral type.
We determined the best veiling value for each target spectrum through a \(\chi^{2}\) minimization routine. The veiling was defined as the ratio of the continuum excess flux to the stellar photospheric flux (\(r_{\lambda}=F_{\lambda,\rm excess}/F_{\lambda,\rm star}\)). Thus, zero veiling means the system has no additional excess above the stellar photosphere. In the spectral range of our sample (K2 to M3), the choice of template does not much affect the computed veiling, as shown by Folha & Emerson (1999), since the difference between templates of different spectral types is small and produces an almost null relative veiling, within the uncertainties. We also measured the systematic veiling between our templates, which corresponds to the average veiling obtained when comparing each template with the others. We found it to be \(r_{Y}=0.098\pm 0.068\), \(r_{J}=0.02\pm 0.02\), \(r_{H}=0.05\pm 0.02\), and \(r_{K}=0.04\pm 0.02\), which should only affect the veiling determination of stars with very small or no veiling.
In Fig. 1, we present the results for the four photospheric regions used to measure the veiling of CI Tau from two nights, representing spectra with lower and higher veiling values. We show the target spectrum, and the unveiled and veiled template spectra. We also computed the residual profile, which is obtained by subtracting the veiled template from the target spectrum. Most of the residual profiles show almost no features at the location of photospheric lines, indicating that the veiling measurements were correctly determined for most nights and that the photospheric lines were correctly removed. However, some of the spectra were quite noisy, and this affected the veiling determinations. The veiling error obtained for each night, quoted in the plot, comes from the \(\chi^{2}\) minimization process, where we compare the target with the template spectra. Besides the noise of the target and template spectra2, which we take into account when computing the veiling, there are other sources of error, such as those associated with the normalization process, that we did not consider in estimating the veiling error.
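To make the \(\chi^{2}\) procedure above concrete, the sketch below estimates a single veiling value for one spectral window by a grid search; the relation used to veil a continuum-normalized template, \(F_{\rm veiled}=(F_{\rm template}+r)/(1+r)\), the grid limits, and all variable names are our own illustrative assumptions rather than the exact implementation used in this work.

```python
import numpy as np

def veil_template(flux_template, r):
    """Add a flat continuum excess r = F_excess/F_star to a
    continuum-normalized template spectrum."""
    return (flux_template + r) / (1.0 + r)

def fit_veiling(flux_target, flux_err, flux_template, r_grid=None):
    """Return the veiling that minimizes chi^2 between the target and the
    veiled template (simple grid search), plus the chi^2 curve."""
    if r_grid is None:
        r_grid = np.linspace(0.0, 5.0, 501)
    chi2 = np.array([
        np.sum(((flux_target - veil_template(flux_template, r)) / flux_err) ** 2)
        for r in r_grid
    ])
    return r_grid[np.argmin(chi2)], r_grid, chi2

# Toy usage: a fake normalized template and a target veiled by r = 0.8
rng = np.random.default_rng(1)
pix = np.arange(200)
template = 1.0 - 0.3 * np.exp(-0.5 * ((pix - 100.0) / 5.0) ** 2)  # one absorption line
target = veil_template(template, 0.8) + rng.normal(0.0, 0.01, pix.size)
r_best, _, _ = fit_veiling(target, 0.01, template)
print(f"recovered veiling r = {r_best:.2f}")  # should be close to 0.8
```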
\begin{table}
\begin{tabular}{l l l l l l l l l l l l l l l} \hline \hline Star & SpT & \(\mathrm{v}\sin i\) & Av & \(\mathrm{i}^{\mathrm{rd}}\) & 2018 & 2019a & 2019b & 2020a & 2020b & 2021a & 2021b & 2022a & References3 \\ & & (km/s) & (mag) & (\(\arcdeg\)) & \multicolumn{1}{c}{Accreting systems} & & & & & & & \\ \hline CI Tau & K4 & 9.5\(\pm\)0.5 & 0.65 & 55\(\mathrm{{}^{+13}_{-5}}\) & 2 & 6 & 26 & 5 & 39 & - & - & - & (1),(2),(2),(2) \\ DoAr 44 & K2-K3 & 17.0\(\pm\)1.1 & 2.0\(\pm\)0.2 & 30\(\mathrm{{}^{+8}_{-5}}\) & - & 8 & - & - & - & - & - & (3),(3),(3),(3) \\ GO Lup & K7 & 5\(\pm\)1 & 0.7 & -30 & - & 8 & - & 18 & 6 & - & - & - & - & (4),(5),(4),(5) \\ TW Hya & K6 & 6.0\(\pm\)1.2 & 0.0 & 18\(\pm\)10 & - & 12 & - & 14 & - & 27 & - & - & (1),(1),(6),(7) \\ V2129 Oph & K5 & 14.5\(\pm\)0.3 & 0.6 & 60 & 9 & - & - & 17 & 8 & - & - & - & (1),(1),(8),(9) \\ BP Tau & K5 & 9.0\(\pm\)0.5 & 0.45 & -45 & - & - & 21 & - & - & - & 34 & - & (10,(11),6),(11) \\ V347 Aur & M2-M3 & 11.7\({}^{+0.76}_{-0.7}\) & 3.4 & 40 & - & - & 18 & - & 13 & 12 & 22 & - & (12,(13),(12),(14)) \\ DG Tau & K6 & 24.7\({}^{+0.7}_{-0.7}\) & 1.60\(\pm\)0.15 & 38\(\pm\)2 & - & - & - & 2 & 29 & - & - & - & (10,(10),(6),(15)) \\ RU Lup & K7 & 8.5\(\pm\)4.8 & 0.0 & 24 & - & - & - & 9 & - & 17 & 13 & 1 & (16,(17),(14),(18)) \\ V2247 Oph & M0 & 20.5\(\pm\)0.5 & 0.98\(\pm\)0.02 & 45\(\pm\)10 & - & - & - & 9 & 7 & - & - & - & (19),(20),(19,(20)) \\ DO Tau & M0 & 14.3\(\pm\)0.5 & 0.75 & 37.0\(\pm\)3.7 & - & - & - & 3 & 8 & - & - & - & (6),(21),(6),(15) \\ J1604\({}^{\prime}\) & K3 & 17.3\(\pm\)0.4 & 1.0 & -561 & - & - & - & 12 & - & - & - & - & (22),(22),(24),(23) \\ PDS 70 & K7 & 16.0\(\pm\)0.5 & 0.01\(\pm\)0.07 & 5048 & - & - & - & 4 & - & - & - & 6 & (19),(25),(19),(25) \\ GM Aur & K4-K5 & 14.9\(\pm\)0.3 & 0.3\(\pm\)0.3 & \(\geq\)63 & - & - & - & - & 2 & - & 34 & - & (26),(26),(26),(26) \\ \hline V819 Tau & K4 & 9.5 & & & - & - & 1 & - & - & - & - & - & (27),(27) \\ TWA 25 & M0.5 & 12.9\(\pm\)1.2 & & & - & - & 25 & 14 & - & - & - & - & (6),(1) \\ TWA 9A & K6 & 7\(\pm\)3 & & & - & - & 1 & - & - & - & - & - & (6),(1) \\ \hline \end{tabular} 1
\end{table}
Table 1: Sample of stars and number of observations per observational period
## 4 Results
We present in Fig. 2 the average NIR veiling over the observation nights, measured in all four regions, referred to as \(r_{Y}\), \(r_{J}\), \(r_{H}\), and \(r_{K}\). For readability, we split the targets into two groups: systems with \(r_{K}\) higher and lower than 1. We also show the averaged veiling obtained for each target in Table 2. The veiling values increase from the \(Y\) to the \(K\) band for most of the targets, similar to the results found in previous works (e.g., McClure et al. 2013; Sousa et al. 2021; Alcala et al. 2021). Despite this general result, the average veiling of some individual targets remains the same from the \(Y\) to the \(K\) band. One example is V2247 Oph; however, this target does not present significant veiling in any band, probably because it is a more evolved system, where the accretion and the dust in the inner disk are too faint to be detected. Furthermore, for this M-type star, the low contrast between the stellar photosphere and the inner disk emission makes detecting an IR excess difficult (e.g., Ercolano et al. 2009). Another example is CI Tau, where the average veiling decreases from the \(Y\) to the \(H\) band, followed by an increase to the \(K\) band. In that case, the veiling variability in each band is high, and the difference between the \(Y\) and \(H\) average veilings is smaller than the standard deviation.
Due to the small number of photospheric lines and the lower S/N values in the \(Y\) region of the spectra, the veiling in this region was not as well determined, compared to the other bands. For some nights or targets, the veiling is even slightly negative, which has no physical meaning - in such cases, we can assume that the system in this region has zero veiling.
Figure 1: Examples of the four spectral regions used to measure the veiling of CI Tau. In each panel, we show two different nights, representing a small and a high veiling estimated for this target. We show the CI Tau spectrum in black and the residual profile (in red) obtained after subtracting the veiled template. The V819 Tau template spectra are also displayed before (orange) and after (blue) applying the veiling correction. The photospheric lines were removed in the residual profiles, showing that the veiling was accurately determined for most nights.
Figure 2: Average NIR veiling (_left_) and the veiling variability diagnostic (_right_) measured in different wavelength regions. The top and bottom panels show the targets with \(r_{K}\) higher and lower than 1, respectively. The veiling increases from the \(Y\) to the \(K\) band for most of the targets. The error bars in the left panel represent the standard deviation of the average veiling over all the observation nights. In this figure and the following figures, the color and symbol codes in the panels identify each target.
We also checked the variability of the veiling measured in each band, computing for each target the RMS of the \(YJHK\) veilings, using all the observed nights. To characterize the variability over the noise, we subtracted the average error of the veiling (\(\sigma\)) from the RMS. Then, we computed \(S=\sqrt{rms^{2}-\sigma^{2}}\) as a veiling variability diagnostic, following Cody et al. (2014). We show the results in Figs. 2 and 3 as a function of the band and veiling measured, respectively. Most of the targets present variability above the error level. The veiling in the \(H\) band appears to be less variable than the other bands, while the \(K\) band presents the highest variability, driven by the systems with the highest veilings, which are also the most variable ones (see Fig. 3).
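As a minimal illustration of this variability diagnostic, the snippet below computes \(S\) from a series of nightly veiling measurements and their uncertainties; the arrays contain made-up numbers, not values from this work, and taking the RMS as the standard deviation of the nightly values is our assumption.

```python
import numpy as np

def variability_diagnostic(r_nightly, r_err):
    """Noise-corrected variability S = sqrt(rms^2 - sigma^2), where rms is the
    scatter of the nightly veiling values and sigma the mean measurement error."""
    rms = np.std(np.asarray(r_nightly))
    sigma = np.mean(np.asarray(r_err))
    return np.sqrt(max(rms**2 - sigma**2, 0.0))

# Made-up nightly r_K values and errors for one target
r_k = [1.2, 1.5, 1.1, 1.8, 1.4, 1.3]
err = [0.10, 0.12, 0.09, 0.11, 0.10, 0.10]
print(f"S_K = {variability_diagnostic(r_k, err):.2f}")
```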
The average veiling measured in RU Lup is relatively high compared to the other targets, mainly in the \(K\) band. These high average veilings are primarily due to the veiling measured in the 2021a observation period (see Sect. 4.3). In that period, RU Lup also presented an increase in the photometric AAVSO \(V\)-band brightness (about 0.3 mag brighter than in the next observational period, where the veiling starts to decrease). Therefore, the average veiling of RU Lup is probably overestimated and does not represent a quiescent value.
### Veiling compared to inner disk diagnostics
The emission from the inner disk is claimed to contribute to the veiling. In such a case, we would expect a correlation between the veiling and other inner disk emission diagnostics from the photometric data.
The slope of the spectral energy distribution (SED) is often used as a disk emission diagnostic (e.g., Lada et al. 2006; Muzerolle et al. 2010; Teixeira et al. 2012). Using photometric data from different surveys, such as SDSS (Gunn et al. 1998), Gaia DR2 (Gaia Collaboration et al. 2018), 2MASS, WISE (Wright et al. 2010), Spitzer (Fazio et al. 2004; Rieke et al. 2004), Herschel (PACS), Akari (IRC and FIS), we constructed the SED of the targets and measured the slope of the SED between 2 and 8 \(\mu\)m (\(\alpha_{2-8}\)), which is the spectral range that indicates significant inner disk emission. The \(\alpha_{2-8}\) slope is smaller (more negative) for systems with less or no inner disk emission, and is higher (and even positive) for systems that present inner disk emission excess (e.g., Lada et al. 2006; Muzerolle et al. 2010). We list the \(\alpha_{2-8}\) slope computed for the targets in Table 2.
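To make the \(\alpha_{2-8}\) definition concrete, the sketch below fits the slope of \(\log(\lambda F_{\lambda})\) versus \(\log\lambda\) restricted to the 2-8 \(\mu\)m range; the photometric points listed are placeholders, not measurements from this work, and the slope convention \(\alpha\equiv d\log(\lambda F_{\lambda})/d\log\lambda\) is our assumption.

```python
import numpy as np

def sed_slope(wavelength_um, lam_flam, lam_min=2.0, lam_max=8.0):
    """Least-squares slope of log(lambda*F_lambda) vs log(lambda),
    using only the points between lam_min and lam_max (in micron)."""
    w = np.asarray(wavelength_um, dtype=float)
    f = np.asarray(lam_flam, dtype=float)
    sel = (w >= lam_min) & (w <= lam_max)
    slope, _ = np.polyfit(np.log10(w[sel]), np.log10(f[sel]), 1)
    return slope

# Placeholder photometry: wavelength (micron) and lambda*F_lambda (arbitrary units)
wave = np.array([1.25, 2.2, 3.4, 4.6, 5.8, 8.0, 12.0])
lfl = np.array([2.1, 1.5, 1.1, 0.9, 0.75, 0.60, 0.50])
print(f"alpha_2-8 = {sed_slope(wave, lfl):.2f}")  # negative: declining SED
```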
We show in Fig. 4 the NIR veiling in the four spectral regions (\(YJHK\)), averaged over all the observation nights, as a function of \(\alpha_{2-8}\). The veiling presents a clear linear correlation with the SED slope, mainly from the \(J\) to the \(K\) band. The \(Y\) band veiling values are still correlated with the SED slope, but less so than in the other bands, probably because the contribution from the inner disk emission becomes more important at longer wavelengths. The \(W_{1}-W_{2}\) ([3.4]-[4.6]) color index from the WISE telescope (Wright et al. 2010), not shown here, is also correlated with the NIR veiling, presenting results similar to those of \(\alpha_{2-8}\).
Figure 4: Average NIR veiling (\(r_{\lambda}\)) as a function of the spectral energy distribution slope between 2 and 8 \(\mu\)m (\(\alpha_{2-8}\)), which we used as the NIR disk emission diagnostic. The solid line is the linear fit to the data, and the corresponding slope is written in each panel. The NIR veiling seems to scale with the SED slope.
Figure 5: Comparison between the color excess computed using the average NIR veiling and the 2MASS photometry. _left_: \((H-K_{\rm s})_{excess}\), _right_: \((J-K_{\rm s})_{excess}\). See the text for the color excess definition. The dashed line represents a slope equal to 1. The NIR color excesses computed using the average veiling and the 2MASS magnitudes agree for most targets.
Figure 3: Veiling variability diagnostic as a function of the average veiling. The rms value refers to the root-mean-square of the veiling variability and \(\sigma\) is the average error on the veiling measurements. Each panel shows the veiling \(r_{\lambda}\) measured in a different band (\(YJHK\)). Systems with higher veiling also present higher veiling variability.
We could expect the NIR excess computed using the veiling to scale with the emission excess derived from NIR photometric data, which is often used as an inner disk emission indicator (Hillenbrand et al., 1998; Rebull, 2001; Rebull et al., 2002). We computed the \((H-K_{\rm s})_{excess}\) using the observed \(H-K_{\rm s}\) color from 2MASS, corrected for extinction using the \(A_{V}\) quoted in Table 1 and the SVO Filter Profile Service (Rodrigo et al., 2012; Rodrigo and Solano, 2020) \(A_{\lambda}/A_{V}\) relations to obtain the \(A_{H}\) and \(A_{K}\) extinctions. Then, we compared this dereddened color to the intrinsic color \((H-K_{\rm s})_{o}\) expected for an object with the same spectral type (Pecaut and Mamajek, 2013). The color excess is \((H-K_{\rm s})_{excess}=(H-K_{\rm s})_{obs,dered}-(H-K_{\rm s})_{o}\). We can also relate the color excess to the excess flux, leading to \((H-K_{\rm s})_{excess}=-2.5\log\left[(1+r_{H})/(1+r_{K})\right]\), where we used the veiling definition \(r_{\lambda}=F_{\lambda,\rm excess}/F_{\lambda,\rm star}\). We can thus directly compare the color excess computed using the veiling with that obtained from the photometric measurements. We compare the two sides of this equation in Fig. 5, which shows a linear tendency and similar values, considering the measurement errors. Performing similar procedures, we computed the \((J-K_{\rm s})_{excess}\), and the results are also presented in Fig. 5. The color excess computed using the 2MASS photometry depends on the \(A_{V}\) of the systems, and the targets V347 Aur and CI Tau present discrepant \(A_{V}\) values in the literature. CI Tau presents \(A_{V}=0.65\) mag (Donati et al., 2020) and \(A_{V}=1.90\) mag (Herczeg and Hillenbrand, 2014). While the \((J-K_{\rm s})_{excess}\) computed using the larger \(A_{V}\) agrees better with the color excess calculated using the veiling, the \((H-K_{\rm s})_{excess}\) and the mass accretion rate computed in the next section seem to be in better agreement with \(A_{V}=0.65\) mag; thus, we used this extinction value in the paper. The \(A_{V}\) range of V347 Aur is even larger, with values from 2.2 to 7.5 mag (e.g., Dahm and Hillenbrand, 2020). The \(A_{V}=3.4\) mag computed using NIR colors by Connelley and Greene (2010) seems to better represent the value obtained from the veiling calculations.
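As a rough numerical illustration of this comparison (not the exact pipeline used here), the snippet below evaluates the veiling-based excess \(-2.5\log[(1+r_{H})/(1+r_{K})]\) and the photometric excess from a dereddened 2MASS color; the magnitudes, extinction, intrinsic color, and \(A_{\lambda}/A_{V}\) ratios are placeholder values, not those adopted in this work.

```python
import numpy as np

def excess_from_veiling(r_h, r_k):
    """(H - Ks) color excess implied by the H- and K-band veilings."""
    return -2.5 * np.log10((1.0 + r_h) / (1.0 + r_k))

def excess_from_photometry(h_mag, k_mag, a_v, hk_intrinsic,
                           ah_over_av=0.18, ak_over_av=0.12):
    """(H - Ks) excess from observed 2MASS magnitudes dereddened with A_V.
    The A_lambda/A_V ratios here are placeholders."""
    hk_dereddened = (h_mag - ah_over_av * a_v) - (k_mag - ak_over_av * a_v)
    return hk_dereddened - hk_intrinsic

print(f"{excess_from_veiling(r_h=0.5, r_k=1.4):.2f}")                  # ~0.51 mag
print(f"{excess_from_photometry(9.5, 8.8, a_v=0.65, hk_intrinsic=0.15):.2f}")
```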
The 2MASS photometric magnitudes used for this analysis were obtained years before our observations. However, aside from the daily-timescale variations (e.g., Carpenter et al., 2001), we do not expect a significant change in the NIR magnitudes on timescales of years. RU Lup is likely an exception, as its average veiling was measured in a non-quiescent period; in that case, the color excess computed using the veiling is higher than that obtained using the 2MASS magnitudes.
All these relations between the NIR veiling and the inner disk emission diagnostics show that a more veiled system also presents a higher inner disk emission, which is expected if the veiling has a contribution from the inner disk. However, to draw this conclusion, we assumed that these inner disk indicators, such as the slope of the spectral energy distribution, are suitable inner disk diagnostics. Kidder et al. (2021) showed that some of the targets classified as Class III using these disk indicators still show some inner disk emission based on the \(K\) band excess. They checked the emission excess of V819 Tau, which we used as a template to compute the veiling, but the \(H\) and \(K\) excesses found were very small, similar to the systematic veiling we obtained in Sect. 3. The relation between veiling and inner disk emission is clear for the \(K\) band and less so for the other bands, probably due to the influence of another additional continuum source in these spectral regions and/or a smaller contribution from the inner disk at these shorter wavelengths.
### Veiling compared to accretion diagnostics
We know that CTTSs are still accreting gas from the disk and accreting systems typically present strong and variable emission lines that form in the accretion funnel or in the disk wind (e.g., Muzerolle et al., 1998; White and Basri, 2003; Edwards et al., 2003; Kwan and Fischer, 2011; Alencar et al., 2012). The CFHT/SPIRou wavelength range includes some emission lines from hydrogen and helium, such as Pa\(\beta\) and Br\(\gamma\) as well as the He i (10830 Å) triplet; in particular, the latter is very sensitive to accretion and ejection processes (e.g., Kwan et al., 2007; Sousa et al., 2021). The dynamics of the circumstellar lines for this sample of stars will be analyzed in an accompanying paper (Sousa et al. in prep.).
We measured the equivalent width of the circumstellar lines, and the average over all observing nights is listed in Table 2. First, we used the equivalent width as an accretion diagnostic (Alcala et al., 2017), as systems that present larger equivalent widths are supposed to present higher mass accretion rates as well.
We corrected the equivalent width for the veiling as \(EW=EW_{measured}(r_{\lambda}+1)\), where \(r_{\lambda}\) represents the veiling computed close to each emission line. In Fig. 6, we show the veiling as a function of the veiling-corrected equivalent width of the circumstellar emission lines. We see a clear relationship between the NIR veiling and the accretion diagnostics. This means that higher mass-accretion rate systems also present a higher degree of veiling, a result similar to that found by Folha and Emerson (1999), demonstrating that, although the veiling has a contribution from the inner disk emission, it is also connected with the accretion process.
We do not have photometric data simultaneous with our spectra to accurately compute the mass accretion rates using the equivalent width of the emission lines. However, most of our systems' NIR magnitudes are relatively long-term stable. We used the 2MASS \(J\) and \(K\) magnitudes to estimate the continuum flux and then calculate the mass accretion rate using the Pa\(\beta\) and Br\(\gamma\) lines, respectively. The star V347 Aur is known to present long-term photometric variations (e.g., Dahm and Hillenbrand, 2020), and we did not compute the mass accretion rate of this target.
We followed the procedures described by Gullbring et al. (1998) to compute the line fluxes and luminosities. The stellar parameters used are listed in Table 3, and we used the stellar distances from the Gaia collaboration (Gaia Collaboration et al., 2021). We dereddened the 2MASS magnitudes using the same method described in the previous section. To compute the accretion luminosity, we used the fits proposed by Alcala et al. (2017), which give the relation between the line and accretion luminosities. Then, we determined the accretion rate setting the inner radius of the system to 5\(R_{\ast}\) (Gullbring et al., 1998). In Table 2, we show the individual mass accretion rates computed using the Pa\(\beta\) and Br\(\gamma\) lines. In Fig. 7, we show the average mass accretion rate as a function of the \(Y\) to \(K\) band veiling. Once again, we can connect the highest accreting systems with the highest NIR veiling computed in the four bands.
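The sketch below strings these steps together, going from a veiling-corrected equivalent width and a continuum flux density to a line luminosity, an accretion luminosity, and a mass accretion rate; the \(\log L_{acc}\)-\(\log L_{line}\) coefficients and all input numbers are placeholders (the actual relations of Alcala et al. 2017 should be used), while \(\dot{M}_{acc}=L_{acc}R_{\ast}/[GM_{\ast}(1-R_{\ast}/R_{in})]\) with \(R_{in}=5R_{\ast}\) follows the prescription quoted above.

```python
import numpy as np

# cgs constants
G = 6.674e-8
MSUN, RSUN, LSUN = 1.989e33, 6.957e10, 3.828e33
PC, YEAR = 3.086e18, 3.156e7

def line_luminosity(ew_corrected_A, continuum_flux_cgs, distance_pc):
    """Line flux = EW (veiling-corrected) times the adjacent continuum flux
    density; luminosity = 4 pi d^2 F."""
    flux = ew_corrected_A * continuum_flux_cgs        # erg s^-1 cm^-2
    return 4.0 * np.pi * (distance_pc * PC) ** 2 * flux

def accretion_luminosity(l_line, a=1.06, b=2.76):
    """log(Lacc/Lsun) = a log(Lline/Lsun) + b; a and b are placeholder values."""
    return LSUN * 10.0 ** (a * np.log10(l_line / LSUN) + b)

def mass_accretion_rate(l_acc, m_star_msun, r_star_rsun, r_in_rstar=5.0):
    """Macc = Lacc R* / (G M* (1 - R*/Rin)), returned in Msun/yr."""
    mdot = l_acc * r_star_rsun * RSUN / (G * m_star_msun * MSUN * (1.0 - 1.0 / r_in_rstar))
    return mdot / MSUN * YEAR

# Placeholder inputs: EW (Angstrom), continuum flux density (erg/s/cm^2/A), distance (pc)
l_line = line_luminosity(5.0, 3.0e-13, 160.0)
l_acc = accretion_luminosity(l_line)
print(f"Macc ~ {mass_accretion_rate(l_acc, 0.8, 2.0):.1e} Msun/yr")
```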
### Veiling night-to-night variability
In this study, we have access to observations obtained on different nights and sometimes different observational periods for our sample of stars. This allowed us to analyze the night-to-night veiling variation and a possible long-term veiling variability on a timescale of two years. In Fig. 8, we show the veiling measured as a function of the observation dates. We used the modified Lomb-Scargle periodogram (Horne and Baliunas, 1986) to study a possible periodicity of the veiling variations. We performed the periodogram analysis of the veiling in two ways: using all
**References.** (1) Donati et al. (2020a); (2) Bouvier et al. (2020); (3) Alcala et al. (2017); (4) Donati et al. (2011); (5) Alencar et al. (2012); (6) Johns-Krull (2007); (7) Alcala et al. (2014); (8) Donati et al. (2010); (9) Ricci et al. (2010); (10) Sicilia-Aguilar et al. (2020); (11) Muller et al. (2018); (12) Bouvier et al. (in prep.)
shows that for all the veiling variable targets, the veiling varies on a timescale of at least one day and the veiling computed in the four spectral regions presents the same variability timescale. These results show that whatever region the IR veiling comes from, this region is dynamic and its flux changes on a timescale of days.
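A minimal sketch of the periodogram search described above is given below, using the Lomb-Scargle implementation available in astropy (not necessarily the exact modified periodogram of Horne & Baliunas 1986 used here); the time series is synthetic, and the 1% false-alarm threshold is an assumed choice.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)

# Synthetic, irregularly sampled nightly veiling series over ~60 days
t = np.sort(rng.uniform(0.0, 60.0, 40))          # days
r_k = 1.2 + 0.1 * rng.normal(size=t.size)        # no injected period
dr = np.full(t.size, 0.1)

ls = LombScargle(t, r_k, dr)
frequency, power = ls.autopower(minimum_frequency=1.0 / 30.0,
                                maximum_frequency=1.0 / 1.0)

best_period = 1.0 / frequency[np.argmax(power)]
fap = ls.false_alarm_probability(power.max())
print(f"best period = {best_period:.2f} d, FAP = {fap:.2f}")
if fap > 0.01:
    print("no significant period on stellar-rotation timescales")
```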
We also investigated the veiling variability on a timescale of months to a few years; a change in the veiling on that timescale can have a different origin from the day-scale veiling variability discussed above. The latter can be associated with the dynamics of the system's rotation, while a possible long-term veiling variability reflects a change in the system's accretion and/or inner disk conditions. In Figure 9, we present the averaged veiling measured at each observational period. This plot displays nine systems that were observed in more than one observational season. Most targets do not present a significant difference in the veiling between observational periods. The \(K\) band veiling of CI Tau and the \(YJH\) veiling of DO Tau show a possible small change on this timescale. We therefore conclude that the veiling variability on a timescale of months to years is of the same order of magnitude as the day-to-day variability. RU Lup is the only target with an evident change in the veiling along the observational periods in the four bands, much more pronounced in the \(K\) band, along with a high standard deviation. We associate this change in the veiling with an occasional high accretion episode that occurred in 2021a; despite the veiling still being high in 2021b, it seems to start to diminish later on. In 2022a, it is even smaller, but we have only one observation to serve as the basis for this assumption. The equivalent widths of the circumstellar emission lines corroborate this assumption, as they increase in 2021a and start to decrease in the subsequent observation periods, similarly to the veiling. Overall, the average veiling is stable for most of the stars we analyzed, except for very highly accreting systems, such as RU Lup, which can present episodes of high veiling. Furthermore, in the same observational period, some targets show a few episodes of increased veiling, such as V347 Aur (Fig. 8); however, the average veiling values are maintained.
## 5 Discussion
The dependence of veiling on wavelength is ubiquitous from the UV to the NIR range. The veiling in the optical domain decreases from the blue to the red part of the spectrum, an effect of the decreasing accretion spot continuum contribution (e.g., Calvet & Gullbring 1998), although the veiling remains roughly constant over some wavelength ranges (e.g., Basri & Batalha 1990). On the other hand, in the IR range, the veiling increases with wavelength, as seen in Figs. 2 and 8, in agreement with similar results in the literature (e.g., Fischer et al. 2011; Alcala et al. 2021).
The average veiling values for the entire sample of CTTSs are \(\langle r_{Y}\rangle=0.2\pm 0.3\), \(\langle r_{J}\rangle=0.4\pm 0.4\), \(\langle r_{H}\rangle=0.5\pm 0.5\), and \(\langle r_{K}\rangle=1.4\pm 1.6\). We note that the average veiling is lowest in the \(Y\) band. Over these wavelengths, from \(Y\) to \(K\), the veiling can have contributions from different sources. For example, we expect the veiling in the \(K\) band to have a larger contribution from the inner disk than in the \(Y\) band. In Fig. 10, we show the \(YJH\) veilings as a function of the \(K\) band veiling. We can see that the \(J\) and \(H\) veilings seem to increase as the \(K\) band veiling increases; however, the correlation with the \(Y\) band veiling is weaker. These results are supported by the correlation analysis of the veiling samples and the linear fits shown in Fig. 10. We computed the linear correlation coefficient (\(r\)) between two samples, where \(r=1\) represents a perfect correlation and \(r=0\) no correlation. The \(Y\) and \(K\) band veilings present a correlation coefficient of 0.87, while the coefficients between the \(J\) and \(H\) bands and the \(K\) band are 0.98 and 0.96, respectively. Similar results were found by Cieza et al. (2005), who compared the excess in the \(J\) and \(H\) bands with the \(K\) band excess, showing that both present a linear correlation with the \(K\) band excess, which was explained by the \(JHK\) excess arising from the same region.
The NIR veiling should be the result of a combination of physical processes. Alcala et al. (2021) computed the NIR veiling at several wavelengths for a sample of very high-mass accretion rate systems, including DG Tau (also in our sample). These authors fitted the veiling as a function of wavelength using a blackbody function and found temperatures compatible with the presence of dust in the inner disk. However, in a few of their cases the temperature was too high (\(>2000\) K) for dust to survive. They also argued that the veiling should have a contribution from the hot gas inside the disk sublimation radius, and similar results were found by Fischer et al. (2011). To investigate this proposition, we looked at the CO bandhead at 2.3\(\mu\)m in the CFHT/SPIRou data. This band, when in emission, is expected to form in the hot gas in the inner disk. Using the \(K\) band veiling, we veiled the template and removed the photospheric lines of the CO bandhead to obtain the residual CO profiles. Most targets do not present clear signs of CO emission, showing that this band is strictly photospheric. However, a few residual profiles of V347 Aur, which is a Class I object, along with DO Tau and RU Lup, which are strong accretors, present CO emission in some observations, indicating the presence of hot gas in the inner disk. In particular, RU Lup presents these hints of CO emission in the observational period when the veiling was high and the system probably underwent an episode of high accretion. A further analysis of the CO bandhead is beyond the scope of this paper and it will
Figure 7: Average mass accretion rate as a function of average NIR veiling. The average mass accretion was computed using the line fluxes of Pa\(\beta\) and Br\(\gamma\). The error bar is the standard deviation between the two measurements. We see an association between the veiling and the mass accretion rate.
be carried out in a dedicated paper exploring the significance of these CO emissions.
In the previous section, we showed that the NIR veiling, mainly \(r_{K}\), presents a good correlation with the inner disk emission diagnostics obtained from the NIR photometric data and the SED fit, demonstrating that the NIR veiling has an important contribution from the inner disk. However, we also see a correlation between veiling and accretion diagnostics: a high accretion-rate system presents larger veiling values in the IR. This shows that high-mass accretion rate systems should feature higher inner disk heating and, consequently, higher temperatures and a stronger inner disk emission excess. Espaillat et al. (2022) fit most of the continuum spectra from the NUV to the NIR of the accreting star CVSO 109A quite well, using a combination of emission from the accretion shock (multiple funnel flow model) on the stellar surface and emission from the irradiated inner edge of the dusty disk. However, the inner disk and accretion shock model does not adequately reproduce the continuum excess in the \(Y\) to \(J\) band.
Figure 8: Night-by-night veiling values measured in four different spectral regions. We show each observational period per target in an individual panel. A missing veiling point in a specific band means that the spectral region was subject to effects that prevented the veiling from being measured. Besides being variable, the veiling is also not clearly periodic, at least on the timescale of stellar rotation. For more details, see text.
Indeed, while the veiling in the \(J\) to \(K\) bands seems to point to a significant contribution from the inner disk emission excess, the origin of the \(Y\) band veiling is still unknown. Dodin & Lamzin (2013) predicted significant veiling in the \(Y\) band from the accretion spot. They argued that the accretion spot (continuum emission and emission lines formed in the accretion shock) could account for the optical and near-IR veiling up to the \(J\) band. Unfortunately, our \(Y\) band veiling was not as well determined as in the other bands, due to several issues in this spectral region and because the photospheric lines are less prominent there. Despite these obstacles, the \(Y\) band veiling correlates, although more weakly, with both accretion and inner disk diagnostics.
We checked if the inclination of the system has any impact on the measured veiling values. In Fig. 11, we show the NIR veiling as a function of the inclination of the system with respect to our line of sight, listed in Table 1. Apart from two discrepant systems (TW Hya and V2247 Oph), we can see an anti-correlation between the veiling and the system's inclination. This anti-correlation is not pronounced, as we can see from the linear fit slope, due to the spread of points (the correlation coefficients between the inclination and the \(YJHK\) veilings are -0.64, -0.76, -0.80, and -0.70, respectively), but the decreasing tendency of the veiling with inclination is clear. We also note that the inclinations used for DG Tau and DO Tau are the outer disk inclinations, and the disk and stellar inclinations are not necessarily the same (Bohn et al., 2022). If confirmed, this anti-correlation can be due to a geometric effect: the more inclined the system, the smaller the visible fraction of the inner disk edge, where the NIR veiling is supposed to arise. The two targets that do not seem to follow this tendency, TW Hya and V2247 Oph, no longer have dust in the inner disk and are known to have gaps or holes in their inner disks (Calvet et al., 2002; Pontoppidan et al., 2008). In that case, independently of the system's inclination, we would not expect to detect IR veiling, assuming that the IR veiling is due to dust emission in the inner disk.
## 6 Conclusion
In this work, we analyze the NIR veiling computed using high-resolution data from CFHT/SPIRou of a sample of 14 low-mass young stars. We found the veiling to increase from the \(Y\) to the \(K\) band, as a result of the increase of the emission contribution from the inner disk as a function of wavelength.
The veiling correlates with other photometric inner disk diagnostics, such as color excess and the slope of the spectral energy distribution, mainly in the \(JHK\) band, providing further evidence that the NIR veiling arises from hot dust in the inner disk. We also found a linear correlation between veiling and the accretion properties of the system. This shows that accretion contributes to inner disk heating and, consequently, to the inner disk emission excess. This effect is enhanced in high-mass accretion rate systems that also present a denser inner disk and higher inner disk emission (e.g., Sullivan & Kraus, 2022).
We analyzed the NIR veiling variability through the modified Lomb-Scargle periodogram, and we did not find any significant periodic signal in the four bands on timescales typical of stellar rotation (\(<15\) days), which also suggests that the veiling comes from the dust emission in the inner disk. However, we show that the veiling is variable for most targets on a timescale of at least one day. Besides the night-by-night veiling variability, the mean NIR veiling per season appears to be mostly stable, for most targets, on timescales of several months to years.
###### Acknowledgements.
We thank the referee for the suggestions that helped to clarify this paper. We want to thank Claire Moutou, Sylvie Cabrit, Nicolas Grosse, and Konstantin Grankin for carefully reading the manuscript and giving suggestions to improve the paper. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 742095; SPIDI: Star-Planets-Inner Disk-Interactions; [http://www-spidi-eu.org](http://www-spidi-eu.org) and grant agreement No. 740651 NewWorlds). We acknowledge financial support from CNPq, CAPES, and Fapemig. We acknowledge funding from the French National Research Agency (ANR) under contract number ANR-18-CE31-0019 (SPlaSH). This research has made use of the SVO Filter Profile Service ([http://svo2.cab.inta-csic.es/theory/fps/](http://svo2.cab.inta-csic.es/theory/fps/)), supported by the Spanish MINECO through grant AYA2017-84089. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research.
Figure 9: Average of the NIR veiling computed at each observational period, with the error bar as the standard deviation of the computed mean veiling. We merged the measured veiling in 2020a and 2020b of the stars V2129 Oph and GQ Lup, as they are successive observations. The mean NIR veiling seems to be stable for most of the targets for a few months or years.
|
2303.03137 | Intergalactic magnetic field studies by means of $γ$-ray emission
from GRB 190114C | The presence of delayed GeV emission after a strong transient, such as a GRB
(Gamma-Ray Burst), in the VHE (Very-High Energy, $E>100$ GeV) band can be the
signature of a non-zero magnetic field in the intergalactic medium. We used a
synchrotron self-Compton multiwavelength model to infer an analytical
description of the intrinsic VHE spectrum (corrected for absorption by the
Extragalactic Background Light, EBL) of GRB$\,$190114C to predict the
lightcurves and SEDs of the delayed emission with Monte Carlo simulations for
different IGMF (Intergalactic Magnetic Field) configurations (strengths
$B=8\times10^{-21}$ G, $10^{-20}$ G, $3\times 10^{-20}$G and correlation length
$\lambda>1$ Mpc), and compared them with the Fermi-LAT (Fermi Large Area
Telescope) limits computed for several exposure times. We found that Fermi LAT
is not sensitive enough to constrain any IGMF strengths using GRB$\,$190114C. | Paolo Da Vela, Guillem Martí-Devesa, Francesco Gabriele Saturni, Peter Veres, Antonio Stamerra, Francesco Longo | 2023-03-06T13:55:23Z | http://arxiv.org/abs/2303.03137v1 | # Intergalactic magnetic field studies by means of \(\gamma\)-ray emission from GRB 190114C
###### Abstract
The presence of delayed GeV emission after a strong transient, such as a GRB (Gamma-Ray Burst), in the VHE (Very-High Energy, \(E>100\) GeV) band can be the signature of a non-zero magnetic field in the intergalactic medium. We used a synchrotron self-Compton multiwavelength model to infer an analytical description of the intrinsic VHE spectrum (corrected for absorption by the Extragalactic Background Light, EBL) of GRB 190114C to predict the lightcurves and SEDs of the delayed emission with Monte Carlo simulations for different IGMF (Intergalactic Magnetic Field) configurations (strengths \(B=8\times 10^{-21}\) G, \(10^{-20}\) G, \(3\times 10^{-20}\)G and correlation length \(\lambda>1\) Mpc), and compared them with the _Fermi_-LAT (_Fermi_ Large Area Telescope) limits computed for several exposure times. We found that _Fermi_ LAT is not sensitive enough to constrain any IGMF strengths using GRB 190114C.
## I Introduction
Magnetic fields are present everywhere in the Universe, from stars to galaxies and even clusters of galaxies. But the origin of the large-scale magnetic fields is one of the long-standing problems in cosmology. There is general agreement that the magnetic fields in galaxies originate from the amplification of pre-existing weak seed fields (see e.g. [1] and [2]). However, the origin of these seeds is still not known. Two main hypotheses exist: the astrophysical scenario and the cosmological scenario (see e.g. [3] and [4]). If the magnetic fields originate in the early Universe, then a non-zero magnetic field is expected in the Intergalactic Medium (IGM) today. If, instead, the magnetic fields originate in large-scale structures during their formation, a negligible Intergalactic Magnetic Field (IGMF) would be expected, unless galactic outflows effectively seed the magnetic fields in the deep IGM. Recently, Jedamzik & Pogosian [5] showed that the presence of primordial magnetic fields originating before recombination could resolve the discrepancy between the measurement of the Hubble constant derived by the Planck Collaboration [6] and the one performed by means of type Ia supernovae [7]. To shed some light on the origin of the magnetic fields, it is crucial to look for signatures of magnetization in the voids among the galaxies. Due to the difficulties of direct detection (e.g. [8]), the observation of extragalactic \(\gamma\)-ray sources can be used to constrain the IGMF.
Very-High Energy (VHE, \(E>100\) GeV) gamma-rays from extragalactic sources are not able to propagate over large distances (\(\sim 1\) Gpc) because they are absorbed by the Extragalactic Background Light (EBL) via the pair-production process (\(\gamma+\gamma\to e^{+}+e^{-}\)) [9; 10]. For this reason, the primary VHE spectra of the sources are partially absorbed during the propagation in the IGM. The larger the distance of the source, the more pronounced is this effect.
In addition, the EBL absorption is stronger for higher primary photon energies. The created pairs lose energy by means of the Inverse Compton (IC) process on the Cosmic Microwave Background (CMB), producing secondary \(\gamma\)-rays. Typical energies of the IC photons are \(E\simeq 70(E_{0}/10\) TeV\()^{2}\) GeV [11], where \(E_{0}\) is the energy of the primary source photon. Yet a non-negligible IGMF can deflect the pairs during their propagation to Earth. Due to the subsequent longer path length, the secondary GeV \(\gamma\)-rays result in a "pair-echo" delayed with respect to the primary emission from the source. The presence of this new component in the GeV domain provides a way to study the IGMF. This method was first proposed by Plaga [12] and later developed by Ichiki et al. [13], Murase et al. [14], and Takahashi et al. [15] in the context of Gamma-ray Bursts (GRBs).
GRBs have been proposed as a tool to derive limits on the IGMF (see e.g. [16]), and the recent discovery of VHE emission from GRB 190114C [17] (redshift \(z\simeq 0.42\)) has been used to constrain the IGMF.
Wang et al. [18] performed an analytical calculation of the echo emission flux for different IGMF strengths and observing times. For their calculation, they assumed as the primary source spectrum a power law with spectral index 2, which is slightly harder than the 2.22 index reported by the MAGIC Collaboration between 200 GeV and 1 TeV. The flux was then extrapolated back to 6 s after the GRB trigger time, which is reasonably when the afterglow emission started [19]. Comparing the predicted pair-echo Spectral Energy Distributions (SEDs) with the _Fermi_ Large Area Telescope (_Fermi_-LAT) upper limits, the authors derived a lower bound on the IGMF of \(B>10^{-19.5}\) G, assuming a correlation length \(\lambda\leq 1\) Mpc. They also verified that changing the maximum energy of the primary spectrum from 1 TeV to 15 TeV does not affect the result. On the other hand, Dzhatdoev et al. [20] first reconstructed the primary source spectrum from the VHE spectral data points, testing several EBL models and looking for a possible cutoff at higher energies. They then used the publicly available code ELMAG3 [21] to predict the pair-echo emission from 20000 s after the burst time to 1 month. The VHE flux used by the authors, in this case, is the one measured by the MAGIC Collaboration during the time window 62 s - 2400 s after the GRB trigger time. Comparing the predicted pair-echo SED with the _Fermi_-LAT upper limits in the GeV domain, the authors conclude that the sensitivity of the _Fermi_ LAT is not sufficient to constrain the IGMF.
In this paper, we present the calculation of the expected pair echo SED and lightcurve for several observation times and IGMF strengths using a different approach. The choice of the GRB intrinsic spectrum is a key point: differently from [18] and [20] we do not use a purely phenomenological primary spectrum, but a physically motivated synchrotron self-Compton (SSC) spectrum fitting the multiwavelength observations of the GRB afterglow. Then we used CRPropa 3 [22] to simulate the cascade emission in the GeV domain and derive the SEDs and lightcurves for several IGMF strengths and observation times, taking into account the time activity of the GRB in the VHE band. Finally, we compared the simulated lightcurves and SED with the results obtained by analyzing the _Fermi_-LAT data.
## II Analytic description
To identify the relevant aspects required in our simulation, we begin with an analytic description of the processes involved. The flux produced by the cascade radiation is given by the IC scattering [23] of the electron-positron pairs on the CMB, assuming the Thomson regime:
\[f_{\varepsilon_{s}}=\frac{3}{2}\left(\frac{\varepsilon_{s}}{\varepsilon_{0}} \right)^{2}\int\frac{d\gamma}{\gamma^{4}}\left(1-\frac{\varepsilon_{s}}{4 \gamma^{2}\varepsilon_{0}}\right)\int d\gamma_{i}C_{T}\frac{f_{\varepsilon}(e ^{\tau_{EBL}}-1)}{\varepsilon^{2}} \tag{1}\]
where \(f_{\varepsilon_{s}}=E_{\gamma}^{2}dN/dE_{\gamma}\) is the scattered \(\nu F_{\nu}\) flux at energy \(\varepsilon_{s}\) measured in units of \(m_{e}c^{2}\), \(\varepsilon=E_{\gamma}/m_{e}c^{2}\) is the energy of the VHE photons directly produced by the GRB, \(\gamma_{i}=\varepsilon/2\) is the Lorentz factor of the pairs, and \(\tau_{EBL}\) is the optical depth of the EBL. The inner integral describes the production of pairs by the VHE spectrum (\(f_{\varepsilon}=E^{2}F_{E}^{GRB}\)). The outer integral accounts for the IC scattering of the pairs on the CMB photons of typical energy \(\varepsilon_{0}=2.7kT_{CMB}/m_{e}c^{2}\approx 1.24\times 10^{-9}\). Note that Eq. 1 only accounts for the first generation of the cascade, and the pairs will only radiate for a time \(\Delta T_{IC}=\lambda_{T}/(2\gamma c)\). Here \(\lambda_{T}=3m_{e}c^{2}/(4\sigma_{T}u_{0}\gamma)\) is the IC cooling length of a pair in the CMB, with \(u_{0}\) the CMB energy density.
We can account for the finite duration (\(\Delta T_{activity}\)) of the VHE emission and for the finite observation \(\Delta T_{obs}\) window of the _Fermi_ LAT by scaling the expression for \(f_{\varepsilon_{s}}\) by the ratio of these timescales, \(C_{T}\). The photons that contribute to the echo flux need to arrive in the window defined by the observation time, the angular spreading time, \(\Delta T_{\rm A}=(\lambda_{T}+\lambda_{\gamma\gamma})/2\gamma^{2}c\) and the echo duration from the deflection in the IGMF \(\Delta T_{B}=(\lambda_{T}+\lambda_{\gamma\gamma})\theta_{B}^{2}/2c\), where \(\theta_{B}\) is the pair deflection angle induced by the IGMF. Here \(\lambda_{\gamma\gamma}=D/\tau_{EBL}\) is the mean free path of the VHE photons before interacting with the EBL for a source at distance \(D\).
The delay of an echo photon compared to the photon arriving directly, without undergoing absorption is determined by a simple geometry [11],
\[c\Delta t=\lambda_{\gamma\gamma}+x-D\approx\frac{\lambda_{\gamma\gamma}}{2} \theta_{B}^{2}\left(1-\frac{\lambda_{\gamma\gamma}}{D}\right), \tag{2}\]
where \(x\) is the distance travelled by the IC cascade photons. In the case of simulations, we also know the arrival times of individual photons, and we account for different emission and observation scenarios by considering the arrival times of individual simulated photons.
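To give a feel for the magnitudes involved, the sketch below evaluates the IC cooling length, the Larmor radius, the deflection angle in the large-correlation-length regime, and the resulting pair-echo delay of Eq. 2; the adopted primary photon energy, mean free path, and IGMF strength are illustrative assumptions, and redshift factors are ignored, so the numbers are order-of-magnitude only.

```python
import numpy as np

# cgs constants
M_E_C2 = 8.187e-7        # electron rest energy [erg]
SIGMA_T = 6.652e-25      # Thomson cross section [cm^2]
U_CMB = 4.17e-13         # local CMB energy density [erg cm^-3]
E_CHARGE = 4.803e-10     # elementary charge [esu]
MPC = 3.086e24           # cm
C = 2.998e10             # cm s^-1

def echo_delay(E0_TeV, B_gauss, lambda_gg_mpc, D_mpc):
    """Pair-echo delay (Eq. 2) for a primary photon of energy E0, assuming the
    correlation length exceeds the cooling length (theta_B = lambda_T / R_L)."""
    gamma = 0.5 * E0_TeV * 1e12 * 1.602e-12 / M_E_C2           # pair Lorentz factor
    lambda_T = 3.0 * M_E_C2 / (4.0 * SIGMA_T * U_CMB * gamma)  # cooling length [cm]
    r_L = gamma * M_E_C2 / (E_CHARGE * B_gauss)                # Larmor radius [cm]
    theta_B = lambda_T / r_L                                   # deflection [rad]
    dt = 0.5 * lambda_gg_mpc * MPC * theta_B**2 * (1.0 - lambda_gg_mpc / D_mpc) / C
    return lambda_T / MPC, theta_B, dt / 86400.0               # Mpc, rad, days

lt, th, dt = echo_delay(E0_TeV=1.0, B_gauss=1e-20, lambda_gg_mpc=300.0, D_mpc=1600.0)
print(f"lambda_T ~ {lt:.2f} Mpc, theta_B ~ {th:.1e} rad, delay ~ {dt:.0f} days")
```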
## III Simulation of pair-echo emission
In order to model the pair-echo emission for different IGMF settings we used the Monte-Carlo code CRPropa [22]: given a particular primary photon spectrum this code traces the development of the cascade in the IGM. Hereafter we assume the cosmological parameters \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\Lambda}=0.7\), and \(\Omega_{M}=0.3\). The source is located at the centre of a sphere of radius \(D\), which corresponds to the co-moving distance of the Earth to GRB 190114C (\(z=0.42\)). In order to contain a standard GRB jet aperture, we conservatively inject and track all primary photons within a \(10^{\circ}\) cone and, as target photon fields for \(\gamma\)-\(\gamma\) and IC interactions, we use the CMB and the Franceschini et al. [24] model for the EBL background. A photon that hits the sphere and has energy larger than 0.05 GeV represents a particle arriving and being detected at Earth. The magnetic field is assumed to be a turbulent zero-mean Gaussian random field with a Kolmogorov spectrum; it is defined in the Fourier space, transformed into real space, and then projected onto a (50 Mpc)\({}^{3}\) grid with \(100^{3}\) cells. The minimum scale that can be resolved is 1 Mpc and the maximum set scale is 25 Mpc. For such a configuration the correlation length is \(L_{c}\simeq 5\) Mpc. Given the primary gamma ray photon energies used here (i.e. 0.2-10 TeV), the correlation length is much larger than the loss length of the pairs (the largest loss length would be \(\lambda_{T}=0.8\) Mpc for \(E_{\gamma}=0.2\)
TeV). In this regime, the deflection angle of the pairs does not depend on the correlation length. The eventual lower bound on the IGMF can be easily re-scaled for the low correlation length regime considering the dependence of the deflection angle on the correlation length, this is \(\theta_{B}=\sqrt{\lambda_{B}\lambda_{T}}/R_{L}\), where \(R_{L}\) is the Larmor radius of the pair [11]. The grid is periodically repeated to cover the whole volume between the GRB and the Earth (\(D\simeq 1.6\) Gpc). For each magnetic field strength (root mean square) tested, we used CRPropa to inject \(10^{3}\) primary \(\gamma\)-ray photons and repeated the procedure \(10^{3}\) times (i.e. simulating \(10^{6}\) photons in total). Further, for each run we changed the seed used to generate the magnetic field grid in order to avoid spurious features due to the choice of that particular realization of a magnetic field. All particles are traced with a minimum step size of \(10^{-4}\) pc, which is sufficient to reproduce time delay with an accuracy better than 3 hours. We only consider the first cascade generation, since we find the contribution of further generations to be negligible for these settings.
The choice of the primary spectrum to be injected in the IGM is a crucial point that strongly impacts the derived cascade spectrum. Hence only a realistic, physically motivated intrinsic spectrum will provide a sensible cascade flux. With this in mind, we inferred the VHE spectral shape at energies higher than 1 TeV from the SSC model fitted to the multiwavelength SED by the MAGIC Collaboration [25]. We estimated the best-fit parameters of the time-averaged log-parabola shape in the energy range 0.2-1 TeV:
\[F_{E}^{GRB}\propto\left(\frac{E}{E_{0}}\right)^{-\langle\alpha\rangle-\langle \eta\rangle\log\left(E/E_{0}\right)} \tag{3}\]
Here \(E_{0}\) is the pivot energy, \(\langle\alpha\rangle\) is the average spectral slope and \(\langle\eta\rangle\) is the spectral curvature. First, we fixed \(E_{0}\) at 0.4 TeV as done in [25]; then, we estimated \(\langle\alpha\rangle=2.51\) and \(\langle\eta\rangle=0.21\) by averaging the GRB 190114C spectral slopes and curvature indices in different time bins presented in their Table 1.
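For illustration, a minimal Python sketch of this spectral shape is given below; the normalization is omitted (it is fixed by the measured integral flux above 200 GeV) and a base-10 logarithm is assumed for the curvature term.

```python
import numpy as np

# Sketch of the primary VHE spectral shape of Eq. 3 (normalization omitted).
E0_TEV, ALPHA, ETA = 0.4, 2.51, 0.21

def primary_dnde(E_tev):
    """Unnormalized dN/dE of the time-averaged log-parabola."""
    x = E_tev / E0_TEV
    return x ** (-(ALPHA + ETA * np.log10(x)))

# Relative injection weights over the 0.2-10 TeV range used in the simulations.
energies = np.geomspace(0.2, 10.0, 50)
weights = primary_dnde(energies)
weights /= weights.sum()
```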
To build the SED of the cascade emission in the _Fermi_-LAT band we first calculated the arrival directions of the cascade photons. This is needed because the SED is computed within the point spread function (PSF) of the _Fermi_ LAT. Following the scheme presented in [23] (Fig. 1) the observer is assumed to be perfectly aligned with the emission cone axis: in such a configuration the cascade photon is detected at an angle \(\theta\) with respect to the line of sight given by \(\sin\theta=(\lambda_{\gamma\gamma}/D)\sin\theta_{B}\), where \(\lambda_{\gamma\gamma}\) is the mean free path of the primary \(\gamma\)-ray photon and \(D\) is the distance to the source. Considering \(T_{0}=\) 20:57:03.19 UTC as the burst trigger time [26], we looked for the echo emission after \(T_{0}+2\cdot 10^{4}\) s to exclude all photons associated with the GRB afterglow in the GeV domain [27]. The cascade spectrum within a certain observation time \(\Delta T\) is calculated as follows:
\[F_{E}=\frac{F^{GRB}(>200\text{ GeV})}{F_{sim}}\frac{\Delta N_{cascade}(E, \theta<\theta_{PSF})}{\Delta T\Delta S\Delta E}=\]
\[\frac{F^{GRB}(>200\text{ GeV})}{\Delta N_{sim}}\frac{\Delta T_{activity}}{ \Delta T}\frac{\Delta N_{cascade}(E,\theta<\theta_{PSF})}{\Delta E} \tag{4}\]
where \(F^{GRB}(E>200\text{ GeV})\) is the integrated flux (number of photons/cm\({}^{2}\) s) of the GRB measured in the VHE band, \(F_{sim}\) is the integrated flux of the GRB inferred from the simulation in the same energy band, \(\Delta N_{sim}\) is the total number of injected GRB photons not absorbed by the EBL for all realisations (i.e. after \(10^{3}\) simulations), \(\Delta S\) is the projected simulation area for our \(10^{\circ}\) cone selection, \(\Delta T_{activity}\simeq 40\) minutes is the time activity of the GRB in the VHE band, \(\Delta N_{cascade}\) is the number of cascade photons collected at energy \(E\) and within \(\theta_{PSF}\), and \(\theta_{PSF}\) is the _Fermi_ LAT's PSF 68% containment angle at 1 GeV [28]. Concerning \(F^{GRB}(E>200\text{ GeV})\), the MAGIC telescopes started to observe the GRB at \(T_{0}+62\) s. Since the VHE emission likely started at \(T_{0}+6\) s (when the power law decay of the afterglow starts) we extrapolated the measured flux down to \(T_{0}+6\) s using the best fit power law decay to the VHE data [17]. This provides a total flux about a factor of five larger than the average flux published by the MAGIC Collaboration.
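The following Python sketch illustrates how Eq. 4 can be evaluated from lists of simulated cascade photons; the array names and the example values are hypothetical stand-ins for the per-photon output of the CRPropa runs.

```python
import numpy as np

# Minimal sketch of Eq. 4: convert simulated cascade photons that arrive within
# the LAT PSF and the chosen time window into an SED.
def cascade_sed(E_gev, theta_deg, delay_s, n_injected_surviving,
                F_grb_above_200gev, dT_activity_s, dT_obs_s,
                theta_psf_deg, e_bins_gev):
    sel = (theta_deg < theta_psf_deg) & (delay_s < dT_obs_s)
    counts, edges = np.histogram(E_gev[sel], bins=e_bins_gev)
    dE = np.diff(edges)
    # photons / (cm^2 s GeV), following Eq. 4
    dnde = (F_grb_above_200gev / n_injected_surviving) \
           * (dT_activity_s / dT_obs_s) * counts / dE
    e_mid = np.sqrt(edges[:-1] * edges[1:])
    return e_mid, e_mid ** 2 * dnde            # E^2 dN/dE

# Synthetic example (made-up photons; 0.8 deg is a placeholder PSF at 1 GeV).
E = np.array([1.5, 3.0, 8.0, 20.0]); th = np.array([0.1, 0.3, 0.05, 0.2])
dt = np.array([1e5, 5e5, 2e6, 8e6])
e_mid, e2dnde = cascade_sed(E, th, dt, 1e6, 1e-7, 2400.0, 30 * 86400.0,
                            0.8, np.geomspace(1.0, 100.0, 6))
```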
The upper limits in the _Fermi_-LAT band have been derived from \(T_{0}+2\cdot 10^{4}\) s for different exposure times (see next Sec. IV). To take into account the dilution in time of the echo flux, both the spectra \(F_{E}(E)\) and the lightcurves \(F(T)\) have been averaged over the corresponding time window. Given a certain exposure time \(T\) we then calculated \(\langle F_{E}(T)\rangle=\int_{0}^{T}F_{E}(t)dt/T\) and \(\langle F(T)\rangle=\int_{0}^{T}F(t)dt/T\) from the simulations. The lightcurves have been evaluated in the same energy range used to compute the upper limits in the GeV domain (1 GeV \(<E<100\) GeV). In Fig. 1 the expected lightcurves for different IGMF strengths are plotted together with
Figure 1: Expected echo daily lightcurves between 1 GeV and 100 GeV for different IGMF strengths and maximum primary energies. The lightcurves are plotted together with the _Fermi_-LAT upper limits.
the _Fermi_-LAT upper limits derived for 15 days, 1, 3, 6, 9, 15 and 24 months of observation time. Concerning the magnetic field strengths, we tested the same values as in [18], that is, \(B=10^{-20}\) G and \(3\times 10^{-20}\) G. Since none of them can be constrained, we also tested a weaker strength, namely \(B=8\times 10^{-21}\) G. Given that the flux does not change dramatically for even lower magnetic field strengths, we decided not to decrease the tested IGMF strength further. As stated before, we conservatively set the maximum energy of the primary GRB to the one reported by the MAGIC Collaboration, namely 10 TeV. However, to test how this choice can affect our procedure we also tested \(E_{max}=50\) TeV. The results are plotted in the same figure together with the case \(E_{max}=10\) TeV.
In Fig. 2 we reported the expected SEDs (\(E^{2}F_{E}\)) as inferred from the simulations for different IGMF strengths, \(E_{max}=10\) and 50 TeV and for different exposures. The SEDs are plotted together with the differential upper limits of the _Fermi_ LAT.
## IV _Fermi_-LAT data analysis
The simulations described before are compared with data from the _Fermi_ LAT [29]. We include observations taken between \(T_{0}+2\cdot 10^{4}\) s and \(T_{0}+24\) months, selecting events with energies between 1 and 100 GeV in a region of interest (ROI) of \(10^{\circ}\times 10^{\circ}\) centred on the GRB coordinates [30]. As previously stated, this selection guarantees no contamination from the burst itself [27] and focuses on the most sensitive energy range of the _Fermi_ LAT. We select P8R3 SOURCE data (evclass \(=128\)) with a FRONT+BACK event type (evtype \(=3\)), applying a maximum zenith angle cut at \(100^{\circ}\) to prevent Earth limb contamination.
Using _Fermitools_ (version 2.0.8) and _fermipy_ (version v1.0.1) [31], we perform a binned maximum likelihood analysis on our dataset [32]. Subsequently, we account for the PSF and energy dispersion (edisp_bins \(=-1\); excluding the isotropic diffuse component) using the instrument response functions P8R3_SOURCE_V3. As our background source model we use a \(15^{\circ}\times 15^{\circ}\) selection of the 4FGL-DR2 ('gll_psc_v27') catalogue [33; 34] centred on the burst together with the recommended galactic and isotropic diffuse components - 'gll_iem_v07' and 'iso_P8R3_SOURCE_V3_v1', respectively. The detection significance of these ROI sources is evaluated with the test statistic \(TS=-2\ln\left(L_{0}/L_{1}\right)\), where \(L_{0}\) is the likelihood of the null hypothesis and \(L_{1}\) the likelihood of the complete model. After a preliminary iterative optimization of the ROI (optimize function, fitting first sources with larger predicted counts based on the catalogue), sources detected with \(TS<4\) (i.e. \(2\sigma\)) are removed to avoid unnecessary degrees of freedom. We also notice that the blazar PKS 0346-27 is in our ROI and has been flaring occasionally since 2018, thus including the observational window of this study [35; 36; 37]. Its spectral model from the 4FGL-DR2 catalogue - a log-parabola - does not properly characterise the flaring state, while a power law with an exponential cut-off can account for the spectrum observed by the _Fermi_ LAT. We therefore modify the background model accordingly and free the spectral parameters of PKS 0346-27 in our fit, together with the normalization of all sources within \(3^{\circ}\) of the ROI's centre. This analysis is performed on datasets lasting 0.5, 1, 3, 6, 9, 15, and 24 months with no detection (\(TS\) lies between 0.0 and 0.1 for the different \(\Delta T\)), therefore we extracted upper limits at the 95% confidence level. We achieved this by adding a point source modelled as a power law with spectral index 2 at the GRB nominal position. No significant difference is found assuming the spectral shape of the putative cascade obtained from the simulations.
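For reference, a hedged sketch of such a fermipy-based analysis is shown below; the configuration file name, the catalogue name used for PKS 0346-27, and the GRB coordinates are assumptions and would need to be adapted to the actual data selection described above.

```python
from fermipy.gtanalysis import GTAnalysis

gta = GTAnalysis('config_grb190114c.yaml')   # hypothetical config: ROI, IRFs, diffuse models
gta.setup()
gta.optimize()                               # preliminary iterative ROI optimization

# Free the flaring blazar (assumed 4FGL name for PKS 0346-27) and the
# normalizations of sources within 3 degrees of the ROI centre.
gta.free_source('4FGL J0348.5-2749')
gta.free_sources(distance=3.0, pars='norm')

# Add a power-law (index 2) point source at the approximate GRB position and fit.
gta.add_source('GRB190114C',
               {'ra': 54.51, 'dec': -26.94,
                'SpectrumType': 'PowerLaw', 'Index': 2.0})
gta.fit()

src = gta.get_src_model('GRB190114C')        # per-source results, including TS
print(src['ts'], src.get('flux_ul95'))       # 95% flux upper limit, if computed
```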
## V Discussion and conclusions
In this paper, we used the \(\gamma\)-ray emission from GRB 190114C to infer the pair echo SED and lightcurves for different IGMF strengths. We used CRPropa 3 to simulate the cascade emission in the GeV domain originating from the interaction of the primary VHE GRB spectrum with the IGM. We then compared the expected SEDs and lightcurves with the differential and integrated flux upper limits derived by analyzing the _Fermi_ data. From both Figs. 1 and 2 we clearly see that no IGMF strengths can be constrained because the flux upper limits are well above the predicted cascade flux. For a given observation time, the amount of cascade flux depends on the strength of the IGMF: as expected, increasing the IGMF strength dilutes the cascade more in time due to the larger delay experienced by the pairs, and the largest tested magnetic field strength always corresponds to the lowest cascade flux (Fig. 1 and Fig. 2). This is also compatible with the results in [18] and [20].
The evolution of the SEDs as a function of observation time and the shape of the lightcurves can be explained in this way: for \(T_{obs}=15\) days we have the maximum level of cascade flux. On the other hand, for such an exposure time the _Fermi_ limits are also the largest. As we increase the observation time, the _Fermi_ limits improve (roughly \(F_{U.L.}\propto 1/\sqrt{T_{obs}}\)) but, due to the temporal evolution of the cascade signal, the echo flux also decreases. As described in Sec. III, we used a log-parabola up to 10 TeV as the primary VHE spectrum. We also tested the possibility that \(E_{max}\) might be larger (\(E_{max}=50\) TeV) and how this affects our results. Since the spectrum is curved, the flux at the largest energies is very low. For this reason, although moving from \(E_{max}=10\) TeV to \(E_{max}=50\) TeV increases the level of cascade flux, especially at \(E>50\) GeV, the overall cascade flux does not change dramatically and our main conclusion remains unchanged.
One of the reasons why, despite the very promising
GRB, the IGMF remains unconstrained can be understood from Eq. 4: the amount of cascade flux is proportional to the GRB time activity in the VHE band. We would need the activity to be at least a factor of 5 larger (namely \(\Delta T_{activity}>25\) hours) in order to exclude IGMF strengths larger than \(10^{-20}\) G for \(T_{obs}>9\) months. In this regard, we note the reported detection at VHE \(\gamma\)-rays from the afterglow of GRB 190829A by the H.E.S.S. Collaboration [38]. In this case, the estimated power law index of the intrinsic spectrum is again around \(\sim 2\), while the redshift is considerably lower (\(z=0.0785\)) than for GRB 190114C. But the time activity in the VHE band measured by H.E.S.S. is about 51 hours, more than a factor of 10 larger than the one of GRB 190114C, at a
Figure 2: Expected SEDs for different IGMF strengths, observation times and for \(E_{max}=10\) and 50 TeV. The _Fermi_-LAT differential upper limits are also shown.
similar flux level. To test whether GRB 190829A could be a better target for IGMF studies we repeated the same procedure using a power law with index 2 but adding an exponential cutoff at 4 TeV (the maximum estimated energy in the VHE spectrum) as the primary spectrum. Due to the low redshift, the cascade SED in the energy range 0.1--100 GeV and for \(B=10^{-20}\) G, after 1 month of observation time, is more than 4 orders of magnitude lower than the _Fermi_-LAT upper limits [39].
Returning to GRB 190114C, from Fig. 1 we see that the _Fermi_-LAT upper limits decrease faster than the predicted cascade flux with the observation time. To verify whether for large observation times the GeV upper limits might be lower than the predicted cascade flux, we simulated the _Fermi_-LAT sensitivity as a function of the observation time. To do so, we used the same instrument response functions and diffuse models and re-scaled our 24-month exposure map to various times between 20 days and 25 years. We again assumed a power law with spectral index 2, requiring at least a \(2\sigma\) detection and 3 counts above 1 GeV. Finally, we compared the _Fermi_-LAT sensitivity with the cascade light curve for \(B=8\times 10^{-21}\) G (the case in which we have the largest cascade flux within the first 2 years of observation time) extrapolated up to \(T_{obs}=10^{4}\) d \(\simeq 27.4\) yr.
As we can see from Fig. 3 from roughly \(T_{obs}\simeq 150\) d the sensitivity and the cascade lightcurve start to have the same slope. For this reason, there is no chance that the two curves can cross for a finite observation time.
Another test we performed concerns the _Fermi_-LAT PSF: in Eq. 4 the cascade SED and lightcurve are calculated counting, in the simulations, the cascade photons within \(\theta_{PSF}\). However, due to the deflection of the pairs, the cascade emission is also extended. As a consequence it might be possible that by increasing the angular extension used to compute the cascade SED and lightcurve, the level of cascade flux could increase. On the other hand, the _Fermi_-LAT analysis should be changed accordingly because the morphological model assumed in the analysis described in the previous section is point-like. To verify this hypothesis we produced the angular distribution of the cascade in the first 24 months after the GRB: in this time range all the cascade photons are within the PSF of the instrument for each IGMF strength tested, therefore our result does not depend on the limited \(\theta_{PSF}\) and no extension is expected.
As described in the introduction, two previous papers report different results. While in [18] the authors were able to calculate a lower limit on the IGMF strength, in [20] no IGMF strengths can be constrained. In [18] the authors comment that this discrepancy can be due to the fact that Dzhatdoev et al. did not extrapolate the VHE flux to the first 6 seconds after the burst. This, of course, significantly decreases the cascade power. Although this is a crucial point, we find that even considering the extrapolation of the VHE flux down to \(T_{0}+6\) s, no IGMF limits can be placed with this GRB. There is an important difference between our procedure and the ones adopted in [18] and [20]: we chose, as the primary VHE spectrum, the one derived from the multiwavelength SED model published by the MAGIC Collaboration [25]. In this way our treatment is model dependent but, given the log-parabola shape (Eq. 3), the VHE flux at the highest energies is lower than the one we would have obtained by choosing a simple power law as the primary spectrum, as in [18] and [20]. In this way, our choice is more conservative because the cascade power is lower. Furthermore, such a model justifies our extrapolation to earlier times as a reliable assumption: the fast cooling of the electrons likely implies that radiative losses start at the beginning of the afterglow, also shifting the peak of the synchrotron self-Compton component to lower energies - thus the GRB would presumably exhibit harder spectra at earlier times [25]. In spite of this crucial difference, the cascade flux that we inferred is still lower than the reported one in the two cited papers and we cannot reproduce their results.
We performed this study assuming that the only mechanism through which the electron-positron pairs lose energy is IC. An alternative competing energy loss mechanism to IC is through beam-plasma instabilities. The plasma instabilities were first proposed by Broderick et al. [40] to explain the non-detection of the electromagnetic cascade in blazar SEDs at GeV energies, as well as the lack of extended emission. Many subsequent studies have attempted to quantify how efficiently the plasma instabilities can cool down the pairs (see e.g. [41; 42; 43; 44; 45; 46; 47; 48; 49]) compared to the IC process. But the results of these studies strongly depend on the assumptions used; the extreme contrast between parameters of the interacting components - such as the huge difference between the densities
Figure 3: _Fermi_-LAT sensitivity (95% confidence level) in the energy range \(E=\) 1—100 GeV as a function of the observation time and simulated cascade lightcurve for \(B=8\times 10^{-21}\) G and \(E_{max}\)=10 TeV in the same energy band extrapolated up to \(T_{obs}=10^{4}\) d. The slope change for times shorter than 80 days is caused by the absence of at least 3 counts, requiring larger fluxes.
of the electron beam and the background plasma - make the impact of the instabilities on the development of a cascade almost impossible to evaluate. However, the instabilities might not be a problem for the specific case of a GRB: in order to develop themselves, the instabilities require a certain amount of time (\(\sim 300\) yr, [40]). Since \(\Delta T_{activity}\) is much lower than this characteristic time, the instabilities might not have enough time to develop [48], making the studies of the IGMF by means of GRB and VHE flares of blazars robust.
###### Acknowledgements.
The Fermi LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat a l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucleaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K. A. Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase from the following agencies is also gratefully acknowledged: the Istituto Nazionale di Astrofisica in Italy and the Centre National d'Etudes Spatiales in France. This work performed in part under DOE Contract DE-AC02-76SF00515. P.V. acknowledges support from NASA grant NNM11AA01A.
|
2305.05810 | Stochastic Texture Filtering | 2D texture maps and 3D voxel arrays are widely used to add rich detail to the
surfaces and volumes of rendered scenes, and filtered texture lookups are
integral to producing high-quality imagery. We show that filtering textures
after evaluating lighting, rather than before BSDF evaluation as is current
practice, gives a more accurate solution to the rendering equation. These
benefits are not merely theoretical, but are apparent in common cases. We
further show that stochastically sampling texture filters is crucial for
enabling this approach, which has not been possible previously except in
limited cases. Stochastic texture filtering offers additional benefits,
including efficient implementation of high-quality texture filters and
efficient filtering of textures stored in compressed and sparse data
structures, including neural representations. We demonstrate applications in
both real-time and offline rendering and show that the additional stochastic
error is minimal. Furthermore, this error is handled well by either
spatiotemporal denoising or moderate pixel sampling rates. | Marcos Fajardo, Bartlomiej Wronski, Marco Salvi, Matt Pharr | 2023-05-09T23:50:25Z | http://arxiv.org/abs/2305.05810v2 | # Stochastic Texture Filtering
###### Abstract
2D texture maps and 3D voxel arrays are widely used to add rich detail to the surfaces and volumes of rendered scenes, and filtered texture lookups are integral to producing high-quality imagery. We show that filtering textures after evaluating lighting, rather than before BSDF evaluation as is current practice, gives a more accurate solution to the rendering equation. These benefits are not merely theoretical, but are apparent in common cases. We further show that stochastically sampling texture filters is crucial for enabling this approach, which has not been possible previously except in limited cases.
Stochastic texture filtering offers additional benefits, including efficient implementation of high-quality texture filters and efficient filtering of textures stored in compressed and sparse data structures, including neural representations. We demonstrate applications in both real-time and offline rendering and show that the additional stochastic error is minimal. Furthermore, this error is handled well by either spatiotemporal denoising or moderate pixel sampling rates.
## 1 Introduction
Image texture maps are essential to rich surface detail in most rendered images, thanks to the advanced texture painting tools available today, and the precise artistic control they allow. Three-dimensional voxel grids play a similar role for volumetric effects like clouds, smoke, and fire, allowing detailed offline physical simulations to be used. The number and resolution of both has continued to increase over the years.
Texture maps consist of uniform or sparsely distributed discrete points, which require continuous reconstruction through filtering. For computational efficiency, texture filtering is traditionally done prior to shading. For instance, GPUs are equipped with dedicated filtering units capable of bilinear or trilinear filtering, often at no
additional cost. However, this approach often results in low-quality reconstruction. We argue that it is generally better to filter _after_ shading and address this gap in our work.
Texture mapping can be a dominant cost of offline rendering pipelines [18, 19] and the introduction of hardware-accelerated ray tracing to real-time renderers has caused the fraction of rendering time spent in texturing to correspondingly increase [14]. Billions of lookups from textures and voxel grids may be necessary to render a single image, especially with multi-bounce path tracing where shaders are evaluated at every ray intersection. Higher-quality filters, such as anisotropic filters, generally require more texel lookups than simple filters, increasing the amount of memory bandwidth consumed. To save memory usage and bandwidth, recent works propose to store textures in more compressed formats and representations; examples include UDIM's adaptive tiling, multi-level sparse grids [20], and, recently, neural representations [19, 21]. Those can reduce memory usage significantly, but are incompatible with hardware-accelerated filtering, and texture access is more computationally costly.
In this work, we introduce _stochastic texture filtering_, applying stochastic sampling to texture filtering and material network evaluation. Our contributions are as follows:
* We describe two ways of stochastically filtering textures, discuss their theoretical and practical differences, and connect them to prior work.
* We show that using stochastic texture filtering after lighting, rather than filtering the texture data, produces more accurate and _appearance-preserving_ results.
* We demonstrate that the additional noise introduced by stochastic filtering in offline rendering is negligible and that moderate pixel sampling rates handle it well. In real-time rendering, this noise is effectively suppressed by using spatiotemporal reconstruction algorithms and blue-noise sampling patterns.
* We analyze how by decoding only a single source texel at each look-up, our algorithms make computationally-expensive compressed texture representations (traditional, sparse, or neural compressed) more viable.
* Finally, we show that our stochastic filtering algorithms further improve image quality by the use of high-quality and higher-order interpolating and approximating texture filters at a lower cost than trilinear filtering.
## 2 Background and Previous Work
The use of image textures in rendering dates to Blinn and Newell [19, 20]. Subsequent milestones in texture mapping include the introduction of spatially-varying filters [19] and the use of image pyramids for efficient filtering [18, 21]. See Heckbert's survey article for comprehensive coverage of early work in this area [19] and see Section 2.1 for further discussion of texture filtering.
A wide range of texture encodings have been developed, trading off memory and bandwidth consumption, computation, and compression error. Block-based compression [16] saves memory and bandwidth in exchange for some error; it is ubiquitous in GPUs today [1, 20, 21]. Higher compression rates and lower error can be achieved with adaptive and neural representations [1, 20, 21, 22], though at a cost of multiple memory accesses and additional computation for each texel lookup; such formats are not supported by current GPUs and require manual filtering in shaders.
Monte Carlo estimation via stochastic sampling [1, 20, 21, 22] has become the foundation of most approaches to rendering today. Production rendering has embraced path tracing for over a decade [23], and there is now early adoption of path tracing for real-time rendering [20]. Although lighting integrals are evaluated stochastically, their integrands are usually evaluated analytically. Integrals that are themselves stochastic have been used for complex BSDF models that cannot be evaluated analytically [17, 18]. Related to our approach, stochastic evaluation of analytic quantities has been used to improve efficiency for multi-lobe BSDF evaluation [24] and for many light sampling [21, 23].
Real-time rendering has also embraced stochastic approaches. UV jittering as an alternative to bilinear filtering dates back to the 1990s and video games such as Star Trek: 25th Anniversary [16] and the original Unreal Engine [22]. More contemporary examples include stochastic alpha testing techniques that replace alpha blending with depth-tested random sampling [1, 20], stochastic filtering of reflections [21], and raytraced ambient occlusion [2]. Key enabling technologies are temporal anti-aliasing (TAA) [23, 24] and temporal super-resolution (TSS) [25]. Both are based on recursive filters and exponential moving averaging with adaptive history modification and rejection. TAA and TSS publications commonly describe the practice of _negative MIP biasing_ used with screen-space jittering for a sharper image and approximate anisotropic filtering to improve appearance. We take this ad-hoc approach, formalize it, analyze how it deviates from anisotropic filtering, and show why it produces a more accurate filtered shading result.
The motivation for our work includes the filtering algorithms introduced by Hofmann et al. [17] and Vaidyanathan et al. [21], who used stochastic trilinear filtering to improve performance. By avoiding evaluating an expensive decompression algorithm multiple times per voxel or pixel they see significant speedups. The OpenImageIO library [18] also supports stochastic sampling of both MIP levels and anisotropic probes, and Lee et al. replaced filtered texture lookups with nearest-neighbor point samples, relying on the high sampling rates common in film production to resolve texture aliasing [19]. We expand on their results and provide a theoretical framework for a wider category of texture filters.
The pioneering work of Reeves et al. on shadow map filtering was the first to distinguish between filtering before lighting versus filtering afterward; their percentage closer filtering algorithm is based on filtering binary visibility rather than depth [22]. They further showed the application of stochastic sampling to the filtering computation.
### Texture Filtering
Textures are given as discrete, uniformly-spaced samples. Filtering texture lookups is challenging since each access generally requires a spatially-varying anisotropic filter that accesses multiple source texture samples. We can distinguish two main types of texture filtering: interpolation for translation and magnification, and lowpass filtering for minification. For both, we use the notation of a filter function \(f\) that is defined over the texture-space coordinates domain \(\mathbb{R}^{n}\to\mathbb{R}\) and \((u,v,\dots)\) as its inputs. Without loss of generality, we use simplified, two-dimensional notation due to the separability of the sampling process. The filtered texture value is an integral of the product of \(f\) and the texture look-up function \(t\):
\[F=\int f(u,v)\,t(u,v)\,\mathrm{d}u\,\mathrm{d}v. \tag{1}\]
The texture function \(t\) is defined everywhere, but is non-zero only at discrete locations (typically uniform grid) due to impulse train sampling. We describe two practical realizations of this integral--a discrete one in Section 2.2, and a continuous one in Section 4.4.
**Interpolation:** Interpolation of an \(n\)-dimensional texture is typically done by sequentially interpolating all of the dimensions. If a texture represents a sampled continuous bandlimited signal, the original function value between samples can be perfectly reconstructed using sinc basis functions, though this is impractical due to the sinc's infinite spatial support and often produces overshoots and ringing artifacts. Many alternative lowpass filters have been proposed, with various trade-offs in computation and number of texels accessed [11, 12, 13]. Interpolating polynomial kernels are also often useful--nearest-neighbor (box kernel), linear (tent), and cubic [10] interpolation are all widely used. We present some of those kernels in the supplementary material.
**Non-interpolating and convolutional kernels:** Non-interpolating (approximating) kernels--where the original texel values are not preserved--are often used in practice, especially when a mild blurriness is preferred to aliasing or overshoots. The cubic B-spline is useful for texture magnification, as is an approximating quadratic kernel [10] and the truncated Gaussian. The Gaussian filter is the only separable radially symmetric kernel in \(\mathbb{R}^{n}\) and can yield more pleasant reconstruction of diagonal edges than other kernels.
**Minification:** Minification during rendering transforms a high-resolution source texture to a lower resolution by mapping multiple source texels to a single pixel. Failure to filter those texels may result in significant aliasing. This process is more difficult than magnification; due to perspective transformation, the mapping is non-orthogonal--a single on-screen pixel can map to a trapezoid in texture space (Figure 2).
Because the input texels no longer form a regular grid, simple linear filters are not used in practice. The most common approach is trilinear MIP mapping [13]. MIP mapping computes a bounding box of the filtering extent and selects the MIP map based on the largest of the two axes, leading to over blurring when the texture is not mapped orthogonally to the viewing plane. The practical solution to this blurring is anisotropic filtering of multiple samples from a higher-resolution MIP map. Examples include the elliptically weighted average (EWA) filter [13, 14] and a variety of techniques that approximate high quality filters using multiple bilinear lookups [1, 1, 12].
### Sampling Techniques
For reference, we summarize the well-known sampling techniques that we apply. See the books by Pharr et al. [13] or Ross [15] for further background. In the following, we will use \(\xi\) to denote uniform random variables in \([0,1)\) and angled brackets to denote expectation.
**Separable functions:** An \(n\)-dimensional function that is a product of 1D functions can be sampled by independently sampling each dimension. Many filters used for textures, including Gaussian and polynomials (linear, cubic, etc.) are separable.
**Weighted sums:** The simplest texture filtering function \(f\) can be represented as a set of discrete weights \(w\) defined for multiple discrete texture samples \(t\). Given normalized weights \(w_{i}\) and texture values \(t_{i}\), the filtered texture value is given by
\[F=\sum_{i=1}^{n}w_{i}t_{i}. \tag{2}\]
If a term \(j\) of the sum is sampled with probability equal to \(w_{j}\), then an unbiased estimate of \(F\) is given by the corresponding texture value, unweighted:
\[\langle F\rangle=t_{j}. \tag{3}\]
Under the assumption that \(w_{i}\) are normalized, this is a special case of sampling a term according to probabilities \(p_{i}\propto w_{i}\) and applying the standard Monte Carlo estimator \(f_{j}/p_{j}\). The weights \(w_{i}\) are often not normalized, and so must be normalized to find weights \(\tilde{w}_{i}=\nicefrac{{w_{i}}}{{\sum_{j}w_{j}}}\) before filtering. However, in this case, we can simply skip normalization, sample \(j\) with probability proportional to \(w_{j}\), and still apply Equation 3 to get the correct result.
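A minimal Python sketch of this single-sample estimator, assuming a small array of precomputed (possibly unnormalized) weights, is:

```python
import random

# Pick one texel with probability proportional to its filter weight (Eqs. 2-3)
# and return it unweighted.
def stochastic_weighted_sum(weights, texels, xi=None):
    xi = random.random() if xi is None else xi
    target, acc = xi * sum(weights), 0.0
    for w, t in zip(weights, texels):
        acc += w
        if target < acc:
            return t
    return texels[-1]        # guard against floating-point round-off

# Example: a normalized 4-tap filter; the estimator's expectation is the
# conventionally filtered value 0.1*1 + 0.4*2 + 0.4*3 + 0.1*4 = 2.5.
print(stochastic_weighted_sum([0.1, 0.4, 0.4, 0.1], [1.0, 2.0, 3.0, 4.0]))
```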
**Uniform sample reuse:** Whenever a 1D random variable \(\xi\) is used to make a discrete sampling decision based on a probability \(p\), then a new independent random variable \(\xi^{\prime}\in[0,1)\) can be derived from \(\xi\)[12]:
\[\xi^{\prime}=\begin{cases}\xi/p&\text{if $\xi<p$}\\ (\xi-p)/(1-p)&\text{otherwise}.\end{cases} \tag{4}\]
This technique can be useful when \(\xi\) is well-distributed (e.g., with
Figure 2: Uniform jittering in screen-space within pixel bounds (**left**) produces trapezoid, non-uniform coverage in the UV texture space (**middle**). Filter importance sampling then _additionally_ jitters the resulting UVs in texture space for a desired reconstruction filter (Section 4.4), for example with Gaussian distribution (**right**).
a blue noise spectrum [14] or with low discrepancy), allowing additional dimensions to benefit from \(\xi\)'s distribution as well as saving the cost of generating additional random samples.
**Sampling arrays:** An array of weights \(w_{i}\) (as from Equation 2) can be sampled by summing the weights and selecting the first item \(j\) where \(\xi<\sum_{i=1}^{j}w_{i}/\sum_{i=1}^{n}w_{i}\).
**Weighted reservoir sampling:** Storing or recomputing all of the weights \(w_{i}\) may be undesirable, especially on GPUs. Weighted reservoir sampling [10] with sample reuse [11] can be applied with weights generated sequentially.
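The following sketch shows one way to combine single-element weighted reservoir sampling with the sample reuse of Equation 4; it is illustrative only and omits any texture-specific details.

```python
import random

# Single-element weighted reservoir sampling: weights can be produced one at a
# time and never need to be stored.
def reservoir_select(weights, xi):
    w_sum, selected = 0.0, -1
    for i, w in enumerate(weights):
        w_sum += w
        p = w / w_sum                 # probability of replacing the reservoir
        if xi < p:
            selected, xi = i, xi / p  # keep item i, re-normalize the random number
        else:
            xi = (xi - p) / (1.0 - p)
    return selected

idx = reservoir_select([0.2, 0.5, 0.3], random.random())
```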
**Positivization:** Although negative weights can be sampled with probability based on their absolute value, doing so does not reduce variance as well as importance sampling does with positive functions [12]. All interpolating filters of a higher order than the linear filter have negative lobes, and being able to estimate them with low variance is essential for stochastic texture filtering. We apply positivization [1], partitioning the filter weights \(w_{i}\) into positive (\(w_{i}^{+}\)) and negative (\(w_{i}^{-}\)) sets and sampling once from each set. Given respective sample indices \(j^{+}\) and \(j^{-}\), the estimator of the filtered texture value of Equation 2 is
\[\langle F\rangle=\sum_{i}w_{i}^{+}t_{j^{+}}-\sum_{i}w_{i}^{-}t_{j^{-}}. \tag{5}\]
If the original filter was normalized, the resulting positive and negative parts won't be, so the two samples must be weighted by the sums of their respective sets, as in Equation 5. We include a practical example of positivization used for sampling the Mitchell bicubic filter in the supplementary material.
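A sketch of the positivized estimator, assuming the weights are available as a simple array, is given below; the example weights are only Mitchell-like placeholders.

```python
import random

# Positivized estimator of Eq. 5: one sample from the positive weights and one
# from the absolute values of the negative weights, each scaled by its set's sum.
def positivized_estimate(weights, texels):
    pos = [(w, t) for w, t in zip(weights, texels) if w > 0.0]
    neg = [(-w, t) for w, t in zip(weights, texels) if w < 0.0]

    def sample(part):
        total = sum(w for w, _ in part)
        target, acc = random.random() * total, 0.0
        for w, t in part:
            acc += w
            if target < acc:
                return total, t
        return total, part[-1][1]

    pos_sum, t_pos = sample(pos)
    neg_sum, t_neg = sample(neg) if neg else (0.0, 0.0)
    return pos_sum * t_pos - neg_sum * t_neg

# Example with small negative lobes; the expectation is 2.5.
print(positivized_estimate([-0.05, 0.55, 0.55, -0.05], [1.0, 2.0, 3.0, 4.0]))
```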
## 3 Effect of Texture Filtering on Rendering
Current practice in rendering is to filter textures before performing the lighting calculation, rather than applying the texture filter to the result of the lighting equation. We will start by formalizing the differences between those two approaches. In the following, we will define \(\hat{f}\) as the BSDF times the Lambertian cosine factor and parameterize it with the texture maps \(t_{i}\) that it depends on. We assume a single texture filter \(f\) and \((u,v)\) parameterization.2
Footnote 2: The generalization to different filters and texture coordinate parameterizations for different textures is straightforward, but clutters notation with change of variables factors.
With this notation, the traditional lighting integral that gives outgoing radiance \(L_{0}\) at a point \(p\) in direction \(\omega_{0}\) is written:
\[L_{0}(p)=\int_{\mathbb{S}^{2}}\hat{f}\left(\omega_{0},\omega^{\prime},\int f (u,v)\,t_{1}(u,v)\,\mathrm{d}u\,\mathrm{d}v,\ldots\right)L_{i}(p,\omega^{ \prime})\,\mathrm{d}\omega^{\prime}, \tag{6}\]
where the BSDF's parameters beyond the two directions are filtered textures.
Alternatively, we may write the integral with the order of integration exchanged, first integrating over the texture filter's extent and then integrating to compute outgoing radiance at points within the filter:
\[L_{0}(p)=\int f(u,v)\int_{\mathbb{S}^{2}}\hat{f}\left(\omega_{0},\omega^{\prime},t_{1}(u,v),\ldots\right)L_{i}(p,\omega^{\prime})\,\mathrm{d}\omega^{\prime}\,\mathrm{d}u\,\mathrm{d}v. \tag{7}\]
The difference is that rather than using filtered values in the lighting integral, Equation 7 is _applying the filter to the result of the lighting integral itself_.
If a texture makes an affine contribution to the lighting integral (i.e., is a factor or a linear term of it), then both Equations 6 and 7 give the same result, since integration is a linear operator. Thus, they are the same with a texture value used as a diffuse coefficient but differ with a textured surface roughness used in an exponent. (The systematic error in Equation 6 in such cases can be analyzed using the Taylor series expansion of the integrated function. If the function is well-approximated by the first, linear term around the expansion point, the difference will be negligible but for highly non-linear functions with large higher-order Taylor series terms, the error is significant.)
Although filtering textures before integrating lighting is common practice in rendering, we argue that filtering outside of the lighting integral is preferable. There is precedent for this view: for example, this distinction is fundamental to percentage-closer shadow filtering (PCF), which is based on the insight that filtering depth values with shadow map lookups gives incorrect results, and filtering binary visibility is superior [15]. Another motivating example comes from textures that are stored in non-linear formats like sRGB. When using such textures, inverting the non-linearity before filtering is essential for interpolation and minification correctness [13]. Section 5.1 shows results with a number of other examples that illuminate cases where filtering lighting instead of textures gives superior results.
It is straightforward to filter textures first, but other than in special cases like PCF, it has not been obvious how to filter the lighting calculation. However, it is straightforward to apply stochastic sampling to the filter function in Equation 7: we can sample the filter \(f\) to find discrete texture coordinates \((u^{\prime},v^{\prime})\) and use the corresponding texel values when evaluating the lighting integral. If \(f\) is normalized, then the Monte Carlo estimator \(f(u^{\prime},v^{\prime})/p(u^{\prime},v^{\prime})=1\), the filter factor disappears, and we are left to complete the lighting calculation using texels at \((u^{\prime},v^{\prime})\).
## 4 Filtering Algorithms and Rendering
In order to derive practical stochastic texture filtering algorithms, we can now apply the sampling techniques from Section 2.2 to the filters introduced in Section 2.1.
### Linear Filters
Direct application of the array sampling algorithm from Section 2.2 and then Equation 3 gives the following estimator for linear interpolation over \([0,1]\), \(\mathit{lerp}(v_{0},v_{1},t)=(1-t)v_{0}+tv_{1}\):
\[\langle\mathit{lerp}\rangle=\begin{cases}v_{0},&\text{if }\xi>t\\ v_{1}&\text{otherwise.}\end{cases} \tag{8}\]
Bilinear interpolation of values at the four corners of the unit square, \(\mathit{bilerp}\left(v_{00},v_{10},v_{01},v_{11},s,t\right)\), can be implemented with nested linear interpolations. Applying the same approach and
reusing the sample, we have:
\[\langle\mathit{bilerp}\rangle(s,t)=\left\{\begin{array}{ll}v_{00},&\text{if } \xi>s\text{ and }(\xi-s)/(1-s)>t\\ v_{01},&\text{if }\xi>s\text{ and }(\xi-s)/(1-s)\leq t\\ v_{10},&\text{if }\xi\leq s\text{ and }\xi/s>t\\ v_{11},&\text{otherwise.}\end{array}\right. \tag{9}\]
It is straightforward to extend this estimator to trilinear interpolation, as used with MIP mapping and 3D voxel grids. More generally, the technique can be applied to \(n\)-dimensional interpolation, reducing from \(2^{n}\) texture lookups to a single one.
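A Python sketch of the estimators of Equations 8 and 9 follows; production implementations would operate on texel fetches rather than scalar values, but the control flow is the same.

```python
import random

# Stochastic linear and bilinear interpolation: a single value is fetched using
# one reused uniform random number.  The same nesting extends to trilinear and
# higher-dimensional interpolation.
def stochastic_lerp(v0, v1, t, xi):
    return v1 if xi <= t else v0

def stochastic_bilerp(v00, v10, v01, v11, s, t, xi):
    if xi <= s:                                    # choose the "s" side
        xi2 = xi / s if s > 0.0 else 0.0
        return v11 if xi2 <= t else v10
    xi2 = (xi - s) / (1.0 - s)                     # reuse the random number
    return v01 if xi2 <= t else v00

# The expectation over many samples approaches the bilinearly interpolated value.
print(stochastic_bilerp(0.0, 1.0, 2.0, 3.0, 0.25, 0.75, random.random()))
```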
### B-Spline and Anisotropic Filters
Multidimensional B-spline filters are defined as a product of 1D cubic B-splines. For example, in 2D, given a lookup point \((s,t)\) in \([0,1]^{2}\) with associated texture raster coordinates \((\bar{s},\bar{t})\), the filtered texture value is given by \(4\times 4\) weighted texel values:
\[\sum_{i=-1}^{2}\sum_{j=-1}^{2}K_{bs}(\lfloor\bar{s}\rfloor+i)\,K_{bs}(\lfloor \bar{t}\rfloor+j)\,t(\lfloor\bar{s}\rfloor+i,\lfloor\bar{t}\rfloor+j). \tag{10}\]
The B-spline filter is separable, and we apply weighted reservoir sampling to each dimension; in \(s\), for example, we sample \(i^{\prime}\in[-1,0,1,2]\) according to the weights \(K_{bs}(\lfloor\bar{s}-1\rfloor)\), \(K_{bs}(\lfloor\bar{s}\rfloor)\), \(K_{bs}(\lfloor\bar{s}+1\rfloor)\), and \(K_{bs}(\lfloor\bar{s}+2\rfloor)\). The single texel value \(t(\lfloor\bar{s}\rfloor+i^{\prime},\lfloor\bar{t}\rfloor+j^{\prime})\) is then the unbiased estimator of Equation 10. Sampling higher-dimensional B-spline filters follows the same approach. For an \(n\) dimensional filter, \(4^{n}\) texture lookups are replaced with a single lookup. Separable sampling reduces the sample selection cost from \(4^{n}\) to \(4n\).
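The following sketch illustrates this separable sampling for the 2D cubic B-spline of Equation 10; texture border handling is omitted and the texture is assumed to be indexed as texture[row][column].

```python
import math, random

def bspline3_weights(f):
    """Cubic B-spline weights for the texels at offsets -1..2, f in [0,1)."""
    return [(1 - f) ** 3 / 6.0,
            (3 * f ** 3 - 6 * f ** 2 + 4) / 6.0,
            (-3 * f ** 3 + 3 * f ** 2 + 3 * f + 1) / 6.0,
            f ** 3 / 6.0]

def pick_offset(f):
    """Choose one of the four per-dimension offsets with probability ∝ its weight."""
    w = bspline3_weights(f)
    target, acc = random.random() * sum(w), 0.0
    for off, wi in zip((-1, 0, 1, 2), w):
        acc += wi
        if target < acc:
            return off
    return 2

def stochastic_bicubic_bspline(texture, s_bar, t_bar):
    i = int(math.floor(s_bar)) + pick_offset(s_bar - math.floor(s_bar))
    j = int(math.floor(t_bar)) + pick_offset(t_bar - math.floor(t_bar))
    return texture[j][i]          # a single texel fetch estimates Eq. 10

tex = [[x + 8 * y for x in range(8)] for y in range(8)]
print(stochastic_bicubic_bspline(tex, 3.4, 2.7))
```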
Our implementation of stochastic sampling of the elliptically weighted average filter is also based on reservoir sampling: after stochastically selecting a MIP level based on the ellipse's extent, we then simply compute all of the EWA filter weights and sample one based on their distribution.
### Material Graphs
Complex patterns are often generated using graphs composed of simple nodes such as scales, mixtures, and color corrections, with textures at the leaves. In offline rendering, it is not uncommon for these graphs to have hundreds of nodes and use many source textures, each of which is filtered at each shading point. Linear combinations of textures can be evaluated stochastically using Equation 8 and more complex blends such as triplanar mapping, which is based on a blend of three textures weighted by the orientation of the normal vector, can also be sampled stochastically.
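As an example, a stochastic triplanar blend can be sketched as follows; the blend weights here are simply the absolute normal components, whereas production material graphs often sharpen them with an exponent.

```python
import random

# Rather than filtering and shading all three planar projections, one
# projection is chosen in proportion to the normal-based blend weights.
def stochastic_triplanar(normal, sample_xy, sample_xz, sample_yz,
                         uv_xy, uv_xz, uv_yz):
    wx, wy, wz = abs(normal[0]), abs(normal[1]), abs(normal[2])
    xi = random.random() * (wx + wy + wz)
    if xi < wz:
        return sample_xy(uv_xy)        # projection along Z
    if xi < wz + wy:
        return sample_xz(uv_xz)        # projection along Y
    return sample_yz(uv_yz)            # projection along X

# The sample_* callables stand in for filtered (or stochastically sampled) texture reads.
val = stochastic_triplanar((0.2, 0.9, 0.4),
                           lambda uv: 1.0, lambda uv: 2.0, lambda uv: 3.0,
                           (0.5, 0.5), (0.5, 0.5), (0.5, 0.5))
```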
### Filter Importance Sampling (FIS)
We have thus far introduced a toolbox of stochastic techniques for estimating discrete image filters. We can use a different approach to stochastically sample continuous filters without discretizing them. For a filtering operation given by the product of a normalized continuous convolutional filter \(f(u,v)\) with a texture \(t(u,v)\) expressed in the form of Equation 1, an unbiased estimate of \(F\) can be found using filter importance sampling [1, 1, 2] (FIS): \((u^{\prime},v^{\prime})\) is sampled from \(f(u,v)\)'s distribution and the standard Monte Carlo estimator is applied, giving \(\langle F\rangle=t(u^{\prime},v^{\prime})\). This approach is appealing for stochastic texture filtering since it allows for filters with infinite spatial support and doesn't have a cost that necessarily scales with the filter's width. The FIS framework can be used with positivization (Section 2.2) for low variance evaluation of filters with negative lobes.
Filter importance sampling a screen-space reconstruction filter is a common practice in production renderers. It can effectively approximate a minification filter, such as an anisotropic filter (Figure 2 left and middle). However, it is not enough to perform UV jittering for magnification, as it would produce nearest-neighbor interpolated texture and visual artifacts. This motivated prior work to use software bilinear filtering instead [1]. We propose to use FIS for texture reconstruction and sampling in addition to screen-space reconstruction filtering.
However, FIS assumes the integration of a product of two continuous functions. When using it to filter discrete samples, a practical realization draws a sample \(x^{\prime}\) from \(f\) and then selects the closest texel \(\lfloor x^{\prime}+1/2\rfloor\). For \(n\)-dimensional filtering, this corresponds to applying a box reconstruction filter over \([-\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}}]^{n}\) to the texture to make a continuous function \(t(x)\). Equivalently, it corresponds to convolving the original filter function \(f\) with a box filter, changing its shape. Thus, the filter function that is sampled should be the deconvolution of the desired filter with the box function. This perspective allows us to better understand Hofmann et al.'s stochastic trilinear sampling algorithm, which is based on independent, uniform jittering in each dimension and then applying nearest neighbor sampling [1]. Their jittering corresponds to applying FIS
Figure 3: Appearance of a normal mapped material under minification. Stochastic texture filtering more accurately reconstructs the material’s appearance by filtering the material itself, while traditional texture filtering filters the surface normal before shading.
to sample the box filter which is then convolved with another box function, giving their stochastic trilinear interpolant.
We can thus filter with a B-spline filter of degree \(n\) by sampling a spline of degree \(n-1\) and performing a nearest lookup, since approximating B-splines are constructed by repeated convolution of a box filter via the Cox-de Boor recursion formula [1]. (For example, a quadratic B-spline filter can be achieved by sampling a triangular PDF over \([-1.5,1.5]^{2}\).) Sampling can either be performed via CDF inversion or by adding \(n\) uniformly-distributed random variables (also following the Cox-de Boor recursion).
This additional box function can be useful for rapidly changing filters such as a small-sigma Gaussian: evaluating it at discrete points results in subsampling error [20] and the correction requires evaluating the _erf_ error function. Filter importance sampling a regular, analytical normal distribution produces the same effect due to the convolution of a nearest-neighbor box function with the Gaussian. Furthermore, a Gaussian convolutional filter is an example of an infinite filter that is truncated in practice. With FIS, it is possible to evaluate an infinite filter by sampling the filter without truncation. This can simplify implementation (it is not necessary to carefully window the filter), as well as save the computational cost of multiple discrete weight evaluations and sample selection.
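The sketch below illustrates filter importance sampling with a nearest-neighbor lookup, using the Box-Muller transform for a Gaussian filter and a sum of uniform variables for B-spline filters; clamping stands in for whatever border handling a renderer would use.

```python
import math, random

def sample_gaussian_offset(sigma, xi1, xi2):
    """Box-Muller transform: a 2D texture-space offset with Gaussian distribution."""
    r = sigma * math.sqrt(-2.0 * math.log(max(xi1, 1e-12)))
    return r * math.cos(2.0 * math.pi * xi2), r * math.sin(2.0 * math.pi * xi2)

def sample_bspline_offset(n):
    """Sum of n uniforms on [-0.5, 0.5): a sample from the degree-(n-1) B-spline.
    The nearest lookup afterwards adds one more box convolution (degree-n filter)."""
    return sum(random.random() - 0.5 for _ in range(n))

def nearest_lookup(texture, u, v):
    """Nearest-neighbor fetch with clamping (texture[row][column])."""
    x = min(max(int(round(u)), 0), len(texture[0]) - 1)
    y = min(max(int(round(v)), 0), len(texture) - 1)
    return texture[y][x]

tex = [[x + 8 * y for x in range(8)] for y in range(8)]
du, dv = sample_gaussian_offset(0.8, random.random(), random.random())
print(nearest_lookup(tex, 3.2 + du, 4.7 + dv))
```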
## 5 Results
We have evaluated stochastic texture filtering in the context of both real-time rasterization and path tracing using _Falcor_[14], as well as offline rendering using _pbrt-v4_[15]. All performance measurements were taken using an NVIDIA RTX 4090 GPU.
### Filtering Order
It is well known that linear filtering of normal maps is incorrect and leads to changes in appearance [1]. An example is shown in Figures 3(a)-(c), where hardware texture filtering is used on a minified normal mapped surface. At points toward the horizon, the filter kernel is wide and filtering of the normals gives values that are close to the average normal of the texture. Comparing to the reference image in Figure 3(h), which was rendered with no filtering and many pixel samples, we see that filtering before computing lighting introduces a significant error.
Stochastically filtering the textures allows the use of the estimator in Equation 7, which can be understood to be filtering the material itself over its distribution of normals. Results are shown in Figures 3(d)-(f), which are much closer to the reference; stochastic filtering effectively translates minified bumps and imperfections into increased roughness appearance. Our stochastic filters have some error due to their use of MIP maps, which are linearly filtered, though this error is small, as can be seen by comparing Figures 3(e) and (g), which were rendered using the same settings, save for Figure 3(g) using only the most detailed MIP level.
Because stochastic filtering only uses uninterpolated single texel values, only normals that are present in the normal map are used for lighting calculations. Thus, it can be understood as filtering discrete piecewise-linear microgeometry specified by the normal map, rather than using the normals to reconstruct a smooth underlying surface. Depending on the artist's intent, this behavior may be desirable--consider the example shown in Figure 4 where adjacent texels have significantly-different normals. With bilinear filtering, the filtered normals vary smoothly, corresponding to a smooth underlying surface, while stochastic filtering returns discrete normals.
Filtering BRDF properties prior to shading can lead to values violating the physical constraints of a BRDF model. Consider for example a texture with a scalar "metalness" parameter for a physically-based material model, where texels only have the values 0 and 1: with our approach, the material is only evaluated with metalness values of 0 and 1. At areas where the texture filter spans both values, we filter the material itself with only those two values. With traditional texture filtering, metalness values between 0 and 1 result, which may be nonsensical, depending on the material model. Our proposed filtering order allows for a more artist-friendly, non-linear, and compressed representation of full BRDF material models.
An example is shown in Figures 5(a) and (b), where a grid of temperature values is used to describe the full emission spectrum using Planck's law, which is non-linear. With the traditional approach, filtered temperature values are used to compute the emission spectrum at points in the volume. In contrast, stochastic filtering effectively computes emission spectra at the grid points and then filters those spectra; it thus preserves appearance under minification, while filtering the temperatures does not. Figure 5(c) shows the error introduced if volumetric MIP maps are used under minification, due to linear filtering of the non-linear temperature. In contrast, using a stochastic minification filter (here, a Gaussian in the plane tangent to the ray), preserves appearance under minification, as shown in Figure 5(d).
### Real-time Rendering
We evaluate stochastic texture filtering in a real-time renderer. Unlike software (CPU) renderers, real-time rendering with GPUs can use the hardware texturing unit with excellent bilinear filtering performance on standard texture formats. We do not expect stochastic texture filtering to provide performance benefits with those formats. We show, however, that it allows for efficient and high performance use of novel texture representation and compression formats not supported by existing hardware, as well as optimization of material graphs. Furthermore, we demonstrate how stochastic texture filtering enables magnification filters of significantly higher quality than
Figure 4: (a) Two texels with normals nearly 90 degrees apart. (b) With bilinear filtering, a smooth distribution of normals is reconstructed. (c) Stochastic filtering always uses single texel values from the image, so reconstructs an edge in this case.
the bilinear filter at the same cost, and more correct appearance preservation and minification.
In our experiments, we used DLSS [11] as a robust temporal integrator. Screen-space jittering for DLSS employs a 32-sample Halton sequence, while Spatio-Temporal Blue Noise (STBN) masks [20] are used as the source of random numbers for stochastic filtering. Our implementation performs stochastic filtering in the shading pass, which uses the Disney BRDF [12] and a single directional light. All images and performance measurements in this section were taken at 4K (\(3840\times 2160\)) resolution.
**Magnification, discrete filters:** For magnification, we analyze the visual benefits of high-quality bicubic Mitchell and truncated Gaussian filters with stochastic texture filtering by comparing with a simple bilinear filter, which is known for producing diamond-like artifacts and over-blurring. While the implementation of the stochastic Gaussian filter is straightforward, the Mitchell filter has negative weights and so we apply positivization (Section 2.2). In Figure 6 we observe better image quality from the higher-quality filters: either sharper response without bilinear filtering artifacts, or more pleasant diagonal edges and image smoothness. The use of STBN and DLSS results in no objectionable noise or flicker and the same performance cost as the bilinear filter.
**Magnification, filter importance sampling:** Filter importance sampling makes it possible to use infinite-extent filters without truncation. We compare FIS to sampling discrete filter weights using three Gaussian filters in Figure 7. For discrete sampling, we choose a single sample in the closest \(4\times 4\) window of texels and for FIS, we use the Box-Muller transform to sample the Gaussian, followed by a nearest-neighbor lookup.
Results are visually indistinguishable for \(\sigma=0.5\) but differ for the two other sigmas. With a very small \(\sigma\), we observe undersampling with discrete sample weights. For the large \(\sigma\), the limited radius of discrete sampling truncates the Gaussian kernel and produces visual artifacts. This can be improved by enlarging the filtering window, though with a corresponding increase in cost in sampling. FIS does not suffer from either of those issues, though it requires two random variables and cannot filter with exact kernels when convolution with a box filter is not desirable.
**Anisotropic filtering and minification:** Anisotropic filtering techniques commonly model the filter footprint as an ellipse, with axes derived from the partial derivatives of texture coordinates relative to screen coordinates. We build on that theory, but to save computational cost, we do not sample the ellipse in the shader but rely on screen-space jittering within the pixel to approximately sample the same extent. As shown in Figure 2, uniform jittering within the pixel gives a trapezoidal shape and projection in UV space. Although this does not preserve area or the original sample point distribution, it has no additional computational cost and in our experiments, approximates anisotropic filtering well.
Figure 5: Effect of applying a non-linear mapping after filtering versus before. Traditional trilinear filtering (a) filters first, then uses Planck’s law to compute the volumetric emission spectrum. In contrast, stochastic trilinear filtering (b) takes a sample according to the texture filter and applies Planck’s law. Because Planck’s law is highly non-linear, the results differ. Under minification, (c) MIP mapping introduces error by applying linear filtering to nonlinear quantities. Appearance is more accurately preserved with (d) a stochastic minification filter and no MIP maps.
Figure 6: Bilinear filtering (a) compared to stochastic, single sample estimation of the bicubic Mitchell (b) and Gaussian (c) filters, resolved with DLSS’s temporal accumulation. The bicubic Mitchell filter is much sharper than the bilinear filter and does not produce diamond-like artifacts. The Gaussian filter is isotropic and although it tends to blur textures, it produces the most pleasing and natural reconstruction of diagonal lines.
Figure 7: Gaussian texture filtering with varying \(\sigma\), comparing discrete sample stochastic filtering and filter importance sampling. For \(\sigma=0.5\), both produce visually indistinguishable results. FIS gives better results for both relatively small and a large \(\sigma\).
The degree of anisotropy is determined by the ratio between the major and minor axes of the ellipse. We choose a MIP level based on the length of the minor axis and sample a single MIP level stochastically. Unlike current GPU hardware filtering, which has a maximum anisotropy ratio of 16, our method allows any anisotropy. We limit the ratio to 64 to avoid GPU texture cache thrashing, rescaling the minor axis if necessary.
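The anisotropy clamp and the stochastic selection of a single MIP level can be summarized with the following sketch; the 64x limit follows the text, while the function names and the exact LOD convention are illustrative assumptions.

```python
import numpy as np

def clamp_anisotropy(major, minor, max_ratio=64.0):
    """Enlarge the minor axis when the anisotropy ratio exceeds the 64x limit,
    which is used to avoid GPU texture-cache thrashing."""
    if minor * max_ratio < major:
        minor = major / max_ratio
    return major, minor

def stochastic_mip_level(minor_axis_texels, rng):
    """Pick one integer MIP level from the continuous LOD implied by the minor
    axis; promoting by one level with probability frac(lod) makes the
    expectation equal to deterministic blending between the two nearest levels."""
    lod = max(0.0, np.log2(max(minor_axis_texels, 1e-8)))
    base = int(np.floor(lod))
    return base + (1 if rng.random() < lod - base else 0)
```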
Figure 8 shows a plane textured with a checkerboard pattern. Magnification is handled using filter importance sampling. The image reconstructed by DLSS is temporally stable, with occasional flickering in regions containing very high-frequency details. In motion, we observe sporadic ghosting and other temporal artifacts introduced by DLSS, but the overall image quality remains comparable to hardware anisotropic filtering. Although DLSS doesn't completely remove noise caused by stochastic texture sampling, STBN reduces it, making it barely perceptible and only in magnified high-contrast areas. Figure 3 also demonstrates that temporal reconstruction is effective in recovering a high-quality anisotropically filtered image while only using 1 spp. We note that our approach of combining screen-space jittering with a higher-resolution MIP selection is similar to the ad-hoc practice of _negative MIP biasing_[13, 14].
**Triplanar mapping:** Triplanar mapping samples all textures three times with UV coordinates aligned to the _XY_, _XZ_, and _YZ_ planes and blends the filtered results based on the surface normal direction to avoid excessive texture stretching. Since it is a weighted average of three values, we can evaluate it stochastically using Equation 3. Results are shown in Figure 9. We find that DLSS resolves the stochastic sampling error effectively and observe no temporal visual artifacts such as flicker or ghosting. For this scene, the visual differences from filtering before shading versus filtering after shading are minor.
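A sketch of the single-sample triplanar estimate follows (NumPy pseudocode); the blend-weight construction from the normal and the `sample_*` lookup callbacks are illustrative assumptions rather than our exact shader code.

```python
import numpy as np

def stochastic_triplanar(normal, sample_yz, sample_xz, sample_xy, rng):
    """Single-sample estimate of triplanar blending.

    Deterministic triplanar mapping evaluates all three planar projections and
    averages them with normal-derived weights; picking ONE projection with
    probability equal to its weight has the same expectation while issuing a
    third of the texture lookups.
    """
    w = np.abs(np.asarray(normal, dtype=float))
    w = w / w.sum()                                  # blend weights sum to one
    lookups = (sample_yz, sample_xz, sample_xy)      # dominant x, y, z axes
    return lookups[rng.choice(3, p=w)]()
```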
**Texture compression:** Stochastic texture filtering enables the use of more advanced texture compression and decompression algorithms by requiring only a single texel to be decoded at each lookup [15, 23]. To connect those observations to our work, we implemented a much simpler real-time decompression algorithm--the 2D discrete cosine transform (DCT), where \(8\times 8\) texel blocks store only 4 bytes per channel. We store the six lowest-frequency DCT coefficients for each channel, allocating 7 bits for the DC component and 5 bits for the remaining coefficients, achieving 16\(\times\) compression for 8-bit data. This representation is not supported by GPU texture hardware, so texels must be decoded in the material evaluation shader and filtering must be performed manually.
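The per-texel decode can be illustrated with the following sketch; the zigzag ordering of the six coefficients and the omission of the bit-level quantization (7 bits DC plus five 5-bit AC values) are simplifying assumptions, not the exact storage format.

```python
import numpy as np

# Assumed zigzag order of the six lowest-frequency coefficients of an 8x8 block.
LOW_FREQ = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]

def dct_basis(u, x):
    """1D DCT-II basis function for an 8-sample block."""
    alpha = np.sqrt(1.0 / 8.0) if u == 0 else np.sqrt(2.0 / 8.0)
    return alpha * np.cos((2 * x + 1) * u * np.pi / 16.0)

def decode_texel(coeffs, x, y):
    """Reconstruct one texel (x, y in 0..7) from the six stored coefficients.

    Only the requested texel is evaluated; with stochastic filtering a lookup
    needs exactly one such decode, instead of the 8 (trilinear) or 24
    (triplanar trilinear) decodes required by deterministic filtering.
    """
    return sum(c * dct_basis(u, x) * dct_basis(v, y) for c, (u, v) in zip(coeffs, LOW_FREQ))
```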
As shown in Figure 10, stochastic trilinear filtering gives nearly identical visual results to deterministic trilinear filtering. We measure a 2.9\(\times\) performance improvement; when combined with stochastic triplanar mapping, performance is 7.9\(\times\) better than fully-deterministic filtering.
Figure 8: A checkerboard rendered using stochastic anisotropic and bicubic filtering (**top**). Red and blue insets (**bottom rows**) show magnified and minified areas, respectively, comparing stochastic bilinear and bicubic filtering with hardware anisotropic filtering and a 1024 spp reference solution. Stochastic filtering uses FIS with STBN, except for the reference image that used a uniform distribution for the filtering.
Figure 10: A 4K-resolution rendering of a DCT-compressed 9-channel 4096 \(\times\) 4096 texture set using stochastic filtering (**left**). Despite some loss of higher frequency details in the original uncompressed texture (**upper left inset**), the stochastic trilinear (**upper right inset**) and deterministic trilinear (**lower left inset**) filtering results appear virtually identical, as shown by the 10\(\times\) magnified error image (**bottom right**). Stochastic filtering reduces rendering time from 1.66 ms to 0.57 ms.
Figure 9: Full triplanar mapping (**top**) compared to its stochastic, single sample estimation (**bottom**). From left to right we present pure diffuse shading without normal mapping, diffuse shading with normal mapping, and full specular and diffuse lighting. Insets show error magnified 10\(\times\).
**Visual noise ablation study:** To validate the effectiveness of DLSS [11] as the temporal integrator and Spatio-Temporal Blue Noise (STBN) [22] as the source of the randomness, we performed an ablation study, presented in Figure 11, using an extreme zoom-in on a high-contrast area. We verify that, as compared to white noise, STBN dramatically reduces the appearance of noise and improves its perceptual characteristics. Similarly, DLSS removes most of the noise--both in the case of white noise and STBN. When DLSS is used in combination with white noise, some visual grain remains, but it disappears completely when combined with STBN.
### Offline Rendering
Offline rendering enjoys more generous pixel sampling rates than real-time rendering, which we have found to be sufficient to resolve the variance introduced by stochastic texture filtering. For example, Figure 12 shows a close view of a region of the _pbrt-v4 Watercolor_ scene that exhibits texture magnification, rendered at just 8 spp. Both regular and stochastic texture filtering give very similar results, though stochastic filtering accesses as many as \(7.85\times\) fewer texels. Augmenting stochastic EWA with a stochastic bicubic magnification filter gives the best results, with \(3.3\times\) fewer texel accesses than regular trilinear filtering.
In order to evaluate the error introduced by stochastic filtering when used with volumetric path tracing, we rendered a view of the _Disney Cloud_ data set [20]. _pbrt_'s volumetric path tracer is based on delta tracking with null scattering [1, 19] and uses ratio tracking [13] for transmittance. Because the cloud's density is used to scale the absorption and scattering coefficients, and those coefficients make affine contributions to the estimated radiance values, both filtering approaches converge to the same result.
We converted the OpenVDB data set to NanoVDB for use on the GPU and used the \(8\times\) downsampled version of the cloud in order to make the differences between filtering algorithms more apparent. The image in Figure 1 was rendered at 1080p resolution with 256 samples per pixel (spp). Trilinear filtering causes block- and diamond-shaped artifacts that are not present with tricubic filtering. Stochastic filtering gives images that are visually indistinguishable from traditional filtering; the error it introduces is far less than the error from Monte Carlo path tracing. For this scene, we saw less than a 5% increase in mean squared error (MSE) due to the stochastic filters. Table 1 reports performance: compared to trilinear filtering, tricubic filtering doubles rendering time since it requires \(8\times\) more texel lookups in the NanoVDB multilevel grid. With stochastic filtering, we are able to render using a high-quality tricubic filter in less time than trilinear filtering, with \(1/8\) as many texel lookups.
## 6 Discussion and Future Work
We have shown that stochastic texture filtering makes it possible to perform filtering outside of the lighting integral, rather than first filtering the texture parameters used by it. By doing so, systematic error is eliminated from rendered images in the common case where a textured parameter has a non-affine contribution to the final result. Examples include shadow mapping, normal mapping, roughness values used for microfacet distributions, and temperatures mapped to emission spectra. Filtering lighting in this way provides the benefit of preserving appearance at different scales.
\begin{table}
\begin{tabular}{l|c c|c c} & \multicolumn{2}{c|}{Rendering} & \multicolumn{2}{c}{Filtering} \\ & Time & Speedup & Time & Speedup \\ \hline Trilinear & 43.30 s & & 14.12 s & \\ Stoch. Trilinear & 27.13 s & 1.60\(\times\) & 3.27 s & 4.32\(\times\) \\ \hline Tricubic & 87.28 s & & 62.25 s & \\ Stoch. Tricubic & 31.51 s & 2.77\(\times\) & 5.10 s & 12.2\(\times\) \\ \end{tabular}
\end{table}
Table 1: Performance when rendering Figure 1 at 1080p resolution with 256 spp. Stochastic filtering gives a significant performance benefit, both in overall rendering time and time spent filtering.
Figure 11: Ablation study on the effectiveness of DLSS and STBN on noise removal in a real-time setting. White noise (**left column**) creates visually distracting patterns of noise, while STBN (**right column**) dramatically reduces its appearance. When using DLSS as a temporal integrator (**bottom row**) the noise is dramatically reduced as compared to a single frame result (**top row**). DLSS and STBN work very well when combined, making the noise almost imperceptible.
Figure 12: Stochastic texture filtering of a magnified texture, rendered with 8 spp in a path tracer. EWA gives better results along edges than trilinear filtering, though still has artifacts, which a bicubic magnification filter improves. The noise from stochastic texture filtering is minimal, while the reduction in number of texels accessed (ratios at the bottom) ranges from \(3.26-7.85\times\).
Stochastic filtering offers additional benefits, including making more complex compressed texture representations viable by reducing complex filters to a single texel lookup. It further allows the use of higher-quality texture filters, as we have shown with bicubic and Gaussian filters; stochastic filtering makes it possible to use high-quality filters in high-performance code, providing further improvements in image quality. We hope that our work will contribute to the adoption of higher-order texture magnification filters in real-time rendering. This shift would reduce the reliance on low-quality bilinear filters, given our demonstration that the minor noise introduced by stochastic texture filtering can be effectively managed using temporal filtering algorithms like DLSS, or by employing moderate pixel sampling rates.
We note that the change to filtering the lighting calculation presents a challenge. Different renderers may produce varying results depending on which filtering method they use, even if their lighting and material systems are the same. It also means that our method could change the appearance of existing 3D assets, requiring art review before being used as a drop-in replacement.
For real-time rendering, we used DLSS to perform temporal filtering. DLSS is a learning-based solution and was not trained on such data. While the overall reconstruction quality is satisfactory, minor flickering and ghosting artifacts remain, especially in high-contrast areas and patterns like a checkerboard. Including stochastically-filtered texture in the training datasets would likely improve the reconstruction quality.
Our approach makes it feasible to use more complex reconstruction filters than are commonly used today. For example, non-linear content-dependent filters (such as steering kernels or the bilateral kernel) can be effective at reconstructing features like edges in images [16] and volumes [16] and are essential for super-resolution. If such non-linear, local filter parameters or weights can be obtained cheaply (for example, from preprocessing or computed at a lower resolution [17]), our stochastic filtering framework could be applied to them, giving further improvements to image quality.
## Acknowledgments
We would like to thank Aaron Lefohn and NVIDIA for supporting this work, John Burgess for suggesting the connection to percentage closer filtering, and Karthik Vaidyanathan for many discussions and suggestions. We are grateful to Walt Disney Animation Studios for making the detailed cloud model available and to Lennart Demes, author of the _ambientCG_ website, for providing a public-domain PBR material database that we used to produce the real-time rendering figures.
|
2301.06636 | Valuing Distributed Energy Resources for Non-Wires Alternatives | Distributed energy resources (DER) as non-wires alternatives, regardless of
owner, have the potential to reduce system operating costs and delay system
upgrades. However, it is difficult to determine the appropriate economic signal
to incentivize DER investors to install capacity that will benefit both the DER
investors and the system operator. In an attempt to determine this co-optimal
price signal, we present a bilevel optimization framework for determining the
least cost solution to distribution system over-loads. A key output of the
framework is a spatiotemporal price signal to DER owners that simultaneously
guarantees the DER owners' required rate of return and minimizes the system
operation costs. The framework is demonstrated with a case by which the system
operator considers utility owned battery energy storage systems, traditional
system upgrades, and energy purchase from DER owners. The results show that by
valuing DER for non-wires alternatives the utility owned storage system sizes
can be reduced, less hardware upgrades are necessary, and upfront capital costs
as well as operating costs are reduced. | Nicholas D. Laws, Michael E. Webber | 2023-01-16T23:39:54Z | http://arxiv.org/abs/2301.06636v2 | # Valuing Distributed Energy Resources for Non-Wires Alternatives
###### Abstract
Distributed energy resources (DER) as non-wires alternatives, regardless of owner, have the potential to reduce system operating costs and delay system upgrades. However, it is difficult to determine the appropriate economic signal to incentivize DER investors to install capacity that will benefit both the DER investors and the system operator. In an attempt to determine this co-optimal price signal, we present a bilevel optimization framework for determining the least cost solution to distribution system over-loads. A key output of the framework is a spatiotemporal price signal to DER owners that simultaneously guarantees the DER owners' required rate of return and minimizes the system operation costs. The framework is demonstrated with a case in which the system operator considers utility owned battery energy storage systems, traditional system upgrades, and energy purchase from DER owners. The results show that by valuing DER for non-wires alternatives the utility owned storage system sizes can be reduced, fewer hardware upgrades are necessary, and upfront capital costs as well as operating costs are reduced.
Power distribution planning, Power distribution economics, Optimization methods, Non-wires Alternatives, Distributed energy resources.
## I Introduction and Background
As population and electrification grow, there is pressure on local electric distribution utilities to increase overall capacity and upgrade equipment to maintain high reliability. These actions include simple maintenance such as vegetation management, but also replacing and upgrading transformers, installing more lines, replacing lines with newer ones, and so forth.
However, those traditional actions related to the wires and poles of the distribution system might not keep pace with the load growth needed to accommodate rapid electric vehicle adoption or the widespread installation of electric heat pumps as a way to reduce on-site fuel use for space and water heating. As a consequence, there is an acute need for non-wires alternatives that can be used to improve overall system performance. Some of those alternatives include demand response and distributed energy resources, such as local power generation and/or storage.
Though distributed energy resources avoid additional loading on the distribution lines, traditional utility funding models do not always support their installation. Furthermore, market signals can be confusing and do not encourage DER installation even though novel business models are emerging that would reduce total system cost.
Given this context, this research seeks to explain how DER might be appropriately valued in light of increasing electrification and EV adoption in an era when traditional grid enhancements are hobbled by cost and policy hurdles.
Early evaluations of DER for non-wires alternatives compared costs and benefits of known DER capacities and locations against capacity upgrade costs [1]. A common theme in the literature for valuing DER as non-wires alternatives accounted for the single perspective of the distribution system operator (DSO). For example, Contreras-Ocana _et al._ developed a model that puts DER costs and benefits in competition with upgrade deferrals from a single perspective, at a single location (substation or transformer) with forecasted overloads [2]. By neglecting power flow constraints they were able to account for many types of DER including energy efficiency investments. However, without a network model the DER are presumably installed at the single, overloaded location.
The valuable work by Andrianesis _et al._ demonstrates how
\begin{table}
\begin{tabular}{l l} \multicolumn{2}{l}{**Decision Variables**} \\ \hline \hline \(x\in\mathcal{R}^{M}\) & upper level primal decision variables \\ \(y\in\mathcal{R}^{N}\) & lower level primal decision variables \\ \(x\in(0,1)^{K}\) & upper level primal binary decision variables \\ \(\lambda\in\mathcal{R}^{J}\) & lower level, dual variables for equality constraints \\ \(\overline{\mu}\in\mathcal{R}^{N}\) & lower level dual variables for upper bounds \\ \(\underline{\mu}\in\mathcal{R}^{N}_{+}\) & lower level dual variables for lower bounds \\ \(\textbf{Parameters}\) & \\ \hline \hline \(V\in\mathcal{R}^{J\times N}\) & lower level equality constraint coefficients \\ \(w\in\mathcal{R}^{J}\) & lower level equality constraints right-hand-side \\ \(\overline{y}\in\mathcal{R}^{N}\) & upper bounds for lower level, primal decision variables \\ \(\underline{y}\in\mathcal{R}^{N}\) & lower bounds for lower level, primal decision variables \\ \(a\) & upper level scaling coefficient for cost of DER energy \\ \(b\) & lower level scaling coefficient for income from selling \\ \(c\in\mathcal{R}^{N}\) & lower level cost coefficients for lower level decisions \(y\) \\
**Sets and Indices** \\ \hline \hline \(\mathcal{E}\) & set of edges in the network \\ \(\mathcal{N}\) & set of nodes in the network \\ \(\mathcal{N}_{\text{DER}}\) & set of nodes for potential DER investors \\ \(\mathcal{N}_{\text{RESS}}\) & set of nodes for potential BESS installations \\ \(\mathcal{N}_{\text{TREX}}\) & set of nodes for potential transformer upgrades \\ \(\mathcal{N}_{\text{LINE}}\) & set of nodes for potential line upgrades \\ \(\mathcal{S}\) & set of demand charge periods \\ \(\mathcal{T}\) & set of time steps \\ \(\Phi_{j}\) & set of phases connected to node \(j\) \\ \end{tabular}
\end{table} TABLE I: Nomenclature |
2310.10715 | The warm-hot circumgalactic medium of the Milky Way as seen by eROSITA | The first all-sky maps of the diffuse emission of high ionization lines
observed in X-rays by SRG/eROSITA, provide an excellent probe for the study of
the warm-hot phase (T~10^6 K) of the circumgalactic medium (CGM) of the Milky
Way (MW). In this work we analyse the O VIII line detected in the first eROSITA
All-Sky Survey data (eRASS1). We fit a sky map made in a narrow energy bin
around this line, with physical emission models embedded in a 3D geometry to
constrain the density distribution of the warm-hot gas around our Galaxy, with
a focus on mid and high (absolute) Galactic latitudes. By masking out the
eROSITA bubbles and other bright extended foreground sources, we find that an
oblate geometry of the warm-hot gas (T~0.15-0.17 keV), flattened around the
Galactic disk with scale height z_h~1-3 kpc, best describes the eRASS1 O VIII
map, with most of the observed emission resulting to be produced within a few
kpc from the Sun. The additional presence of a large scale warm-hot spherical
halo, while providing a minor contribute to the X-ray emission, accounts for
the high O VII absorption column densities detected with XMM-Newton, as well as
most of the baryon budget of the CGM of the MW. The eROSITA data carry the
largest amount of information and detail of O VIII CGM intensities to date,
allowing for a significant reduction of the statistical uncertainties of the
inferred physical parameters. | N. Locatelli, G. Ponti, X. Zheng, A. Merloni, W. Becker, J. Comparat, K. Dennerl, M. J. Freyberg, M. Sasaki, M. C. H. Yeung | 2023-10-16T18:00:00Z | http://arxiv.org/abs/2310.10715v1 | # The warm-hot circumgalactic medium of the Milky Way as seen by eROSITA
###### Abstract
Context: The first all-sky maps of the diffuse emission of high ionization lines observed in X-rays by SRG/eROSITA provide an excellent probe for the study of the warm-hot phase (\(T\sim 10^{6}\) K) of the circumgalactic medium (CGM) of the Milky Way (MW). In this work we analyse the O VIII line detected in the first eROSITA All-Sky Survey data (eRASS1). We fit a sky map made in a narrow energy bin around this line, with physical emission models embedded in a 3D geometry to constrain the density distribution of the warm-hot gas around our Galaxy, with a focus on mid and high (absolute) Galactic latitudes. By masking out the eROSITA bubbles and other bright extended foreground sources, we find that an oblate geometry of the warm-hot gas (\(T\simeq 0.15-0.17\) keV), flattened around the Galactic disk with scale height \(z_{h}\sim 1-3\) kpc, best describes the eRASS1 O VIII map, with most of the observed emission found to be produced within a few kpc from the Sun. The additional presence of a large scale warm-hot spherical halo, while providing a minor contribution to the X-ray emission, accounts for the high O VII absorption column densities detected with XMM-_Newton_, as well as most of the baryon budget of the CGM of the MW. The eROSITA data carry the largest amount of information and detail of O VIII CGM intensities to date, allowing for a significant reduction of the statistical uncertainties of the inferred physical parameters.
## 1 Introduction
The largest contribution to the gas mass budget of galaxies is expected to be retained in a hot phase in the halo which extends over scales that are comparable to their virial radius \(R_{\rm vir}\), with temperature \(T\sim 10^{5}-10^{7}\) K (White & Rees 1978; Tumlinson et al. 2017). The presence of these hot gas halos results from the infall of intergalactic material onto the spines and nodes of the dark matter structure of the Universe (i.e. the cosmic web), from the \(\sim\)Mpc scales down onto the smaller scale peaks (\(\sim\)100 kpc) corresponding to the galactic dark matter halos. The infalling gas bulk motions and collisions power stationary shock waves at the boundaries of the potential wells. These stationary shocks compress and heat up the gas inside the well, up to a temperature similar (but not necessarily equal, see Lochhaas et al. 2022) to the virial temperature \(T_{\rm vir}\propto M_{\rm vir}/R_{\rm vir}\sim 10^{5}-10^{7}\) K usually computed at \(R_{\rm vir}\equiv R_{200}\) (see Oppenheimer et al. 2018; Nelson et al. 2018, and references therein). The medium enclosed within this radius and found outside the stellar disk of a galaxy is usually defined as the circumgalactic medium (CGM).
In this warm-hot gas phase of the CGM, collisions between the gas atoms dominate the energy exchange between them and in turn the overall ionisation of the atomic species. The collisional ionisation equilibrium hypothesis thus provides a theoretical framework to compute the expected brightness of the ionisation lines of the warm-hot gas phase (Smith et al. 2001). Such a phase has been observed as all-sky diffuse X-ray emission since about three decades already (e.g. Kuntz & Snowden 2000, based on ROSAT all-sky X-ray survey data). High ionisation lines of species like C IV, O VII, O VIII or Ne IX are the most common states and their presence has been confirmed around external galaxies by several independent probes: via the absorption features they produce along the lines of sight towards bright active galactic nuclei (Gupta et al. 2012; Miller & Bregman 2013; Nicastro et al. 2023; Das et al. 2019, 2019); via their associated emission lines studies (Forman et al. 1985; O'Sullivan et al. 2001; Yao et al. 2009; Henley & Shelton 2010; Bogdan et al. 2013; Miller & Bregman 2015; Goulding et al. 2016; Faerman et al. 2017, although see potential biases pointed out by Zheng et al. 2020); by detecting diffuse emission around external galaxies (Strickland et al. 2004; Tullmann et al. 2006; Li et al. 2017; Hodges-Kluck et al. 2018); via stacking experiments over a large sample of distant galaxies, revealing the presence of a layer of hot gas distributed closely within and around galactic stellar discs (Anderson et al. 2015; Comparat et al. 2022; Chadayamuri et al. 2022). In the context of the MW, the spectral evidence of both warm-hot \(T\simeq 0.2-0.3\) keV and hot \(T\simeq 0.7-1\) keV gas phases have been reported (Yoshino et al. 2009; Gupta et al. 2012; Nakashima et al. 2018; Das et al. 2019, 2020; Kaaret et al. 2020; Bhattacharya et al. 2022; Ponti et al. 2022; Bluem et al. 2022) and similar components have also been recently associated to the Large Magellanic Cloud (Gulick et al. 2021).
The same set of evidence, however, may also suggest the alternative scenario in which the heated gas is expelled from the stellar disc by the explosions of supernovae, through mechanical or radiative energy feedback. The gas outflowing from the stellar disk may then cool, precipitate and fall back onto the disc, creating a recycling of the gas that powers new episodes of star formation (Shapiro & Field 1976; Bregman 1980). Given the sensitivity of X-ray experiments to particle density, usually increasing towards
the inner portions of galactic halos, this scenario is a complementary alternative to the gravitational infall in providing an explanation for the presence of a hot gas phase around galaxies.
The current picture for the MW indicates the presence of both types of scenarios (gas accreted from the large scale environment vs. outflowing from the Galactic disc), with a component distributed in a disk-like geometry extending \(\sim\) a few kpc above and below the Galactic plane producing most of the observed X-ray CGM emission, while a large scale (\(\sim 100-300\) kpc) halo is present but provides a minor contribution to the emission (Nakashima et al., 2018; Kaaret et al., 2020; Qu & Bregman, 2019; Bluem et al., 2022). The halo component however, would contain most of the mass associated to the hot gas phase due to its huge volume.
A crucial aspect in studies of the diffuse emission of the MW is to cover large portions of the sky with sufficient spatial resolution to discriminate sources morphology and sufficient spectral resolution to distinguish emission components. The sky fraction sampled by instruments with good spatial and spectral resolution (e.g. XMM-_Newton_, _Chandra_) is small due to the relatively small field of view (compared to 4\(\pi\) sr). In this respect, the _ROSAT_ mission (Snowden & Schmitt, 1990; Snowden et al., 1997) operational in the '90s set a milestone producing the first all-sky X-ray map. The spectral resolution of _ROSAT_, providing five broad bands from 0.1 to 2.4 keV, prevented however studies of single emission lines and an easy identification of the different sources of diffuse emission in a given energy band. More recently the HaloSat instrument provided a larger coverage at high Galactic latitudes with 10 deg spatial resolution and relatively good spectral resolution (85 eV at 0.68 keV, Kaaret et al., 2019). The best figure of merit exploiting a high survey speed and a sufficient spectral resolution, combined with a high spatial resolution, has finally been reached by the extended ROentgen Survey with an Imaging Telescope Array (eROSITA) instrument onboard the Spectrum-Roentgen-Gamma (SRG) space observatory, launched in July 2019. eROSITA is a space X-ray telescope featuring a large effective area from 0.2 to 8 keV (comparable to that of XMM-_Newton_ in the 0.3-2 keV band), in combination with a large field of view (\(\sim 1\) deg\({}^{2}\)), high spatial resolution (\(\sim 30\) arcsec) and instrumental energy resolution of \(\sim 80\) eV at 1 keV (Merloni et al., 2012; Sunyaev et al., 2021; Predehl et al., 2021). These features combined, allow for the first time to break down the soft X-ray background (0.2-1 keV) into its components, including the hot CGM of the MW, in both spectra and high-resolved images.
In this paper, we present the analysis of the first ever O VIII half-sky1 emission line maps (Zheng et al., in preparation), aimed at constraining the density distribution and overall geometry of the warm-hot CGM of the MW. The spectral analysis performed on the eROSITA Final Equatorial Depth Survey (eFEDS, Brunner et al., 2022), a deep \(\sim 140\) deg\({}^{2}\) field at moderate Galactic latitudes (\(20^{\circ}<b<40^{\circ}\), Ponti et al., 2022, and references therein), provides us with information on the temperature and metal content of the detected CGM phases. By adopting this information, we can derive the X-ray emission of simple 3D geometrical models for the CGM density, and look for the one best describing the narrow-band map of the detected O VIII emission line.
Footnote 1: the Western half of the sky at \(359.9442>l>179.9442\) degrees (great circle over galactic poles and Sgr A*). These are proprietary data of the eROSITA_DE consortium.
In Sec. 2 we briefly summarize the data reduction extensively presented in Zheng et al., submitted; in Sec. 3 we describe the components used to describe the line emission and the method used to fit the CGM geometry to the data; in Sec. 4 we present our main results and in Sec. 5 we discuss them in the context of the literature; in Sec. 6 we summarize our results and draw our conclusions.
## 2 Data
The main emphasis in this work is given to the modeling of the line emission detected in the first all-sky survey of the eROSITA data (eRASS1). However, a comprehensive analysis of the hot CGM of the MW, which is the scientific driver of our research, cannot ignore additional and complementary information retrieved by independent missions or methods. Among the data sets presented below, the eRASS1 maps retain the highest statistical power thanks to the orders-of-magnitude larger sample size.
We analyse the data from the first eROSITA All-Sky Survey of the eROSITA_DE consortium. We exploit images of the count rate per pixel generated from the original event files of eROSITA in different narrow energy bands as presented in Zheng et al., in preparation (please refer to that work for details on the image production). The narrow band encompasses the [0.614-0.694] keV range and is named after the most prominent emission line included within the range, namely the O VIII line (\(\sim\)0.654 keV). The energy range has been fixed to 80 eV around the line centroid, in order to approximately account for the eROSITA energy resolution at the corresponding energies (Predehl et al., 2021). The O VIII band map has been created using the eROSITA Science Analysis Software System (eSASS, version 020) and is shown in Fig. 1. The eROSITA data have been checked and validated against the RASS data (Snowden et al., 1997), finding values consistent with the ones presented here (Zheng et al., submitted).
The eRASS1 data feature a highly non-uniform exposure across the sky, producing a direction-dependent sensitivity threshold to point source detection. Removing point-source emission homogeneously (including AGN) and taking the subtraction correctly into account in the modeling of the cosmic X-ray background (CXB) is non-trivial. We thus decide to subtract only the brightest point sources detected in the 0.2-2.3 keV band (Merloni et al. in prep.) above a flux of \(10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\). This high threshold allows us to remove bright source contamination in the maps while keeping the CXB contribution uniform across the sky. We convert the original FITS maps to HEALPix format exploiting the HEALPy Python library (Zonca et al., 2019). The HEALPix maps, defined to provide sky area units of equal surface (Gorski et al., 2005), have been used for the analysis, whereas a Zenith Equal Area projection is used throughout this work only for display purposes.
In addition, we consider a catalogue of O VII \(K\alpha\) absorption column densities retrieved thanks to the higher spectral resolution of the XMM-_Newton_ Reflection Grating Spectrometer (Bregman & Lloyd-Davies, 2007; Miller & Bregman, 2013). The column density \(N\) of the absorption lines is directly proportional to (the integral of) the density of the medium \(N\propto nL\) (whereas emission intensity goes as \(\propto n^{2}L\)). For this reason O VII absorption data are particularly relevant to constrain lower density plasma located at large distances from the Sun.
### eRASS1 data selection
The eRASS1 data cover the Western Galactic sky. We show the O VIII narrow energy band map in Fig. 1. Several large angular
scale features and sources show up in the map (e.g. the eROSITA bubbles, the Eridanus-Orion superbubble, the Monogem Ring supernova remnant, etc.). These extended sources would bias our analysis if kept in the data. The analysis of the hot CGM of the MW presented in this work requires excluding the photons coming from these sources. We thus select all pixels within \(2\sigma_{b}\) of Fig. 1 (left panel), where \(\sigma_{b}\) represents the root-mean-square value computed at every latitude \(b\) in the \(220\deg<l<250\deg\) stripe at the same latitude \(b\). The longitude range considered is in fact free of extended foreground sources, as evident from Fig. 1.
In addition, we exclude all regions holding total hydrogen column densities \(\rm N_{H}>1.6\times 10^{21}\,cm^{-2}\). Overestimation of the total hydrogen column density may in fact affect lines of sight in which not all of the emission comes from behind the absorption layer. This effect may become more prominent closer to the disk, where column densities are also higher. The column density threshold of \(\rm N_{H}>1.6\times 10^{21}\,cm^{-2}\) fixes the bias on the absorbed emission model in the O VIII band to be \(<30\%\) for any absorbed source, for assumed \(\rm N_{H}\) values within a factor +50% of the true value. The selected lines of sight used in our analysis are shown in Fig. D.1 in the Appendix. They are mostly found at \(|b|>15\deg\). Clusters of galaxies from the MCXC catalogue (Piffaretti et al. 2011) are also masked up to twice their \(R_{500}\) reported value. The regions excluded by the resulting mask correspond either to extended foreground sources or to high column density regions (i.e. the Galactic disk). The final selected area used for the analysis presented in this work amounts to \(\sim 1/3\) of the Western sky (i.e. \(\sim 6.6\rm k\,deg^{2}\)).
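The selection can be summarized by the following schematic healpy snippet; the file names, the map resolution and the exact clipping criterion within each latitude row (here a simple cut at twice the rms of the reference stripe) are illustrative placeholders rather than a verbatim description of our pipeline.

```python
import numpy as np
import healpy as hp

NSIDE = 64
rate = hp.read_map("erass1_o8_rate.fits")      # O VIII band count-rate map (placeholder file)
nh = hp.read_map("nh_total.fits")              # total hydrogen column density [cm^-2]
lon, lat = hp.pix2ang(NSIDE, np.arange(hp.nside2npix(NSIDE)), lonlat=True)

mask = nh < 1.6e21                             # drop strongly absorbed sightlines

# Keep, at each latitude b, only pixels within 2 sigma_b of the clean
# 220 deg < l < 250 deg reference stripe at the same latitude.
for b in np.unique(np.round(lat)):
    row = np.abs(lat - b) < 0.5
    ref = row & (lon > 220.0) & (lon < 250.0)
    if not ref.any():
        continue
    sigma_b = np.sqrt(np.mean(rate[ref] ** 2))   # rms of the clean stripe at this latitude
    mask &= ~row | (rate < 2.0 * sigma_b)        # clip bright extended foregrounds
```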
### Ubiquitous background and foreground components
The selected data represent what is commonly referred to as the X-ray sky background. The spectrum extracted in the eFEDS region is shown in Fig. 5 of Ponti et al. (2022), together with the proposed best-fits of the data. By looking at their model components, the diffuse background in the 0.6-0.7 keV band includes the emission of the hot CGM (blue line), whose 3D structure we aim to analyse. Other components that we aim to isolate are also present, namely the instrumental background (INST, black line) and the CXB (magenta line). In softer energy bands (e.g. O VII, 0.5-0.6 keV) the addition of the Local Hot Bubble emission (LHB, red line) and the potential presence of the Solar Wind Charge Exchange (SWCX, cyan line) complicate the analysis. Although O VII is the most prominent CGM emission line, in this work we do not analyze the 0.5-0.6 keV energy range due to the limited knowledge about the detailed morphology of the LHB and SWCX components. Instead, both the LHB and SWCX components are found to only provide a minor contribution in the O VIII band. We then consider the O VIII band as the main driver of our results. The study of the O VIII/O VII line ratio is also expected to provide deeper insight into the temperature distribution across the sky, provided that the degeneracy between the various components building up the O VII emission can be correctly separated. The study of the O VII line however goes beyond the scope of the analysis presented here and will be addressed in a future work.
In addition to a warm-hot medium, a hotter plasma component (\(kT\sim 10^{7}\) K) is introduced in the eFEDS spectral analysis to model excess emission found at \(\sim 1\) keV. This component may produce emission also in the O VIII band, although with an intensity comparable to the LHB and SWCX components, amounting to only \(1-2\%\) of the total intensity in this band.
Figure 1: eRASS1 O VIII intensity data used in this work. Data are shown using a Zenith Equal Area projection in the Galactic coordinate reference frame and centered at \((l,b)=(270,0)\deg\) throughout this work. We note that the left panel includes the contribution from the eROSITA instrumental background and the sky flux is processed through the effective area of eROSITA. The right panel instead shows the same map where the instrumental background has been subtracted, leaving the signal from the sky (see eq. 2). The sky signal has then been divided by the eROSITA effective area to obtain the surface brightness of the physical sky signal in line units (L.U. = \(\rm ph\,s^{-1}\,cm^{-2}\,sr^{-1}\)).
Given the very little information available on the shape and properties of this component, as well as the minor contribution in the O VIII band, for simplicity we leave it out from the analysis carried out in this work.
We thus describe the CGM intensity as
\[I_{\rm CGM}(\mathbf{s},E)\,e^{-\sigma(E)N_{\rm H}(\mathbf{s})}=I_{\rm eRASS1}^{\rm sky}(\mathbf{s},E)-I_{\rm add}(\mathbf{s},E) \tag{1}\]
\[I_{\rm eRASS1}^{\rm sky}(\mathbf{s},E)=I_{\rm eRASS1}(\mathbf{s},E)-I_{\rm INST}(E) \tag{2}\]
\[I_{\rm add}(\mathbf{s},E)=I_{\rm CXB}(E)\,e^{-\sigma(E)N_{\rm H}(\mathbf{s})}+I_{\rm LHB}(E)+I_{\rm SWCX}(\mathbf{s},E) \tag{3}\]
Kaaret et al. 2020). In addition, we test a model with \(kT=0.225\) keV and \(Z=0.3\,Z_{\odot}\) to assess potential systematic uncertainties in the fit results. We note that a constant temperature profile is also expected for any virialized halo following a total mass profile \(M(<r)\propto r\), such as the one derived by assuming a Navarro-Frenk-White dark matter profile (Navarro et al. 1997) in the theoretical framework of \(\Lambda\)CDM cosmology. Important deviations from the virial temperature may anyway be common depending on the amount of turbulence, bulk motions, magnetic fields and accelerated cosmic rays in one galaxy. A uniform metal abundance \(Z=0.1\,Z_{\odot}\) is also assumed for the halo (Ponti et al. 2022).
#### 3.2.1 Geometry: spherical \(\sim\) virialized halo
In the extended halo model the density of the hot material is most simply described by a spherical \(\beta\) model
\[n(r)=n_{0}\left[1+\left(\frac{r}{r_{0}}\right)^{2}\right]^{-\frac{3}{2}\beta} \tag{4}\]
where \(r\) is the distance to the Galactic center, \(n_{0}\) and \(r_{0}\) describe the flattening of the inner profile, while \(\beta\) describes the roll off of the density at large radii. In fact, since in practice we masked out most directions in the quarter slab close to the Galactic center (\(l\geq 270\) deg) due to either foreground structures or high absorption, a simpler formula for the \(\beta\) model is considered, by taking the limit of the model at large radii. Small radii are in fact probed mostly by directions close to the Galactic Centre, excluded from our analysis. The asymptotic formula also reduces the number of degrees of freedom as
\[n(r>>r_{0})=n_{0}r_{0}^{3\beta}r^{-3\beta}=C\,r^{-3\beta} \tag{5}\]
This treatment allows us to ignore the degeneracy between the central parameters and to potentially obtain a more robust estimate for \(\beta\), which is key to correctly inferring the mass of the baryons.
#### 3.2.2 Geometry: oblate disk-like component
In an alternative scenario the X-ray emission can be produced by a hot corona surrounding the stellar disc, powered by outflows of hot gas driven by supernova explosions, or by an unresolved population of Galactic sources distributed in and around the disk. Both imply a non-spherical geometry and are expected to mimic the flatter distribution characteristic of the stellar/ISM disc, although with potentially different scale length/height. To model this kind of flattened density distribution, we set independent scale lengths along the radial direction (R) and perpendicular to the mid-plane (z).
Plasma processed by supernovae expanding from the stellar disc and/or condensing fountains of material falling back to the disk from where it was expelled both contribute to creating a hot and thick atmosphere around the stellar disk. Hydrodynamic (non-)equilibrium arguments imply a steeper decrease of the density with distance than in the beta model. It then makes sense to model this kind of disk-like extended corona with an exponential function of the radius rather than a power-law model. We describe this oblate/disk-like atmosphere with the following model (e.g. Yao et al. 2009; Li & Bregman 2017).
\[n(R,z)=n_{0}\,e^{-R/R_{h}}\,e^{-|z|/z_{h}} \tag{6}\]
We note that a similar (i.e. exponential) distribution is expected also in the case that the emission is related to an unresolved population of hot stars in the disk rather than a truly diffuse plasma. In this case, the scale height will be related to the actual distribution of such star population. The topology of the resulting emission can be modelled similarly as a thick disk. We neglect flattened distributions tilted with respect to the plane of the galaxy for simplicity.
### Estimated model intensity
The model in Eq. 1 is described in Galactic coordinates and thus assumes the observer is located at the position of the Sun, at distance \(R_{0}=8.2\) kpc from the Galactic center. However, as can be seen from Eq. 4 to 6, the models are first defined in the reference frame of the Galactic center. They thus have to be transformed to the Sun reference frame through the following set of equations (Miller & Bregman 2015):
\[R^{2}=R_{0}^{2}+s^{2}\cos^{2}(b)-2R_{0}\,s\cos(b)\cos(l) \tag{7}\]
\[z^{2}=s^{2}\sin^{2}(b) \tag{8}\]
\[r^{2}=R^{2}+z^{2} \tag{9}\]
where we have called \(r\) the distance to the Galactic center and \(s\) the distance relative to the Sun. The change of reference frame is crucial and introduces a direction dependence of the emission morphology even for plasma geometries spherically symmetric around the Galactic Center.
Given our assumption of constant temperature, the density of a plasma component at each point (i.e. a volume's voxel) is converted to an emission profile through a constant emissivity \(\epsilon(T)\) depending on the assumed gas temperature. We compute the line emissivities from an APEC (Smith et al. 2001) emission model and smooth its spectrum with a gaussian kernel with FWHM \(\simeq 80\) eV in order to mimic the energy resolution of eROSITA. We compute \(\epsilon(0.15{\rm keV})=1.649\) and \(\epsilon(0.225{\rm keV})=2.607\) in units of \(10^{-15}(Z/Z_{\odot})\,{\rm ph\,cm^{3}\,s^{-1}}\) in the O VIII band (dominated by the O VIII line emissivity). We then integrate over all points located along one line of sight described by a vector in Galactic coordinates \(s=(l,\,b,\,s)\) at a distance \(s\) from the Sun, up to a maximum distance \(R_{\rm out}=350\) kpc
\[I_{\rm CGM}(l,b)=\frac{1}{4\pi}\int_{0}^{R_{\rm out}}n^{2}(s)\epsilon(T)\,ds. \tag{10}\]
We note that as long as the \(n^{2}(s)\) profile is steeper than \(s^{-2}\), we have that the choice of \(R_{\rm out}\) does not affect the integral. For flatter profiles instead, \(R_{\rm out}\) can weakly affect the total CGM emission and mass, up to divergence for profiles equal or flatter than \(s^{-1}\). The above conditions (\(s^{-2}\), \(s^{-1}\)) are met for \(\beta\) models holding \(\beta=1/3\) and \(\beta=1/6\) respectively. High values of \(\beta>1/3\) imply that the bulk of the emission is provided by gas well within \(R_{\rm out}\), whereas \(\beta<1/6\) profiles will hold the bulk of the mass and the emission at the outer boundary, diverging for \(R_{\rm out}\longrightarrow\infty\). We thus consider as non-physical all values \(\beta<1/6\simeq 0.17\). In our analysis we neglect optical depth corrections and assume the gas to be optically thin.
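As a concrete illustration, eqs. 7-10 can be implemented with a few lines of NumPy; the snippet below is a schematic version of the integrator rather than our production code. The example parameters are taken from the disk component of the combined model in Table 1, with \(Z=0.1\,Z_{\odot}\) and \(kT=0.15\) keV, while the step size and function names are illustrative.

```python
import numpy as np

KPC_CM = 3.0857e21                       # kpc in cm
R_SUN = 8.2                              # kpc, Sun-Galactic centre distance
R_OUT = 350.0                            # kpc, outer integration boundary

def disk_density(R, z, n0, R_h, z_h):
    """Exponential disk-like profile of eq. 6 (cm^-3; lengths in kpc)."""
    return n0 * np.exp(-R / R_h) * np.exp(-np.abs(z) / z_h)

def beta_density(r, C, beta):
    """Asymptotic beta-model profile of eq. 5 (cm^-3; r in kpc)."""
    return C * r ** (-3.0 * beta)

def model_intensity(l_deg, b_deg, density, eps, n_steps=2000):
    """Numerical version of eq. 10 along one line of sight.

    The Sun-centred path (l, b, s) is mapped to Galactocentric (R, z, r) with
    eqs. 7-9, n^2 * eps is integrated in s, and the result is returned in
    ph s^-1 cm^-2 sr^-1 (eps already includes the assumed metallicity).
    """
    l, b = np.radians(l_deg), np.radians(b_deg)
    s = np.linspace(0.0, R_OUT, n_steps)                              # kpc
    R = np.sqrt(R_SUN**2 + (s * np.cos(b))**2 - 2.0 * R_SUN * s * np.cos(b) * np.cos(l))
    z = s * np.sin(b)
    r = np.sqrt(R**2 + z**2)
    n = density(R, z, r)                                              # cm^-3
    return np.trapz(n**2 * eps, s * KPC_CM) / (4.0 * np.pi)

# Disk component of the combined (beta = 0.5) model, with the Table 1 parameters.
disk = lambda R, z, r: disk_density(R, z, n0=3.2e-2, R_h=6.2, z_h=1.1)
print(model_intensity(240.0, 45.0, disk, eps=0.1 * 1.649e-15))        # line intensity in L.U.
```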
To fit the model map to the data, we use a Bayesian Markov Chain Monte Carlo algorithm implemented in Python in the ultranest library (Buchner 2021). We define our likelihood \(\mathcal{L}\), maximized by the best-fit solution (in practice we maximize its logarithm, i.e. minimize the \(\chi^{2}\)), as
\[\ln\mathcal{L}(I_{\rm model}|l,b,\mathbf{\theta})=-\frac{1}{2\nu}\sum_{lb}\left( \frac{I_{\rm obs}(l,b)-I_{\rm model}(l,b,\mathbf{\theta})}{\sigma_{\rm obs}(l,b)} \right)^{2} \tag{11}\]
for any set of parameters \(\mathbf{\theta}\), where \(\sigma_{\rm obs}\) is the uncertainty on the data \(I_{\rm obs}\).
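For concreteness, a schematic ultranest setup for the disk-like model is sketched below; `I_obs`, `sigma_obs`, `l_deg` and `b_deg` stand for the masked data vectors, `model_intensity` and `disk_density` are the helpers sketched above, and the priors (Gaussian on \(R_{h}\) and \(z_{h}\), matching those adopted for the combined fit, log-uniform on \(n_{0}\)) are illustrative choices rather than our exact configuration.

```python
import numpy as np
from scipy.stats import norm
import ultranest

param_names = ["n0", "R_h", "z_h"]

def prior_transform(cube):
    """Map the unit cube to physical parameters (floored at small positive values)."""
    c = np.clip(cube, 1e-6, 1.0 - 1e-6)
    n0 = 10.0 ** (-3.0 + 2.0 * c[0])                          # log-uniform, 1e-3 to 1e-1 cm^-3
    R_h = max(0.5, norm.ppf(c[1], loc=12.0, scale=4.0))       # Gaussian prior, 12 +/- 4 kpc
    z_h = max(0.05, norm.ppf(c[2], loc=3.0, scale=0.5))       # Gaussian prior, 3 +/- 0.5 kpc
    return np.array([n0, R_h, z_h])

def log_likelihood(theta):
    """Eq. 11 evaluated over the selected pixels (loop kept for clarity, not speed)."""
    n0, R_h, z_h = theta
    disk = lambda R, z, r: disk_density(R, z, n0, R_h, z_h)
    model = np.array([model_intensity(l, b, disk, eps=0.1 * 1.649e-15)
                      for l, b in zip(l_deg, b_deg)])
    nu = len(I_obs) - len(theta)
    return -0.5 / nu * np.sum(((I_obs - model) / sigma_obs) ** 2)

sampler = ultranest.ReactiveNestedSampler(param_names, log_likelihood, transform=prior_transform)
result = sampler.run()
print(result["posterior"]["mean"], result["posterior"]["stdev"])
```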
## 4 Results
The fit results of the models presented in Sec. 3 are reported in Table 1 and commented with some minimal notes to convey the main messages of each model fit. In this section we provide a more detailed description of the broad classes of models tested: the spherical \(\beta\) and the exponential disk-like models.
### Combined \(\beta\) + disk geometry
The current picture in the literature describes the X-ray emission and absorption attributed to the hot CGM of the MW as the sum of two components with different geometries: a disk-like exponential profile (eq. 6) accounting for most of the emission, and a spherical \(\beta\)-model halo (eq. 5) producing a minor contribution to the emission but accounting for most of the absorption of background light.
The O VIII data, the derived best-fit model and its residual are shown in Fig. 2. We assume Gaussian priors for the disk scale-length \(R_{h}=12\pm 4\) kpc and scale-height \(z_{h}=3\pm 0.5\) kpc. The value \(R_{h}=12\pm 4\) kpc has been chosen to include the exponential drop of the HI gas disk of the MW (Kalberla & Dedes, 2008; McMillan, 2017). The large standard deviations (\(\sigma_{R}=4\) and \(\sigma_{z}=0.5\)) allow a wide range of values, while helping the fit convergence. In addition, we fix the slope \(\beta\) of the \(\beta\)-model. To assess the systematic uncertainty introduced on the fit results by the choice of \(\beta\), however, we repeat the fit for different choices of \(\beta=0.3\), 0.5, 0.7.
The presence of a spherical component allows us to account for the column densities derived from O VII absorption line studies, which are otherwise systematically underestimated, as shown by Fig. 3. The absorption column density of the warm-hot plasma is proportional to the plasma density \(\propto nL\) rather than \(\propto n^{2}L\). This makes the length \(L\), and thus the plasma length-scale, more relevant with respect to the density in absorption data than in emission data. In fact, a model including only a disk-like component systematically underestimates the observed equivalent widths of \(z=0\) O VII lines detected in the spectra of background quasars (Gupta et al., 2012; Kaaret et al., 2020).
On the basis of the \(\chi^{2}\) statistic all the realizations of the combined model with different \(\beta\) are similarly good, with a very small preference (\(\Delta\chi^{2}\simeq-357\) over \(\sim 31416\) d.o.f.) for the value \(\beta=0.5\) with the inclusion of a SWCX model. Given this degeneracy, in the following we will consider the combined (\(\beta=0.5\)) model as the reference for comparison with other results presented in the literature. Although we stress that an actual fit of the halo \(\beta\) parameter is currently prevented by the X-ray intensity data, \(\beta=0.5\) is favoured over other solutions also on the basis of theoretical arguments4. Nevertheless, we will take into account the systematic uncertainty introduced by the choice of \(\beta\) on other derived quantities (e.g. MW baryonic mass \(M_{b}\) and fraction \(f_{b}\), see Sec. 5).
Footnote 4: in the self-gravitating isothermal sphere (virial) model \(\beta\equiv\mu_{\rm m}\sigma_{\rm gal}^{2}/(\rm kT_{gas})=0.5\) is the square of the galaxy-to-gas velocity dispersion ratio (Sarazin, 1988; King, 1962, although see also Lochhaas et al., 2022).
### Oblate disk-like model
From the results of the combined (\(\beta\equiv 0.5\)) model, it follows that the majority of the CGM X-ray emission is produced by the disk-like component (see the lower right panel in Fig. 2). It is thus instructive to see how the fit is affected when using only this disk-like component. This case has the advantage of simplifying the model description, at the expense of not accounting for the observed O VII absorption.
When looking at the fit residuals (Fig. 4) and \(\chi^{2}\) statistic (Table 4), the disk-like model overall does not perform significantly worse than the combined (\(\beta\equiv 0.5\)) model and provides a similar scale length \(R_{h}=8.0\pm 0.3\) kpc. However, the scale height \(z_{h}\) significantly increases to \(z_{h}=3.3\pm 0.1\) kpc. This is explained by the portion of the emission previously attributed to the halo (increasingly important at high \(|b|\), and absent in the disk-like model) now having to be accounted for by the disk-like component alone, in turn increasing the scale height (cf. the profiles in Fig. 2 and Fig. 4). Based on the eROSITA data alone, we cannot disfavour a scale height \(z_{h}=3.3\) kpc with respect to 1.1 kpc. However, this can be done by comparing the expected amount of O VII absorption accounted for by the different models. As pointed out above, the disk-like model alone systematically underestimates O VII absorption data.
### Spherical \(\beta\) model
For completeness we also test our data with a fit of a spherical \(\beta\) model alone. Our data selection leaves out most directions at \(l>270\) deg. This severely reduces the constraining power on the combination of central density \(n_{0}\) and scale \(r_{0}\) in a spherically symmetric geometry of the plasma. Without the lines of sight towards the Galactic Center these two parameters are in fact highly degenerate with each other. We thus fit the spherical \(\beta\) model using its \(r\gg r_{0}\) approximation presented in eq. 5. Looking at the right panels in Fig. 4, the \(\beta\) model (\(\chi^{2}/\rm d.o.f.=1.89\)) can account for the sky average count rate, as the ratio data/model (upper central panel) lies within the 0.5-2 range across most of the sky. However, from the data we observe a systematic increase of the count rates with decreasing \(|b|\) at all longitudes that cannot be accounted for by a spherical geometry. The same trend is made more evident by looking at the residual (data-model)/err map (Fig. 4). This evidence alone suggests the presence of an oblate disk-like component, in addition to the INST (flat) and CXB (increasing with \(|b|\)) components, as a significant component of the X-ray intensity from the MW (as independently suggested by Yao et al., 2009; Li et al., 2017; Nakashima et al., 2018; Kaaret et al., 2020 using _Chandra_, XMM-_Newton_, Suzaku and HaloSat data respectively). The analysis of the eRASS1 data now provides an unambiguous signal for this oblate component, which is detected with a very high significance. We will discuss the possible nature of this emission in Section 5.
The very flat best-fit slope of the density distribution of the spherical halo \(\beta=0.23\) is likely biased low in order to accommodate the lower rates at high latitudes and the higher rates at lower latitudes together. Such a flat slope would also result in a non physical diverging emission for \(r\longrightarrow\infty\).
Previous studies focusing on only very high latitude regions found steeper slopes consistent with \(\beta=0.5\)(Li & Bregman, 2017). By fixing \(\beta\equiv 0.5\), the fit worsens (\(\chi^{2}/\rm d.o.f.=1.97\), not shown), showing even larger residuals at low latitudes and the central normalization increases by an order of magnitude probably to maintain a similar average density (i.e. intensity) value across the volume. Overall, the spherical \(\beta\) model alone reproduces the average intensity of the sky but poorly adapts to the morphology of the eRASS1 O VIII band image.
### Systematic uncertainties
We tested the combined models for different choices of \(\beta\equiv 0.3,\,0.5,\,0.7\). The best-fit parameters of the combined models still show some degeneracy with \(\beta\). The central density of the \(\beta\) component C and the scale-length of the disk-like component \(R_{h}\) increase with \(\beta\), while the central density of the disk-like component \(n_{0}\) decreases with \(\beta\). The scale-height of the disk does not show a clear trend, and is found in the range \(z_{h}\sim 1-3\) kpc.
We find a minor but not significant preference for the \(\beta\equiv 0.5\) realization of the combined model including the SWCX, based on the \(\chi^{2}\) statistic. In the \(\beta\equiv 0.3\) combined fit, the \(\beta\)-model is being suppressed, while the disk-like component dominates the model intensity, showing the same parameters as in the disk-like CGM model. This is probably due to the fact that a \(\beta\equiv 0.3\) profile looks flatter across the sky than for higher \(\beta\). This brings the dominating trend with \(|b|\) in the data to be mainly fitted by means of the other component (i.e. the disk-like), which in turns leaves only a little residual intensity available for the fit of the \(\beta\) model. The \(\beta=0.7\) fit instead shows a large central normalization C due to the mentioned degeneracy between the various parameters (see Fig. 2 in Appendix). The scale length \(R_{h}\) and height \(z_{h}\) of the disk-like component also increase, due to the density left unaccounted by the steeper roll-off (\(\beta=0.7>0.5\)).
Three additional models (combined+swcx, combined+20%instr and combined+high\(\epsilon\)) are tested to assess the systematic errors introduced respectively by: the introduction of a (minor) SWCX emission component; a potential systematic underestimation of the instrumental noise or CXB component at soft energies; the choice of temperature of the plasma component and the spectral modeling of the soft X-ray emission.
In the combined+swcx model we introduce a characterization of the SWCX component in the O VIII band of the eRASS1 data (Dennerl+ in prep., see also Appendix C). We compared the estimated CGM flux with an XMM-_Newton_ measurement obtained after subtraction of the SWCX (Koutroumpa et al. 2007). We consider the latter to be among the most detailed measurements of the MW CGM component, as it relies on a careful model of the SW Parker spiral in space and time, as well as on high spectral resolution data obtained by the Reflection Grating Spectrometer on board XMM-_Newton_. For the only available field in the Western sky (i.e. the Marano field: \(l,b=269.8,\,-51.7\) deg), the authors report \(\rm F_{OVIII,cgm}=1.41\pm 0.49\) L.U. From the analysis of the eRASS1 data with the inclusion of our SWCX model, we find a consistent value of \(\rm F_{OVIII,cgm}^{\rm eRASS1}=1.59\pm 0.65\) L.U.
After introducing the SWCX component in the modeling of the eFEDS spectrum, the temperature of the CGM component increases from \(kT=0.15\) keV to \(kT=0.17\) keV (Ponti et al. 2022). We introduce this change accordingly in the combined+swcx model. We estimate differences of \(-30,\,+96,\,-37\) and \(-18\)%, respectively, for the \(C,\,n_{0},\,R_{h}\) and \(z_{h}\)
\begin{table}
\begin{tabular}{l l l l l l l l} name & profile & parameters & \(\chi^{2}\) & d.o.f. & \(\chi^{2}\)/d.o.f. & notes \\ \hline \hline spherical \(\beta\) & eq. 5 & \(\beta=0.26\pm 0.01\) & 59277 & 31418 & 1.89 & diverging intensity (\(\beta<1/3\)); \\ & & C= \((10.9\pm 0.6)\times 10^{-3}\) \(\dagger\) & & & & poor fit at low \(|b|\) \\ \hline disk-like & eq. 6 & \(n_{0}=(1.7\pm 0.1)\times 10^{-2}\) cm\({}^{-3}\) & 52028 & 31417 & 1.66 & good fit of the intensity; \\ & & \(R_{h}=8.0\pm 0.3\) kpc & & & & underestimates \(N_{\rm OVII}\) \\ & & \(z_{h}=3.3\pm 0.1\) kpc & & & & & \\ \hline combined (\(\beta\equiv 0.3\)) & eq. 5+ 6 & \(\beta=0.3\) & 52028 & 31416 & 1.66 & \(\sim\)flat \(\beta\) model is suppressed \\ & & \(C=(0.9\pm 1.0)\times 10^{-3}\) & & & & & \\ & & \(n_{0}=(1.7\pm 0.1)\times 10^{-2}\) cm\({}^{-3}\) & & & & & \\ & & \(R_{h}=8.0\pm 0.4\) kpc & & & & & \\ & & \(z_{h}=3.3\pm 0.2\) kpc & & & & & \\ \hline
**combined (\(\beta\equiv 0.5\))** & \(\beta\equiv 0.5\) & **51805** & **31416** & **1.65** & \(R_{h}\simeq\)**MW stellar-disk radius** \\ & & \(\bf{n_{0}=(3.2\pm 0.4)\times 10^{-2}}\) cm\({}^{-3}\) & & & & & \\ & & \(\bf{R_{h}=6.2\pm 0.4}\) kpc & & & & & \\ & & \(\bf{z_{h}=1.1\pm 0.1}\) kpc & & & & & \\ \hline combined (\(\beta\equiv 0.7\)) & \(\beta\equiv 0.7\) & 51794 & 31416 & 1.65 & large central density of \\ & & \(C=(1.76\pm 0.4)\times 10^{-2}\) cm\({}^{-3}\) & & & & & the spherical halo \\ & & \(n_{0}=(1.6\pm 0.2)\times 10^{-2}\) cm\({}^{-3}\) & & & & & \\ & & \(R_{h}=9.9\pm 0.9\) kpc & & & & & \\ & & \(z_{h}=1.6\pm 0.1\) kpc & & & & & \\ \hline combined+swcx & \(\beta\equiv 0.5\) & 51448 & 31416 & 1.64 & including SWCX model, \\ & & \(C=(3.2\pm 0.1)\times 10^{-2}\) & & & & & \(kT=0.178\) keV \\ & & \(n_{0}=(6.3\pm 0.8)\times 10^{-2}\) cm\({}^{-3}\) & & & & & \\ & & \(R_{h}=3.9\pm 0.2\) kpc & & & & & \\ & & \(z_{h}=0.9\pm 0.1\) kpc & & & & & \\ \hline combined+20\%instr & \(\beta=0.5\) & 51836 & 31416 & 1.65 & +20\% INST (or CXB) intensity \\ & & \(C=(4.3\pm 0.1)\times 10^{-2}\) & & & & & \\ & & \(n_{0}=(5.5\pm 0.6)\times 10^{-2}\) cm\({}^{-3}\) & & & & & \\ & & \(R_{h}=4.6\pm 0.3\) kpc & & & & & \\ & & \(z_{h}=0.9\pm 0.1\) kpc & & & & & \\ \hline combined+high\(\epsilon\) & \(\beta\equiv 0.5\) & 51803 & 31416 & 1.65 & \(Z=0.3Z_{\odot}\), \(kT=0.225\) keV \\ & & \(C=(1.36\pm 0.03)\times 10^{-2}\) & & & & & \\ & & \(n_{0}=(9.4\pm 0.9)\times 10^{-3}\) cm\({}^{-3}\) & & & & & \\ & & \(R_{h}=6.2\pm 0.4\) kpc & & & & & \\ & & \(z_{h}=1.1\pm 0.1\) kpc & & & & & \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the best-fit results for the different models. \(\dagger\,\)C \(\equiv\,\)n\({}_{0}\,r_{0}^{3\beta}\) [cm\({}^{-3}\) kpc\({}^{3\beta}\)] (see eq. 5) can be interpreted in terms of a central density n\({}_{0}\) [cm\({}^{-3}\)] by assuming \(r_{0}\equiv 1\) kpc.
parameters of combined+swcx with respect to the combined (\(\beta\equiv 0.5\)) model. The morphological parameters \(R_{h}\) and \(z_{h}\) change mostly because of the non-uniform morphology of the (faint) SWCX component. The reduced scale length \(R_{h}\) then requires the normalizations \(C\) and \(n_{0}\) to adapt, while also accounting for the +16% increase in temperature \(kT\) (and in turn in emissivity).
Similar trends are confirmed when looking at the systematics introduced by a potential underestimation of either the INST or the CXB component (or the sum of the two; combined+20%instr model). Given that these are very bright components in the O VIII band, we want to assess how sensitive our results are to the modeling of these components. We estimate \(-6,\,+19,\,-26\) and \(-18\)% differences with respect to the combined (\(\beta\equiv 0.5\)) model. The systematic shift on all parameters is comparable to or smaller than the one caused by the inclusion of the SWCX component.
In addition, different temperature (\(kT=0.225\) keV) and metallicity (\(Z=0.3\,Z_{\odot}\)) assumptions for the CGM modeling are tested through the combined+high\(\epsilon\) model. Temperature and metallicity both act in practice only on the emissivity \(\epsilon(T,Z)\) of the APEC model, which in turn does not affect the fit of the morphological parameters (\(R_{h}\) and \(z_{h}\) do not change with respect to the combined (\(\beta\equiv 0.5\)) model). Both normalization factors \(C\) and \(n_{0}\) instead decrease by \(\sim 70\%\), compensating for the boosted emissivity \(\epsilon_{\rm OVIII}\) due to the higher temperature and oxygen abundance.
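The degeneracy between the assumed emissivity and the fitted normalizations can be made explicit with the minimal sketch below, which only uses the fact that, for a fixed geometry, the predicted intensity scales as \(I\propto n^{2}\epsilon(T,Z)\). The emissivity boost factor of 10 is an assumed fiducial number, chosen so that the implied shift is of the same order as the \(\sim 70\%\) decrease quoted above; it is not a value taken from the fit.

```python
# Minimal sketch: how a fitted density normalization rescales when the assumed
# APEC emissivity changes, for a fixed observed intensity I ~ n^2 * eps(T, Z).
# The emissivity boost below is a hypothetical fiducial value, not a fit result.

def rescale_density(n_fit, eps_old, eps_new):
    """Return the density normalization that keeps n^2 * eps constant."""
    return n_fit * (eps_old / eps_new) ** 0.5

n0_old = 3.2e-2               # cm^-3, disk-like normalization of combined (beta=0.5)
eps_boost = 10.0              # assumed boost of the O VIII emissivity
n0_new = rescale_density(n0_old, 1.0, eps_boost)
print(f"n0 rescales from {n0_old:.1e} to {n0_new:.1e} cm^-3 "
      f"({100 * (1 - n0_new / n0_old):.0f}% decrease)")
```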
The strong dependence of the normalization parameters on temperature and emissivity (both fixed quantities in our model) on the one hand further stresses the large uncertainty affecting the baryon budget encompassed by the spherical halo component (see Sec. 5.3), while on the other hand it strengthens the evidence for the presence of a disk-like component, as the latter relies mostly on the morphology of the data rather than on assumptions on the physical properties of the plasma. This remains true also in a scenario in which (part of) the plasma is at a different temperature than the one assumed in this work, or out of equilibrium. Nearby emission can in principle also be produced by outflowing hot gas out of thermal equilibrium. However, the (lack of) equilibrium mainly affects the gas emissivity, which in turn plays a role only in the determination of the density normalization rather than in the fitted geometry. The \(R_{h}\) and \(z_{h}\) parameters are thus relatively robust against the assumption of a collisionally ionised plasma in thermal equilibrium, unless the distribution of temperature and equilibrium phase changes significantly across the sky.
## 5 Discussion
In this section we explore some further implications of our fit results, including the effects produced by our assumptions.
Figure 2: Fit results of the combined (\(\beta\equiv 0.5\)) density model to the O VIII eRASS1 intensity data. The four maps on the left hand side show the eRASS1 selected data (top left) and the best-fit model intensities (bottom left; including all the background and foreground components, and using the same color scale as the data); the logarithm of the ratio data/model in the range \(10^{-0.3}\simeq 0.5\) to \(10^{0.3}\simeq 2\) (top center); and the residuals (data-model)/error (bottom center). The plots on the right show the data intensities against the predicted best-fit model intensities (top right) and an example latitudinal profile extracted along the \(l=240\) deg line (bottom right), highlighting the contribution of the different modelled components.
### The disk-like component is everywhere brighter than the spherical halo
What is the fraction of the observed X-ray intensity coming from the extended hot halo component? This important question has so far been left unanswered, or only partially answered, due to the limited X-ray intensity data available before the advent of eROSITA. By simulating and re-projecting the expected emission from the components of the combined (\(\beta\equiv 0.5\)) model (the overall model intensity is shown in the lower left panel of Fig. 2), we show in Fig. 5 the ratio between the projected O VIII intensity of the halo (\(\beta\)) and the disk-like component in the combined (\(\beta\equiv 0.5\)) model. The ratio is mostly dependent on Galactic latitude \(|b|\), with a slight dependence on longitude \(l\) too. The ratio is smaller than 1 in all directions, reaching values as low as \(10\%\) at \(|b|\sim 20\) deg. The ratio confirms that the oblate disk-like component provides most of the counts attributed to the CGM in the O VIII band in all directions. Only at high latitudes (\(|b|>80\) deg) do the spherical and oblate components produce about the same emission. Previous analyses of high-latitude soft X-ray background data (Kaaret et al. 2020) had already pointed out the improvement in the fit results when including both a spherical and a disk-like component rather than a single one. Our results confirm that this procedure is necessary, as the model components hold in general comparable levels of emission, whereas the spherical component only provides a minor contribution at low latitudes.
We stress that the situation described by Fig. 5, like most of the results presented in this work, remains true regardless of the nature of the disk-like component. In fact, a population of unresolved sources with a (thick) disk-like geometry, a truly diffuse plasma embracing the stellar disk, or even emission arising from the interstellar medium can all be described by an emission component with a disk-like geometry.
### Most of the plasma emission is produced within a few kpc
From the combined (\(\beta\equiv 0.5\)) model, we predict where most of the observed emission is produced. In Fig. 6 we plot the cumulative surface brightness of the spherical \(\beta\) and disk-like components of the combined (\(\beta\equiv 0.5\)) model as a function of the distance from the Sun \(s\). The height of a line at a given distance \(s\) tells us in practice how much emission has been produced within that distance by that component of the model. Given the general non-spherical symmetry of the projected emission, we compute the profiles for \((l,b)=(270,40)\) deg. A line of sight at \(b=80\) deg is also shown for the combined (\(\beta\equiv 0.5\)) model for comparison (dot-dashed line). The hatched areas encompass the \(10-90\%\) percentiles of the surface brightness distribution of each component and model. We plot different models to assess the systematic errors.
We first highlight how the choice of the assumed temperature \(kT\) and metallicity \(Z\) affects the density normalizations but leaves the emission profile unchanged in practice. In fact, for simplicity we do not show the combined+high\(\epsilon\) model curves in Fig. 6, as they would overlay entirely with the magenta lines; we already pointed out this feature of the models. Differently, a change in \(\beta\) affects the emission profile, mainly distributing the emission of the spherical component over a broader (narrower) and farther (closer) range of distances for lower (higher) \(\beta\), while only slightly affecting the disk-like component. In general, the disk-like component always cumulatively produces a larger amount of emission and increases faster with \(s\). Most of the emission is in fact produced between 0.2 and 5 kpc from us (median \(\sim 1\) kpc), including all systematic uncertainties.
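The construction behind Fig. 6 can be sketched numerically as follows. The snippet is only an illustration of the method: it assumes a Sun-Galactic Center distance of 8.2 kpc and one particular sign convention for Galactic coordinates, approximates the \(\beta\) model by its large-radius power law, takes the emission proportional to \(n^{2}\) with constant emissivity, and uses the stand-alone disk-like and spherical best-fit values of Table 1 rather than the combined model actually shown in the figure.

```python
import numpy as np

# Schematic cumulative-emission construction for one sight line of Fig. 6.
# R_SUN, the coordinate convention and the model parameters used here are
# illustrative assumptions, not the combined best-fit model of the figure.

R_SUN = 8.2                                    # kpc, assumed
l, b = np.deg2rad(270.0), np.deg2rad(40.0)     # example sight line of Fig. 6

def galactocentric(s):
    """Cylindrical (R, z) in kpc of a point at distance s [kpc] along (l, b)."""
    x = R_SUN - s * np.cos(b) * np.cos(l)
    y = -s * np.cos(b) * np.sin(l)
    z = s * np.sin(b)
    return np.hypot(x, y), z

def n_disk(s, n0=1.7e-2, R_h=8.0, z_h=3.3):    # cm^-3, kpc (Table 1, disk-like)
    R, z = galactocentric(s)
    return n0 * np.exp(-R / R_h) * np.exp(-np.abs(z) / z_h)

def n_beta(s, C=1.09e-2, beta=0.26):           # Table 1, spherical beta (power law)
    R, z = galactocentric(s)
    r = np.hypot(R, z)
    return C * np.maximum(r, 1e-3) ** (-3.0 * beta)

s = np.linspace(1e-3, 300.0, 20000)            # kpc along the line of sight
for name, dens in [("disk-like", n_disk(s)), ("spherical beta", n_beta(s))]:
    cum = np.cumsum(dens ** 2 * np.gradient(s))    # ~ cumulative emission measure
    frac = cum / cum[-1]
    s50 = s[np.searchsorted(frac, 0.5)]
    print(f"{name:15s}: 50% of the emission produced within s = {s50:.1f} kpc")
```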
We note that the lower bound of this range is potentially in conflict with our assumption that all the absorption happens in front of the X-ray emission with respect to us, as clouds at latitudes \(|b|\sim 20\) deg above the Galactic plane are found up to distances of 1 kpc (Lallement et al. 2019). At the lower latitudes, breaking the assumption of a single foreground absorbing layer of colder gas may explain (part of) the positive residual still visible (e.g. Fig. 2, 4). Loosening this assumption would unfortunately require introducing additional degrees of freedom, greatly increasing the degeneracy between them. Although we choose not to increase the complexity of our analysis, we point out that only a minor part of the emission in our closest proximity (0.2-0.3 kpc) may be strongly affected by co-spatial absorption, as farther clouds at \(|b|>20\) deg only provide a minor contribution to the column density (Lallement et al. 2019).
### The spherical halo holds most of the mass, but its precise budget remains highly uncertain
Where is most of the hot gas mass? Given our density models and their components, we integrate over the volume and obtain the mass profiles shown in Fig. 7. We note that this time we integrate and plot the profile with respect to the distance from the Galactic Center rather than the position of the Sun. Although the disk-like component produces most of the emission, as we have seen above, the component holding most of the mass is the spherical \(\beta\) model. This is due to the slower roll-off of the profile with
Figure 3: O VII absorption column density data (Miller & Bregman 2013) vs. model. The disk-like model systematically underestimates the absorption column densities. The inclusion of a spherical \(\beta\) model allows us to account for them. A combination of the two model geometries thus explains both absorption and emission data.
respect to the exponential profile of the disk. The density (and pressure) ratio between the spherical halo and the oblate components increases with distance from the Galactic Center \(r\). At large radii, the volume becomes very large, \(V\propto r^{3}\), thus collecting most of the mass. The mass profile becomes steeper for lower (i.e. flatter) \(\beta\) profiles, as the density decreases more slowly for lower \(\beta\). Contrary to the surface brightness profiles, in this case the assumptions on the temperature \(kT\) and metallicity \(Z\) of the plasma contribute largely to the systematic offset of the mass profile, as they mainly affect the normalization \(C\) and in turn the overall mass. At about the virial radius \(R_{\rm vir}\simeq 250\) kpc, depending on the choice of \(\beta,\ kT\) and \(Z\), the overall mass of hot gas ranges between some \(\times 10^{9}\) and some \(\times 10^{10}\,M_{\odot}\). Thus, the fraction of baryons present in the MW potential well can only be compared to the cosmic fraction \(f_{b}\) with an uncertainty of almost an order of magnitude. Provided that it makes sense to expect the baryon fraction \(f_{b}\) within the virial radius of one galaxy to be similar to the cosmic value (Lochhaas et al., 2021), and given the very large systematic uncertainty affecting the hot gas mass models, we can only qualitatively infer that some fraction of the baryons is seemingly missing from the expected MW budget.
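The mass budgets of Fig. 7 follow from a volume integration of the density models. The sketch below is schematic: the mean mass per particle (\(\mu=0.6\)), the use of the power-law form of the \(\beta\) profile, and the pairing of the combined (\(\beta\equiv 0.5\)) disk parameters with the stand-alone spherical normalization of Table 1 (slope fixed to 0.5) are assumptions made purely for illustration; the cosmic baryon fraction is the value quoted in the Fig. 7 caption.

```python
import numpy as np

# Schematic volume integration behind the hot-gas mass estimates of Fig. 7.
# mu, the power-law beta profile and the parameter pairing are assumptions.

KPC_CM, M_SUN_G = 3.086e21, 1.989e33
MU_MP = 0.6 * 1.673e-24                        # g, assumed mean particle mass

def mass_beta(r_kpc, C=1.09e-2, beta=0.5):
    """Cumulative mass [Msun] of n(r) = C*(r/kpc)^(-3*beta) within r_kpc (beta < 1)."""
    number = 4 * np.pi * C * r_kpc ** (3 - 3 * beta) / (3 - 3 * beta) * KPC_CM ** 3
    return number * MU_MP / M_SUN_G

def mass_disk(n0=3.2e-2, R_h=6.2, z_h=1.1):
    """Total mass [Msun] of the exponential disk; its integral converges well
    inside the virial radius, so the full analytic value is used."""
    number = n0 * (2 * np.pi * R_h ** 2) * (2 * z_h) * KPC_CM ** 3
    return number * MU_MP / M_SUN_G

f_b, r_vir = 0.071, 250.0                      # caption of Fig. 7; kpc
print(f"M_beta(<{r_vir:.0f} kpc) ~ {mass_beta(r_vir):.1e} Msun")
print(f"M_disk             ~ {mass_disk():.1e} Msun")
print(f"M_beta / (f_b * 1e12 Msun) ~ {mass_beta(r_vir) / (f_b * 1e12):.2f}")
```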
### The disk-like emission is consistent with the projected mass distribution of the stellar disk
In this work we have confirmed the existence of a component of the soft X-ray background emission with a disk-like geometry (to 0th order). A straightforward question then arises: can this component be linked to a population of sources in the MW stellar disk or does it instead truly arise from a diffuse plasma component? Provided that a diffuse disk-like plasma component would anyway be related to the stellar populations, simply through the shared MW gravitational potential and the origin of the heated plasma, the question can be recast as whether the oblate component is the emission of an unresolved X-ray emitting stellar population or of a diffuse plasma. In fact, as anticipated above, the geometry alone of this component does not allow us to exclude one or the other hypothesis.
The hypothesis of an unresolved M dwarf stellar population producing part of the soft X-ray flux has already been suggested as an explanation for the \(\sim 0.7\) keV component, but it was considered unlikely due to the smaller scale height attributed to the M-dwarf population (Masui et al., 2009; Yoshino et al., 2009). However, the scale height of the M dwarfs may be larger than previously assumed, as new models of the mass distribution of the MW disc seem to suggest (McMillan, 2017). In addition, M dwarfs may have an emission component at temperatures as low as \(kT=0.1-0.2\) keV (Magaudda et al., 2022), thus also producing O VIII. We compute the MW mass surface density profile following McMillan (2017), projected at the Sun position. The result is shown in the right panel of Fig. 8, next to the O VIII emission measure (EM) computed for the combined (\(\beta\equiv 0.5\)) model (left panel). The profiles show a remarkably similar morphology. This is also highlighted by the scatter plot in Fig. 9, showing the two quantities/maps one versus the other. It is very interesting how, despite representing different and completely independent quantities, each derived from completely independent data, the points approximately follow the 1:1 relation, with some scatter. At high EM values (and high \(\Sigma_{M}\), i.e. close to the Galactic plane), the relation shows some deviation, suggesting a slightly different trend. Indeed, the trend is mainly determined by the scale height
Figure 4: Results of the disk-like (left) and spherical \(\beta\) (right) density model fits to the O VIII eRASS1 intensity data. All the plot details are the same as Fig. 2
Figure 5: Intensity ratio between the spherical \(\beta\) halo and disk-like components in the O VIII band, for the combined (\(\beta\equiv 0.5\)) model. The disk-like component is everywhere brighter. The two components are about equal only at the Galactic poles.
of the combined (\(\beta\equiv 0.5\)) model (\(z_{h}=1.1\pm 0.1\) kpc) with respect to the value used to compute the mass profile (\(z_{h}=0.9\) kpc McMillan 2017). Considering the systematic uncertainty on our result, the profiles are consistent with each other.
We note that the normalizations in Fig. 9 are clearly not directly comparable, as they originate from different quantities. Furthermore, the median of the compared models is set equal by definition. The shift \(c\) between the EM\({}_{\rm OVIII}\) and \(\Sigma_{M}\) normalizations may or may not contain information on the luminosity of the stellar population potentially contributing to the X-ray background emission.
Although the geometrical similarity between the X-ray emission of the disk-like component and the projected mass profile looks interesting, we stress that the mass and the combined (\(\beta\equiv 0.5\)) models compared in Figs. 8 and 9 are the result of a particular choice among the possible models, which we showed to be
Figure 8: Comparison between the projected morphologies of the combined (\(\beta\equiv 0.5\)) model EM (left panel) and the MW mass surface density (right panel) profiles. The mass surface density \(\Sigma_{M}\) has been arbitrarily renormalized to show the same range of (log) values [17; 19].
Figure 6: Cumulative surface brightness as a function of the distance from the Sun \(s\). For each model, the dotted and dashed lines show the spherical and disk-like components respectively. The hatched areas encompass the \(10-90\%\) percentile of the surface brightness distribution.
Figure 7: Cumulative mass of the hot gas as a function of the distance from the Galactic Center \(r\). For the combined (\(\beta\equiv 0.5\)) model the solid and dashed lines show the spherical and disk-like components respectively. The horizontal gray lines show 10, 30 and 100% of \(f_{b}\cdot 10^{12}M_{\odot}\) where \(f_{b}\equiv\Sigma_{b}/\Sigma_{m}=0.071\) is the cosmic baryon fraction.
Figure 9: Scatter plot of the quantities shown in Fig. 8. The dashed black line shows the 1:1 relation between them.
still affected by some systematic uncertainties. In addition, using a geometry-independent approach, Nakashima et al. (2018) estimated the unresolved M dwarf emission from the faint end of their \(\log N-\log S\) and found it to account for less than 20% of the CGM flux in the soft X-rays. EM and \(\Sigma_{M}\) may thus eventually differ significantly.
The EM of the warm-hot CGM component has also been reported to scale linearly with the EM of the hot component, as \(\rm EM_{warm-hot}\sim 10.8\times EM_{hot}\), across high (absolute) latitude regions, after their detection by HaloSat (Bluem et al. 2022). The authors interpret the EM relation as disfavouring stellar coronae being responsible for the emission of the hot phase. Their interpretation is based on the assumption that the warm-hot phase is produced by diffuse plasma. By giving up this assumption, the emission of both the warm-hot and the hot phases may be contributed by stellar coronae (in part or entirely). However, both the warm-hot and hot phases have also been detected through absorption lines (Das et al. 2019a,b). The detected column densities can not be explained by the small cross section offered by stellar coronae, while they are easier to accommodate assuming a truly diffuse nature of the plasma phases. While the absorption by the warm-hot component could be accounted for by the spherical halo, the hot phase associated with the disk-like component seems to rule out the unresolved population scenario as its main cause. Our point here is that the potential connection between the soft X-ray emission and an unresolved stellar population, while being disfavoured by different probes, is not rejected by our simple comparison. In the future, a dedicated and more quantitative investigation of the contribution of stellar coronae to the soft X-ray emission may be worthwhile, in the light of new mass models for the MW and of the amount and quality of the eROSITA data.
However, as already pointed out, the same evidence of a similar scale height between the X-ray disk-like component and the MW mass distribution can be explained by an X-ray emitting gas whose dynamics is governed by the same gravitational potential followed by the stellar thick disk, producing similar scale heights. In this picture, the hot atmosphere is also expected to be stationary to first approximation, as a single episode of energy injection (e.g. an outflow from an active star forming region) is not necessarily expected to correlate with the thick disk height. In the next section, we try to assess whether such a hot gaseous disk-like component can be sustained by the current stellar activity in the MW disk.
### The local star formation rate can sustain the disk-like component
What is the thermal luminosity implied by the emitting gas and how does it compare with the luminosity implied by star formation (i.e. supernovae explosions)? In Sec. 5.2 we derived that most of the observed emission comes from within a few kpc from the Sun. We thus compute the X-ray luminosity in the solar neighbourhood. We consider a cylinder of radius \(\rm{\Delta R}=3\) kpc centered on the Sun position and extending for \(\rm{\Delta z}=3\) kpc above and below the MW mid-plane. The mean density, weighted by the profile of the disk-like component within the cylinder, is \(\rm{\langle n\rangle}=(3.2\pm 2)\times 10^{-3}\,cm^{-3}\). Under the assumption that all the emission is produced by diffuse gas of thermal energy \(kT=0.15\) keV, its soft X-ray (0.2-2 keV) luminosity is \(L_{X}=\epsilon_{0.2-2\,\rm keV}\cdot V\)
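As an order-of-magnitude illustration of this energy-budget comparison, the snippet below evaluates the thermal luminosity of the cylinder and compares it with a fiducial supernova heating rate. The soft-band emissivity, the supernova energy of \(10^{51}\) erg and the assumed rate of roughly one core-collapse supernova per thousand years within the cylinder are assumed fiducial values, not numbers taken from the fit.

```python
import numpy as np

# Order-of-magnitude sketch of the luminosity vs. supernova-heating comparison.
# The cooling function, SN energy and local SN rate are assumed fiducial values.

KPC_CM = 3.086e21
vol = np.pi * (3.0 * KPC_CM) ** 2 * (6.0 * KPC_CM)   # cylinder of +-3 kpc height
n_mean = 3.2e-3                                      # cm^-3, from the text
lam_soft = 1e-23                                     # erg cm^3/s, assumed 0.2-2 keV cooling
L_x = lam_soft * (n_mean / 2.0) ** 2 * vol           # crude n_e * n_H approximation
P_sn = 1e51 / (1000.0 * 3.15e7)                      # assumed: one 1e51 erg SN per kyr
print(f"L_X ~ {L_x:.1e} erg/s;  SN heating ~ {P_sn:.1e} erg/s;  "
      f"L_X / P_SN ~ {L_x / P_sn:.1e}")
```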
at moderate/high Galactic latitudes (\(|b|>40\) deg). HVC pressure measurements are in fact degenerate with respect to the (unknown) cloud distances \(d\) from us, following \(P\propto d^{-1}\) (Wakker & Schwarz, 1991). For high-latitude clouds, the assumption \(d\sim z\) then introduces a smaller bias in the vertical pressure profile \(P_{z}\) than for clouds at lower latitudes. We also compute the pressure profile derived for the warm-hot disk-like component of our best-fit model. The model is computed at the Sun position, following \(n_{\odot}e^{-|z|/z_{h}}\).
We find that the pressure of the disk-like gas component is in the same ballpark as the ambient pressure required for the HVCs, with moderate scatter. We note that the detailed picture for each cloud may depend on other sources of (non-thermal) pressure support within the clouds (e.g. turbulence, magnetic fields, cosmic rays) that may produce the scatter in the profiles. The HVCs with the highest pressures may even be found at distances larger than 5 kpc, where the spherical halo pressure starts to dominate over the disk. However, the general order-of-magnitude agreement in the disk suggests a physical link between the hot gas pressure and the HVC pressure, which would otherwise have no reason to match. In addition, by assuming our model, the distance to the HVCs can be roughly inferred to be a few kpc from the Sun, broadly agreeing with the recent constraints put on other HVCs of similar properties (Lehner et al., 2022).
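A minimal sketch of the hot-gas pressure profile entering this comparison is given below; the Sun-Galactic Center distance of 8.2 kpc and the temperature \(kT=0.15\) keV are assumed here for illustration, and the disk-like parameters are the combined (\(\beta\equiv 0.5\)) values of Table 1.

```python
import numpy as np

# Minimal sketch of the vertical thermal-pressure profile above the Sun used in
# the HVC comparison: P(z)/k_B = n(z) * T, with n(z) = n_sun * exp(-|z|/z_h).
# R_SUN and kT are assumed values for illustration.

n0, R_h, z_h = 3.2e-2, 6.2, 1.1        # cm^-3, kpc (combined beta=0.5, Table 1)
R_SUN = 8.2                            # kpc, assumed
T = 0.15 * 1.16e7                      # K (kT = 0.15 keV; 1 keV ~ 1.16e7 K)

n_sun = n0 * np.exp(-R_SUN / R_h)      # midplane density at the Sun position
for z in (0.5, 1.0, 2.0, 5.0):         # kpc
    p_over_k = n_sun * np.exp(-z / z_h) * T
    print(f"z = {z:4.1f} kpc:  P/k_B ~ {p_over_k:8.0f} K cm^-3")
```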
### Results into context
Analyses of the O VIII line intensities for the study of the diffuse Galactic background have already been conducted using different instruments. In general, different instruments (e.g. XMM-_Newton_, Suzaku, HaloSat, eROSITA) have different fields of view and spectral resolution, reach different spatial resolutions and cover different total sky areas. In addition, and potentially based on the information retrieved by a given instrument, different authors may adopt different assumptions on the physical properties of the hot CGM of the MW (e.g. \(kT\), \(Z\)) when required. Despite these differences, however, a consistent picture is arising from these experiments.
In Fig. 11 we show the parameters derived by similar studies for the density profile of the disk-like component. We focus on the disk-like component as it is the one producing most of the emission attributed to the CGM in X-rays. We first note that, thanks to the sky coverage and spatial resolution of eROSITA, the statistical uncertainties are the smallest to date. Despite this, however, physical assumptions necessarily introduce systematic biases in the results, witnessed by the significant shift of the best-fit parameters of the different models summarized in Tab. 1 (empty magenta stars) with respect to the combined (\(\beta\equiv 0.5\)) model (filled magenta star). Compared with the other works, we find an overall agreement on \(z_{h}\sim 1-3\) kpc, with some preference towards the lower end of the range. The agreement likely comes from the fact that no physical assumption (i.e. on \(kT\), \(Z\)) is required to fit this parameter. Again, we note that the lower bound of a scale height of 1-3 kpc is not too far from the one estimated for the thick disk component of the MW, \(z_{\rm disk}=0.9\) kpc (McMillan, 2017), as discussed above. If the source of the disk-like emission is a truly diffuse plasma, the normalization of the profile \(n_{0}\) really indicates a density value. Although \(n_{0}\) shows inconsistencies between experiments when only the statistical uncertainties are taken into account, we note that assumptions on temperature and metal abundances shift \(n_{0}\) across the parameter space. Considering that our combined (\(\beta\equiv 0.5\)) model assumes \(kT=0.15\) keV and \(Z=0.1Z_{\odot}\), \(n_{0}\) is consistent, within uncertainties, with the \(\sim 2-4\) times lower values found using HaloSat (\(kT\simeq 0.225\) keV, \(Z\equiv 0.3Z_{\odot}\); Kaaret et al., 2020) and Suzaku (\(kT\simeq 0.28\) keV, \(Z\equiv Z_{\odot}\); Nakashima et al., 2018), as shown by our combined+high\(\epsilon\) model, which holds similar assumptions. The large statistical uncertainty on the value estimated using XMM-_Newton_ (Li & Bregman, 2017) relates to the small sky coverage of that experiment. Furthermore, a minor but additional scatter across different experiments possibly relates to the different treatment of the background and foreground components other than the hot CGM of the MW.
Using a conservative approach, \(n_{0}\simeq 1-6\times 10^{-2}\,{\rm cm}^{-3}\) reasonably encompasses the actual value, although we note that, in a sense, \(n_{0}\) still has a more geometrical meaning than a physical one, for at least two different reasons: i) our experiment and similar ones are currently only able to probe the CGM using average profile models, while the details at a precise location in space are expected to deviate (within some scatter) from the average picture; ii) as \(n_{0}\) is the extrapolation of the density profile for \((R,z)\ll(R_{h},z_{h})\), there may be no place at all where \(n=n_{0}\), as the physics at the Galactic Center largely deviates from the one assumed in these works.
In Fig. 12 we compute the column density of the O VIII emitting gas as seen by an observer far away from the MW, who computes \(N_{\rm OVIII}\) as a function of the projected distance from the Galactic Center. Different colors of the solid lines in Fig. 12 show the profiles for our combined (\(\beta\equiv 0.5\)) model (magenta) and for some of the other models summarized in Table 1, as labelled. In particular, we show the combined (\(\beta\equiv 0.3\)) and combined+high\(\epsilon\) models to investigate how \(\beta\), \(kT\) and \(Z\) affect \(N_{\rm OVIII}\). We compare the \(N_{\rm OVIII}\) profiles with predictions from the HESTIA simulations of the MW in the Local Group for different initial conditions (please refer to Damle et al., 2022, for details). Our models are all consistent with the HESTIA profiles within the first \(\sim 100\) kpc. At larger distances they seem to overpredict the column density of some of the HESTIA realizations, while they become inconsistent at \(R_{\rm proj}>500\) kpc, with the main difference induced by the choice of \(\beta\). These distances however correspond to \(\sim 2R_{\rm vir}\), where our assumptions on the physics may be broken by galaxy-galaxy interactions within the Local Group. Although this comparison does not allow us to argue in favor of or against some of our models, we find a general
Figure 11: \(z_{h}\) vs. \(n_{0}\) derived by different works, as labelled. Error bars only show the statistical uncertainties quoted in the references.
agreement between our results and the HESTIA predictions.
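The sensitivity of the external-observer column density to \(\beta\) can be illustrated with the shape-only sketch below; the power-law form of the \(\beta\) profile, the outer integration cutoff and the normalization are arbitrary, so only the relative trend with \(R_{\rm proj}\) is meaningful.

```python
import numpy as np

# Shape-only sketch of the external-observer column density of the spherical
# beta component, N(R_proj) ~ int n dl with n ~ r^(-3*beta). Normalizations
# and the outer cutoff are arbitrary; only the trend with R_proj is meaningful.

def column_shape(R_proj, beta, l_max=2000.0, n_steps=200001):
    l = np.linspace(-l_max, l_max, n_steps)       # kpc, along the sight line
    dl = l[1] - l[0]
    r = np.sqrt(R_proj ** 2 + l ** 2)
    return np.sum(r ** (-3.0 * beta)) * dl

for beta in (0.3, 0.5):
    ratio = column_shape(500.0, beta) / column_shape(100.0, beta)
    print(f"beta = {beta}: N(R_proj=500 kpc) / N(R_proj=100 kpc) = {ratio:.2f}")
```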
ary (i.e. long-lived) gaseous atmosphere following the same gravitational potential as the stellar thick disk;
* vi) a truly diffuse nature of the disk-like component can be energetically sustained by star formation (via heating by supernovae explosions) at least locally;
* vii) the disk-like component of the warm-hot CGM can provide (part of) the ambient pressure support required by observations of high velocity clouds in the MW.
We also demonstrated the augmented statistical power provided by the quality and amount of the eROSITA data. Our knowledge of the CGM properties is thus now limited by the still necessary physical assumptions on the plasma properties. These assumptions will potentially be loosened by observations of the soft X-ray band with future high resolution spectrometers (e.g. XRISM, Athena), allowing individual emission lines to be resolved and, in turn, \(kT\) and \(Z\) in the CGM of the MW to be further constrained.
###### Acknowledgements.
This work is based on data from eROSITA, the soft X-ray instrument aboard SRG, a joint Russian-German science mission supported by the Russian Space Agency (Roskosmos), in the interests of the Russian Academy of Sciences represented by its Space Research Institute (IKI), and the Deutsches Zentrum für Luft- und Raumfahrt (DLR). The SRG spacecraft was built by Lavochkin Association (NPOL) and its subcontractors, and is operated by NPOL with support from the Max Planck Institute for Extraterrestrial Physics (MPE). The development and construction of the eROSITA X-ray instrument was led by MPE, with contributions from the Dr. Karl Remeis Observatory Bamberg & ECAP (FAU Erlangen-Nürnberg), the University of Hamburg Observatory, the Leibniz Institute for Astrophysics Potsdam (AIP), and the Institute for Astronomy and Astrophysics of the University of Tübingen, with the support of DLR and the Max Planck Society. The Argelander Institute for Astronomy of the University of Bonn and the Ludwig Maximilians Universität Munich also participated in the science preparation for eROSITA. The eROSITA data shown here were processed using the eSASS software system developed by the German eROSITA consortium. NL, GP and XZ acknowledge financial support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program HotMilk (grant agreement No. 865637). GP also acknowledges support from Bando per il Finanziamento della Ricerca Fondamentale 2022 dell'Istituto Nazionale di Astrofisica (INAF): GO Large program. The authors thank Mattia Sormani and Shifra Mandel for fruitful discussion and help.
|
2308.12348 | Particle-in-cell Simulations of the Magnetorotational Instability in
Stratified Shearing Boxes | The magnetorotational instability (MRI) plays a crucial role in regulating
the accretion efficiency in astrophysical accretion disks. In low-luminosity
disks around black holes, such as Sgr A* and M87, Coulomb collisions are
infrequent, making the MRI physics effectively collisionless. The collisionless
MRI gives rise to kinetic plasma effects that can potentially affect its
dynamic and thermodynamic properties. We present 2D and 3D particle-in-cell
(PIC) plasma simulations of the collisionless MRI in stratified disks using
shearing boxes with net vertical field. We use pair plasmas, with initial
$\beta=100$ and concentrate on sub-relativistic plasma temperatures ($k_BT
\lesssim mc^2$). Our 2D and 3D runs show disk expansion, particle and magnetic
field outflows, and a dynamo-like process. They also produce magnetic pressure
dominated disks with (Maxwell stress dominated) viscosity parameter $\alpha
\sim 0.5-1$. By the end of the simulations, the dynamo-like magnetic field
tends to dominate the magnetic energy and the viscosity in the disks. Our 2D
and 3D runs produce fairly similar results, and are also consistent with
previous 3D MHD simulations. Our simulations also show nonthermal particle
acceleration, approximately characterized by power-law tails with temperature
dependent spectral indices $-p$. For temperatures $k_BT \sim 0.05-0.3\, mc^2$,
we find $p\approx 2.2-1.9$. The maximum accelerated particle energy depends on
the scale separation between MHD and Larmor-scale plasma phenomena in a way
consistent with previous PIC results of magnetic reconnection-driven
acceleration. Our study constitutes a first step towards modeling from first
principles potentially observable stratified MRI effects in low-luminosity
accretion disks around black holes. | Astor Sandoval, Mario Riquelme, Anatoly Spitkovsky, Fabio Bacchini | 2023-08-23T18:00:15Z | http://arxiv.org/abs/2308.12348v1 | # Particle-in-cell Simulations of the Magnetorotational Instability
###### Abstract
The magnetorotational instability (MRI) plays a crucial role in regulating the accretion efficiency in astrophysical accretion disks. In low-luminosity disks around black holes, such as Sgr A* and M87, Coulomb collisions are infrequent, making the MRI physics effectively collisionless. The collisionless MRI gives rise to kinetic plasma effects that can potentially affect its dynamic and thermodynamic properties. We present 2D and 3D particle-in-cell (PIC) plasma simulations of the collisionless MRI in stratified disks using shearing boxes with net vertical field. We use pair plasmas, with initial \(\beta=100\) and concentrate on sub-relativistic plasma temperatures (\(k_{B}T<mc^{2}\)). Our 2D and 3D runs show disk expansion, particle and magnetic field outflows, and a dynamo-like process. They also produce magnetic pressure dominated disks with (Maxwell stress dominated) viscosity parameter \(\alpha\sim 0.5-1\). By the end of the simulations, the dynamo-like magnetic field tends to dominate the magnetic energy and the viscosity in the disks. Our 2D and 3D runs produce fairly similar results, and are also consistent with previous 3D MHD simulations. Our simulations also show nonthermal particle acceleration, approximately characterized by power-law tails with temperature dependent spectral indices \(-p\). For temperatures \(k_{B}T\sim 0.05-0.3\,mc^{2}\), we find \(p\approx 2.2-1.9\). The maximum accelerated particle energy depends on the scale separation between MHD and Larmor-scale plasma phenomena in a way consistent with previous PIC results of magnetic reconnection-driven acceleration. Our study constitutes a first step towards modeling from first principles potentially observable stratified MRI effects in low-luminosity accretion disks around black holes.
keywords: plasmas - instabilities - accretion disks - acceleration of particles - dynamo
## 1 Introduction
The primary driver of accretion in astrophysical disks is believed to be the turbulence generated by the magnetorotational instability (MRI; Balbus & Hawley, 1991, 1998), which provides the needed outward transport of angular momentum. Most of our knowledge about the nonlinear evolution of the MRI in different disk regimes comes from magnetohydrodynamic (MHD) simulations. However, in the regime where the plasma accretion rate is much lower than the Eddington rate, the Coulomb mean free path of the particles can be much larger than the system size, rendering the disk effectively collisionless and making the MHD approach inapplicable. This collisionless accretion regime is expected, for instance, in the low-hard state of X-ray binaries (Esin et al., 1997) as well as around the central supermassive black holes of most nearby galaxies, including M87 and Sagittarius A* (Sgr A*) in our own Milky Way (Yuan & Narayan, 2014).
The collisionless version of the MRI can give rise to several kinetic plasma phenomena, which may in turn affect its dynamics as well as the thermodynamic properties of the accreting plasma. These kinetic phenomena have been studied mainly via unstratified shearing-box MRI simulations, using either a fluid approach through kinetic-MHD models (Sharma et al., 2006, 2007) or particle simulations that employ either the hybrid or the particle-in-cell (PIC) methods (Riquelme et al., 2012; Hoshino, 2013, 2015; Kunz et al., 2016; Inchingolo et al., 2018; Bacchini et al., 2022). One of the relevant kinetic phenomena is the appearance of an anisotropic stress, which is due to the presence of a pressure anisotropy in the accreting turbulence. Previous unstratified shearing-box simulation studies, both based on fluid and particle methods, have found that this anisotropic stress may contribute significantly to the disk viscosity, making the collisionless MRI turbulence more efficient in transporting angular momentum compared to its collisional counterpart.
Another potentially important collisionless phenomenon is the possibly different ion and electron heating rates (e.g., Sharma et al., 2007). However, to date PIC studies have only used an ion to electron mass ratio \(m_{i}/m_{e}=1\) (or close to unity), therefore not capturing the possibly different heating efficiencies of the different species. Plasma energization can also include nonthermal particle acceleration. Studying this phenomenon requires fully kinetic treatments of at least one species, which has been done
through PIC and hybrid simulations. Different levels of nonthermal particle acceleration have indeed been found by these types of studies (Riquelme et al., 2012; Hoshino, 2013, 2015; Kunz et al., 2016; Inchingolo et al., 2018; Bacchini et al., 2022), although the conditions under which this acceleration is most efficient and the mechanism(s) underlying this phenomenon remain to be clarified.
An important physical ingredient, so far not included in hybrid or fully kinetic PIC studies of the MRI, is the vertical stratification of the disks. While the unstratified local shearing-box approximation allows us to investigate a disk by focusing on a small vertical section, this approach does not account for potentially important processes in stratified disks, such as outflows, disk expansion and the generation of a corona, among others. Stratified disks have been included in previous MHD shearing-box simulations of the MRI, which have found that stratification can give rise to important phenomena like outflows and dynamo-like processes, which may in turn affect the overall accretion efficiency of the disks (Bai & Stone, 2013; Salvesen et al., 2016). Also, a kinetic-MHD study that considers a stratified disk (Hirabayashi & Hoshino, 2017) has found that disk stratification may decrease the importance of anisotropic stress significantly compared to unstratified kinetic-MHD results.
To address these possible effects, our study employs 2D and 3D stratified shearing-box PIC simulations to examine the development of the collisionless MRI. We use equal ion and electron masses, \(m_{i}=m_{e}=m\) for computational convenience, and focus on the sub-relativistic temperature regime, relevant for the inner regions of black hole accretion disks (\(k_{B}T\lesssim mc^{2}\), where \(k_{B}\) is the Boltzmann constant, \(T\) is the plasma temperature and \(c\) is the speed of light). By comparing with unstratified PIC runs, we show the importance of including stratification to describe phenomena like plasma beta evolution, effective viscosity and particle acceleration. In our 2D runs we pay special attention to the role played by the ratio between the initial cyclotron frequency of the particles and the Keplerian frequency of the disk, \(\omega_{c,0}/\Omega_{0}\) (hereafter, the scale-separation ratio). In realistic disks, this ratio satisfies \(\omega_{c,0}/\Omega_{0}\gg 1\) and determines the scale separation between mesoscale MHD phenomena and kinetic microphysical processes. Even though most of our analysis is done in 2D, in this paper we take a step forward by conducting the first fully kinetic 3D simulation of the stratified MRI evolution. This preliminary 3D simulation enables us to compare it with our established 2D results and gain valuable insight into the limitations of the 2D approach. This exploration sets the stage for future investigations aimed at fully unraveling the complexities of the 3D scenario.
The paper is organized as follows. In §2 we describe our numerical method and simulation setup. In §3 and §4 we present the general properties of the stratified MRI turbulence in 2D and 3D, respectively. In §5 we quantify the effective viscosity in our runs, and in §6 we analyze the ability of the stratified MRI turbulence to accelerate particles. Finally, we present our conclusions in §7.
## 2 Simulation setup
We use the electromagnetic PIC code TRISTAN-MP (Buneman, 1993; Spitkovsky, 2005) in 2D and 3D. Our simulations are performed in the local, shearing-box approximation (Hawley et al., 1995), using Cartesian coordinates where the \(x\), \(y\) and \(z\) axes correspond to the radial, azimuthal (or toroidal) and vertical directions of the disk, respectively. This reference frame rotates with an angular velocity \(\mathbf{\Omega}_{0}=\Omega_{0}\hat{z}\), corresponding to the Keplerian angular velocity at a radius that coincides with the center of our simulation box. In order to model a stratified disk, we include the vertical component of the gravitational force produced by the central object, \(-m\Omega_{0}^{2}z\hat{z}\), and we initially set up an isothermal disk in hydrostatic equilibrium with a \(z\)-dependent density profile:
\[n(z)=n_{0}\exp\left(-\frac{z^{2}}{H_{0}^{2}}\right), \tag{1}\]
where \(n_{0}\) is the plasma density at the disk midplane (considering both species), \(H_{0}\) is the scale height of the disk given by \(H_{0}=(2k_{B}T_{0}/m)^{1/2}/\Omega_{0}\) and \(T_{0}\) is the initial plasma temperature, which is given by \(k_{B}T_{0}/mc^{2}=5\times 10^{-3}\) in all of our runs. Our runs do not include any type of particle cooling, so a gradual increase in the temperature and scale height of the simulated disks is expected due to dissipation of magnetic energy. The whole simulation domain is initially threaded by a vertical, homogeneous magnetic field \(\mathbf{B}_{0}=B_{0}\hat{z}\), so that the initial plasma \(\beta\) parameter in the disk midplane, \(\beta_{0}\) (\(=8\pi n_{0}k_{B}T_{0}/B_{0}^{\,2}\)), has a value of \(\beta_{0}=100\). These choices for \(T_{0}\) and \(\beta_{0}\) imply that the initial Alfven velocity in the disk midplane, \(v_{A,0}\) (\(=B_{0}/(4\pi n_{0}m)^{1/2}\)), is \(v_{A,0}/c=10^{-2}\) in all of our runs.
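These quoted numbers can be cross-checked directly from the definitions above; the short sketch below does so, and also expresses \(\lambda_{MRI}=2\pi v_{A,0}/\Omega_{0}\) in terms of the plasma skin depth for one value of the scale-separation ratio (run ST2D-20, using the grid spacing of Table 1).

```python
import numpy as np

# Consistency check of the initial disk parameters quoted above, using only the
# definitions given in the text (quantities expressed in units of c and Omega_0).

kT0_over_mc2 = 5e-3
beta0 = 100.0

# beta0 = 8*pi*n0*kB*T0/B0^2 and v_A0^2 = B0^2/(4*pi*n0*m) combine into:
vA0_over_c = np.sqrt(2.0 * kT0_over_mc2 / beta0)
H0 = np.sqrt(2.0 * kT0_over_mc2)              # H0 in units of c/Omega_0
lam_mri = 2.0 * np.pi * vA0_over_c            # lambda_MRI in units of c/Omega_0

# since omega_c0/omega_p0 = v_A0/c for these definitions, the most unstable
# wavelength in skin depths depends only on the scale-separation ratio
wc0_over_Omega0 = 20.0                        # e.g. run ST2D-20
lam_mri_skin = 2.0 * np.pi * wc0_over_Omega0  # lambda_MRI in units of c/omega_p0

print(f"v_A,0/c = {vA0_over_c:.3f}")                       # 0.010, as stated
print(f"H0 / lambda_MRI = {H0 / lam_mri:.1f}")             # ~1.6
print(f"lambda_MRI = {lam_mri_skin:.0f} c/omega_p,0 "
      f"(~{lam_mri_skin / 0.35:.0f} cells for Delta = 0.35 c/omega_p,0)")
```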
### Basic Equations
In our rotating frame, the time derivative of the particles' momentum \(\mathbf{p}=(p_{x},p_{y},p_{z})\) is determined by the Lorentz force, the radial and vertical components of gravity, and the Coriolis force:1
Footnote 1: Since Coriolis forces conserve kinetic energy, the standard Coriolis expression for the evolution of \(\mathbf{v}\), \(d\mathbf{v}/dt=-2\mathbf{\Omega}_{0}\times\mathbf{v}\), can be directly translated into a relativistic momentum \(\mathbf{p}\) evolution as \(d\mathbf{p}/dt=-2\mathbf{\Omega}_{0}\times\mathbf{p}\).
\[\frac{d\mathbf{p}}{dt}=q(\mathbf{E}+\frac{\mathbf{v}}{c}\times\mathbf{B})+3m\Omega_{0}^{2}x\hat{x}-m\Omega_{0}^{2}z\hat{z}-2\mathbf{\Omega}_{0}\times\mathbf{p}, \tag{2}\]
where \(\mathbf{v}=\mathbf{p}/(\gamma m)=(v_{x},v_{y},v_{z})\), \(q\), \(\mathbf{E}\) and \(\mathbf{B}\) are, respectively, the particle velocity, the particle charge and the electric and magnetic fields. In this non-inertial frame, Maxwell's equations also acquire extra terms, which modify the evolution of the electric field as (Schiff, 1939):
\[\frac{\partial\mathbf{E}}{\partial t}=c\nabla\times\mathbf{B}-4\pi\mathbf{J}+\frac{\mathbf{v}_ {0}}{c}\times\frac{\partial\mathbf{B}}{\partial t}-\nabla\times\left(\mathbf{v}_{0} \times\left(\mathbf{E}-\frac{\mathbf{v}_{0}}{c}\times\mathbf{B}\right)\right), \tag{3}\]
where \(\mathbf{J}\) is the current density and \(\mathbf{v}_{0}\) is the Keplerian rotation velocity of the disk at the center of our simulation box (the evolution of the magnetic field \(\partial\mathbf{B}/\partial t=-c\nabla\times\mathbf{E}\) is not modified in the rotating frame). As discussed in Riquelme et al. (2012), the terms proportional to \(\mathbf{v}_{0}\) in Eq. 3 can in principle be comparable to the displacement current \(\partial\mathbf{E}/\partial t\), but should not change the non-relativistic MHD behavior of the plasma. This is because, in the non-relativistic regime (\(|\mathbf{v}_{0}|=v_{0}\ll c\)), these extra terms are always much smaller than the first term on the right hand side of Eq. 3 (\(c\nabla\times\mathbf{B}/4\pi\)). Therefore, the current density \(\mathbf{J}\) should still adjust to satisfy \(\mathbf{J}\approx c\nabla\times\mathbf{B}/4\pi\), as assumed in the non-relativistic MHD approach. Thus, as it has been done in all previous PIC and hybrid studies of the MRI, we drop the terms proportional to \(v_{0}\) in Eq. 3 and solve the conventional Maxwell's equations. We are thus implicitly assuming that these (beyond MHD) modifications to the displacement current do not affect considerably the kinetic MRI dynamics.
### Shearing Coordinates
Simulating the MRI in the shearing-box approximation requires implementing shearing periodic boundary conditions in the radial (\(x\)) direction (e.g., Hawley et al. 1995). We do this by employing _shearing coordinates_ (Riquelme et al., 2012), in which the grid follows the shearing velocity profile within the shearing box, allowing the use of standard periodic boundary conditions in the radial (\(x\)) direction. However, the use of shearing coordinates introduces modifications in the evolution of the electric and magnetic fields, as well as in the evolution of the particles' momenta and positions. These modifications are described in detail in the Appendix of Riquelme et al. (2012) and, for easy access, are also summarized below.
In the shearing coordinates, the fields evolve as
\[\frac{\partial\mathbf{B}}{\partial t}=-c\nabla\times\mathbf{E}-\frac{3\Omega_{0}}{2}B_{x}\hat{y}+\frac{3\Omega_{0}}{2}\left(ct\frac{\partial\mathbf{E}}{\partial y}+\frac{y}{c}\frac{\partial\mathbf{E}}{\partial t}\right)\times\hat{x}\quad\text{and} \tag{4}\]
\[\frac{\partial\mathbf{E}}{\partial t}=c\nabla\times\mathbf{B}-4\pi\mathbf{J}-\frac{3\Omega_{0}}{2}E_{x}\hat{y}-\frac{3\Omega_{0}}{2}\left(ct\frac{\partial\mathbf{B}}{\partial y}+\frac{y}{c}\frac{\partial\mathbf{B}}{\partial t}\right)\times\hat{x}. \tag{5}\]
The last terms in these equations, which are proportional to \(y/c\) (hereafter, the \(y\)-dependent terms), can, however, be neglected in the \(v_{A,0}/c\ll 1\) regime, as shown below. This can be seen considering that the size of our shearing boxes in the \(y\) direction should typically be a few times the wavelength of the most unstable MRI modes \(\lambda_{MRI}=2\pi v_{A,0}/\Omega_{0}\), which means that \(\Omega_{0}y\sim v_{A,0}\) (the fact that \(\lambda_{MRI}\) is the dominant scale of the MRI turbulence even in its nonlinear stage will be shown in §4.1 and further discussed in §4.2). Also, assuming that the order of magnitude of the time derivative of any field component \(f\) should satisfy \(\partial f/\partial t\sim\Omega_{0}f\), one can calculate the ratios between the magnitudes of the \(y\)-dependent terms in Eqs. 4 and 5 and the left hand side of these equations, obtaining:
\[\frac{\left|\left(\Omega_{0}y/c\right)\partial\mathbf{E}/\partial t\right|}{ \left|\partial\mathbf{B}/\partial t\right|}\sim\frac{|\mathbf{E}|}{|\mathbf{B}|}\frac{v_{ A,0}}{c}, \tag{6}\]
for Eq. 4 and
\[\frac{\left|\left(\Omega_{0}y/c\right)\partial\mathbf{B}/\partial t\right|}{ \left|\partial\mathbf{E}/\partial t\right|}\sim\frac{|\mathbf{B}|}{|\mathbf{E}|}\frac{v_{ A,0}}{c} \tag{7}\]
for Eq. 5. Since in general \(|\mathbf{E}|/|\mathbf{B}|\lesssim 1\) (which is verified in §4.2), the right hand side of Eq. 6 is much smaller than unity as long as \(v_{A,0}/c\ll 1\), implying that the \(y\)-dependent term in Eq. 4 can be safely neglected. The right hand side of Eq. 7, on the other hand, is not necessarily \(\ll 1\), since its value depends on the precise magnitude of the ratio \(|\mathbf{E}|/|\mathbf{B}|\), which makes the \(y\)-dependent term in Eq. 5 not necessarily negligible. However, using the approximation \(\nabla f\sim\lambda_{MRI}^{-1}f\sim(\Omega_{0}/v_{A,0})f\), we can calculate the ratio between the magnitude of this \(y\)-dependent term and an estimate of the magnitude of the first term on the right hand side of Eq. 5 (\(c\nabla\times\mathbf{B}\)), obtaining:
\[\frac{\left|\left(\Omega_{0}y/c\right)\partial\mathbf{B}/\partial t\right|}{ \left|c\nabla\times\mathbf{B}\right|}\sim\Big{(}\frac{v_{A,0}}{c}\Big{)}^{2}. \tag{8}\]
This implies that dropping the \(y-\)dependent term in Eq. 5 should not change the non-relativistic MHD behavior of the plasma, in which \(\mathbf{J}\approx c\nabla\times\mathbf{B}/4\pi\), and is also consistent with our previous choice of ignoring the terms proportional to \(v_{0}\) in Eq. 3. Doing a similar analysis, we find that the ratio between the magnitudes of the third and the first terms on the right hand side of Eq. 5 is
\[\frac{\Omega_{0}|E_{x}|}{\left|c\nabla\times\mathbf{B}\right|}\sim\frac{|\mathbf{E}|}{ |\mathbf{B}|}\frac{v_{A,0}}{c}, \tag{9}\]
so, for consistency, we also neglect the former. Therefore, in our simulations we evolve the fields by solving the equations:
\[\frac{\partial\mathbf{B}}{\partial t}=-c\nabla\times\mathbf{E}-\frac{3}{2}\Omega_{0}B_{x}\hat{y}+\frac{3}{2}\Omega_{0}t\,c\,\frac{\partial\mathbf{E}}{\partial y}\times\hat{x}\quad\text{and} \tag{10}\]
\[\frac{\partial\mathbf{E}}{\partial t}=c\nabla\times\mathbf{B}-4\pi\mathbf{J}-\frac{3}{2}\Omega_{0}t\,c\,\frac{\partial\mathbf{B}}{\partial y}\times\hat{x}. \tag{11}\]
In terms of particles evolution, in the shearing coordinates each particle's momentum \(\mathbf{p}\) evolves as (Riquelme et al., 2012):
\[\frac{d\mathbf{p}}{dt}=2\Omega_{0}p_{y}\hat{x}-\frac{1}{2}\Omega_{0}p_{x}\hat{y}-m\Omega_{0}^{2}z\hat{z}+q(\mathbf{E}+\frac{\mathbf{v}}{c}\times\mathbf{B}), \tag{12}\]
which is valid in the limit \(\Omega_{0}y\sim v_{A,0}\ll c\) and as long as the
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Run & UN2D-20 & ST2D-28 & ST2D-20 & ST2D-14 & ST2D-10 & ST2D-7 & ST2D-3.5 & ST3D-3.5 \\ \hline \(\omega_{c,0}/\Omega_{0}\) & 20 & 28 & 20 & 14 & 10 & 7 & 3.5 & 3.5 \\ \(L_{x}\) [\(2\pi v_{A,0}/\Omega_{0}\)] & 22 & 35 & 43 & 47 & 46 & 46 & 48 & 24 \\ \(L_{y}\) [\(2\pi v_{A,0}/\Omega_{0}\)] & - & - & - & - & - & - & - & 24 \\ \(L_{z}\) [\(2\pi v_{A,0}/\Omega_{0}\)] & 22 & 120 & 89 & 95 & 92 & 93 & 96 & 96 \\ \(\Delta\) [\(c/\omega_{p,0}\)] & 0.35 & 0.35 & 0.35 & 0.35 & 0.35 & 0.35 & 0.35 & 0.35 \\ \(N_{ppc}\) & 25 & 400 & 350 & 200 & 200 & 200 & 200 & 30 \\ \(c\) [\(\Delta/\Delta t\)] & 0.45 & 0.45 & 0.45 & 0.45 & 0.45 & 0.45 & 0.45 & 0.225 \\ \hline \end{tabular} We list the initial parameters of our simulations, which are: the scale-separation ratio \(\omega_{c,0}/\Omega_{0}\), where \(\omega_{c,0}=|q|B_{0}/mc\) is the initial cyclotron frequency of the particles, the box size along the different axes (\(L_{x}\), \(L_{y}\) and \(L_{z}\)) in terms of \(\lambda_{MRI}=2\pi v_{A,0}/\Omega_{0}\), the grid spacing \(\Delta\) (equal in all dimensions) in terms of the initial plasma skin depth, \(c/\omega_{p,0}=c/(4\pi n_{0}q^{2}/m)^{1/2}\), the initial number \(N_{ppc}\) of particles (ions and electrons) per cell, and the speed of light \(c\) in units of \(\Delta/\Delta t\), where \(\Delta t\) is the simulation time step.
\end{table}
Table 1: Simulations parameters
shear velocity of the plasma within our simulation domain, \(\mathbf{v_{s}}\), is non-relativistic. This last assumption is justified since \(v_{s}\sim\Omega_{0}x\), and \(x\) in our shearing-box is also of the order of a few times \(\lambda_{MRI}=2\pi v_{A,0}/\Omega_{0}\). This implies that \(|\mathbf{v_{s}}|\sim\Omega_{0}x\sim v_{A,0}\), making Eq. 12 valid in the regime \(v_{A,0}\ll c\).
Finally, the evolution of the particles position \(\mathbf{r}=(x,y,z)\) is given by:
\[\frac{d\mathbf{r}}{dt}=\mathbf{v}+\frac{3}{2}\Omega_{0}t\;v_{x}\hat{y}, \tag{13}\]
which is obtained combining Eqs. A30 and A35 of Riquelme et al. (2012) also in the limit in which \(\mathbf{v_{s}}\) is non-relativistic.
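To make the extra shearing-frame terms of Eqs. 12 and 13 concrete, the fragment below sketches a single, deliberately simplified explicit-Euler particle update in units \(c=m=1\). The actual TRISTAN-MP integrator is a Boris-type scheme, so this is not the update used in the runs, only an illustration of the additional force and drift terms.

```python
import numpy as np

# Simplified, explicit-Euler sketch of the particle update of Eqs. 12 and 13 in
# shearing coordinates (units c = m = 1). Not the actual Boris-type scheme of
# TRISTAN-MP; it only makes the extra shearing-frame terms explicit.

def push(r, p, E, B, q, Omega0, t, dt):
    gamma = np.sqrt(1.0 + np.dot(p, p))
    v = p / gamma
    dpdt = q * (E + np.cross(v, B))                  # Lorentz force
    dpdt += np.array([2.0 * Omega0 * p[1],           # +2*Omega_0*p_y   (x)
                      -0.5 * Omega0 * p[0],          # -(1/2)*Omega_0*p_x (y)
                      -Omega0 ** 2 * r[2]])          # vertical gravity  (z)
    drdt = v + np.array([0.0, 1.5 * Omega0 * t * v[0], 0.0])   # Eq. 13
    return r + dt * drdt, p + dt * dpdt

# example: one step in the initial vertical field B0 = z-hat, with
# omega_c0/Omega_0 = 20 (so Omega_0 = 0.05 in units where omega_c0 = 1)
r0, p0 = np.array([0.0, 0.0, 0.1]), np.array([0.0, 0.01, 0.0])
r1, p1 = push(r0, p0, np.zeros(3), np.array([0.0, 0.0, 1.0]),
              q=1.0, Omega0=0.05, t=0.0, dt=0.01)
```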
In order to safeguard the numerical stability and accuracy of our simulations, every time the factor \((3/2)\Omega_{0}t\) on the right hand side of Eqs. 10, 11 and 13 equals an integer, we reset these equations to their initial (\(t=0\)) shape. This implies a periodic "unshearing" of our shearing grid that, therefore, requires a remapping of the electric and magnetic fields, as well as of the particles positions. This periodic redefinition of the time origin in our runs means that the factors \((3/2)\Omega_{0}t\) in Eqs. 10, 11 and 13 never surpass unity. Notice also that Eq. 13 implies that relativistic particles may in principle change position in the \(y-\)direction at a rate close to twice the speed of light. This should not be considered a violation of special relativity, since this equation only describes the update of particle positions in our non-inertial, time-varying shearing coordinates. However, this situation may affect the numerical stability of our method. In order to avoid this possibility, our 3D runs use \(c=0.225\Delta/\Delta t\). Since this is not an issue in our 2D runs, in those cases we use \(c=0.45\Delta/\Delta t\) (see Table 1).
We emphasize that, in order to obtain our plasma evolution equations, we have assumed a non-relativistic plasma with \(v_{A,0}/c\ll 1\), which rotates at non-relativistic velocities (\(v_{0}\ll c\)). This implies that our work strictly applies to a plasma at radii significantly larger than the gravitational radius of a central black hole. For this reason, in this work we concentrate on a sub-relativistic regime, where the plasma temperature satisfies \(k_{B}T\lesssim mc^{2}\). Notice, however, that the treatment of individual particles is relativistic, since (as we see below) a small fraction of them can still be nonthermally accelerated to energies much larger than \(mc^{2}\).
### Boundary conditions along \(z\)
Using shearing coordinates allows the use of periodic boundary conditions both in the \(x\) (radial) and \(y\) (toroidal) coordinates. In the \(z\) coordinate we use open boundary conditions, which allow the existence of field and particle outflows in our stratified setup. Thus, in our runs particles are removed from the simulation box after they cross the vertical boundaries, while the fields are absorbed by these boundaries. This configuration effectively prevents outflowing fields from rebounding into the simulation domain (Cerutti et al., 2015; Belyaev, 2015; Sironi et al., 2016). This is done by implementing an absorption layer of width \(\Delta_{abs}\) in the vertical boundaries, where the terms
\[-\eta(z)(\mathbf{B}-\mathbf{B}_{0})\quad\text{and}\;-\eta(z)\mathbf{E} \tag{14}\]
are added to the right hand side of Eqs. 10 and 11, respectively. We use \(\Delta_{abs}=50\) cells and \(\eta(z)=(40/\Delta t)(|z-z_{abs}|/\Delta_{abs})^{3}\) within the absorption layer (\(\eta(z)=0\) otherwise), where \(z_{abs}\) is the inner edge of the absorption layer and \(\Delta t\) is the simulation time step.
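A sketch of this absorbing-layer prescription is given below for the top boundary (a mirrored layer acts at the bottom). The damping is applied here through an exponential integrating factor, which is one stable way of adding the terms of Eq. 14 over a time step, and not necessarily the exact numerical implementation used in the code.

```python
import numpy as np

# Sketch of the absorbing vertical boundary: inside a layer of width Delta_abs
# the fields are damped towards (0, B0) by the terms -eta(z)*E and
# -eta(z)*(B - B0) of Eq. 14. Only the top layer is shown here.

def eta(z, z_abs, delta_abs, dt):
    """Damping rate; nonzero only inside the top absorbing layer (z >= z_abs)."""
    return np.where(z >= z_abs, (40.0 / dt) * ((z - z_abs) / delta_abs) ** 3, 0.0)

def damp_fields(E, B, B0, z, z_abs, delta_abs, dt):
    """Damp E(z) and B(z) towards (0, B0) over one step, exactly integrating
    dF/dt = -eta*(F - F_target) for each field component."""
    f = np.exp(-eta(z, z_abs, delta_abs, dt) * dt)[:, None]   # broadcast over components
    return f * E, B0 + f * (B - B0)

# example usage on a 1D vertical grid of field vectors
z = np.linspace(0.0, 100.0, 512)
E = np.zeros((z.size, 3)); B = np.ones((z.size, 3)); B0 = np.ones((z.size, 3))
E, B = damp_fields(E, B, B0, z, z_abs=90.0, delta_abs=10.0, dt=0.45)
```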
### Numerical parameters
The simulations presented in this paper and their numerical parameters are listed in Table 1, with all physical quantities in stratified runs corresponding to plasma conditions in the disk midplane. These parameters are the scale-separation ratio \(\omega_{c,0}/\Omega_{0}\), where \(\omega_{c,0}=|q|B_{0}/mc\) is the initial cyclotron frequency of the particles, the box size along the different axes (\(L_{x}\), \(L_{y}\) and \(L_{z}\)) in terms of \(\lambda_{MRI}\), the grid spacing \(\Delta\) in terms of the initial plasma skin depth, \(c/\omega_{p,0}=c/(4\pi n_{0}q^{2}/m)^{1/2}\), the initial number \(N_{ppc}\) of macro-particles (ions and electrons) per cell and the speed of light \(c\) in units of \(\Delta/\Delta t\), where \(\Delta t\) is the simulation time step. Notice that we ran simulations using several values of \(\Delta\), \(N_{ppc}\) and \(L_{x}\), \(L_{y}\) and \(L_{z}\) to make sure that our results are numerically converged. Table 1 only includes the simulations used to present our results.
### Notation Convention
In this section, we introduce various types of averages denoted by angled brackets with different subscripts, namely \(\langle A\rangle_{x}\), \(\langle A\rangle_{x-y}\), and \(\langle A\rangle_{v}\). \(\langle A\rangle_{x}\) denotes the average along the \(x\) axis at a fixed height \(z\) for 2D stratified simulations, \(\langle A\rangle_{x-y}\) denotes the average over the \(x-y\) plane at a fixed \(z\) for 3D stratified simulations, and \(\langle A\rangle_{v}\) represents the average taken over the volume of the disk for stratified simulations, while for unstratified simulations it represents the average over the entire simulation domain.
Additionally, we use an overline notation (e.g., \(\overline{A}\)) for quantities that are computed as the ratio of two volume averages. For instance, for the plasma \(\beta\) and temperature we define \(\overline{\beta}\equiv\langle 8\pi P\rangle_{v}/\langle B^{2}\rangle_{v}\) and \(k_{B}\overline{T}\equiv\langle P\rangle_{v}/\langle n\rangle_{v}\). In these expressions, \(\langle P\rangle_{v}=\langle P_{\parallel}\rangle_{v}/3+2\langle P_{\perp}\rangle_{v}/3\), where \(P\) denotes the isotropic pressure, and \(P_{\parallel}\) and \(P_{\perp}\) correspond to the pressure parallel and perpendicular to the local magnetic field, respectively.
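For concreteness, a minimal sketch of these overline quantities, assuming arrays of the pressure components, density and \(B^{2}\) restricted to the averaging region:

```python
import numpy as np

def overline_quantities(P_par, P_perp, n, B2):
    P = P_par / 3.0 + 2.0 * P_perp / 3.0            # isotropic pressure
    beta_bar = 8.0 * np.pi * P.mean() / B2.mean()   # ratio of two volume averages
    kT_bar = P.mean() / n.mean()                    # k_B T_bar = <P>_v / <n>_v
    return beta_bar, kT_bar
```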
Since in the stratified runs these averages are calculated in the disk region, we define this region through the condition \(|z|<H(\overline{T})\), where \(H(\overline{T})=(2\,k_{B}\overline{T}/m)^{1/2}/\Omega_{0}\) denotes the instantaneous scale height of the disk. Notice that the calculation of \(\overline{T}\) has to be done in the disk region itself, whose definition depends on \(\overline{T}\) through the inequality \(|z|<H(\overline{T})\), implying that the \(\overline{T}\) of the disk has to be determined recursively.
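In practice this recursion can be implemented as a simple fixed-point iteration; the sketch below assumes cell arrays of pressure, density and vertical coordinate (code units) and is only meant to illustrate the procedure:

```python
import numpy as np

def disk_temperature_and_height(P, n, z, m, omega0, max_iter=50):
    """Iterate k_B*T_bar and H(T_bar) until the disk region |z| < H stops changing."""
    mask = np.ones_like(z, dtype=bool)          # start from the whole box
    kT, H = P.mean() / n.mean(), 0.0
    for _ in range(max_iter):
        kT = P[mask].mean() / n[mask].mean()    # k_B T_bar over the current disk region
        H = np.sqrt(2.0 * kT / m) / omega0      # instantaneous scale height
        new_mask = np.abs(z) < H
        if np.array_equal(new_mask, mask):      # region no longer changes: converged
            break
        mask = new_mask
    return kT, H, mask
```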
## 3 2D Results
In this section we describe the stratified MRI turbulence using 2D simulations, paying special attention to the difference between stratified and unstratified simulations and to the role played by the scale-separation ratio \(\omega_{c,0}/\Omega_{0}\). In §3.1 we analyze the properties of the turbulence, and in §3.2 we show the evolution of the plasma properties in the disk.
### Turbulence properties in 2D
Figure 1 shows three snapshots of the squared magnetic fluctuations \(\delta B^{2}\) (where \(\delta B=|\delta\mathbf{B}|\) and \(\delta\mathbf{B}=\mathbf{B}-\mathbf{B}_{0}\)) and of the particle density \(n\) for the stratified 2D run ST2D-20 (\(\omega_{c,0}/\Omega_{0}=20\)). Panels \(a\) and \(b\) show the initial formation of nonlinear channel flows at time \(t=2.5\)\([2\pi/\Omega_{0}]\). These channel flows appear both in \(\delta B^{2}\) and \(n\) and are more clearly formed within the disk region (\(|z|<H(\overline{T})\)), which is marked by the horizontal dotted lines in all the panels. Panels \(c\)
and \(d\) show the same quantities but at \(t=3.25\) [\(2\pi/\Omega_{0}\)], when the channel flows have already experienced reconnection, breaking into a turbulent state. At that moment, the disk thickness has increased due to plasma heating and significant particle and magnetic field outflows occur. This turbulent state continues during the entire simulation and is accompanied by a permanent puffing up of the disk, as shown by panels \(e\) and \(f\), corresponding to \(t=4.5\) [\(2\pi/\Omega_{0}\)].
Our 2D runs also show the formation of a large scale, preferentially toroidal dynamo-like field, similar to those observed in previous MHD studies (e.g., Bai & Stone, 2013; Salvesen et al., 2016). This is seen in panel \(a\) of Fig. 2, which shows \(\langle B_{y}\rangle_{x}\) as a function of time \(t\) and of the vertical coordinate \(z\). We see that a net \(\langle B_{y}\rangle_{x}\) is formed, with a maximum amplitude of \(\sim 30-40\)\(B_{0}\) during the nonlinear stage of the stratified simulation and with opposite signs inside and outside the disk. The amplitude of \(\langle B_{y}\rangle_{x}\) is very close to the one observed by previous equivalent 3D MHD simulations of the stratified MRI with initial \(\beta_{0}=100\) in the disk midplane (Salvesen et al., 2016).
In order to explore the effect of the scale-separation ratio \(\omega_{c,0}/\Omega_{0}\) on the 2D turbulence structure, Fig. 3 shows \(\delta B^{2}\) and \(n\) in the nonlinear MRI state (\(t=4\) [\(2\pi/\Omega_{0}\)]) for an analogous run using \(\omega_{c,0}/\Omega_{0}=3.5\) (run ST2D-3.5) instead of \(\omega_{c,0}/\Omega_{0}=20\). By comparing with panels \(e\) and \(f\) of Fig. 1, we see that the scale-separation ratio does not appear to produce a qualitative change in the properties of the 2D MRI turbulence, preserving features such as disk thickness increase and the presence of outflows. Run ST2D-3.5 also shows significant dynamo-like activity, as seen in panel \(b\) of Fig. 2, where a net \(\langle B_{y}\rangle_{x}\) field appears similarly to the case of run ST2D-20 in panel \(a\).
The weak effect of \(\omega_{c,0}/\Omega_{0}\) on the field structure of the stratified
Figure 1: Squared magnetic fluctuations \(\delta B^{2}\) (left) and plasma density \(n\) (right) for simulation ST2D-20 at \(t=2.5\), \(3.25\) and \(4.5\) [\(2\pi/\Omega_{0}\)]. The black arrows in the left panels show the total magnetic field direction. The horizontal dotted lines in all the panels mark the region defined as disk region in our analysis (i.e., \(|z|<H(\overline{T})\)).
Figure 2: Panels \(a\), \(b\) and \(c\) show the toroidal magnetic component \(B_{y}\) averaged over the \(x\)-axis, \(\langle B_{y}\rangle_{x}\), as a function of the time \(t\) and the vertical coordinate \(z\) for the 2D runs ST2D-20, ST2D-3.5 and UN2D-20, respectively. The black dashed line in panel \(a\) marks the disk region (\(|z|<H(\overline{T})\)). Similarly, panel \(d\) shows \(B_{y}\) averaged over the \(x-y\) plane for the 3D run ST3D-3.5.
MRI can also be seen in the magnetic field power spectra, which are shown in Fig. 4 for runs with \(\omega_{c,0}/\Omega_{0}=7\), 10, 14, 20 and 28 (all of them at \(t\approx 5\)\([2\pi/\Omega_{0}]\)). Panel \(a\) shows the spectra of the poloidal component of the magnetic field, \(d(|\hat{B}_{x}(k)|^{2}+|\hat{B}_{z}(k)|^{2})/dln(k)\) (\(\hat{B}_{x}(k)\) and \(\hat{B}_{z}(k)\) are the Fourier transforms of the \(x\) and \(z\) components of \(\mathbf{B}\) and \(k\) is the corresponding wave number), while panel \(b\) shows the spectra of the toroidal component, \(d|\hat{B}_{y}(k)|^{2}/dln(k)\). For all the values of \(\omega_{c,0}/\Omega_{0}\), the spectra show similar shapes, with a break at \(k\rho_{l}\sim 1\) (\(k\rho_{l}=1\) is marked by the colored dots on each line), where \(\rho_{l}\) is the typical particle Larmor radius, defined as \(\rho_{l}\equiv mc(3k_{B}\overline{T}/m)^{1/2}/|q|\langle B^{2}\rangle_{v}^{1/2}\). Their main difference is the location of the break of the spectra, which moves to larger wave numbers (in units of \(\lambda_{MRI}^{-1}=\Omega_{0}/2\pi v_{A,0}\)) as \(\omega_{c,0}/\Omega_{0}\) increases, implying a growing separation between the kinetic (\(\rho_{l}\)) and the MHD (\(\lambda_{MRI}\)) scales. However, apart from this growing separation between scales, increasing \(\omega_{c,0}/\Omega_{0}\) does not significantly affect the qualitative shape of the power spectra.
Panels \(a\) and \(b\) also compare \(d(|\hat{B}_{x}(k)|^{2}+|\hat{B}_{z}(k)|^{2})/dk\) and \(d|\hat{B}_{y}(k)|^{2}/dk\) with power-law functions of index \(\nu\) (\(\propto k^{-\nu}\)) and show that, at sub-Larmor scales (\(k\rho_{l}\gtrsim 1\)), the poloidal and toroidal spectra are approximately consistent with \(\nu\approx 3\). This \(\nu\approx 3\) behavior is expected for kinetic Alfven wave turbulence (e.g., Passot & Sulem, 2015) and it has also been observed in previous unstratified 2D and 3D kinetic simulations (Kunz et al., 2016; Inchingolo et al., 2018; Bacchini et al., 2022). Above Larmor scales (\(k\rho_{l}<1\)) the poloidal spectra show a peak at \(k\,2\pi v_{A,0}/\Omega_{0}\sim 1\), followed by a power-law region characterized by \(\nu\approx 5/3\). The toroidal spectra, on the other hand, have a peak at \(k\,2\pi v_{A,0}/\Omega_{0}\sim 0.2\), followed first by an approximately flat region for \(0.2\lesssim k\,2\pi v_{A,0}/\Omega_{0}\lesssim 1\) and then by a steeper \(\nu\approx 2\) region for \(k\,2\pi v_{A,0}/\Omega_{0}\gtrsim 1\). The nearly flat behavior of the toroidal spectra at \(0.2\lesssim k\,2\pi v_{A,0}/\Omega_{0}\lesssim 1\) is significantly affected by the presence of the dynamo-like field. Indeed, panels \(c\) and \(d\) of Fig. 4 show the poloidal and toroidal spectra of the "turbulent" part of the magnetic field, \(\mathbf{B}^{T}\), which is obtained by removing the contribution from the dynamo-like field:
\[\mathbf{B}^{T}\equiv\mathbf{B}-\mathbf{B}^{D}, \tag{15}\]
where \(\mathbf{B}^{D}\equiv\langle\mathbf{B}\rangle_{x}\). While the turbulent and total spectra of the poloidal field are very similar (see panels \(a\) and \(c\), respectively), the turbulent spectra of the toroidal field (panel \(d\)) decrease substantially at \(0.2\lesssim k\,2\pi v_{A,0}/\Omega_{0}\lesssim 1\) compared to the total toroidal spectra (panel \(b\)), maintaining their \(\nu\approx 2\) behavior for \(k\,2\pi v_{A,0}/\Omega_{0}\gtrsim 1\). The \(\nu\approx 5/3\) and 2 behaviors of the poloidal and toroidal components of the turbulent field are similar to the unstratified results from 3D MHD simulations of the MRI (e.g., Walker et al., 2016), as well as to the ones of 3D kinetic simulations (Kunz et al., 2016; Bacchini et al., 2022).
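As an illustration of this decomposition, a simplified diagnostic for the turbulent toroidal spectrum of a 2D snapshot is sketched below, assuming \(B_{y}\) is stored as a 2D array over \((x,z)\) and, for simplicity, Fourier transforming only along \(x\); this is a sketch of the idea, not the exact diagnostic used for Fig. 4.

```python
import numpy as np

def turbulent_toroidal_spectrum(By, dx):
    """Simplified 1D (along x) spectrum of the turbulent toroidal field."""
    By_D = By.mean(axis=0, keepdims=True)     # dynamo-like part: <B_y>_x at each z
    By_T = By - By_D                          # turbulent part, Eq. 15
    Byk = np.fft.rfft(By_T, axis=0)           # Fourier transform along x at each z
    kx = 2.0 * np.pi * np.fft.rfftfreq(By.shape[0], d=dx)
    power = (np.abs(Byk) ** 2).mean(axis=1)   # average over z
    return kx, kx * power                     # ~ d|B_y(k)|^2 / d ln(k)
```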
### Disk properties in 2D
In this section we show the evolution of the average disk properties in our 2D runs, paying attention to the way these properties are affected by the presence of stratification and by the scale-separation ratio \(\omega_{c,0}/\Omega_{0}\).
In order to assess the effect of stratification, we compare run ST2D-20 with the analogous unstratified run UN2D-20, with the same initial conditions as in the disk midplane of run ST2D-20. Figure 5 shows \(\delta B^{2}\) and \(n\) for run UN2D-20 at the moment when nonlinear channel flows appear (\(t=2.5\)\([2\pi/\Omega_{0}]\)) and then when these channel flows have reconnected and broken into turbulence (\(t=3\)\([2\pi/\Omega_{0}]\)). At first glance, this figure suggests that the evolution of the MRI turbulence in run UN2D-20 is similar to the one in the disk region of run ST2D-20. However, the average plasma properties between these two runs differ substantially, as shown in Fig. 6. Panel \(a\) of Fig. 6 shows the evolution of \(\langle B^{2}\rangle_{v}\) in the disk of run ST2D-20 (solid blue line) and in the whole volume of the analogous unstratified run UN2D-20 (solid red line). In both simulations there is an initial exponential growth regime that transitions to a much slower growth regime at \(t\approx 3\)\([2\pi/\Omega_{0}]\). Also, neither run shows a complete magnetic field saturation. However, at \(t\gtrsim 3\)\([2\pi/\Omega_{0}]\), at any given time the unstratified case
Figure 4: Panels \(a\) and \(b\) show the power spectra of the poloidal and toroidal components of the total magnetic field, \(d(|\hat{B}_{x}(k)|^{2}+|\hat{B}_{z}(k)|^{2})/dln(k)\) and \(d|\hat{B}_{y}(k)|^{2}/dln(k)\), for 2D stratified runs with \(\omega_{c,0}/\Omega_{0}\)=7 (pink), 10 (green), 14 (red), 20 (purple) and 28 (blue). Panels \(c\) and \(d\) are analogous to panels \(a\) and \(b\), but considering only the turbulent component of the magnetic field, \(\mathbf{B}^{T}\). In all the panels, the spectra use arbitrary normalization and the wave number \(k\) is normalized by \(\Omega_{0}/2\pi v_{A,0}\).
Figure 3: Panels \(a\) and \(b\) are analogous to panels \(e\) and \(f\) of Fig. 1 but for a run using a much smaller scale-separation ratio \(\omega_{c,0}/\Omega_{0}=3.5\) (run ST2D-3.5).
reaches a \(\langle B^{2}\rangle_{v}\) magnitude \(\sim 5-10\) times larger than in the stratified case. This factor \(\sim 5-10\) larger amplification applies similarly to the three components of the magnetic field, as can be seen from panel \(b\) of Fig. 6.
Interestingly, the \(B_{y}\) component in the unstratified case appears to be dominated by a large scale, dynamo-like component, similarly to what occurs in the stratified runs. This can be seen from panel \(c\) of Figure 2, which shows \(\langle B_{y}\rangle_{x}\) for run UN2D-20. We see that by \(t=6\)\([2\pi/\Omega_{0}]\), \(\langle B_{y}\rangle_{x}\) reaches an amplitude \(\sim 100B_{0}\), similar to the one of the total \(B_{y}\) component, as shown by the dashed red line in panel \(b\) of Fig. 6. However, whereas the dynamo activity in the analogous stratified run ST2D-20 (shown in panel \(a\) of Fig. 2) produces a rather homogeneous \(\langle B_{y}\rangle_{x}\) in the disk region (\(|z|<H(\overline{T})\), marked by the dashed black lines), the characteristic wavelength of the large scale \(B_{y}\) field in the unstratified case is \(\sim 4\) times smaller. We thus interpret the large scale \(B_{y}\) field in the unstratified case as a growth in the wavelength of the MRI modes, being therefore of a different nature compared to the larger scale \(\langle B_{y}\rangle_{x}\) of the stratified runs.
The time when the stratified simulation significantly slows down its growth (\(t\approx 3\)\([2\pi/\Omega_{0}]\)) coincides with the moment when stratification effects, such as outflows and disk expansion, become important, as can be seen from Fig. 1. Notice that this moment coincides with the time when the disk temperature starts increasing significantly, as we can see from the dashed-blue line in panel \(c\) of Fig. 6, showing a connection between energy dissipation, disk expansion and outflow generation.
Despite the fact that the magnetic field is amplified less in the stratified case, the average cold sigma parameter \(\overline{\sigma}_{\rm c}\) (\(\equiv\langle B^{2}\rangle_{v}/4\pi\langle n\rangle_{v}mc^{2}\)) is larger in the nonlinear regime of these runs. This is shown by the solid blue and solid red lines in panel \(c\) of Fig. 6 for the stratified and unstratified cases, respectively. This can be explained by the decrease in the disk density \(n\) due to its expansion in the stratified runs. Finally, in both cases the plasma beta \(\overline{\beta}\) (\(\equiv\langle 8\pi P\rangle_{v}/\langle B^{2}\rangle_{v}\)) reaches a nearly steady state regime for \(t\gtrsim 3\)\([2\pi/\Omega_{0}]\), as shown by the dashed lines in panel \(a\) of Fig. 6. However, while \(\overline{\beta}\sim 2\) in the unstratified case, \(\overline{\beta}\sim 0.4\) in the stratified case, which shows that stratification produces a disk that is magnetic-pressure supported, consistent with previous MHD stratified simulations (Bai & Stone, 2013; Salvesen et al., 2016).
In order to explore the role of the scale-separation ratio \(\omega_{c,0}/\Omega_{0}\) in our stratified runs, panels \(a\) and \(b\) of Figure 7 show the quantities \(\langle B^{2}\rangle_{v}\), \(\overline{\beta}\), \(\overline{\sigma}_{c}\), and \(\overline{T}\) for stratified simulations with \(\omega_{c,0}/\Omega_{0}=7\), 10, 14, 20 and 28. We see that increasing \(\omega_{c,0}/\Omega_{0}\) produces a slight increase in \(\langle B^{2}\rangle_{v}\) and \(\overline{\sigma}_{c}\), not yet showing a clear convergence for the highest values of \(\omega_{c,0}/\Omega_{0}\). (Note that the time origins of these simulations were slightly adjusted to align their exponential growth temporally, facilitating comparison). The evolutions of \(\overline{T}\) and \(\overline{\beta}\) exhibit some variations within a factor of \(\sim 2\), but without showing a discernible dependence on \(\omega_{c,0}/\Omega_{0}\).
As discussed in §3.1, another important feature of the stratified 2D simulations is a dynamo-like action that produces a significant \(B_{y}^{D}=\langle B_{y}\rangle_{x}\) field, as shown in panels \(a\) and \(b\) of Fig. 2. The magnetic energy in the disk provided by the dynamo-like field \({\bf B}^{D}\) in run ST2D-20 is shown by the solid lines in the panel \(a\) of Fig. 8, where the red-solid, black-solid and green-solid lines show the contributions by the three components of \({\bf B}^{D}\): \(\langle(B_{x}^{D})^{2}\rangle_{v}\), \(\langle(B_{y}^{D})^{2}\rangle_{v}\)
Figure 5: The squared magnetic fluctuations \(\delta B^{2}\) (left) and the plasma density \(n\) (right) for simulation UN2D-20 at \(t=2.5\) and \(3\)\([2\pi/\Omega_{0}]\). The black arrows in the \(\delta B^{2}\) panels show the total magnetic field projected on the \(x-z\) plane.
Figure 6: Plasma properties as a function of time \(t\) for the disk region of the stratified run ST2D-20 (blue) and for the entire domain of the unstratified run UN2D-20 (red), respectively. Panel \(a\) shows \(\langle B^{2}\rangle_{v}\) (solid) and \(\overline{\beta}\) (dashed). Panel \(b\) shows the contributions to \(\langle B^{2}\rangle_{v}\) by the \(x\) (solid), \(y\) (dashed) and \(z\) (dotted) components of the magnetic field. Panel \(c\) shows \(\overline{\sigma}_{\rm c}\) (solid) and \(\overline{T}\) (dashed).
and \(\langle(B_{z}^{D})^{2}\rangle_{v}\), respectively. We see that the dynamo-like action within the disk is indeed dominated by the toroidal component of the magnetic field. Panel \(a\) of Fig. 8 also shows in dashed lines the contribution to the magnetic energy provided by the three components of the turbulent magnetic field \(\mathbf{B}^{T}\), which are averaged over the disk volume obtaining \(\langle(B_{x}^{T})^{2}\rangle_{v}\), \(\langle(B_{y}^{T})^{2}\rangle_{v}\) and \(\langle(B_{z}^{T})^{2}\rangle_{v}\) (red-dashed, black-dashed and green-dashed lines, respectively). We see that the turbulent field is dominated by its toroidal component as well and contributes most of the magnetic energy in the disk from the triggering of the MRI turbulence at \(t\approx 2\left[2\pi/\Omega_{0}\right]\) until \(t\approx 3.5\)\([2\pi/\Omega_{0}]\). After that, the toroidal component of the dynamo-like field \(\langle\langle B_{y}\rangle_{x}^{2}\rangle_{z}\) becomes larger (by a factor of \(\sim 2\)) than the toroidal component of the turbulent field.
Panel \(b\) of Fig. 8 shows the total energies in the dynamo-like field \(\mathbf{B}^{D}\) (blue-solid line) and in the turbulent field \(\mathbf{B}^{T}\) (blue-dashed line) for run ST2D-20 (\(\omega_{c,0}/\Omega_{0}=20\)). We see that after \(t\approx 4\left[2\pi/\Omega_{0}\right]\) the energies in the dynamo and turbulent fields are comparable. Thus, in terms of the total magnetic energy, the dynamo-like and turbulent magnetic fields are roughly equally important after the initial period (of \(\sim 1\) orbit after the triggering of the MRI) in which the turbulent magnetic energy dominates. This trend appears to not be significantly affected by the scale-separation ratio \(\omega_{c,0}/\Omega_{0}\). This is shown by the pink-solid and pink-dashed lines in panel \(b\) of Fig. 8, which show the contributions by, respectively, the dynamo-like and turbulent fields to the magnetic energy in the disk of run ST2D-7 (\(\omega_{c,0}/\Omega_{0}=7\)). We see that in this \(\omega_{c,0}/\Omega_{0}=7\) run there is also an initial period of \(\sim 1\) orbit in which the turbulent field energy dominates, followed by a similar contribution to energy by the turbulent and dynamo-like fields.
Thus, we have shown that disk stratification in 2D can change significantly the behavior of the MRI turbulence compared to the unstratified case. Besides producing significant outflows and a puffing up of the disk due to temperature increase, stratification makes the disk turbulence more magnetically dominated (smaller \(\beta\)) compared to an analogous 2D unstratified simulation. Stratification also gives rise to a significant large scale dynamo-like activity, which, after \(\sim 1\) orbit from the triggering of the MRI, contributes to the magnetic energy in the nonlinear MRI stage as much as the turbulent field. We also found that increasing the scale-separation ratio produces slightly more magnetized disks, with no complete convergence for the largest values of \(\omega_{c,0}/\Omega_{0}\) used, consistent with the result obtained by Bacchini et al. (2022), who found that a scale separation ratio \(\omega_{c,0}/\Omega_{0}\gtrsim 60\) is required for a complete convergence in the 2D case.
In the next section we compare these 2D results with a 3D simulation showing that, although some differences appear, most of our 2D results are reasonably well reproduced in the 3D case.
## 4 3D MRI turbulence
In this section we present results from a 3D stratified simulation, run ST3D-3.5 (\(\omega_{c,0}/\Omega_{0}=3.5\)) and compare them with the analogous 2D stratified run ST2D-3.5.
### Turbulence properties in 2D vs 3D
Figure 9 shows three snapshots of \(\delta B^{2}\) for the stratified 3D run ST3D-3.5 at times \(t=1.5\), \(2.5\) and \(5.5\)\([2\pi/\Omega_{0}]\). At the qualitative level, there are many similarities with the turbulence structure of the 2D runs. At \(t=1.5\)\([2\pi/\Omega_{0}]\), nonlinear channel flows are present in \(\delta B^{2}\), which look similar to the ones shown in panel \(a\) of Fig. 1 for
Figure 8: Panel \(a\): the solid lines show \(\langle(B_{x}^{D})^{2}\rangle_{v}\) (red), \(\langle(B_{y}^{D})^{2}\rangle_{v}\) (black), and \(\langle(B_{z}^{D})^{2}\rangle_{v}\) (green), respectively, for run ST2D-20 (\(\omega_{c,0}/\Omega_{0}=20\)). The dashed lines show \(\langle(B_{x}^{T})^{2}\rangle_{v}\) (red), \(\langle(B_{y}^{T})^{2}\rangle_{v}\) (black) and \(\langle(B_{z}^{T})^{2}\rangle_{v}\) (green) for the same run and in the same region. Panel \(b\): the total energies in the dynamo-like field \(\mathbf{B}^{D}\) (solid line) and in the turbulent field \(\mathbf{B}^{T}\) (dashed line) for the runs ST2D-20 (\(\omega_{c,0}/\Omega_{0}=28\); blue line) and ST2D-7 (\(\omega_{c,0}/\Omega_{0}=7\); pink line).
Figure 7: Plasma properties as a function of time \(t\) for the disk region of 2D stratified runs using \(\omega_{c,0}/\Omega_{0}=7\) (pink), 10 (green), 14 (red), 20 (purple) and 28 (blue). Panel \(a\) shows \(\langle B^{2}\rangle_{v}\) (solid) and \(\overline{\beta}\) (dashed). Panel \(b\) shows \(\overline{\sigma_{c}}\) (solid) and \(\overline{T}\) (dashed).
run ST2D-20. At \(t=2.5\) [\(2\pi/\Omega_{0}\)], the channel flows have already reconnected and broken into turbulence, with a significant increase in the disk thickness, similarly to what was shown for run ST2D-20 in panel \(c\) of Fig. 1. This trend continues at later times, as can be seen in panel \(c\) of Fig. 9, which shows \(\delta B^{2}\) at \(t=5.5\) [\(2\pi/\Omega_{0}\)]. Figure 10 shows the same snapshots of Fig. 9 but for the particle density \(n\). At \(t=1.5\) [\(2\pi/\Omega_{0}\)] nonlinear channel flows are present in \(n\), similarly to what is shown in panel \(b\) of Fig. 1 for run ST2D-20. At \(t=2.5\) and 4 [\(2\pi/\Omega_{0}\)], the disk appears much more turbulent and progressively thicker, as also shown for run ST2D-20 in panels \(d\) and \(f\) of Fig. 1.
Our 3D run also shows the action of a dynamo-like mechanism, as can be seen from panel \(d\) of Fig. 2, which shows \(B_{y}\) averaged over the \(x-y\) plane, \(\langle B_{y}\rangle_{x-y}\), as a function of time \(t\) and of the vertical coordinate \(z\). We see that a net \(\langle B_{y}\rangle_{x-y}\) field is formed, with an amplitude similar to the 2D cases shown in panels \(a\) and \(b\) of Fig. 2 (runs ST2D-20 and ST2D-3.5). However, while the dynamo-like field in 2D shows significant time-variability and inhomogeneity along the \(z\)-coordinate, in 3D this field appears less variable and more homogeneous.
The behavior of the magnetic power spectrum seems to be quite similar in 2D and 3D. Panels \(a\) and \(b\) of Fig. 11 compare, respectively, the poloidal and toroidal magnetic spectra of the 2D and 3D runs ST2D-3.5 and ST3D-3.5 (blue and green lines, respectively). These runs share the same ratio \(\omega_{c,0}/\Omega_{0}=3.5\), so that the effect of the scale-separation does not affect significantly the comparison. At sub-Larmor scales (\(k\rho_{l}>1\), where \(k\rho_{l}=1\) is marked by the colored dots on each line), we observe a magnetic spectrum with \(\nu\approx 3.3\) (poloidal case) and \(\nu\approx 3.5\) (toroidal case), for both types of runs. These \(\nu\approx 3.3\) and 3.5 spectra are, however, steeper than the ones shown by the 2D runs with higher \(\omega_{c,0}/\Omega_{0}\), showing that a minimum scale-separation ratio is necessary for correctly capturing the behavior of the sub-Larmor part of the spectra. Above Larmor scales (\(k\rho_{l}<1\)), both runs show poloidal and toroidal magnetic field spectra with \(\nu\approx 5/3\) and \(\nu\approx 2\), respectively. These \(\nu\approx 5/3\) and \(\nu\approx 2\) behaviors are maintained when removing the dynamo-like field in both runs (which in 3D is defined as in Eq. 15 but with \(\mathbf{B}^{D}\equiv\langle\mathbf{B}\rangle_{x-y}\)). This is shown in panels \(c\) and \(d\) of Fig. 11 where we show the poloidal and toroidal spectra of \(\mathbf{B}^{T}\). The main effect of removing \(\mathbf{B}^{D}\) is to substantially reduce the contribution of \(k\,2\pi v_{A,0}/\Omega_{0}\lesssim 1\) to the toroidal part of the 2D and 3D spectra. In this way, the peaks of the poloidal and toroidal spectra of \(\mathbf{B}^{T}\) both in 2D and 3D approach \(k\,2\pi v_{A,0}/\Omega_{0}\sim 1\) (although the toroidal part of the turbulent spectrum in 3D has its peak at wavelengths \(\sim 3\) times larger than in 2D). This behavior of the \(\mathbf{B}^{T}\) spectra in runs ST3D-3.5 and ST2D-3.5 above Larmor scales is similar to the ones shown in our stratified 2D runs with higher scale-separation ratio (Fig. 4), as well as in previous unstratified MHD (Walker et al., 2016) and kinetic 3D simulations (Kunz et al., 2016; Bacchini et al., 2022).
### Validation of assumptions
The fact that the peaks of the poloidal and toroidal spectra of \(\mathbf{B}^{T}\) are close to \(k\,2\pi v_{A,0}/\Omega_{0}\sim 1\) in 3D is important for the validation of the shearing coordinates approach used in this work. Indeed, since \(\mathbf{B}^{D}\) only depends on \(t\) and \(z\), subtracting this quantity from the total field does not change the power spectra of the magnetic field for \(k_{x}\) and \(k_{y}\) (\(k_{x}=\hat{x}\cdot\mathbf{k}\) and \(k_{y}=\hat{y}\cdot\mathbf{k}\), where \(\mathbf{k}\) is the wave vector in Fourier space). Thus, the dominance of \(k\,2\pi v_{A,0}/\Omega_{0}\sim 1\) for the poloidal and toroidal components of \(\mathbf{B}^{T}\) implies that the dominant wavelength of the magnetic fluctuations along the \(x\) and \(y\) axes is given by \(\sim 2\pi v_{A,0}/\Omega_{0}\). This is indeed one of the assumptions made in our implementation of the shearing coordinates approach (§2.2), which, combined with the condition \(v_{A,0}/c\ll 1\), allowed us to drop the \(y-\)dependent terms in the field evolution equations 4 and 5, as well as to obtain the momentum and particle position evolution equations 12 and 13. Notice that our shearing coordinates approach also assumes that the electric field in the MRI turbulence is either smaller than or of the same order as the magnetic field (\(|\mathbf{E}|/|\mathbf{B}|\lesssim 1\)). To support this assumption, panels \(a\) and \(b\) of Fig. 12 show, as an example, the distribution of the electric current magnitude \(|\mathbf{J}|\) and the \(|\mathbf{E}|/|\mathbf{B}|\) ratio for runs ST2D-20 and ST3D-3.5 at time \(t=5\,[2\pi/\Omega_{0}]\). We see that in both cases the entire distribution satisfies \(|\mathbf{E}|/|\mathbf{B}|\lesssim 3\),
Figure 10: Panels \(a\), \(b\) and \(c\) show \(n\) for simulation ST3D-3.5 at \(t=1.5\), \(2.5\) and \(4.5\) [\(2\pi/\Omega_{0}\)], respectively.
including the regions with the largest value of \(|\mathbf{J}|\), which are expected to correspond to reconnecting current sheets. This shows that the assumptions made in SS2.2 to derive the plasma evolution equations in our shearing coordinates are fully supported by our obtained MRI turbulence behavior.
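The \(|\mathbf{E}|/|\mathbf{B}|\) distributions of Fig. 12 can be extracted with a diagnostic along the lines of the following sketch (field arrays of shape \((3,\dots)\) on the same grid are assumed):

```python
import numpy as np

def e_over_b_histogram(E, B, bins=100, ratio_max=3.0):
    """Histogram of |E|/|B| over all grid cells."""
    Emag = np.sqrt((E ** 2).sum(axis=0))
    Bmag = np.sqrt((B ** 2).sum(axis=0))
    ratio = Emag / np.maximum(Bmag, 1e-30)        # guard against B = 0 cells
    return np.histogram(ratio.ravel(), bins=bins, range=(0.0, ratio_max))
```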
### Disk plasma properties in 2D vs 3D
In this section we compare the disk plasma properties evolution in 2D and 3D. Panel \(a\) in Fig. 13 shows the evolution of \(\langle B^{2}\rangle_{v}\) in the 2D and 3D runs ST2D-3.5 and ST3D-3.5. In both cases there is an initial exponential growth regime that evolves into a nonlinear regime with a much smaller growth rate at \(t\approx 1.5\) [\(2\pi/\Omega_{0}\)]. Later, in the time interval \(t\sim 1.5-3\) [\(2\pi/\Omega_{0}\)], significant differences appear in the 2D and 3D cases, with the 3D run having a \(\langle B^{2}\rangle_{v}\) amplitude \(\sim 5\) times smaller. This significant difference in \(\langle B^{2}\rangle_{v}\) produces a similar difference in \(\overline{\sigma}_{c}\), as can be seen from the solid blue and green lines in panel \(b\) of Fig. 13. This implies that in that time interval the disk expansion (and therefore its density) is about the same in the two simulations. This is consistent with the fact that their temperatures \(\overline{T}\) reach similar values, as shown by the dashed lines in panel \(b\) of Fig. 13. Finally, consistently with the behaviors of \(\overline{T}\) and \(\overline{\sigma}_{c}\), \(\overline{\beta}\) is \(\sim 5\) times larger in the 3D case during the time period \(t\sim 1.5-3.5\) [\(2\pi/\Omega_{0}\)]. Later, when \(t\gtrsim 3\) [\(2\pi/\Omega_{0}\)] there is a transition towards a state in which the amplitudes of \(\langle B^{2}\rangle_{v}\) in 2D and 3D tend to give more similar values, which also tends to produce similar values of \(\overline{\beta}\). Indeed, for \(t\gtrsim 4\) [\(2\pi/\Omega_{0}\)], \(\overline{\beta}\approx 0.5\) in both runs while \(\langle B^{2}\rangle_{v}\) is only a factor of \(\sim 2\) larger in the 2D case.
The smaller magnetic amplification shown by the 3D run in the time interval \(t\sim 1.5-4\) [\(2\pi/\Omega_{0}\)] is consistent with recent unstratified PIC simulations of the MRI that show that using 3D runs is important to allow efficient reconnection of the toroidal magnetic field component (Bacchini et al., 2022). By the end of the simulations, however, the 2D and 3D magnetic energies only differ by a factor of \(\sim 2\). This can be explained by the growing importance of the dynamo-like field in the stratified 2D and 3D runs, which evolves very similarly in these two types of runs. The progressively growing importance of the dynamo-like field in 2D can be seen from panel \(c\) of Fig. 13, which shows that in run ST2D-3.5, \(|\mathbf{B}^{D}|^{2}\) (solid-blue line) starts smaller than the turbulent part of the magnetic energy density \(|\mathbf{B}^{T}|^{2}\) (dotted-blue line) for \(t\lesssim 4\) [\(2\pi/\Omega_{0}\)], but afterwards it becomes comparable to \(|\mathbf{B}^{T}|^{2}\). This is indeed consistent with what was shown for 2D runs with larger scale-separation ratios in Fig. 8. In the 3D run ST3D-3.5 this increase in the dynamo-like field importance is even more significant, since \(|\mathbf{B}^{D}|^{2}\) (solid-green line) becomes \(\sim 5\) times larger than \(|\mathbf{B}^{T}|^{2}\) (dotted-green line) at \(t\gtrsim 4\) [\(2\pi/\Omega_{0}\)], given that 3D runs dissipate \(|\mathbf{B}^{T}|^{2}\) more efficiently via reconnection. Since \(|\mathbf{B}^{D}|^{2}\) has essentially the same values in 2D and 3D, it is thus expected that, by the end of the simulations, \(\langle B^{2}\rangle_{v}\) only differs by a factor of \(\sim 2\) between the 2D and 3D cases.
In §5.2 we show that the similarity between 2D and 3D by the end of the runs is also reproduced when analyzing the MRI-driven effective viscosity.
## 5 Effective viscosity
In this section we analyze the effective disk viscosity caused by the MRI turbulence. This viscosity is quantified making use of the \(\alpha\) parameter (Shakura and Sunyaev, 1973), defined as the \(xy\) component of the plasma stress tensor \(T_{xy}\), normalized by the plasma pressure, \(\alpha=T_{xy}/P\). This stress tensor component \(T_{xy}\) has three contributions: the Maxwell stress \(M_{xy}=-B_{x}B_{y}/4\pi\), the Reynolds stress \(R_{xy}=mnV_{x}V_{y}\), where \(\mathbf{V}=(V_{x},V_{y},V_{z})\) is the fluid velocity, and the anisotropic stress \(A_{xy}=-(P_{\perp}-P_{\parallel})B_{x}B_{y}/B^{2}\), where \(P_{\perp}\) and \(P_{\parallel}\) are the plasma pressures perpendicular and parallel to the local magnetic field. Notice that, even though in the calculation of \(R_{xy}\) we assume non-relativistic fluid velocities, in our simulations individual particles can still acquire relativistic velocities. Thus \(\mathbf{V}\) is calculated as \(\mathbf{V}=\langle\mathbf{p}\rangle_{p}/m\langle\gamma\rangle_{p}\), where \(\mathbf{p}\) and \(\gamma\) are the momenta and Lorentz factors of the particles in a given fluid element and \(\langle\,\rangle_{p}\) denotes an average over those particles. In this way we ensure that the fluid velocity \(\mathbf{V}\) corresponds to the velocity of the reference frame where the average particle momentum within a fluid element vanishes.
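A minimal sketch of this stress decomposition, assuming cell-centered moments (mass density \(\rho=nm\), the fluid velocity computed as above, and the pressures \(P_{\parallel}\), \(P_{\perp}\)) together with the magnetic field on the same grid:

```python
import numpy as np

def alpha_decomposition(Bx, By, B2, rho, Vx, Vy, P_par, P_perp):
    """Volume-averaged Maxwell, Reynolds and anisotropic contributions to alpha."""
    P = P_par / 3.0 + 2.0 * P_perp / 3.0            # isotropic pressure
    M_xy = -Bx * By / (4.0 * np.pi)                 # Maxwell stress
    R_xy = rho * Vx * Vy                            # Reynolds stress
    A_xy = -(P_perp - P_par) * Bx * By / B2         # anisotropic stress
    alpha_M = M_xy.mean() / P.mean()
    alpha_R = R_xy.mean() / P.mean()
    alpha_A = A_xy.mean() / P.mean()
    return alpha_M, alpha_R, alpha_A, alpha_M + alpha_R + alpha_A
```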
Figure 11: Panels \(a\) and \(b\) show the power spectra of the poloidal and toroidal components of the magnetic field, \(d(|\hat{B}_{x}(k)|^{2}+|\hat{B}_{z}(k)|^{2})/dln(k)\) and \(d|\hat{B}_{y}(k)|^{2}/dln(k)\), respectively, for the 2D and 3D stratified runs ST2D-3.5 and ST3D-3.5. Panels \(c\) and \(d\) show the same as in panels \(a\) and \(b\), respectively, but considering only the turbulent field \(\mathbf{B}^{T}\).
Figure 12: Panels \(a\) and \(b\) show the distribution of current density magnitude \(|\mathbf{J}|\) and the \(|\mathbf{E}|/|\mathbf{B}|\) ratio in the stratified runs ST2D-20 and ST3D-3.5 at time \(t=5\) [\(2\pi/\Omega_{0}\)], respectively.
### Effect of stratification and \(\omega_{c,0}/\Omega_{0}\) on viscosity
Fig. 14 shows, as a solid blue line, the time evolution of the average parameter \(\overline{\alpha}\) (\(\equiv\langle T_{xy}\rangle_{v}/\langle P\rangle_{v}\)) for run ST2D-20, along with the contributions from the Maxwell, Reynolds and anisotropic stresses: \(\overline{\alpha}_{M}\) (\(\equiv\langle M_{xy}\rangle_{v}/\langle P\rangle_{v}\); dotted line), \(\overline{\alpha}_{R}\) (\(\equiv\langle R_{xy}\rangle_{v}/\langle P\rangle_{v}\); dot-dashed line) and \(\overline{\alpha}_{A}\) (\(\equiv\langle A_{xy}\rangle_{v}/\langle P\rangle_{v}\); dashed line), respectively. We see that \(\overline{\alpha}\) reaches a saturated value of \(\overline{\alpha}\sim 1\), which is dominated by the Maxwell stress, with the contributions to \(\overline{\alpha}\) following the ordering \(\overline{\alpha}_{M}>\overline{\alpha}_{R}>\overline{\alpha}_{A}\). The fact that \(\overline{\alpha}_{M}\sim 1\) is consistent with the dominance of magnetic pressure compared to particle pressure (\(\overline{\beta}\lesssim 1\)) seen in panel \(a\) of Fig. 6 for the same run ST2D-20. We compare these results with the ones of the analogous unstratified run UN2D-20, where we find a similar ordering of the contributions to \(\overline{\alpha}\), \(\overline{\alpha}_{M}>\overline{\alpha}_{R}>\overline{\alpha}_{A}\), but with a \(\sim 4\) times smaller \(\overline{\alpha}_{M}\). This difference is consistent with the \(\sim 4\) times larger \(\overline{\beta}\) obtained in the unstratified run, implying a significant effect of stratification on the disk viscosity in collisionless studies of the MRI. Notably, the importance of \(\overline{\alpha}_{R}\) in our results seems to contradict previous unstratified kinetic studies (Kunz et al., 2016; Bacchini et al., 2022), but is in line with the findings from MHD simulations (Bai & Stone, 2013; Salvesen et al., 2016).
We also measured the effect of \(\omega_{c,0}/\Omega_{0}\) on the behavior of \(\overline{\alpha}_{M}\), \(\overline{\alpha}_{R}\), \(\overline{\alpha}_{A}\) and the total \(\overline{\alpha}\), which is shown in panel \(a\) of Fig. 15. We see that, although \(\overline{\alpha}\) fluctuates by factors of order unity, there is no discernible dependence of this quantity on \(\omega_{c,0}/\Omega_{0}\), implying that the scale separation used in our 2D runs appears to be large enough to accurately capture the behavior of the MRI-driven viscosity. The blue lines in panel \(b\) of Fig. 15 also compare the contribution to \(\overline{\alpha}_{M}\) by the turbulent field \(\mathbf{B}^{T}\) (dashed lines) and the dynamo-like field \(\mathbf{B}^{D}\) (solid lines) in the ST2D-28 run (\(\omega_{c,0}/\Omega_{0}=28\)), which we name \(\overline{\alpha}_{M}^{T}\) and \(\overline{\alpha}_{M}^{D}\), respectively. We see that \(\overline{\alpha}_{M}^{T}\) dominates until \(t\sim 4\) [\(2\pi/\Omega_{0}\)]. After that moment \(\overline{\alpha}_{M}^{T}\) and \(\overline{\alpha}_{M}^{D}\) are comparable, with fluctuating differences of a factor \(\sim 2-3\). A similar behavior is obtained for \(\overline{\alpha}_{M}^{T}\) and \(\overline{\alpha}_{M}^{D}\) in run ST2D-7 (\(\omega_{c,0}/\Omega_{0}=7\)), which are shown by dashed-pink and solid-pink lines, respectively. This implies that no clear effect of the scale-separation ratio on \(\overline{\alpha}_{M}^{T}\) and \(\overline{\alpha}_{M}^{D}\) is observed in our simulations. The fact that these quantities become comparable after \(t\sim 4\) [\(2\pi/\Omega_{0}\)] is in line with the behaviors of \(|\mathbf{B}^{T}|^{2}\) and \(|\mathbf{B}^{D}|^{2}\) for runs ST2D-28 and ST2D-7, which also become comparable in the same time period, as shown in panel \(b\) of Fig. 8.
### Viscosity in 2D vs 3D
Panel \(a\) of Fig. 16 compares the effective viscosities of the 3D and 2D runs ST3D-3.5 and ST2D-3.5, respectively. We see that for \(t\gtrsim 1.5\) [\(2\pi/\Omega_{0}\)], the viscosity of the 3D run has a nearly steady value of \(\overline{\alpha}\approx 0.5\). For \(t\sim 1.5-3.5\) [\(2\pi/\Omega_{0}\)], the 3D \(\overline{\alpha}\) is \(\sim 3-4\) times smaller than in the 2D case, while for \(t\gtrsim 3.5\) [\(2\pi/\Omega_{0}\)] the \(\overline{\alpha}\) of the 2D and 3D runs become more similar, differing by a maximum factor of \(\sim 2\). The time dependence of the difference between the 2D and 3D values of \(\overline{\alpha}\) is consistent with the fact that, initially, the 3D \(\overline{\beta}\) is \(\sim 3-5\) times larger than in 2D, with a subsequent period at \(t\gtrsim 3.5\) [\(2\pi/\Omega_{0}\)] in which both \(\overline{\beta}\)'s acquire essentially the same value, as shown by the dashed blue (2D) and green (3D) lines in panel \(a\) of Fig. 13.
These results reinforce the idea that, when the dynamo-like field becomes either dominant (3D) or comparable to the turbulent field (2D) at \(t\gtrsim 3.5\) [\(2\pi/\Omega_{0}\)], the 2D and 3D runs produce fairly similar results, which include the value of the (Maxwell stress-dominated) disk viscosity. When that happens, \(\overline{\alpha}_{M}\) itself is significantly affected by the dynamo-like field. This can be seen from panel \(b\) of Figure 16, which shows the contributions of the dynamo-like magnetic field (solid) and the turbulent magnetic field (dashed) to the Maxwell stress, \(\overline{\alpha}_{M}^{D}\) and \(\overline{\alpha}_{M}^{T}\), in the 3D run ST3D-3.5 (green) and the 2D run ST2D-3.5 (blue). At \(t>3.5\) [\(2\pi/\Omega_{0}\)], the 3D run exhibits a greater contribution to the Maxwell stress attributed to the dynamo-like field
Figure 14: The solid lines show the evolution of \(\overline{\alpha}\) in the stratified run ST2D-20 (blue) and the unstratified run UN2D-20 (red). The contributions from the Maxwell, Reynolds, and anisotropic stresses (\(\overline{\alpha}_{M}\), \(\overline{\alpha}_{R}\) and \(\overline{\alpha}_{A}\), respectively) are also shown by the dotted, dash-dotted and dashed lines, respectively.
Figure 13: Average disk plasma properties as a function of time \(t\) for the 2D (blue) and 3D (green) stratified runs ST2D-3.5 and ST3D-3.5, respectively. Panel \(a\) shows \(\langle B^{2}\rangle_{v}\) (solid) and \(\overline{\beta}\) (dashed). Panel \(b\) shows \(\overline{\sigma}_{c}\) (solid) and \(\overline{T}\) (dashed). Panel \(c\) shows the magnetic energy densities in the turbulent field \(|\mathbf{B}^{T}|^{2}\) (dashed) and in the dynamo-like field \(|\mathbf{B}^{D}|^{2}\) (solid) in 2D and 3D.
(by a factor \(\sim 5\)). This dominant contribution to the viscosity by the large scale dynamo-like field is in qualitative agreement with the 3D MHD simulations of Bai & Stone (2013) in the case of \(\beta_{0}=100\). Conversely, in the 2D run ST2D-3.5 the dynamo-like contribution to the viscosity becomes comparable to the one of the turbulent field after \(t\sim 3.5\)\([2\pi/\Omega_{0}]\), with some dominance of the former after \(t\sim 4.5\)\([2\pi/\Omega_{0}]\) by a factor of \(\sim 2-3\). This is in line with results shown for the 2D runs ST2D-28 and ST2D-7 (\(\omega_{c,0}/\Omega_{0}=28\) and 7, respectively), for which \(\overline{\alpha}_{M}^{D}\) and \(\overline{\alpha}_{M}^{T}\) were comparable after \(t\sim 4\)\([2\pi/\Omega_{0}]\), with no discernible dependence on \(\omega_{c,0}/\Omega_{0}\).
Both our 2D and 3D runs give rise to an anisotropic stress that is subdominant compared to the Maxwell stress, although the former is larger than the Reynolds stress in the 3D run, which is contrary to what occurs in 2D, suggesting that 3D effects tend to suppress the fluid velocities that give rise to the Reynolds stress.
### Pressure anisotropy behavior
The very small contribution of \(\overline{\alpha}_{A}\) to the total effective viscosity in our 2D and 3D stratified runs seems to contradict previous kinetic simulation studies that suggest that the anisotropic stress can be as important as the Maxwell stress in collisionless disks (e.g., Kunz et al., 2016). This discrepancy, however, appears to be mainly due to the small \(\beta\) regime reached in the nonlinear state of our simulations. To demonstrate this point, panel \(a\) of Fig. 17 shows the distribution of plasma anisotropy \(P_{\perp}/P_{\parallel}\) and \(\beta_{\parallel}\) in the disk of run ST2D-20 during the time interval \(t=3.5-4.5\)\([2\pi/\Omega_{0}]\), and compares it with a threshold for the growth of unstable mirror modes (black line) obtained from linear Vlasov theory (Hellinger et al., 2006):
\[\frac{P_{\perp}}{P_{\parallel}}=1+\frac{0.77}{(\beta_{\parallel}-0.016)^{0.76}}. \tag{16}\]
We see that in most cases \(P_{\perp}/P_{\parallel}\) tends to be larger than unity and limited by the mirror threshold. As an estimate of the upper limit for the expected importance of \(\alpha_{A}\), one can compute the ratio \(\langle\alpha_{A}/\alpha_{M}\rangle_{v}\) assuming that the pressure anisotropy of the plasma is given by Eq. 16. In that case we would have
\[\Big{\langle}\frac{\alpha_{A}}{\alpha_{M}}\Big{\rangle}_{v}=\Big{\langle}\frac{(P_{\perp}-P_{\parallel})}{P_{\parallel}}\frac{\beta_{\parallel}}{2}\Big{\rangle}_{v}\lesssim 0.4\langle\beta_{\parallel}^{0.24}\rangle_{v}\sim 0.4\overline{\beta}_{\parallel}^{0.24}, \tag{17}\]
where we have applied Eq. 16 in the limit \(\beta_{\parallel}\gg 0.016\) (\(\overline{\beta}_{\parallel}\approx 0.4\) for \(t\gtrsim 3\)\([2\pi/\Omega_{0}]\), as can be seen from the \(\overline{\beta}_{\parallel}\) evolution for run ST2D-20 shown in Fig. 6). Thus, using \(\overline{\beta}_{\parallel}\approx 0.4\), we obtain \(\langle\alpha_{A}/\alpha_{M}\rangle_{\nu}\lesssim 0.3\). This upper limit is consistent with the fact that \(\overline{\alpha}_{A}\) is much smaller (by a factor of \(\sim 10\)) than \(\overline{\alpha}_{M}\) in run ST2D-20, as shown, respectively, by the dashed blue and dotted blue lines in Fig. 14. Notice that in a hypothetical case in which \(\overline{\beta}_{\parallel}\sim 100\) (e.g., as in Kunz et al., 2016), Eq. 17 would predict comparable contributions from the anisotropic and Maxwell stress with \(\overline{\alpha}_{A}\sim\overline{\alpha}_{M}\).
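This estimate is easy to verify numerically; the short sketch below evaluates the mirror threshold of Eq. 16 and the implied upper bound of Eq. 17:

```python
def mirror_threshold(beta_par):
    return 1.0 + 0.77 / (beta_par - 0.016) ** 0.76          # Eq. 16

def alpha_ratio_bound(beta_par):
    # (P_perp/P_par - 1) * beta_par / 2 with the anisotropy at the mirror threshold
    return (mirror_threshold(beta_par) - 1.0) * beta_par / 2.0

print(alpha_ratio_bound(0.4))    # ~0.3 for the beta reached in run ST2D-20
```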
Panel \(b\) of Fig. 17 shows the same as panel \(a\) but for the 2D run ST2D-3.5. We see that \(P_{\perp}/P_{\parallel}\) is somewhat larger in the case of run ST2D-3.5 for a given \(\beta_{\parallel}\). The larger value of \(P_{\perp}/P_{\parallel}\) is consistent with the smaller scale-separation ratio, as shown by previous PIC simulation studies of the mirror instability driven by a growing background magnetic field (see, e.g., Ley et al., 2023). However, the distribution of \(P_{\perp}/P_{\parallel}\) and \(\beta_{\parallel}\) in the disk of run ST2D-3.5 still follows reasonably well the threshold for the growth of mirror modes
Figure 16: Panel \(a\) shows in solid lines the evolution of \(\overline{\alpha}\) in the 3D run ST3D-3.5 (green) and the 2D run ST2D-3.5 (blue). Their contributions from the Maxwell, Reynolds, and anisotropic stresses (\(\overline{\alpha}_{M}\), \(\overline{\alpha}_{R}\) and \(\overline{\alpha}_{A}\), respectively) are also shown by the dotted, dash-dotted and dashed lines, respectively. Panel \(b\) shows the contributions of the dynamo-like magnetic field (solid) and the turbulent magnetic field (dashed) to the Maxwell stress, \(\overline{\alpha}_{M}^{D}\) and \(\overline{\alpha}_{M}^{T}\), in the 3D run ST3D-3.5 (green) and the 2D run ST2D-3.5 (blue).
presented in Eq. 16, consistent with the essentially absent effect of scale-separation on the dominance of \(\overline{\alpha}_{M}\) in our runs. Panel \(c\) of Fig. 17 shows the behavior of \(P_{\perp}/P_{\parallel}\) and \(\beta_{\parallel}\) in the 3D run ST3D-3.5. We see that the pressure anisotropy behaves similarly in the runs ST2D-3.5 and ST3D-3.5, in agreement with the small contribution of \(\overline{\alpha}_{A}\) to the effective viscosity in the 3D case.
In summary, our 2D and 3D runs give a (Maxwell stress dominated) \(\overline{\alpha}\) with values between \(\sim 0.5\) (3D) and \(\sim 1\) (2D), with a progressively similar behavior of the 2D and 3D runs as the dynamo-like field becomes dominant (\(t\gtrsim 3.5\) [\(2\pi/\Omega_{0}\)]). In this dynamo-dominated regime, \(\overline{\alpha}\) is expected to be mainly produced by the dynamo-like field. Interestingly, this viscosity behavior is very similar to the one obtained from 3D MHD simulations of stratified disk with net vertical field and initial \(\beta=100\)(Salvesen et al., 2016).
## 6 Particle acceleration
Our stratified MRI simulations show significant particle acceleration. In this section, we show that the acceleration efficiency grows as the disk temperature and the scale-separation ratio \(\omega_{c,0}/\Omega_{0}\) increase. Well developed nonthermal tails are observed mainly in our 2D runs, due to their relatively large scale-separation ratio.
### Spectrum evolution in 2D
The evolution of the particle spectrum, \(dn/d\gamma\), calculated in the disk of run ST2D-20 is shown in Fig. 18, where \(\gamma\) is the particle Lorentz factor. The spectra are shown for different values of the disk temperature \(\overline{T}\), instead of at different times. (This allows us to compare spectra from different simulations, removing the fact that different runs may take different times to trigger the MRI and/or to heat the plasma). As the plasma temperature increases, their spectra develop a nonthermal tail that can be approximately described as a power-law with an exponential cut-off,
\[\frac{dn}{d\gamma}\propto(\gamma-1)^{-p}e^{-\gamma/\gamma_{\rm c}}, \tag{18}\]
where \(p\) and \(\gamma_{\rm c}\) are the corresponding spectral index and cut-off Lorentz factor, respectively. This behavior can be seen in Fig. 18, for instance, in the cases \(k_{B}\overline{T}/mc^{2}\approx 5.2\times 10^{-2}\) and \(3.5\times 10^{-1}\). For the first temperature we fitted Eq. 18 using \(p\approx 2.2\) and \(\gamma_{\rm c}\approx 4.8\) (green dotted line), while for the second temperature we used \(p\approx 1.9\) and \(\gamma_{\rm c}\approx 35\) (pink dotted line). These fits were obtained using a maximum likelihood analysis considering only particles with energy larger than \(10k_{B}\overline{T}\).
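A minimal sketch of such a maximum-likelihood fit (not the exact script used for the figures) is given below; `gamma_min` is the Lorentz factor corresponding to the energy cut of \(10k_{B}\overline{T}\), and the normalization of the truncated distribution is computed numerically.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def fit_powerlaw_cutoff(gamma, gamma_min):
    """Maximum-likelihood fit of dn/dgamma ~ (gamma-1)^-p * exp(-gamma/gamma_c)
    to the Lorentz factors above gamma_min (> 1)."""
    g = gamma[gamma > gamma_min]

    def neg_log_like(params):
        p, gc = params
        norm, _ = quad(lambda x: (x - 1.0) ** (-p) * np.exp(-x / gc),
                       gamma_min, np.inf)
        return -np.sum(-p * np.log(g - 1.0) - g / gc) + g.size * np.log(norm)

    res = minimize(neg_log_like, x0=[2.0, 10.0],
                   bounds=[(1.0, 4.0), (1.0, 1e3)])
    return res.x                     # best-fit (p, gamma_c)
```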
### Role of \(\omega_{c,0}/\Omega_{0}\)
The role of the scale-separation ratio \(\omega_{c,0}/\Omega_{0}\) is shown in Fig. 19, which shows the spectra of simulations with \(\omega_{c,0}/\Omega_{0}=7,10,14,20\) and \(28\), for temperatures \(k_{B}\overline{T}/mc^{2}=5.2\times 10^{-2}\) (panel \(a\)) and \(3.5\times 10^{-1}\) (panel \(b\)). For each of these spectra, we show in dotted lines the corresponding fits using power-laws with exponential cut-offs (Eq. 18). The dependences of the fitted \(\gamma_{c}\) and \(p\) on \(\omega_{c,0}/\Omega_{0}\) are shown in panels \(a\) and \(b\) of Fig. 20, respectively. By comparing with the black line in panel \(a\) (\(\gamma_{\rm c}=36(\omega_{c,0}/\Omega_{0})/20\)), we see that for the spectra with temperature \(k_{B}\overline{T}/mc^{2}\approx 3.5\times 10^{-1}\), \(\gamma_{\rm c}\) behaves approximately as \(\gamma_{c}\propto\omega_{c,0}/\Omega_{0}\). For \(k_{B}\overline{T}/mc^{2}\approx 5.2\times 10^{-2}\), on the other hand, \(\gamma_{\rm c}\sim 4-10\) with no clear dependence on \(\omega_{c,0}/\Omega_{0}\).
This discrepancy in how \(\gamma_{c}\) depends on \(\omega_{c,0}/\Omega_{0}\) is likely a manifestation of the underlying acceleration mechanism, which appears to be consistent with the expectation from reconnection driven acceleration. Indeed, the \(\gamma_{c}\) dependence on \(\omega_{c,0}/\Omega_{0}\) for \(k_{B}\overline{T}/mc^{2}\approx 3.5\times 10^{-1}\) is qualitatively consistent with the pair plasma magnetic reconnection results of Werner et al. (2016) in the limit of small system size, \(L\). These results show power-laws with supra-exponential cut-offs (\(dn/d\gamma\propto\gamma^{-p}e^{-\gamma^{2}/\gamma_{c,\rm rec}^{2}}\)) with \(\gamma_{c,\rm rec}\approx 0.1L/\rho_{0}\), where \(\rho_{0}=mc^{2}/eB\) and \(B\) is the magnitude of the magnetic field in the upstream medium of the reconnecting plasmas. The corresponding value of \(L\) in our simulations can be estimated from the power spectrum of the \(x-z\) (poloidal) magnetic energy component, \(d(|\hat{B}_{x}(k)|^{2}+|\hat{B}_{z}(k)|^{2})/d\ln(k)\), for runs with different scale-separation ratios shown in panel \(a\) of Fig. 4 (we use the poloidal magnetic field since this is the component that can experience reconnection in 2D). We see that the poloidal spectra peak at \(k\sim\Omega_{0}/2\pi v_{A,0}\), fairly independently of the scale-separation ratio. Thus a reasonable estimate for \(L\) is \(L\sim 2\pi/k\sim(2\pi)^{2}v_{A,0}/\Omega_{0}\). In addition, we can estimate \(\rho_{0}=mc^{2}/eB\approx c/(\omega_{c,0}f_{B})\), where \(f_{B}\) (\(\equiv(\langle B^{2}\rangle_{v})^{1/2}/B_{0}\)) is the root mean square amplification factor of the magnetic field in the disk at a given time. Thus, if reconnection
Figure 17: Panels \(a\), \(b\) and \(c\) show the distributions of plasma anisotropy \(P_{\perp}/P_{\parallel}\) and \(\beta_{\parallel}\) in the disk of the 2D run ST2D-20 during \(t=3.5-4.5\) [\(2\pi/\Omega_{0}\)], the 2D run ST2D-3.5 during \(t=2.5-3.5\) [\(2\pi/\Omega_{0}\)] and the 3D run ST3D-3.5 during \(t=2-3\) [\(2\pi/\Omega_{0}\)], respectively. The three cases are compared with the thresholds for the growth of unstable mirror modes (black line) and firehose modes (dashed line) obtained from linear Vlasov theory (Hellinger et al., 2006).
Figure 18: The evolution of the particle energy distribution for our fiducial run ST2D-20. Different colors represent different disk temperatures, and the dotted lines correspond to fits to the nonthermal tails using Eq. 18.
is the main driver of particle acceleration in our runs, \(\gamma_{c}\) should be close to \(\gamma_{c,\rm rec}\), which would be given by
\[\gamma_{c,\rm rec}\approx 0.1\frac{L}{\rho_{0}}\approx 36\Big{(}\frac{\omega_{c,0} /\Omega_{0}}{20}\Big{)}\Big{(}\frac{f_{B}}{30}\Big{)}, \tag{19}\]
where we have used that \(v_{A,0}/c=10^{-2}\) in all our simulations. The value of \(f_{B}\) as a function of \(\overline{T}\) is shown in dashed lines in Fig. 21 for different values of \(\omega_{c,0}/\Omega_{0}\). We see that when \(k_{B}\overline{T}/mc^{2}\approx 3.5\times 10^{-1}\), \(f_{B}\approx 30\), fairly independently of the scale-separation ratio. This means that the expected \(\gamma_{c,\rm rec}\) at \(k_{B}\overline{T}/mc^{2}=3.5\times 10^{-1}\) is
\[\gamma_{c,\rm rec}\approx 36\Big{(}\frac{\omega_{c,0}/\Omega_{0}}{20} \Big{)}. \tag{20}\]
The black line in panel \(a\) of Fig. 20 shows the case \(\gamma_{c}=\gamma_{c,\rm rec}\), where \(\gamma_{c,\rm rec}\) is given by Eq. 20. We see that \(\gamma_{c,\rm rec}\) reproduces well the behavior of \(\gamma_{c}\) in our runs with \(k_{B}\overline{T}/mc^{2}=3.5\times 10^{-1}\).
The behavior \(\gamma_{c,\rm rec}\approx 0.1L/\rho_{0}\) expected from reconnection is valid as long as \(L/\sigma_{c}^{\rm th}\rho_{0}\lesssim 40\) (Werner et al., 2016), where \(\sigma_{c}^{\rm th}\) corresponds to the cold sigma parameter in the upstream medium of the reconnection simulations. We estimate \(\sigma_{c}^{\rm th}\) using \(\langle\sigma_{c}\rangle_{v}\) in our runs when \(k_{B}\overline{T}/mc^{2}=3.5\times 10^{-1}\), which is \(\langle\sigma_{c}\rangle_{v}\sim 10-50\) for the range of \(\omega_{c,0}/\Omega_{0}\) considered, as shown by the solid lines in Fig. 21 (see footnote 2). Thus, using Eq. 19, we obtain that
Footnote 2: Since we want to estimate the equivalent of the upstream cold sigma parameter \(\sigma_{c}^{\rm th}\), we are mainly interested in the values of \(\sigma_{c}\) outside the current sheets, where \(\sigma_{c}\) is the largest. Thus, given that in our runs \(\langle\sigma_{c}\rangle_{v}\) (\(=\langle B^{2}/4\pi nmc^{2}\rangle_{v}\)) \(>\overline{\sigma_{c}}\) (\(=\langle B^{2}\rangle_{v}/4\pi\langle n\rangle_{v}mc^{2}\)), we are using \(\langle\sigma_{c}\rangle_{v}\) instead of \(\overline{\sigma_{c}}\) as our estimate of \(\sigma_{c}^{\rm th}\).
\[\frac{L}{\rho_{0}\sigma_{c}}\sim 18\Big{(}\frac{\omega_{c,0}/\Omega_{0}}{20} \Big{)}\Big{(}\frac{f_{B}}{30}\Big{)}\Big{(}\frac{\langle\sigma_{c}\rangle_{v }}{20}\Big{)}^{-1}. \tag{21}\]
Eq. 21 thus implies that all of our simulations satisfy the restriction \(L/\rho_{0}\sigma_{c}\lesssim 40\) when \(k_{B}\overline{T}/mc^{2}=3.5\times 10^{-1}\), even in our run with the largest scale-separation ratio, \(\omega_{c,0}/\Omega_{0}=28\). Interestingly, Fig. 21 also shows that, when \(k_{B}\overline{T}/mc^{2}=5.2\times 10^{-2}\), \(\langle\sigma_{c}\rangle_{v}\sim 1\) and \(f_{B}\approx 20\), implying that for that temperature \(L/\rho_{0}\sigma_{c}\sim 240(\omega_{c,0}/\Omega_{0})/20\gtrsim 40\). This means that, if particle acceleration is driven by magnetic reconnection at \(k_{B}\overline{T}/mc^{2}=5.2\times 10^{-2}\), \(\gamma_{c}\) should not be proportional to \(\omega_{c,0}/\Omega_{0}\). Instead, a weaker dependence on \(L\) is expected, since in that case \(\gamma_{c}\) likely grows more slowly with time as \(\gamma_{c}\propto t^{1/2}\)(Petropoulou and Sironi, 2018; Hakobyan et al., 2021). This possibly explains why we do not observe a clear dependence of \(\gamma_{c}\) on \(\omega_{c,0}/\Omega_{0}\) in the case of \(k_{B}\overline{T}/mc^{2}=5.2\times 10^{-2}\).
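For a quick numerical check, the scalings of Eqs. 19 and 21 can be evaluated directly (a short sketch; all of our runs use \(v_{A,0}/c=10^{-2}\)):

```python
def gamma_c_rec(ratio, f_B):
    """Eq. 19: expected cut-off Lorentz factor from reconnection."""
    return 36.0 * (ratio / 20.0) * (f_B / 30.0)

def L_over_rho0_sigma(ratio, f_B, sigma_c):
    """Eq. 21: system size in units of rho_0 * sigma_c."""
    return 18.0 * (ratio / 20.0) * (f_B / 30.0) * (20.0 / sigma_c)

# Run ST2D-20 at k_B T/mc^2 = 3.5e-1: f_B ~ 30 and <sigma_c>_v ~ 20
print(gamma_c_rec(20.0, 30.0))               # ~36
print(L_over_rho0_sigma(20.0, 30.0, 20.0))   # ~18, i.e. below the ~40 limit
```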
The values of \(p\) for \(k_{B}\overline{T}/mc^{2}=5.2\times 10^{-2}\) and \(3.5\times 10^{-1}\) seen in panel \(b\) of Fig. 20 are close to \(p\sim 2.2\) and \(\sim 1.9\), respectively, and do not show a clear dependence on the scale-separation ratio. This is also consistent with acceleration being driven by reconnection. For instance, for \(L/\rho_{0}\sigma_{c}\gtrsim 40\) and \(\sigma_{c}=3\) (a case close to our results with \(k_{B}\overline{T}/mc^{2}=5.2\times 10^{-2}\), where \(\langle\sigma_{c}\rangle_{v}\sim 1-2\); see Fig. 21), Werner et al. (2016) predict \(p\sim 2.3\)-\(2.5\), whereas for \(L/\rho_{0}\sigma_{c}\lesssim 40\) and \(\sigma_{c}=10-30\) (close to our results with \(k_{B}\overline{T}/mc^{2}=3.5\times 10^{-1}\), where \(\langle\sigma_{c}\rangle_{v}\sim 10-50\); see Fig. 21), the results of Werner et al. (2016) show \(p\sim 1.4-1.9\).
Figure 19: The particle spectra for runs with \(\omega_{c,0}/\Omega_{0}=7,\,10,\,14,\,20\) and \(28\), and with temperatures \(k_{B}\overline{T}/mc^{2}=5.2\times 10^{-2}\) (panel \(a\)) and \(3.5\times 10^{-1}\) (panel \(b\)). For each spectrum, we show in dotted lines a power-law fit with an exponential cut-off (as in Eq. 18). The normalizations of the spectra are arbitrary.
### Effect of stratification on the acceleration
The previous discussion underscores the importance of plasma conditions, in particular \(\sigma_{c}\) and \(f_{B}\), in determining the efficiency of nonthermal particle acceleration. Since these conditions vary significantly between stratified and unstratified simulations (as shown by Fig. 6), we expect the acceleration efficiency in these two types of runs to be different. Fig. 22 compares spectra from run ST2D-20 with the equivalent spectra in the unstratified run UN2D-20 at the same values of \(\overline{T}\). We see that the spectra in the unstratified run are always softer than in the stratified run ST2D-20. This is consistent with the fact that, for a given temperature, the value of \(\overline{\sigma}_{c}\) in run UN2D-20 is smaller than in run ST2D-20 (as seen in panel \(c\) of Fig. 6), and a larger magnetization favors harder nonthermal acceleration.
### Acceleration in 2D vs 3D
In Fig. 23 we compare spectra from the 2D and 3D simulations ST2D-3.5 and ST3D-3.5, both with a scale-separation ratio \(\omega_{c,0}/\Omega_{0}=3.5\), for \(k_{B}\overline{T}/mc^{2}=6.8\times 10^{-2}\), \(2.1\times 10^{-1}\) and \(3.5\times 10^{-1}\). As expected from our previous discussion on the dependence of \(\gamma_{c}\) on \(\omega_{c,0}/\Omega_{0}\), the 2D run ST2D-3.5 should produce a nonthermal tail of rather short extension, which is what we see in Fig. 23. However, it is still interesting to verify whether its main features are reproduced in the 3D run ST3D-3.5. We see that, although the spectra show somewhat different shapes, they both feature nonthermal tails with similar maximum energies. In particular, when \(k_{B}\overline{T}/mc^{2}=3.5\times 10^{-1}\), the 2D and 3D spectra look very similar, suggesting that 3D effects maintain the main particle accelerating properties of the MRI turbulence in the stratified setup.
Even though the nonthermal particle behavior in our runs suggests a significant role of magnetic reconnection in the acceleration of particles, our simulations may be subject to effects that are not present in previous magnetic reconnection studies. These include particle escape from the disk, stochastic acceleration by the MRI turbulence (e.g., Kimura et al., 2019; Sun and Bai, 2021), and the action of various kinetic instabilities that may contribute to field dissipation and/or particle acceleration, including, e.g., the drift kink instability (Zenitani and Hoshino, 2007) and the ion-cyclotron instability (Ley et al., 2019). We thus defer to future research a detailed determination of the dominant acceleration process(es) as well as the role of the scale-separation ratio by including 2D and 3D runs with larger values of \(\omega_{c,0}/\Omega_{0}\).
## 7 Conclusions
In this work we have studied the effect of stratification on the collisionless MRI using 2D and 3D PIC simulations. Comparing 2D stratified and unstratified runs, we found that stratification affects the evolution of the disk conditions, due to the presence of outflows and disk expansion, leading to a decrease in the amplification of magnetic field energy density in the turbulent non-linear MRI regime. However, the expansion of the disk also decreases the plasma pressure and density, resulting in a highly magnetized disk, with smaller \(\beta\) and larger cold magnetization parameter \(\sigma_{c}\) compared to the unstratified case. Indeed, in the nonlinear regime the disk is magnetic-pressure supported with \(\beta\sim 0.4\), which is a factor \(\sim 5\) smaller than the value reached by its unstratified counterpart. Our stratified simulations do not exhibit a discernible low beta corona separated from the disk. In the disk region, our runs also give rise to a significant large scale and predominantly toroidal dynamo-like field \(\mathbf{B}^{D}\) (\(\equiv\langle\mathbf{B}\rangle_{x}\) in 2D), whose dominant scale length follows the disk scale height. Although a large scale \(\langle\mathbf{B}\rangle_{x}\) field also appears in the 2D unstratified case, its scale length is \(\sim 4\) times smaller. The increased magnetization of our 2D stratified runs produces an effective viscosity \(\alpha\) in the disk that reaches \(\alpha\sim 1\), which is a factor \(\sim 5\) larger than in the equivalent unstratified case. This viscosity is dominated by the Maxwell stress, \(\alpha_{M}\), with a small contribution of the anisotropic stress, \(\alpha_{A}\). This small \(\alpha_{A}\) is consistent with the regulation of pressure anisotropies by kinetic microinstabilities in the low \(\beta\) regime.
Figure 21: The values of \(\langle\sigma_{c}\rangle_{v}\) and \(f_{B}=(\langle B^{2}\rangle_{v})^{1/2}/B_{0}\) are shown in solid and dashed lines as a function of \(\overline{T}\) for simulations with \(\omega_{c,0}/\Omega_{0}=7,10,14,20\) and \(28\).
Figure 23: Spectra from simulations in 2D (dashed) and 3D (solid), both with a scale-separation ratio \(\omega_{c,0}/\Omega_{0}=3.5\) (runs ST2D-3.5 and ST3D-3.5, respectively). The different colors represent \(k_{B}\overline{T}/mc^{2}=6.8\times 10^{-2}\), \(2.1\times 10^{-1}\) and \(3.5\times 10^{-1}\).
Even though our 2D and 3D stratified simulations produce similar results, some differences are present. In order to assess them, we compared 2D and 3D runs focusing on a specific case with small scale separation, \(\omega_{c,0}/\Omega_{0}=3.5\). In the early phase of the non-linear MRI stage (i.e., \(\sim 1-2\) orbits after the triggering of the instability), 3D simulations exhibit a significantly lower amplification of the magnetic field energy density compared to their 2D counterpart, consistent with a more efficient reconnection of the toroidal magnetic field component in 3D (as shown in the recent work of Bacchini et al., 2022). This primarily affects the effective viscosity \(\alpha\) and the plasma \(\beta\), which are, respectively, \(\sim 3-4\) times smaller and larger in the 3D case. However, after this initial stage (at \(t\sim 4\) [\(2\pi/\Omega_{0}\)]), our 2D and 3D simulations are more similar, with \(\beta\) reaching essentially the same values and the effective viscosity \(\alpha\) being only \(\sim 2\) times smaller in 3D (in 3D, \(\alpha\approx 0.5\) during the whole nonlinear MRI stage). This transition at \(t\sim 4\) [\(2\pi/\Omega_{0}\)] occurs because of the growing importance of the large scale dynamo-like field \(\mathbf{B}^{D}\) (\(\equiv\langle\mathbf{B}\rangle_{x-y}\) in 3D). Indeed, after an initial stage in which the turbulent field \(\mathbf{B}^{T}=\mathbf{B}-\mathbf{B}^{D}\) dominates, the dynamo-like field becomes larger than the turbulent field in the 3D runs while in 2D it reaches values comparable to the turbulent field. Since the dynamo field has almost the same amplitude in 2D and 3D, the total fields in these two types of runs differ by a small amount after \(t\sim 4\) [\(2\pi/\Omega_{0}\)]. Also, in this dynamo-dominated period, the 3D viscosity is mainly produced by the dynamo-like field, while in 2D the turbulent and dynamo fields contribute comparably to \(\alpha\). In 3D the disk viscosity is also dominated by the Maxwell stress, \(\alpha_{M}\), with a small contribution from the anisotropic stress, \(\alpha_{A}\). This is also consistent with the action of pressure anisotropy-driven kinetic microinstabilities in the 3D case, as it occurs in 2D. Our 2D and 3D results in terms of \(\alpha\), \(\beta\) and dynamo-like field behaviors are reasonably consistent with previous 3D MHD simulations of stratified disks with similar initial conditions (e.g., Bai & Stone, 2013; Salvesen et al., 2016).
In terms of particle acceleration, in our 2D runs we find that the particle spectra in the nonlinear MRI stage follow power-laws with exponential cut-offs, with power-law indices \(p\approx 2.2-1.9\) for disk temperatures \(\sim 0.05-0.3\,mc^{2}\). Additionally, depending on the value of \(\sigma_{c}\) during the nonlinear MRI stage, the maximum energy attained by the particles is either proportional to the scale separation \(\omega_{c,0}/\Omega_{0}\) or fairly independent of this parameter, which appears to be consistent with previous magnetic reconnection studies (Werner et al., 2016). Particle acceleration in our 2D unstratified runs appears to be less efficient than in the analogous stratified case. This is likely due to the smaller cold magnetization parameter \(\sigma_{c}\) attained in the unstratified simulations. Furthermore, the particle acceleration observed in our 2D run with \(\omega_{c,0}/\Omega_{0}=3.5\) is well reproduced by its analogous 3D simulation, suggesting that 3D effects should maintain most of the acceleration properties of the MRI turbulence. However, 3D runs with larger scale-separation ratio are needed to confirm this trend.
In summary, our results suggest that including disk stratification in shearing-box PIC simulations of the MRI is important for studying its saturation, effective viscosity generation and particle acceleration physics. Interestingly, 2D and 3D simulations give quite similar results for the scale-separation ratios used in this work, especially after the magnetic field energy becomes dominated by a large-scale, dynamo-like field (which occurs \(\sim 1-2\) orbits after the triggering of the instability). We leave for future work the clarification of the effect of larger scale-separation ratios in 3D, as well as disentangling the underlying mechanism(s) for particle acceleration. We also note that our results refer to a specific case of initial plasma conditions. Further research is thus needed to clarify the effects of changing the initial \(\beta\) and/or temperature in the disk, potentially leading to a more distinct differentiation between an unmagnetized disk and a magnetized corona (e.g., Salvesen et al., 2016). Investigating the effect of more realistic mass ratios on the dynamic and thermodynamic properties of the collisionless MRI is also deferred to future work.
## Acknowledgements
A. Sandoval acknowledges support from the Center for Excellence in Astrophysics and Associated Technologies (CATA) through ANID, BASAL, FB210003. M. Riquelme thanks support from a Fondecyt Regular Grant No. 1191673 and from CONICYT/Quimal 190011. A. Spitkovsky acknowledges the support of NSF grants PHY-2206607 and AST-1814708. F. Bacchini acknowledges support from the FED-tWIN programme (profile Prf-2020-004, project "ENERGY") issued by BELSPO. The computational resources and services used in this work were provided by the National Laboratory for High Performance Computing (NLHPC) of the Center for Mathematical Modeling of University of Chile (ECM-02) and the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government - department EWI.
## Data availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2303.07351 | Context-based Ontology Modelling for Database: Enabling ChatGPT for
Semantic Database Management | This research paper explores the use of ChatGPT in database management.
ChatGPT, an AI-powered chatbot, has limitations in performing tasks related to
database management due to the lack of standardized vocabulary and grammar for
representing database semantics. To address this limitation, the paper proposes
a solution that involves developing a set of syntaxes that can represent
database semantics in natural language. The syntax is used to convert database
schemas into natural language formats, providing a new application of ChatGPT
in database management. The proposed solution is demonstrated through a case
study where ChatGPT is used to perform two tasks, semantic integration, and
tables joining. Results demonstrate that the use of semantic database
representations produces more precise outcomes and avoids common mistakes
compared to cases with no semantic representation. The proposed method has the
potential to speed up the database management process, reduce the level of
understanding required for database domain knowledge, and enable automatic
database operations without accessing the actual data, thus eliminating
privacy protection concerns when using AI. This paper provides a promising new
direction for research in the field of AI-based database management. | Wenjun Lin, Paul Babyn, Yan yan, Wenjun Zhang | 2023-03-11T23:15:03Z | http://arxiv.org/abs/2303.07351v1 | # Context-based Ontology Modelling for Database: Enabling ChatGPT for Semantic Database Management
###### Abstract
This research paper explores the use of ChatGPT in database management. ChatGPT, an AI-powered chatbot, has limitations in performing tasks related to database management due to the lack of standardized vocabulary and grammar for representing database semantics. To address this limitation, the paper proposes a solution that involves developing a set of syntaxes that can represent database semantics in natural language. The syntax is used to convert database schemas into natural language formats, providing a new application of ChatGPT in database management. The proposed solution is demonstrated through a case study where ChatGPT is used to perform two tasks: semantic integration and tables joining. Results demonstrate that the use of semantic database representations produces more precise outcomes and avoids common mistakes compared to cases with no semantic representation. The proposed method has the potential to speed up the database management process, reduce the level of understanding required for database domain knowledge, and enable automatic database operations without accessing the actual data, thus eliminating privacy protection concerns when using AI. This paper provides a promising new direction for research in the field of AI-based database management.
**Keywords:** ChatGPT, Database management, Semantic database representation, Semantic integration, Tables joining
## 1 Introduction
ChatGPT is a conversational chatbot that uses artificial intelligence (AI) and machine learning (ML) techniques, combined with natural language processing (NLP) methods, to produce human-like text. It was launched in November 2022 and quickly gained popularity, reaching over one million users within just five days [1]. ChatGPT's ability to produce human-like text and perform a wide range of tasks has made it a popular tool for many users, including answering questions, writing short stories, composing music, solving math problems, performing language translations, and even computer programming.
Database operation involves the manipulation of data and information using specific syntax or commands, similar to computer programming. In database operations, these commands are known as database queries, which are used to retrieve, update, and manipulate data stored in a database. Similarly, in computer programming, instructions are written in a programming language, such as C or Python, in order to specify the desired behaviour of a computer program.
There has been an expectation that ChatGPT could assist in creating database queries, just as it can assist in creating computer programs. However, creating database queries requires an understanding of the database itself, and there is no conventional way to represent database semantics. This problem limits ChatGPT's ability to perform tasks related to database management.
In this paper, we present a solution to this problem by developing a set of syntaxes that can represent database semantics, such as table structure and relationships, in natural language. This allows for the creation of semantic representations of databases that can be understood by ChatGPT, enabling it to perform database management tasks. Our work is demonstrated through a case study, where ChatGPT is used to perform two tasks: semantic integration and tables joining. Our results show that the use of semantic database representations produces more precise outcomes and avoids common mistakes compared to cases with no semantic representation.
The proposed method transforms database schemas into natural language formats, providing a new application of ChatGPT in database management. This study has the potential to speed up the database management process, reduce the level of understanding required for database domain knowledge, and enable automatic database operations without accessing the actual data, thus eliminating privacy protection concerns when using AI.
The rest of the paper is organized as follows: In Section 2, we provide a review of related work in the area of database management using AI. Then, we describe our proposed solution in Section 3. Section 4 presents the results
and discussion of our case study. Finally, we discuss the potential benefits and limitations of our work and conclude with future directions for research.
## 2 Literature review
### AI-based database queries generation
The use of AI models for generating database queries through natural language has been the focus of several research studies. One such model proposed by Bais et al. [2] utilizes NLP techniques to analyze and interpret user queries by performing morphological, syntactic, and semantic analysis, resulting in a valid database query in SQL. Similarly, Sawant et al. [3] implemented a system that can generate SQL queries from text and speech input using NLP and deep learning techniques such as Long Short Term Memory (LSTM).
Other studies, such as Ghosh et al. [4], Nagare et al. [5], and Kombade et al. [6], have also utilized techniques such as lexical analysis, syntax analysis, and semantic analysis to extract SQL queries from natural language input. Kombade et al. [6] even considered the use of abbreviations in NLP to generate SQL queries. The implementation of these studies used python with a GUI for input and output, and the user could provide input through speech or text.
Despite the progress made in this field, limitations still exist in the ability of AI models to accurately generate database queries from natural language due to the complexity and ambiguity of natural language, as well as the lack of standardized vocabulary and grammar for representing database structures. For instance, Nagare et al. [5] mentions that the system checks the validity of the user's query, but it is unclear how the query's validity is determined. Moreover, the studies only consider basic database operations such as select, delete, and update. Complex operations, such as joining multiple tables and semantic integration, have not been investigated.
### Semantic integration
Semantic integration is crucial for resolving mismatches in data representation between related databases. In today's digital world, organizations are generating and storing vast amounts of data in various databases, which often leads to inconsistencies in the data representation. For instance, one database may use the attribute "Social Security Number" to identify individuals, while another database may use the attribute "SSN". In such cases, it is crucial to determine the relationships between these attributes in order to accurately compare and combine the data from these databases.
Several approaches for semantic integration have been developed in recent years. Tools like Silk [7], LIMES [8], and PARIS [9] use string similarity metrics, functional properties, and manual configuration to detect matching attributes. WebPie [10] and LINDA [11] are fully automatic systems that use techniques such as neighborhood checks and block placement. MateTee [12] and RDF2VEC [13] are more recent approaches that utilize embeddings and
machine learning to find similarities. However, the task of matching entities becomes more complicated when entities appear in abbreviated forms [14].
The existing approaches rely heavily on domain expertise and require complex preparations. For example, dataset-based approaches [12, 13] require determining the relationships among attributes. Keyword-based approaches [7, 8] depend on the accuracy of metadata, while URI-based approaches [10] strongly depend on dereferencing HTTP URIs.
## 3 Methodology
### ChatGPT
ChatGPT is a language model developed by OpenAI [15]. It is a type of AI algorithm trained to predict the likelihood of a given sequence of words based on the context of the words that come before it. This technology is based on self-attention mechanisms [16] and has been trained on a massive dataset of text, allowing it to generate sophisticated and seemingly intelligent writing. ChatGPT is designed to converse with users in English and other languages on a wide range of topics, making it ideal for use in chatbots, customer service, content creation, and language translation tasks.
One of the applications of ChatGPT is to assist in programming, which can be achieved in two ways. Firstly, ChatGPT can serve as a programming assistant or tool. For instance, developers can ask ChatGPT programming-related questions and obtain recommendations and suggestions about general workflows and steps. Secondly, ChatGPT can generate code snippets directly, resulting in enhanced productivity and time-saving benefits for developers.
Despite its advanced natural language processing capability and successes in assisting programming, ChatGPT has not yet been able to generate queries for databases because database schemas, which contain vital information about database structures, are frequently written in the form of a graph rather than natural language.
### Context-based Ontology Modelling for Database
This study presents a new method, Context-based Ontology Modelling for Database (COM-DB), which is aimed at converting database schemas into natural language. COM-DB is built upon our previous research on ontology modelling [17], which utilizes constructs like context-of, monodirectional relationship, and bi-directional relationship to describe the relationship of concepts in databases.
In this study, we focus on the usage of the "context-of" construct for describing database schema, especially at the conceptual data model level. A conceptual schema or conceptual data model is a map of concepts and their relationships used for databases. The key feature of COM-DB is the ability to convert these relationships into natural language, which makes it more accessible to ChatGPT. To demonstrate the effectiveness of COM-DB, we provide
two examples that illustrate how the "context-of" construct can be used to describe the relationship of headers within one table and the relationship of tables within one database.
The example in Figure 1 demonstrates how COM-DB can be used to describe the relationships between headers within a database table. The left side of the figure shows a typical database table schema, which includes the name of the table and the names of its headers.
COM-DB converts this schema into two parts: the base schema, which describes the names of the headers in the table, and the contextual schema, which uses the "context-of" construct to describe relationships among headers. In this example, the headers "ADDRESS", "CITY", "STATE", and "COUNTY" are related and are used to store information about patients' addresses.
The contextual schema condenses this information into a single sentence by using the "context-of" construct to describe the relationship between these headers and the patients' addresses. Specifically, it states that "ADDRESS, CITY, STATE, COUNTY are in the context of patients' address". This represents four relationships in one sentence, making the information more easily understandable and less verbose.
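To make the mechanics concrete, the conversion illustrated in Figure 1 can be sketched in a few lines of code. The snippet below is our own minimal illustration, not the authors' implementation (function names and data structures are ours): the headers of a table produce the base schema sentence, and each "context-of" grouping produces one contextual schema sentence.

```python
# Minimal sketch of the COM-DB idea for a single table (illustrative only):
# the base schema lists the table's headers, and each "context-of" grouping
# is verbalized as one condensed contextual-schema sentence.

def base_schema(table, headers):
    return f"Given a table '{table}' with headers: {', '.join(headers)}."

def contextual_schema(groupings):
    # groupings: list of (headers, context description) pairs
    return [f"{', '.join(headers)} are in the context of {context}."
            for headers, context in groupings]

patients_headers = ["Id", "BIRTHDATE", "DEATHDATE", "SSN", "PREFIX", "FIRST", "LAST",
                    "SUFFIX", "MAIDEN", "MARITAL", "RACE", "ETHNICITY", "GENDER",
                    "BIRTHPLACE", "ADDRESS", "CITY", "STATE", "COUNTY"]

print(base_schema("patients", patients_headers))
for sentence in contextual_schema([(["ADDRESS", "CITY", "STATE", "COUNTY"],
                                    "patients' address")]):
    print(sentence)
# -> ADDRESS, CITY, STATE, COUNTY are in the context of patients' address.
```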
Figure 2 is an Entity-Relationship schema of a hospital database. Similar to the first example, COM-DB represents the database in two parts. The first part, the base schema, explains the headers in each table; it reads as follows:
Figure 1: Describing relationships of headers within one table using COM-DB
Given a table 'allergies' with headers: Id, START, STOP, PATIENT, ENCOUNTER, CODE, DESCRIPTION. And a table 'careplans' with headers: Id, START, STOP, PATIENT, ENCOUNTER, CODE, DESCRIPTION. And a table 'conditions' with headers: START, STOP, PATIENT, ENCOUNTER, CODE, DESCRIPTION. And a table 'devices' with headers: START, STOP, PATIENT, ENCOUNTER, CODE, DESCRIPTION, UDI.
Figure 2: Entity-Relationship schema of a hospital database [18]
And a table 'encounters' with headers: Id, START, STOP, PATIENT, ORGANIZATION, PROVIDER, PAYER, ENCOUNTERCLASS, CODE, DESCRIPTION, BASE_ENCOUNTER_COST, TOTAL_CLAIM_COST, PAYER_COVERAGE. And a table 'imaging_studies' with headers: Id, DATE, PATIENT, ENCOUNTER, BODYSITE_CODE, BODYSITE_DESCRIPTION, MODALITY_CODE, MODALITY_DESCRIPTION, SOP_CODE, SOP_DESCRIPTION. And a table 'immunizations' with headers: DATE, PATIENT, ENCOUNTER, CODE, DESCRIPTION, BASE_COST. And a table 'medications' with headers: START, STOP, PATIENT, PAYER, ENCOUNTER, CODE, DESCRIPTION, BASE_COST, PAYER_COVERAGE, DISPENSES, TOTALCOST. And a table 'observations' with headers: DATE, PATIENT, ENCOUNTER, CODE, DESCRIPTION, VALUE, UNITS, TYPE. And a table 'organizations' with headers: Id, NAME, ADDRESS, CITY, STATE, ZIP, LAT, LON, PHONE, REVENUE, UTILIZATION. And a table 'patients' with headers: Id, BIRTHDATE, DEATHDATE, SSN, PREFIX, FIRST, LAST, SUFFIX, MAIDEN, MARITAL, RACE, ETHNICITY, GENDER, BIRTHPLACE, ADDRESS, CITY, STATE, COUNTY. And a table 'payers' with headers: Id, NAME, ADDRESS, CITY, STATE_HEADQUARTERED, ZIP, PHONE, AMOUNT_COVERED, AMOUNT_UNCOVERED, REVENUE, COVERED_ENCOUNTERS, UNCOVERED_ENCOUNTERS, COVERED_MEDICATIONS, UNCOVERED_MEDICATIONS, COVERED_PROCEDURES, UNCOVERED_PROCEDURES, COVERED_IMMUNIZATIONS, UNCOVERED_IMMUNIZATIONS, UNIQUE_CUSTOMERS, QOLS_AVG, MEMBER_MONTHS. And a table 'procedures' with headers: DATE, PATIENT, ENCOUNTER, CODE, DESCRIPTION, BASE_COST. And a table 'providers' with headers: Id, ORGANIZATION, NAME, GENDER, SPECIALITY, ADDRESS, CITY, STATE, ZIP.

In addition, the second part, the contextual schema, is as follows: allergies, careplans, conditions, devices, immunizations, observations, procedures, imaging_studies are in the context of patients, encounters. encounters are in the context of patients, organizations, providers, payers. medications are in the context of patients, encounters, payers. providers are in the context of organizations.

Note that the contextual schema is shown in a condensed form. For example, "allergies, careplans, conditions, devices, immunizations, observations, procedures, imaging_studies are in the context of patients, encounters." represents \(8\,\times\,2=16\) relationships. These relationships are between \(8\) tables "allergies, careplans, conditions, devices, immunizations, observations, procedures, imaging_studies" and \(2\) tables "patients, encounters".

We use these examples to show practical applications of COM-DB and how it can be used to generate natural language descriptions of complex database schemas. Overall, the methodology of this study involves designing
and implementing the COM-DB method, which utilizes the "context-of" construct and other ontology modelling constructs to convert database schemas into natural language. The effectiveness of the method is demonstrated in the case study below, where it is applied to two sophisticated database management tasks.
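The condensation step described above, where sixteen pairwise relationships collapse into a single sentence, can likewise be sketched programmatically. The grouping strategy below is our own reading of the example, not the authors' code: dependent tables that share exactly the same set of referenced tables are verbalized together.

```python
# Sketch of deriving the condensed contextual schema from pairwise table
# relationships (our illustration; the relationships follow Figure 2).
from collections import defaultdict

relationships = (
    [(t, ref)
     for t in ("allergies", "careplans", "conditions", "devices", "immunizations",
               "observations", "procedures", "imaging_studies")
     for ref in ("patients", "encounters")]
    + [("encounters", "patients"), ("encounters", "organizations"),
       ("encounters", "providers"), ("encounters", "payers"),
       ("medications", "patients"), ("medications", "encounters"),
       ("medications", "payers"),
       ("providers", "organizations")]
)

# Collect, for each dependent table, the tables it is "in the context of".
contexts = defaultdict(list)
for table, referenced in relationships:
    contexts[table].append(referenced)

# Group dependent tables that share the same context, then verbalize each group.
groups = defaultdict(list)
for table, referenced in contexts.items():
    groups[tuple(referenced)].append(table)

for referenced, tables in groups.items():
    print(f"{', '.join(tables)} are in the context of {', '.join(referenced)}.")
```

Running this reproduces the four condensed sentences listed above.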
## 4 Case study
The case study aims to showcase the efficacy of the proposed COM-DB system. The system's primary feature is the "context-of" construct, which utilizes natural language to capture database semantics like table structure and relationships. The primary objective of the system is to create semantic representations of databases that can be easily comprehended by ChatGPT, enabling it to perform various database management tasks.
The case study provides empirical evidence to support the effectiveness of COM-DB. Two sample databases are collected from the literature, Synthea_Alabama [18] and BDA_EHR [19]. Based on those databases, two experiments are conducted that represent typical tasks in database integration: semantic integration and tables joining. In both experiments, ChatGPT is used to perform tasks with and without the COM-DB-based schema. The study repeats each experiment 10 times to ensure reliability and eliminate the potential inconsistency in ChatGPT's performance. The results reported below represent the average outcome of the repeated experiments.
### Experiment 1: Semantic Integration
Semantic integration involves merging two tables with the same category of information. Differences in header names across tables often cause incompatibility issues. 'patients_A' and 'patients_B' are tables of patient information from BDA_EHR and Synthea_Alabama, respectively. 'patients_A' contains headers: Id_patients, Name, Surname, Date of Birth, Place of Birth, Address, Gender, Blood Type, Job. And 'patients_B' contains headers: Id, BIRTHDATE, DEATHDATE, SSN, PREFIX, FIRST, LAST, SUFFIX, MAIDEN, MARITAL, RACE, ETHNICITY, GENDER, BIRTHPLACE, ADDRESS, CITY, STATE, COUNTY.
The goal of this experiment is to identify headers from table 'patients_A' and table 'patients_B' which contain the same information, knowing that some headers may need to be combined or split for the mapping. The ideal mapping is illustrated in Table 1.
Figure 3 shows the input and output of using ChatGPT without COM-DB based schema. The message from the icon 'FF' is the input, while the message from the graphical icon is the output from ChatGPT. The input contains two parts of information. The first part is to explain the situation, which contains the names of headers in each table. Note that only the headers are provided here, without any sample data or data type. The second part "Identify
the headers from table 'patients_A' and table 'patients_B' which contain the same information. Some headers may need to be combined or split." is an explanation of the task to be completed by ChatGPT.
The output in Figure 3 shows that ChatGPT can understand the task and perform it to a degree. It correctly matches Date of Birth with BIRTHDATE, Place of Birth with BIRTHPLACE, and Gender with GENDER. However, it fails to match Name with FIRST and Surname with LAST. In addition, ADDRESS in 'patients_B' should be combined with the headers CITY, STATE, and COUNTY; this was not noticed by ChatGPT.
Figure 4 shows the input and output of using ChatGPT with COM-DB based schema. In addition to the inputs used in Figure 3, the ontology model information is described as "In table 'patients_A', headers Name and Surname are in the context of patients' name. In table 'patients_B', headers ADDRESS, CITY, STATE, and COUNTY are in the context of patients' address."
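For reference, the two inputs compared in Figures 3 and 4 differ only in whether the contextual sentences are appended; a minimal sketch of assembling them (wording taken from the text, helper names and string layout ours) is shown below.

```python
# Sketch of assembling the Experiment 1 inputs, without and with the COM-DB
# contextual sentences (only header names are included, never row data).

patients_a = ["Id_patients", "Name", "Surname", "Date of Birth", "Place of Birth",
              "Address", "Gender", "Blood Type", "Job"]
patients_b = ["Id", "BIRTHDATE", "DEATHDATE", "SSN", "PREFIX", "FIRST", "LAST",
              "SUFFIX", "MAIDEN", "MARITAL", "RACE", "ETHNICITY", "GENDER",
              "BIRTHPLACE", "ADDRESS", "CITY", "STATE", "COUNTY"]

base = (f"Table 'patients_A' contains headers: {', '.join(patients_a)}. "
        f"Table 'patients_B' contains headers: {', '.join(patients_b)}.")

contextual = ("In table 'patients_A', headers Name and Surname are in the context of "
              "patients' name. In table 'patients_B', headers ADDRESS, CITY, STATE, "
              "and COUNTY are in the context of patients' address.")

task = ("Identify the headers from table 'patients_A' and table 'patients_B' which "
        "contain the same information. Some headers may need to be combined or split.")

prompt_without_comdb = f"{base} {task}"             # input used in Figure 3
prompt_with_comdb = f"{base} {contextual} {task}"   # input used in Figure 4
```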
| patients_A | patients_B |
| --- | --- |
| Name | FIRST |
| Surname | LAST |
| Date of Birth | BIRTHDATE |
| Place of Birth | BIRTHPLACE |
| Address | ADDRESS, CITY, STATE, COUNTY |
| Gender | GENDER |

Table 1: The ideal header mapping results of table 'patients_A' and table 'patients_B'
Figure 3: Experiment 1, header mapping without COM-DB based schema
Figure 4 illustrates a typical result from ChatGPT and demonstrates that the COM-DB-based schema improves the performance of semantic integration: ChatGPT has successfully identified all mappings as expected.
### Experiment 2: Tables Joining
Tables joining involves generating a new table or view that combines data from multiple tables. This process requires an Entity Relationship schema, which has no conventional natural-language representation. To demonstrate the effectiveness of the COM-DB-based schema in representing the Entity Relationship, Synthea_Alabama is used in this experiment. The database contains 14 tables as shown in Figure 2. The goal of the experiment is to create a SQL query that generates a list of careplans, with corresponding providers' and patients' identity information. Careplans are advice given by providers (such as physicians) to patients. A SQL query needs to properly join four tables: careplans, providers, patients, and encounters. The encounters table plays a critical role here, as it is what connects the careplans table with the providers table. This information is typically contained in an Entity Relationship schema.
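A query with the required join structure might look like the sketch below; the column selection and the local database file name are illustrative, and the foreign keys are assumed to follow the usual Synthea conventions (PATIENT, ENCOUNTER, and PROVIDER columns referencing the Id column of the corresponding tables). The query actually produced by ChatGPT with the COM-DB-based schema is shown in Figure 7.

```python
# Illustrative sketch of the join structure needed for Experiment 2
# (column choices and the database file name are ours, not ChatGPT's output).
import sqlite3

CAREPLAN_LISTING = """
SELECT careplans.Id          AS careplan_id,
       careplans.DESCRIPTION AS careplan,
       patients.FIRST        AS patient_first_name,
       patients.LAST         AS patient_last_name,
       providers.NAME        AS provider_name
FROM careplans
JOIN patients   ON careplans.PATIENT   = patients.Id
JOIN encounters ON careplans.ENCOUNTER = encounters.Id
JOIN providers  ON encounters.PROVIDER = providers.Id;
"""

def list_careplans(db_path="synthea_alabama.db"):
    """Run the listing against a local SQLite export of the Synthea_Alabama data."""
    with sqlite3.connect(db_path) as connection:
        return connection.execute(CAREPLAN_LISTING).fetchall()
```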
Figure 5 shows the input and output of using ChatGPT without COM-DB based schema. The input contains two parts of information. The first part is to
Figure 4: Experiment 1, header mapping with COM-DB based schema
explain all tables with their contained headers in alphabetical order. Similar to Experiment 1, only the headers are provided here without any sample data or data type. The second part, "To create a SQL query that generates a list of careplans, with corresponding providers' and patients' identity information." is an explanation of the task to be completed by ChatGPT.
The output from ChatGPT is verified by executing the SQL query in the hospital database. The result is shown in Figure 6. From the result, it was found that the query does not work due to the error "no such column: careplans.PROVIDER". The root cause of this error is the missing join with the encounters table, as explained earlier.
Figure 7 shows the input and output of using ChatGPT with COM-DB based schema that explains the context information of each table. In this case, the COM-DB based schema describes relations between tables. Figure 8 verifies the SQL query by executing it in the hospital database. It shows that ChatGPT has successfully generated the query that results in a correct view.
Figure 5: Experiment 2 Generate a new view from multiple tables without COM-DB based schema. The conversation is split into two columns, from left to right.
## 5 Discussions
The results of the experiments indicate that ChatGPT performs better in both semantic integration and tables joining tasks when using the COM-DB-based schema. The context information provided by the ontology models helps ChatGPT to better complete the tasks. The study demonstrates that
Figure 6: Experiment 2, SQL query results without COM-DB based schema
Figure 7: Experiment 2 Generate a new view from multiple tables with COM-DB based schema. The conversation is split into two columns, from left to right.
the "context-of" construct in COM-DB captures certain context information which may not be included in a conventional ontology model. This context information determines the relations between concepts, which helps to eliminate ambiguities between concepts and increases the chances of success during database integration.
In addition, COM-DB enables automatic database operations without compromising privacy protections. Unlike existing AI-enabled database operations, which require the AI algorithm to access all data in the database, COM-DB only generates a schema from the table structure, instead of its content. This schema does not contain any privacy information, making it safe to share with third-party services such as ChatGPT. By sending the schema to ChatGPT, the risk of privacy breach is significantly reduced, as ChatGPT can perform automated database operations without ever accessing sensitive data. This allows businesses and organizations to leverage the power of AI and automation to streamline their operations and improve efficiency, without sacrificing the privacy and security of their customers' data. With COM-DB, businesses can have peace of mind knowing that their data is secure and protected, while still enjoying the benefits of automated database operations.
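As an illustration of this point, the information COM-DB needs can be read from database metadata alone; the sketch below (file name illustrative) extracts table and column names from a SQLite database without selecting a single row.

```python
# Sketch of extracting only structural metadata from a SQLite database:
# table names and column names are read, but no row data ever leaves the database.
import sqlite3

def extract_schema(db_path):
    with sqlite3.connect(db_path) as connection:
        table_names = [row[0] for row in connection.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")]
        return {table: [column[1]  # the column name is the second field of table_info
                        for column in connection.execute(f"PRAGMA table_info('{table}')")]
                for table in table_names}

# The resulting {table: [headers]} mapping is all that is needed to build the
# base-schema sentences sent to ChatGPT; the underlying records stay private.
```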
## 6 Conclusion
This paper explores the use of ChatGPT in the area of database management, highlighting the challenges of using natural language processing to perform database queries. Our research presents a solution by developing a set of syntaxes to represent database semantics in natural language. These syntaxes, called COM-DB, enable ChatGPT to perform tasks related to database management, such as semantic integration and tables joining. Our case study shows that the use of semantic representations in database management leads to more precise outcomes and reduces common mistakes compared to cases without such representations.
Our research aims to contribute to the field of database management by introducing a novel approach for converting database schemas into natural language format, thereby opening up new applications for ChatGPT.
Figure 8: Experiment 2, SQL query results with COM-DB based schema
This approach has the potential to deliver significant benefits, including faster database management, reduced domain knowledge requirements, and enhanced privacy protection through automated database operations that do not require access to actual data.
Future work involves expanding the scope of our method to include more complex database operations and testing it on larger databases. Furthermore, we intend to investigate the feasibility of incorporating other natural language processing models into database management and explore the possibilities of combining various models to enhance their capabilities.
In conclusion, our research demonstrates the potential of natural language processing models to be employed in the field of database management, providing a new way to interact with and manipulate databases. By leveraging the power of ChatGPT alongside our COM-DB syntaxes, we have demonstrated that complex database operations can be executed using natural language, offering a new approach to simplify database management and enhance productivity.
Acknowledgments. This study is supported by the Natural Sciences and Engineering Research Council of Canada, Alliance grant #ALLRP 555161-20.
|
2308.13056 | Lexical Diversity in Kinship Across Languages and Dialects | Languages are known to describe the world in diverse ways. Across lexicons,
diversity is pervasive, appearing through phenomena such as lexical gaps and
untranslatability. However, in computational resources, such as multilingual
lexical databases, diversity is hardly ever represented. In this paper, we
introduce a method to enrich computational lexicons with content relating to
linguistic diversity. The method is verified through two large-scale case
studies on kinship terminology, a domain known to be diverse across languages
and cultures: one case study deals with seven Arabic dialects, while the other
one with three Indonesian languages. Our results, made available as browseable
and downloadable computational resources, extend prior linguistics research on
kinship terminology, and provide insight into the extent of diversity even
within linguistically and culturally close communities. | Hadi Khalilia, Gábor Bella, Abed Alhakim Freihat, Shandy Darma, Fausto Giunchiglia | 2023-08-24T19:49:30Z | http://arxiv.org/abs/2308.13056v2 | # Lexical Diversity in Kinship Across Languages and Dialects
###### Abstract
Languages are known to describe the world in diverse ways. Across lexicons, diversity is pervasive, appearing through phenomena such as lexical gaps and untranslatability. However, in computational resources, such as multilingual lexical databases, diversity is hardly ever represented. In this paper, we introduce a method to enrich computational lexicons with content relating to linguistic diversity. The method is verified through two large-scale case studies on kinship terminology, a domain known to be diverse across languages and cultures: one case study deals with seven Arabic dialects, while the other one with three Indonesian languages. Our results, made available as browseable and downloadable computational resources, extend prior linguistics research on kinship terminology, and provide insight into the extent of diversity even within linguistically and culturally close communities.
multilingual lexicon, dialect, language diversity, lexical gap, kinship, lexical typology
## 1 Introduction
The culture and the social structure of a community are reflected in the language spoken by its members. One of the most salient examples of this phenomenon is the worldwide diversity of terms used to describe family structures and relationships. While, thanks to studies such as Murdock (1970), kin terms around the globe are generally well documented, many local variations--across dialects of a single language or across languages of a single country--have not yet been fully described or understood. For example, the term _mazozozi_ in the Algerian Arabic dialect, meaning _younger brother_, does not have any equivalent term in the Gulf Arabic dialect.
Such phenomena--lexical gaps, untranslatability, and others--tend to remain hidden in monolingual resources but are revealed in multilingual settings (Bella et al., 2023, 2022a).
In recent years, an increasing number of linguistic databases covering a large number of languages have appeared. These resources are usually aimed at quantitative studies for comparative linguistics, such as the classification of pain predicates (Reznikova et al., 2012), a semantic map of motion verbs (Walchli and Cysouw, 2012), the modeling of color terminology (McCarthy et al., 2019), the CLICS database of cross-linguistic colexifications (Rzymski et al., 2020), DiACL (Diachronic Atlas of Comparative Linguistics), a database for the typology of ancient Indo-European languages spoken in Eurasia (Carling et al., 2018), or the Cross-Linguistic Database of Phonetic Transcription Systems (Anderson et al., 2018). Often, such databases use phonetic representations of lexical units or are limited to a few hundred or a few thousand core concepts, limiting their usability for the processing of contemporary written language. In our experience, most of the existing typology-informed NLP research is restricted to exploring language-specific morphosyntactic features and has ignored diversity within lexical resources (Batsuren et al., 2022). A notable exception is the Universal Knowledge Core, a massively multilingual lexical database that explicitly represents linguistic diversity and that we reuse in our work.
Our research is part of the _LiveLanguage_ initiative, the overarching objective of which is to create, publish, and manage language resources that are "diversity-aware"--i.e. that reflect the viewpoints of multiple speaker communities--and that can be reused by multiple communities: linguists, cognitive scientists, AI engineers, language teachers and students (Bella et al., 2023). Contrary to mainstream exploitative practices, LiveLanguage aims to carry out its goals while empowering local speaker communities, giving them control over resources they help to produce (Helm et al., 2023). Involving human contributors and deciders from speaker communities is therefore a crucial part of our methodology.
In particular, the present paper focuses on diversity where it is less expected to appear: within dialects of the same language and within languages of the same country. Therefore, we describe a multidisciplinary study on the diversity of kin terms across seven Arabic dialects (Algerian, Egyptian, Tunisian, Gulf, Moroccan, Palestinian, and Syrian) and three languages from Indonesia (Indonesian, Javanese, and Banjarese). We consider kin terms as a domain particularly well-suited both for research on the methodology of collecting and producing diversity-aware linguistic data, and for comparative studies on diversity across languages.
Our paper aims to provide four contributions: (1) a general method for collecting multilingual lexical data from native speakers for a given domain (in our case the domain of kin terms), in a diversity-aware manner; (2) 223 kin terms and 1,619 lexical gaps collected in seven Arabic dialects and three Indonesian languages; (3) a qualitative and quantitative discussion of our results regarding the diversity observed across the dialects and languages covered; and (4) the publication of our results as an open, computer-processable dataset, as well as its integration into the Universal Knowledge Core multilingual database. Our starting point is state-of-the-art datasets on worldwide kinship terminology from ethnography (Murdock, 1970) and computational linguistics (Khishigsuren et al., 2022). Our data collection method is based on collaborative input from native speakers and language experts. Our results extend the state-of-the-art resources above with kin terms in languages and dialects not yet covered, as well as with 22 new kinship concepts not yet associated with other languages within those resources.
The structure of the paper is organized as follows. In Section 2 we give an overview of lexical typology and the phenomena of lexical untranslatability and lexical gaps with respect to the domain of kinship in particular. The Universal Knowledge Core resource is presented in Section 3. In Section 4 we describe our data collection method. Sections 5 and 6 introduce two case studies on Arabic dialects and Indonesian languages, respectively. Section 7 discusses previous studies related to our work. Finally, we provide conclusions in Section 8.
## 2 Untranslatability and Lexical Typology
Linguists understand translation from one language to another as a complex and multidimensional problem, ranging from multiple coexisting forms of meaning equivalence to untranslatability (Catford, 1965; Bella et al., 2022a). The diversity between cultures is a major cause of this problem, which appears on several lexical-semantic levels. Examples of such linguistic diversity are the richness of the vocabulary of Toaripi, a language of coastal Papua New Guinea, in motion verbs describing walking around the beach, such as (isai) meaning "_go beachward_" and (kavai) meaning "_go inland with respect to the beach_"; the lack of a word meaning "_sailing_" in Mongolian, the language of a landlocked country; or the existence of an Arabic word meaning "_to ascend a camel's hump_".
The domain of kinship terms, which is the subject of our paper, is known to be extremely varied across languages, due to the different ways family structures are organised around the world. Matriarchal societies may describe certain female relatives with more detail, while strongly patriarchal ones are more descriptive with respect to male relatives. Arabic dialects, for instance, distinguish paternal and maternal brothers but also blood brothers, full brothers, and breastfeeding brothers. Thus, not only are kinship-related vocabularies 'richer' or 'poorer' across languages, they are also structured in different manners.
In this research, we focus on lexical untranslatability, which manifests most clearly through the lexical gap phenomenon: a word in a source language does not have a concise and precise translation in a given target language. Lexical gaps are often the linguistic manifestation of culturally or spatially defined specificities of a community of language speakers that cannot entirely be predicted or explained through systematic principles or recurrent patterns (Lehrer, 1970). Table 1 below presents this phenomenon for nine concepts representing sibling relationships from the kinship domain in eight languages1. One can observe that none of the eight languages has concise lexicalizations for all nine concepts, yet each concept is lexicalized in at least one language. Such variations in lexicalization pose a problem for both machine and human translation: for instance, substituting a specific term for a broader one may inject unintended meaning. In Javanese, at least four specific terms--(sedulur/_sibling_), (adhi/_younger sibling_), (kangmas/_elder brother_), and (Mbakyu/_elder sister_)--are used for expressing the sibling relationship. Accordingly, translating the sentence (_my sister is ten years older than me_) to Javanese through Google Translate gives the nonsensical sentence (_adhiku luwih huw septulu luun intihung aku_), meaning (_my younger sibling is ten years older than me_). This result is due to the lack of a Javanese word meaning "sister": since the language also lacks a term meaning "_younger sister_", the machine translator uses (adhi), meaning "_younger sibling_", which produces the semantically absurd output.
Footnote 1: These nine concepts do not cover sibling terms exhaustively in all languages: for example, many Austronesian languages use different terms based on the gender of the speaker.
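One straightforward way to make such lexicalization patterns machine-readable is to store, for every concept and language, either the lexicalization or an explicit gap marker. The toy sketch below is our own illustration, populated only with a few of the values visible in Table 1 (None marks a lexical gap).

```python
# Toy sketch of encoding lexicalizations and explicit lexical gaps per concept
# and language; entries are a subset of Table 1, with None marking a gap.
SIBLING_CONCEPTS = {
    "sibling": {"English": "sibling", "Japanese": None, "Indonesian": "saudara",
                "Hungarian": "testvér", "Javanese": "sedulur"},
    "elder sibling": {"English": None, "Japanese": None, "Indonesian": "kakak",
                      "Hungarian": "nagytestvér", "Javanese": None},
    "younger sibling": {"English": None, "Japanese": None, "Indonesian": "adik",
                        "Hungarian": "kistestvér", "Javanese": "adhi"},
}

def lexical_gaps(language):
    """Concepts that the given language does not lexicalize with a single term."""
    return [concept for concept, lexicalizations in SIBLING_CONCEPTS.items()
            if lexicalizations.get(language) is None]

print(lexical_gaps("English"))   # ['elder sibling', 'younger sibling']
print(lexical_gaps("Javanese"))  # ['elder sibling']
```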
Lexical typology is a field of linguistics that studies diversity across languages according to their structural features with respect to specific semantic fields (Plungyan, 2011). Several classical studies in this field address grammar and phonology, such as VoxClamantis V1.0, a large-scale corpus for phonetic typology (Salesky et al., 2020), and the analysis of the spatial semantic field through a set of semantic parameters and notions derived from the grammatical information of the field's constituents (Levinson and Wilkins, 2006). Other studies have been conducted on lexical-typological issues that appear across languages during translation, such as the presence or absence of lexicalizations. In these articles, authors focused on semantic fields that exhibit rich cross-lingual diversity: family relationships (Kemp and Regier, 2012), colors (Roberson et al., 2005), food (Bella et al., 2022b), body parts (Wierzbicka, 2007), putting and taking events (Kopecka and Narasimhan, 2012), cutting and breaking events (Majid et al., 2007), or cardinal direction terms (Arora et al., 2021). However, as mentioned in the introduction, only a few open datasets have been published. These include the classification of kinship by Murdock (1970), which has been published in D-PLACE (Kirby et al., 2016). Part of Kay and Cook (2016)'s work on colors is published under the lexicon chapter of the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013). Additionally, a color categorization dataset by McCarthy et al. (2019) is available on GitHub2.
Footnote 2: [https://github.com/aryamccarthy/basic-color-terms](https://github.com/aryamccarthy/basic-color-terms)
Digital lexicons have been increasingly used in lexical typology, enabling typologists to explore a broader range of languages and semantic domains. One noteworthy example is the KinDiv3 lexicon (Khishigsuren et al., 2022), which
| Meaning | English | Japanese | Arabic | Italian | Indonesian | Hindi | Hungarian | Javanese |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| sibling | sibling | GAP | GAP | GAP | saudara | | testvér | sedulur |
| elder sibling | GAP | GAP | GAP | GAP | kakak | GAP | nagytestvér | GAP |
| younger sibling | GAP | GAP | GAP | GAP | adik | GAP | kistestvér | adhi |

Table 1: Lexicalizations and lexical gaps ("GAP") for sibling-related concepts in eight languages.
encompasses 1,911 words and identifies 37,370 gaps within the domain of kinship, spanning 699 languages. In our current research, we extend our investigation into the kinship domain, specifically focusing on exploring linguistic diversity among Arabic dialects and Indonesian languages. Other examples include Viberg (1983)'s seminal study, which was conducted on perceptual terminology in 50 languages and has been expanded upon by Georgakopoulos et al. (2022) to cover 1,220 languages. Furthermore, the Kinbank database, recently introduced by Passmore et al. (2023), serves as a comprehensive repository of kinship terminology, encompassing more than 1,173 languages and offering a broad coverage of various kinship subdomains.
## 3 Universal Knowledge Core
This section describes the Universal Knowledge Core (UKC)4, a large multilingual lexical database that we adopt for the production of diversity-aware datasets in this research (Giunchiglia et al., 2017). The use of the UKC is motivated by its ability to represent linguistic unity and diversity explicitly: conceptualisations shared across languages, word senses appearing only in certain languages, shared lexicalisations (e.g. cognates), as well as lexical gaps. The theoretical underpinnings of the lexical model of the UKC have been described in Giunchiglia et al. (2018) and in Bella et al. (2022b), and are illustrated in Figure 1.
Footnote 4: [http://ukc.datascientiia.eu](http://ukc.datascientiia.eu)
The UKC is divided into a supra-lingual concept layer (as shown at the top of Figure 1) and the layer of individual lexicons (at the bottom of Figure 1). The concept layer includes hierarchies of concepts that represent lexical meaning shared across languages. Concepts are language-independent units and act as bridges across languages, and each one
Figure 1: Structural elements in the UKC lexical database
should be lexicalized by at least one language to be present in the concept layer. Supra-lingual concepts and their relations (e.g. hypernymy, meronymy) are in part derived from third-party resources such as Princeton WordNet (PWN) (Miller, 1995), and are in part proper to the UKC. In particular, the UKC contains an extensive formal conceptualisation of kinship domain terms computed from the KinDiv database, spanning about 200 distinct concepts.5 KinDiv itself is based on ethnographic evidence from 699 languages (Khishigsuren et al., 2022). While this existing hierarchy of kinship concepts does not fully cover all terms that appear in our study, it is the most complete one we are aware of, motivating our choice of the UKC as a platform for our research.
Footnote 5: [https://github.com/kbatsuren/KinDiv](https://github.com/kbatsuren/KinDiv)
The lexicon layer consists of language-specific lexicons that provide lexicalizations for the concepts from the supra-lingual concept layer, while also asserting _lexical gaps_ whenever lexicalizations are known not to exist. Lexicons also provide term definitions as well as lexical relationships specific to the language, such as derivations, metonymy, or antonymy relations. Lexicons can also contain _language-specific concepts_ that do not appear in the supra-lingual concept layer. For example, in Figure 1, the Arabic word meaning _"a female person who has the same father, mother, or both parents as another person"_ is represented as a language-specific concept. The dual mechanism of defining lexical concepts either on the supra-lingual or on the language-specific level allows for the representation of differing worldviews that would be hard or impossible to reconcile into a single global concept graph. The richness of its lexicon-level linguistic knowledge makes the UKC unique among multilingual lexical databases and particularly suitable for our study.
As mentioned in Section 2, a lexical gap for a specific concept is present in a language if there is no concise equivalent word meaning for the concept in that language. For example, neither English nor Arabic has a word meaning _elder sibling_; for such cases, the UKC provides evidence of meaning non-existence and untranslatability by representing lexical gaps inside lexicons, as shown in Figure 1. This information can be used by the NLP community to indicate the absence of equivalent words to downstream cross-lingual applications.
Beyond providing lexical relations between shared word meanings as other multilingual lexical databases do, the UKC also represents a richer set of lexical-semantic connections between language units in a lexicon. For example, the _antonym_ lexical relation expresses that two senses are opposite in meaning, the _similar-to_ relation connects two concepts with similar meanings, and _hypernym-of_ connects a parent meaning with its child. For instance, in Figure 1, the English _little brother_ and _brother_ are connected through a _hypernym-of_ relationship. Such information can indicate to downstream cross-lingual applications the closest equivalent language-specific word meaning, e.g., via the position of a language-specific meaning in the hierarchy of a lexicon.
The UKC currently does not explicitly distinguish between languages and dialects: each vocabulary is a separate entity labeled with a standard three-letter ISO 639-3 code. When such a code is not available, the UKC uses a standard extension mechanism where three additional (not standardised) letters are added to the ISO code: e.g., for Syrian Arabic, the code arb-syr is used.
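To make this data model concrete, the following minimal Python sketch mirrors the structures described above: supra-lingual concepts, per-language lexicons keyed by (possibly extended) ISO 639-3 codes, explicit lexical gaps, and hypernym links. The class and field names are our own illustration, not the UKC's actual schema or API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Concept:
    """A supra-lingual, language-independent word meaning."""
    concept_id: str
    gloss_en: str
    hypernym: Optional["Concept"] = None   # e.g. elder sibling -> sibling

@dataclass
class Lexicon:
    """One lexicon per language or dialect (ISO 639-3 code, possibly extended)."""
    lang_code: str                                        # e.g. "arb-syr" for Syrian Arabic
    words: dict = field(default_factory=dict)             # concept_id -> lemma
    gaps: set = field(default_factory=set)                # concept_ids known to be unlexicalized
    local_concepts: list = field(default_factory=list)    # language-specific concepts

sibling = Concept("kin:sibling", "a brother or sister")
elder_sibling = Concept("kin:elder_sibling", "an older brother or sister", hypernym=sibling)

indonesian = Lexicon("ind", words={"kin:sibling": "saudara", "kin:elder_sibling": "kakak"})
english = Lexicon("eng", words={"kin:sibling": "sibling"}, gaps={"kin:elder_sibling"})
```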
## 4 A Methodology for Building Diversity-Aware Lexicons
This section presents the general method by which we collected and produced lexicalizations and gaps from native speakers and language experts. The same method presented below was employed in an independent manner for each Arabic dialect and Indonesian language covered by our study. The contents of this section aim to serve as a tried and tested recipe for gathering lexical data in a diversity-aware manner, which we intend to reuse in future lexicon development projects.
We exploit the UKC to import language-independent concepts (e.g., kinship concepts) to be used as an input dataset to our method and use its data representation model to formalize our data. We reuse an already broad and well-formalised hierarchy of 184 kinship concepts from the KinDiv database, which includes kinship terms and gaps in 699 languages. Data in KinDiv is based on the well-known results of Murdock (1970), as well as on lexicalizations retrieved from Wiktionary that we consider as an overall good-quality resource. In Khishigsuren et al. (2022), the accuracy of KinDiv was evaluated to be above 96%. One language expert per language provided this percentage, which represents the proportion of the number of words (or gaps) validated as correct to the total number of collected words (or gaps).
Our work extends the KinDiv data with new concepts, lexicalizations, and lexical gaps in languages and dialects that are either not present in KinDiv or are incompletely covered. A lexical-semantic expert first generates a contribution task (kinship terms and gaps); a group of native speakers then collects contributions for each dialect (or local language). The collected contributions are validated in two steps: language experts evaluate the collected lexical units and gaps of a dialect, and a lexical-semantic expert evaluates newly explored kinship concepts (not yet existing in the UKC). Finally, the resulting
data (including gaps, words, and new concepts) is used to update and enrich the UKC: gaps and words are merged into the lexicons of the UKC, while new concepts are integrated into the (top) concept layer. A general view of the method is depicted in Figure 2.
Accordingly, the macro-steps of our methodology are as follows:
1. _Contribution task generation_: First, prepare the materials: the dataset of inputs to be examined and the architecture of the supra-lingual concept layer of each subdomain.
2. _Contribution collection_: The actual contribution effort is carried out by a native speaker in a local language or dialect.
3. _Lexicon-level validation_: Provided words and gaps are evaluated and corrected by a language expert.
4. _Concept-level validation_: New concepts and unclear contributions (i.e., words on the borderline) are verified by a lexico-semantic expert.
### Contribution task generation
This section describes the material needed during the execution of the next steps of the methodology. Hence, two constituents must be prepared in this step as described below:
1. _Dataset of inputs_: Constructing the dataset of general word meanings is the first step of studying diversity across dialects; this dataset forms the input of the contribution collection phase. In this context, the UKC lexicon is employed to build the dataset, as it offers several facilities for retrieving categorized data from its interlingual shared meaning layer, as introduced in Section 3. Typology datasets or other approaches can also be used for this purpose, such as the kinship dataset of Murdock (1970), or data gathered from online dictionaries with automatic methods (e.g., KinDiv retrieves some of its kinship terms from Wiktionary). The constructed dataset is a spreadsheet containing language-independent meanings from one semantic field, with its content distributed into subdomains (sheets) for usability and to simplify designing a concept hierarchy for each subdomain, which is a helpful tool for lexical-gap exploration. One spreadsheet row is generated for each concept, containing the concept ID, the source concept definition in the standard language, another definition in English, empty slots for inserting a lexical gap or a word with equivalent meaning, and the data provider's comments in a dialect or local language (a minimal sketch of this row layout is given after this list).
2. _Interlingual concept hierarchy_: Modeling the interlingual shared meaning space is essential to explore lexical gaps systematically. In this task, the UKC concept hierarchy is exploited. UKC is the only resource introducing
Figure 2: Methodology macro-steps and data sources
a hierarchy of shared meanings across languages for each semantic field, such as kinship, colors, or food. Furthermore, UKC uses a hybrid linguistic-conceptual approach in modeling each domain. This approach adopts actual domain ontology and linguistic data from typological literature. For example, a fragment of the brotherhood hierarchy in the top layer of the UKC is shown in Figure 1. A native speaker can compare each examined concept from the spreadsheet with the hierarchy of its domain to extract additional knowledge about its meaning based on a concept's position in the hierarchy, which helps to provide a concrete answer in terms of a gap or a lexical unit.
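As a concrete illustration of the two constituents above, the sketch below generates one contribution sheet per kinship subdomain, with one row per imported concept. The column names, file layout, and placeholder definitions are our own assumptions rather than a prescribed UKC export format.

```python
import csv
from collections import defaultdict

# (concept_id, subdomain, definition in the standard language, definition in English)
concepts = [
    ("kin:sibling",       "siblings", "(definition in MSA)", "a brother or sister"),
    ("kin:elder_sibling", "siblings", "(definition in MSA)", "an older brother or sister"),
    ("kin:cousin",        "cousins",  "(definition in MSA)", "a child of one's uncle or aunt"),
]

sheets = defaultdict(list)
for concept_id, subdomain, def_standard, def_en in concepts:
    sheets[subdomain].append({
        "concept_id": concept_id,
        "definition_standard": def_standard,   # e.g. MSA in the Arabic case study
        "definition_en": def_en,
        "word_or_gap": "",                     # filled in by the native speaker
        "comment": "",                         # free-text remarks by the native speaker
    })

for subdomain, rows in sheets.items():
    with open(f"{subdomain}.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```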
### Contribution collection
Contributions from a local language or a dialect are provided by one native speaker who was born and educated (university level) within the speaker community. The following are the most notable instructions they are given:
1. They are given the authority to skip concepts, stop contributing, or leave a comment when they deem the terms too culture-specific to be given an exact answer.
2. They are asked to provide a lexicalization in a local language (or dialect) that gives meaning equal to the concept's meaning.
3. They are asked explicitly to identify lexical gaps where no local (or dialect) lexicalization exists.
4. Within a local language (or dialect) and a subdomain (e.g., cousins), they are asked to provide new concepts that did not exist in the list of inputs which is imported from the UKC by providing a word (lemma) and a clear description of its meaning.
The process of providing such contributions is depicted in two flowcharts. Figure 3 shows the flowchart of candidate gap (left-hand side) and candidate equivalent word meaning (right-hand side) exploration. It starts by identifying a standard language and a local language (or dialect) and providing a native speaker with a spreadsheet including a list of subdomain concepts (inputs). The native speaker is then asked to find linguistic resources in the local language and use them to search for the concepts, concept by concept, to confirm lexicalizations and gaps. The search proceeds in the following steps: first in a well-known dictionary, then in Wiktionary (a large multilingual online lexicon), after that in a typology dataset (if available), and finally via Google search (based on the count of search hits). More details about these steps are described in Section 5. The native speaker can rely on the search results and the count of Google hits to give a more concrete answer on whether the concept in the standard language has a lexicalization or is a gap in the local language; such candidates are passed to the next phase, lexicon-level validation.
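The per-concept decision procedure of Figure 3 can be summarised as the small sketch below. The resource lookups are hypothetical callables standing in for the dictionary, Wiktionary, typology dataset, and search-engine steps named above, and the hit-count threshold is purely illustrative; the output is only a candidate that still goes through lexicon-level validation.

```python
HIT_THRESHOLD = 100_000   # illustrative cut-off; not a value prescribed by the methodology

def judge_concept(concept_id, dictionary, wiktionary, typology, google_hits):
    """Return ('word', lemma) or ('gap', None) as a candidate for validation."""
    for resource in (dictionary, wiktionary, typology):
        lemma = resource.get(concept_id)      # each resource maps concept_id -> lemma (or None)
        if lemma:
            return ("word", lemma)
    lemma, hits = google_hits(concept_id)     # best-guess lemma and its search hit count
    if lemma and hits >= HIT_THRESHOLD:
        return ("word", lemma)
    return ("gap", None)

# Toy run with an Indonesian dictionary entry and no search evidence:
print(judge_concept("kin:elder_sibling", {"kin:elder_sibling": "kakak"}, {}, {},
                    lambda c: (None, 0)))     # -> ('word', 'kakak')
print(judge_concept("kin:elder_sibling_of_mother", {}, {}, {},
                    lambda c: (None, 0)))     # -> ('gap', None)
```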
Collecting new concepts is the third contribution of this phase; the steps of exploring a candidate new concept in a local language are shown in Figure 4. A native speaker can examine the list of subdomain concepts and propose, with definitions, concepts that they believe are missing from the list. The same search steps as in gap identification can be followed in this task. As shown in Figure 4, all candidate new concepts are passed to the two subsequent validation phases: the lexicon and the concept level.
### Lexicon-Level Validation
Our lexicon-level validation method formally and explicitly addresses individual gap identifications and their quality, as well as equivalent word meanings and new concepts. It allows a qualitative, word-by-word and gap-by-gap evaluation of the entire list of provided contributions in a loop between the native speaker and a validator. A word, a gap, or a new concept does not pass this validation until the native speaker provides the correct answer for it, as shown in the flowcharts in Figure 3 and Figure 4.
A language expert who is also a native speaker of the determined language (or dialect) will carry out this validation on a spreadsheet containing the data and results gathered in the previous step with two additional empty columns: the evaluation and lexicon-level validator's comment, producing the following information:
1. _Equivalent word meanings_: validate the correctness of all provided words in the local language (or dialect) by marking them up as correct, incorrect, or unclear for borderline cases and by providing correct words or indicating them as lexical gaps for incorrect ones.
2. _Lexical gaps_: validate the word meanings marked as lexical gaps by the native speaker in the local language, either as confirmed gaps or as non-gaps due to an existing lexicalization in that language, which the validator needs to indicate.
3. _New concepts_: validate all proposed new word meanings in each subdomain by marking them up as correct, correct but not new (in case the supposedly new concepts already existed in the list), or not accepted (in case another concept already existed in the list to express the meaning, or the validator does not consider it as a desirable suggestion for other reasons).
Correct equivalent word meanings and gaps are integrated into the local language lexicon on the fly. Correct new concepts are passed to the next step to be validated at the concept level before being merged with the supra-lingual shared meaning layer. If the evaluation is instead an incorrect equivalent word or gap, or a not-accepted new concept, the validator returns the item to the native speaker with a comment describing the reason, so that the native speaker can review and address the problem; once the native speaker finishes revising it, they return the new version of the contribution to the validator. This cycle (native speaker's contribution, then lexicon-level validation) continues until the validator confirms the correctness of the contribution or skips it.
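The revision cycle just described can be pictured as the loop below; the status labels (correct, incorrect, unclear) follow the text, while the function signature, the escalation of unclear cases, and the bound on the number of rounds are our own simplifications.

```python
def review_cycle(contribution, native_speaker_revise, validator_check, max_rounds=10):
    """Run the native-speaker / lexicon-level-validator loop for one contribution."""
    for _ in range(max_rounds):
        verdict, comment = validator_check(contribution)   # 'correct', 'incorrect' or 'unclear'
        if verdict == "correct":
            return ("accepted", contribution)               # merged into the language lexicon
        if verdict == "unclear":
            return ("escalated", contribution)              # borderline case, goes to concept level
        contribution = native_speaker_revise(contribution, comment)  # new revision round
    return ("skipped", contribution)                        # validator gives up on the item
```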
Figure 3: Flowchart of gap and equivalent word meaning identification
### Concept-Level Validation
In this step, a lexical-semantic expert who is the manager of the UKC system verifies the quality of the new concepts, accepting or rejecting them for addition to the supra-lingual concept layer, and also addresses unclear words and non-confirmed gaps/non-gaps that are borderline cases. This validation is based on a discussion session with the language expert responsible for lexicon-level validation, proceeding concept by concept and case by case. A spreadsheet containing all new concepts and the determined items (words and gaps) to be examined is used. Its columns are the same as in the previous step, with two additional empty ones: the evaluation and the concept-level validator's comment. The following tasks are carried out:
1. _New concepts_: Validate all proposed new concepts in each subdomain by marking them up as correct, correct but not new (in case the supposedly new concepts already existed in the UKC), or not accepted (in case another concept already existed in the UKC to express the meaning, or the validator does not consider the new concept as a desirable suggestion for any other reason).
Figure 4: Flowchart of a new concept collection
2. _Unclear words_: Validate the correctness of unclear word cases considered in the border-area by the lexicon-level validator by marking them as correct or incorrect and writing a comment.
3. _Non-confirmed gaps/non-gaps_: Validate the word meanings whose status as lexical gaps or non-gaps has not been confirmed, by providing a judgment with a comment.
Correct new concepts are imported into the UKC by merging them with the supra-lingual conceptual layer. In contrast, not-accepted ones and those that are correct but not new are returned to the lexicon-level validator, who may in turn return them to the native speaker with a comment describing the problem to be addressed. In a new cycle, the new concepts modified by the native speaker are transferred back to this phase through the lexicon-level validator; the concept-level validator then reviews the updates and decides whether to finish the revision cycle by accepting or rejecting the new concepts or to issue a new cycle for further review, as shown in Figure 4. In addition, confirmed words and gaps output from this step are integrated with the language lexicon in the UKC, as shown in Figure 3.
## 5 Case Study on Diversity Across Arabic Dialects
This section demonstrates the use of the methodology described in Section 4 on kinship terminology from seven dialects of the Arabic language. Arabic is the official language of more than four hundred million native speakers in twenty-two countries in the Middle East and northern Africa. Classical Arabic or Modern Standard Arabic (MSA) refers to the standard form of the language used in academic writing, formal communication, classical poetry, and religious sermons (Elkateb et al., 2006). Surprising lexical diversity is manifested between Arabic dialects, as is evident in our study of seven of the twenty dialects spoken worldwide. The selected dialects are Egyptian, Moroccan, Tunisian, Algerian, Gulf, and South Levantine (two examples: Palestinian and Syrian). Let us take the example of the Gulf word _jail_J
In the contribution collection, a native speaker answers by filling in a lexical unit or gap in the empty slot of the row specified for each concept. Linguistic resources and Google Search are used to provide answers as precise as possible. For example, the Almaany dictionary7, Wiktionary8, and the _Figh AlArabiyya_ typology book (Muttagin, 2009) are employed in sequential steps to give a judgment on cousin words in Syrian. Additionally, counting the number of hits returned by the Google search engine is another helpful indicator: a high count of hits indicates that the searched word (e.g., the term meaning "_son of father's sister_", with 131.5 million hits) is a lexical unit in Syrian, whereas a low count indicates a lexical gap (e.g., the term meaning "_maternal cousin_", with 158 thousand hits). Google hits of other cousin terms are shown in Table 3. Since Arabic words can be written and read with or without diacritics (i.e., a "_fatha_" above a letter or a "_kasra_" under it), each word is typed in two forms. Note that the content of this table cannot be considered the only criterion for gap exploration, because word hits may include counts resulting from searches of the same word in other Arabic dialects.
Footnote 7: [http://www.almaany.com/thesaurus.php](http://www.almaany.com/thesaurus.php)
Footnote 8: [http://ar.wiktionary.org](http://ar.wiktionary.org)
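Because each Arabic word is queried both with and without diacritics, a helper along the following lines can be used to derive the two search forms; the stripped characters are the Arabic tashkeel marks in the Unicode range U+064B to U+0652, and the helper itself is our own sketch rather than part of the described tooling.

```python
import re

TASHKEEL = re.compile(r"[\u064B-\u0652]")   # fathatan ... sukun

def search_forms(word: str) -> tuple:
    """Return the word as written and its diacritic-free form."""
    return word, TASHKEEL.sub("", word)

print(search_forms("أَخ"))   # ('أَخ', 'أخ') -- 'brother' with and without the fatha
```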
### Experiment Results
The overall contribution collection effort resulted in 180 words, 1,108 lexical gaps, and 19 new concepts identified, formalized, and collected. Detailed statistics about the collected gaps and words are shown in Table 4. New concepts were identified in three subdomains: siblings, cousins, and grandchildren. The total number of new concepts, 19, is lower than the sum of new concepts per language due to overlaps across languages.
Following contribution collection, the two-step validation described in Section 4 was carried out by language experts; Table 5 reports the correctness of the native speakers' contributions as assessed by the validators. For example, Algerian words and gaps were checked against evidence from the dictionary _Dictionnaire arabe algerien9_ and from usage attested in Algerian TV films. Upon discussion between the validator and the participants, the mistakes made by the latter can be explained by misunderstandings of the meanings of certain concepts provided in MSA and English. The validator made sure to exclude or fix the mistakes, bringing the correctness of the final dataset closer to 100%.
Footnote 9: [https://www.lexilogos.com/arabe_algerien.htm](https://www.lexilogos.com/arabe_algerien.htm)
In this study, we use the UKC for creating the input dataset and the domain hierarchy and for storing and visualizing diversity data. Thus, the 19 new concepts were merged with the UKC by reconstructing the corresponding domain hierarchies at the supra-lingual concept layer. For example, the hierarchy of siblings was redesigned to contain five new brotherhood concepts and five new sisterhood concepts. For instance, in the Arabic-Egyptian lexicon, as shown in Figure 5, the word meaning "_breastfeeding brother_" is set up as a sub-node of a newly created brother concept, "_a male person who has the same father, mother, or both parents as another person or has the same breastfeeding woman_"; likewise, the words meaning "_paternal brother_" and "_maternal brother_" are inserted and connected to the half-brother concept. New concepts and lexicalizations are marked with white nodes and connected with blue lines.
Additionally, resulting lexical units and gaps were added into UKC lexicons. The website of the UKC provides several services for system users, such as browsable online access to database contents, source materials, and data visualization tools. The interactive exploration of linguistic diversity data in lexicons is the central feature of the website. The user
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Concept** & **With/Without Diacritics** & **Count of Hits** \\ \hline \end{tabular}
\end{table}
Table 3: Google search hits of Syrian cousin terms, each queried with and without diacritics.
can browse: (1) all meanings within a language of a word typed in by the user; and (2) lexicalizations and gaps of a concept in all languages contained in the database.
Figure 6 shows a screenshot of the concept exploration functionality, describing the concept meaning "_parent's father_". On the left-hand side of the screenshot, details are provided on the lexicalization of the concept in Arabic, such as synonymous words, a definition, and a part of speech. The middle part of the screenshot shows an interactive clickable map of all lexicons that either contain the concept or, on the contrary, lack it because their languages are known not to lexicalize it. The color-coded dots indicate the language family, while a black circled dot represents a lexical gap. This map presents an instant global typological overview of the concept selected; for instance, from Figure 6, one can see that most languages in Europe lexicalize the concept, while several languages of the Americas do not. Finally, the right-hand side shows the concept in the context of the concept hierarchy, depicted as an interactive graph: the concept, its parent and child concepts, and other lexical-semantic relations (such as metonymy and meronymy) are also presented when they exist. Note that the graph only shows a part of the complete hierarchy for usability reasons. Nevertheless, it is navigable and allows the exploration of the whole concept graph in the selected language.
As mentioned at the beginning of this section, the resulting Arabic dataset will be imported into the Arabic UKC, an instance of the UKC system whose top layer contains language-independent concepts and whose bottom layer contains twenty lexicons, one per Arabic dialect. A screenshot of the homepage of the Arabic UKC is shown in Figure 7.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Dialects** & **Words** & **Gaps w/o new concepts** & **New concepts** & **Gaps considering new concepts** \\ \hline Algerian & 28 & 156 & 10 & 165 \\ \hline Egyptian & 32 & 152 & 19 & 152 \\ \hline Moroccan & 22 & 162 & 10 & 169 \\ \hline Palestinian & 23 & 161 & 14 & 166 \\ \hline Syrian & 24 & 160 & 10 & 169 \\ \hline Tunisian & 23 & 161 & 2 & 178 \\ \hline Gulf & 28 & 156 & 14 & 169 \\ \hline
**Total** & **180** & **1108** & **19** & **1168** \\ \hline \end{tabular}
\end{table}
Table 4: The count of the diversity items collected and identified in the Arabic dialects.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Dialects** & \multicolumn{2}{|c|}{**Correctness of Native Speaker Contribution**} \\ \cline{2-3} & **Words** & **Gaps** \\ \hline Algerian & 85.71\% & 98.08\% \\ \hline Egyptian & 96.90\% & 97.37\% \\ \hline Moroccan & 95.83\% & 97.53\% \\ \hline Palestinian & 100\% & 98.76\% \\ \hline Syrian & 91.67\% & 95.00\% \\ \hline Tunisian & 95.65\% & 98.14\% \\ \hline Gulf & 100\% & 96.79\% \\ \hline
**Average** & **95.11\%** & **97.38\%** \\ \hline \end{tabular}
\end{table}
Table 5: Validator evaluation of words and lexical gaps by dialect.
### Discussion
The lexical diversity we observed across the seven dialects was higher than our original expectations, with 19 new concepts identified. Ten of these concepts are lexicalised in MSA, such as
Figure 8 shows the overlaps between pairs of Arabic dialects over the kinship domain. For example, the intersection of Egyptian and Gulf gives a shared coverage of 74.5%, while the overlap across all seven dialects is 47.1%. In the former case, the number of lexicalisations in Egyptian is 51 and in Gulf is 42, and 38 of these lexical units are included in both dialects; see the dataset uploaded to GitHub10. Formula 1 calculates the overlap between Egyptian and Gulf in the kinship domain (\(K\)) as follows:
Footnote 10: [https://github.com/HadiPTUK/kinship_dialect](https://github.com/HadiPTUK/kinship_dialect)
\[\mathrm{overlap}(K,\mathrm{Egyptian},\mathrm{Gulf})=\frac{|\mathrm{LexConcepts }(K,\mathrm{Egyptian})\cap\mathrm{LexConcepts}(K,\mathrm{Gulf})|}{\text{max}(| \mathrm{LexConcepts}(K,\mathrm{Egyptian})|,|\mathrm{LexConcepts}(K,\mathrm{ Gulf})|)}\]
\[\mathrm{overlap}(K,\mathrm{Egyptian},\mathrm{Gulf})=\frac{38}{\mathrm{max}( 51,42)}=\frac{38}{51}=74.5\%\]
More detail about the analysis of shared coverage between the rest of the Arabic dialects can be found in the same dataset uploaded to GitHub.
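The overlap measure translates directly into code. The sketch below also accepts more than two lexicons at once, which is our reading of how the all-dialect figures could be obtained, and the toy sets merely reproduce the Egyptian-Gulf counts quoted above.

```python
def overlap(*lexicalised_concept_sets):
    """Shared lexicalised concepts divided by the size of the largest set."""
    shared = set.intersection(*(set(s) for s in lexicalised_concept_sets))
    largest = max(len(s) for s in lexicalised_concept_sets)
    return len(shared) / largest

egyptian = set(range(51))        # 51 lexicalised kinship concepts
gulf = set(range(13, 55))        # 42 concepts, 38 of which are shared with Egyptian
print(f"{overlap(egyptian, gulf):.1%}")   # 74.5%
```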
We find these overlaps--e.g. an overlap of 59.5% between Gulf and Tunisian, or the overall overlap of 47.1% among all seven dialects--lower than our initial expectations on dialectal variations. Arab dialectologists justify such differences with two major factors: linguistic and religious influence (Zaidan and Callison-Burch, 2014). By linguistic influence, we refer to the historical interaction of language-speaker communities, which affects the lexicons. Examples are the Egyptian dialect influenced by the Coptic language (historically spoken by the Copts, starting from the third century AD
Figure 6: Exploring the concept meaning "parent's father" as lexicalized in the Arabic language (left), in the world (middle), and as part of the shared concept hierarchy (right).
Figure 7: Homepage of the Arabic UKC ongoing project.
in Roman Egypt) or the Levantine dialect influenced by the Western Aramaic, Canaanite, Turkish, and Greek languages. The Gulf dialect is one of the Peninsula groups, which was influenced by South Arabian Languages. Secondly, the religion of the speaker community also affects the lexicon. Religion is a sociolinguistic variable that shapes how Arabic is spoken. Religion in Arab countries is a matter of group affiliation and is not usually considered an individual choice: one is born a Muslim, Christian, Jew, or Druze, and this becomes a bit like one's ethnicity. So, for example, within the Egyptian speech community, one can find language mixing between Islamic and Christian terms, and the same in the Levantine community, which consists of a mixing of Muslims, Christians, Jews, and Druze. The Gulf communities, instead, mostly consist of Muslims (Al-Wer, 2008).
## 6 Case Study on Diversity Across Indonesian Languages
This section demonstrates the use of the methodology described in Section 4 on kinship terminology from three Austronesian languages from Indonesia: Indonesian, Javanese, and Banjarese. Contrary to the Arabic dialects in Section 5, these three languages are not mutually intelligible.
Indonesia is the fourth most populous country in the world, and it has more than 700 living languages (Eberhard et al., 2022). The national language spoken in Indonesia is Bahasa Indonesia/Indonesian language, which was decided in the historic moment of Youth's Pledge, October 28th, 1928. However, many Indonesians speak more than one language. For example, out of 198 million people that speak Indonesian, 84 million of them speak Javanese (Aji et al., 2022).
Even with this high number of speakers, the amount of natural language processing research on Indonesian languages is very low compared to other languages around the world. As of 2020, the count of published papers on the Indonesian language was lower than for languages with fewer speakers, such as Polish and Dutch (Aji et al., 2022). Not surprisingly, the amount of research on other languages of Indonesia (i.e., Banjarese and Javanese) is much lower still. It is therefore motivating to conduct this study, which discovers the richness of linguistic diversity across three Indonesian languages: standard Indonesian, Banjarese, and Javanese. In one semantic field, kinship, we have found that diversity is manifested in these languages; for example, the Javanese word _ponakan jaler_, meaning "_nephew_", is a lexical gap in Banjarese, and in the opposite direction, the Banjarese _gulu_, meaning "_parent's second eldest sibling_", is a gap in Javanese.
### Experiment Setup
As in the Arabic experiment, we use the UKC lexicon to create the input dataset of language-independent kinship concepts, and we use its data representation model to formalize the collected terms as well as the new concepts (not existing in the input dataset) identified in this experiment, as shown for the brotherhood categorization in the top layer of the UKC in Figure 1.
In this study, three native speakers (one per language), born and educated (high school level) within the speaker community, were recruited to contribute. The participants' linguistic backgrounds are listed below:
1. _Participant 1_: a native Indonesian speaker with good command of English, Javanese, and Banjarese.
Figure 8: The overlap (percentage of shared lexicalisations) for Arabic dialects.
2. _Participant 2_: a native Banjarese speaker with good command of Indonesian and English.
3. _Participant 3_: a native Javanese speaker with good command of Indonesian and English.
For each language, an experiment was carried out to identify words and gaps associated with the same 184 kinship concepts as in the Arabic study (see Table 2). For example, in Banjarese, the dictionary _Kamus Bahasa Banjar Dialek Hulu-Indonesia_ (Balai Bahasa Banjarmasin, 2008) and Google Search hits were used in subsequent steps to provide a precise answer on each concept from the given list of inputs. The same search steps were also followed by the Banjarese native speaker for the task of judging new concepts identified in the uncle/aunt subdomain. For instance, the Banjarese term _gulu_, expressing an uncle/aunt relationship with the meaning of _a parent's second eldest sibling_ and attested by the dictionary above, did not previously exist in the UKC, in the KinDiv dataset, or in Murdock (1970). The Indonesian and Javanese native speakers followed the same steps and used the dictionaries of Badan Pengembangan dan Pembinaan Bahasa (2017) and Utomo (2015), respectively, for judging terms and gaps identified in Indonesian and Javanese.
### Experiment Results
The overall contribution collection effort resulted in 41 words and 517 lexical gaps. Three new, yet unattested word meanings were also found and formalised as new concepts. All three are used in Banjarese in the uncle/aunt subdomain:
* _julak_, meaning _parent's eldest sibling_;
* _gulu_, meaning _parent's second eldest sibling_;
* _angah_ or _tangah_, meaning _parent's middle elder sibling_ (when the number of siblings is odd).
Statistics on the data collected for each language are shown in Table 6.
As in Arabic, a two-step validation was carried out in this study. The first step validated words and gaps contributed by native speakers, carried out by the fourth author, a native Indonesian speaker with a good command of all three languages. The second validation step was done on the concept level, performed by the second author, a lexical-semantic expert and UKC system manager for new concept validation. In this step, the new concepts were verified and approved to be added to the concept layer of the UKC.
Table 7 provides correctness results over native speaker contributions, provided by the validator. Upon discussion between the validator and the contributors, the mistakes made by the latter can be explained by misunderstandings of the meanings of certain concepts, provided in English. The validator made sure to exclude or fix the mistakes, bringing the correctness of the final dataset closer to 100%.
The produced kinship datasets from this experiment will be merged with the under-construction Indonesian UKC11, a diversity-aware lexicon for languages spoken in Indonesia, also imported into the main UKC database.
Footnote 11: [http://indonesia.ukc.datascientiia.eu/](http://indonesia.ukc.datascientiia.eu/)
Figure 9 shows how the UKC explores information about a specific Indonesian word; here, the screenshot provides information about the Indonesian word _saudara_, which means "_sibling_" in English. The left-hand side of the screenshot lists synonymous words (lemmas) and the definition of the typed word. The middle of the screenshot displays the map giving a global typological overview of the concept. Most languages do not lexicalize this concept, marked by the black-circled dots; only a few languages lexicalize it, such as Indonesian, Swedish, Ainu, and Malayalam, marked by white-circled dots. The right-hand side shows the lexico-semantic relations of the concept.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Languages** & **Words** & **Gaps w/o new concepts** & **New concepts** & **Gaps considering new concepts** \\ \hline Indonesian & 11 & 173 & 0 & 176 \\ \hline Javanese & 17 & 167 & 0 & 170 \\ \hline Banjarese & 12 & 172 & 3 & 172 \\ \hline
**Total** & 41 & 511 & 3 & 517 \\ \hline \end{tabular}
\end{table}
Table 6: The count of the diversity items collected and identified in the Indonesian languages.
The UKC lexicon is also equipped with several interactive visualization services that can be used to browse lexical units and gaps by domain in all supported languages. Figure 10 shows an example of using such services in visualizing the content of the grandparent subdomain in Indonesian.
### Discussion
More than 90% of our 184 initial kinship concepts were found to be gaps in the three Indonesian languages, as shown in Table 6. Using Formula 1, we calculated the overlaps between the Indonesian languages in terms of kinship lexicalisations, shown in Figure 11; for more details, see the dataset uploaded to the GitHub repository12. 35.3% of the concepts are lexicalised by all three Indonesian languages studied. The Javanese-Banjarese overlap is 52.9%, Javanese-Indonesian is 60%, and Banjarese-Indonesian is 41.2%. Even though all three languages belong to the Malayo-Polynesian branch of the Austronesian language family, Indonesian and Banjarese are considered Malay languages while Javanese is not, which is the first reason for this result. Furthermore, these languages are spoken on different islands of Indonesia: Javanese on Java Island, Banjarese in the southern part of Borneo Island, while the Indonesian language is based on Malay, spoken on Sumatra Island (Sneddon, 2003). This geographical barrier restricts interactions between speakers, and each language has developed within its own speech community.
Footnote 12: [https://github.com/HadiPTUK/kinship_dialect](https://github.com/HadiPTUK/kinship_dialect)
Finally, using Formula 1, we computed the overlaps between the Arabic dialects and the Indonesian languages. Figure 12 shows that the ten languages together cover only 3.9% of the concepts, and the most similar language pair, namely Egyptian-Indonesian, is merely 5.9% similar. For researchers in ethnography or comparative linguistics, the observation of such pronounced levels of cross-lingual and cross-cultural diversity may not come as a surprise, as major variations in kin patterns are well known in these domains. On the other hand, we believe that beyond these narrow fields of research, there is a general lack of understanding of the depth of diversity in how, through languages, people describe and interpret the world. Most computational linguists and engineers who build language processing systems, as well as the
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{**Languages**} & \multicolumn{2}{c|}{**Correctness of Native Speaker Contribution**} \\ \cline{2-3} & **Words** & **Gaps** \\ \hline Indonesian & 90.91\% & 98.27\% \\ \hline Javanese & 94.44\% & 95.78\% \\ \hline Banjarese & 91.7\% & 97.67\% \\ \hline
**Average** & **92.35\%** & **97.24\%** \\ \hline \end{tabular}
\end{table}
Table 7: Validator evaluation of words and lexical gaps by language.
Figure 9: Exploring the concept of _saudara_ as lexicalized in the Indonesian language (left), in the world (middle), and as part of the shared concept hierarchy (right).
users who trust such systems for their daily activities, do not suspect the breadth of the mental divide across languages that language applications, such as machine translation systems, are meant to bridge. We think that quantified measures, such as the simple measure of overlap introduced in Section 5, can be useful to improve our qualitative grasp on diversity, which we consider a promising direction for future research.
Table 8 includes statistics of collected words and gaps by domain across Arabic and Indonesian languages. The results show that only three words in the domain of cousins are identified in the Indonesian languages, while in Egyptian, 16 words are used around the concept of the cousin.
Figure 11: The number of words in the intersection of Indonesian languages according to shared meaning.
Figure 10: Interactive browser tool showing lexical units and gaps for the grandparent subdomain in Indonesian.
## 7 Related Work
Ethnologists and linguists have for a long time studied how family structures map to kinship terminology across languages and social groups. The most famous and comprehensive ethnographic study on kin term patterns is that of Murdock (1970), upon which our work also indirectly relied: our cross-lingual formalisation of kin terms is based on the one provided by the KinDiv resource, itself in part derived from Murdock's data. KinDiv covers 699 languages and is a computer-processable database that can also be exploited for applications in computational linguistics. Our results provide linguistic evidence in seven Arabic dialects and three Indonesian languages that do not figure in these resources.
The exploration of kin terminology and the building of large-scale databases on the topic has also been the subject of more recent efforts--we only cite two examples here. The AustKin project13 has produced a large-scale database on kin terms in hundreds of indigenous Australian languages. The recent Kinbank database (Passmore et al., 2023) is a comprehensive resource on kinship terminology, covering over 1,173 languages, with a broad coverage of kinship subdomains. As Kinbank was released after the initial submission of our paper, we did not rely on it for our work. We consider our research as complementary to Kinbank: concentrating on a relatively low number of dialects and
Figure 12: The number of words in the intersection of Indonesian and Arabic languages according to shared meaning.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Domains** & **Words** & **Gaps** \\ \hline Grandparents & 21 & 169 \\ \hline Grandchildren & 19 & 251 \\ \hline Siblings & 37 & 173 \\ \hline Uncle/Aunt & 44 & 226 \\ \hline Nephew/Niece & 33 & 297 \\ \hline Cousins & 67 & 503 \\ \hline
**Total** & 221 & 1619 \\ \hline \end{tabular}
\end{table}
Table 8: Statistics of the collection of diversity data by domain.
languages, our results could, in principle, be integrated into Kinbank in order to extend its coverage. And vice versa, we see potential in using Kinbank data in order to cross-validate and possibly extend the Indonesian terms we collected (as the three Indonesian languages of our study are also covered by Kinbank). There is, however, an important methodological difference between our way and Kinbank's way of representing terms: Kinbank does not explicitly indicate lexical gaps. For example, our work considers the concept of _son of father's brother as pronounced by a male speaker_ to be a lexical gap in Javanese, while Kinbank maps the Javanese term _sedulur misan_, simply meaning _cousin_, to this and 95 other meanings. Our work, instead, identifies the Javanese term with the general meaning of _cousin_ and considers all other (more specific) cousin terms to be lexical gaps. This distinction is useful in comparative linguistics and in cross-lingual applications, where the explicit indication of the lack of a precise meaning equivalence can be exploited.
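A hedged sketch of how such explicit gap information could be exploited downstream: when a concept is marked as a gap in the target lexicon, a cross-lingual application can deliberately back off to the closest lexicalised hypernym instead of silently treating a broader word as an exact equivalent. The Javanese data points follow the example above; the concept identifiers and the helper itself are illustrative.

```python
hypernym_of = {"kin:son_of_fathers_brother_ms": "kin:cousin"}   # child concept -> parent concept
javanese_words = {"kin:cousin": "sedulur misan"}
javanese_gaps = {"kin:son_of_fathers_brother_ms"}

def lexicalise(concept_id, words, gaps):
    """Return a lexicalisation, backing off along hypernyms for explicit gaps."""
    while concept_id is not None:
        if concept_id in words:
            return words[concept_id]              # exact or hypernym-level rendering
        if concept_id not in gaps:
            return None                           # no evidence either way: give up
        concept_id = hypernym_of.get(concept_id)  # explicit gap: climb to the parent meaning
    return None

print(lexicalise("kin:son_of_fathers_brother_ms", javanese_words, javanese_gaps))
# -> 'sedulur misan', a correct but less specific rendering, knowingly chosen because
#    the more specific concept is recorded as a gap in Javanese
```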
Concepticon (List et al., 2016) is 'a resource for linking concept lists' frequently used in comparative linguistics. The _concept sets_ of Concepticon serve the same purpose as the supra-lingual concepts of the UKC in our study, namely to provide meaning-based mappings among lists of terms (aka _concept lists_ in Concepticon) across languages. As of mid-2023, Concepticon consists of nearly 4,000 concept sets, principally targeting core vocabularies (basic-level categories) that are the main subject of study of historical and comparative linguistics. Concepticon is under continuous development and has more recently evolved from a flat list of meanings to a hierarchy with broader-narrower relations. At the time of writing, the kinship domain seems to be partially represented in Concepticon: while sibling or grandparent relations are widely covered, fine-grained cousin relationships are mostly missing from it. The UKC, which contains over 100,000 supra-lingual concepts and a wide range of lexical and lexico-semantic relations, was a more suitable resource for our study due to its more complete coverage of the kinship domain and its explicit support for representing term untranslatability via lexical gaps.
Multilingual computational applications being at the core of our focus, we also review relevant resources from computational linguistics. For NLP applications, the most popular and widely-known representation of lexico-semantic knowledge is that of _wordnets_ that follow the general structure of the original English _Princeton WordNet_ (Miller, 1995). The _wordnet expansion_ approach by Fellbaum and Vossen (2012)--an expert-driven lexicon translation effort--is frequently used to produce new wordnets for lower-resourced languages: this approach consists of 'translating' (i.e. finding lexicalizations for) English WordNet concepts ('synsets' in wordnet terminology) into the target language. While this is a straightforward approach that produces resources that remain cross-lingually linked, its downside is that the translation approach cannot involve concepts and words specific to the target language and not present in the source language (which in most cases is English). In cases of diverse conceptualisations of the world, the translation approach often results in incorrect approximations. To take the example of Arabic, both versions of the Arabic Wordnet (Elkateb et al., 2006; Abouenour et al., 2013) map the English synset of _uncle_ ("_the brother of your father or mother; the husband of your aunt_") to an Arabic synset which means "_the brother of your father._"
A similar situation is observed for Indonesian. As far as we know, the only Indonesian Wordnet currently accessible is Bahasa Wordnet--a bilingual Wordnet for standard Indonesian and Malay languages (Noor et al., 2011). It was formed by merging three different wordnets (one in Indonesian and two in Malay) developed mainly by the same expansion approach from PWN. Due to this approach, many English words that have no equivalents in Indonesian are incorrectly mapped, resulting in meaning loss. For example, in Bahasa Wordnet, the English word _sister_, which means "_a female person who has the same parents as another person_," was mapped to the Indonesian word _kakak_ which means "_elder sibling._"
Finally, we mention MultiWordNet as an early effort at improving the representation of linguistic diversity in multilingual lexical databases (Pianta et al., 2002). It is a multilingual lexicon that was built using the _merge_ method which, contrary to the translation-based expand approach presented above, maps together existing high-quality bilingual dictionaries. MultiWordNet explicitly represents lexical gaps in its Italian and Hebrew wordnets: about 1,000 in Italian and about 300 in Hebrew (Bentivogli and Pianta, 2000; Ordan and Wintner, 2007). MultiWordNet, however, is a discontinued effort that does not cover the kinship domain and was thus not suitable for our purposes.
The methodology we present in Section 4 follows neither the expansion nor the merge approach but a third one, more adapted to diversity-aware lexicography: our starting point is a supra-lingual, diversity-aware conceptualization of the domain of study (kinship in our case). The task of _contribution collection_ is performed by native speakers with respect to the supra-lingual concept hierarchy based on evidence from comparative linguistics and covering a wide range of languages. While there is no guarantee that our initial conceptualization is complete--indeed, it was not the case in our study--it is less biased towards the concepts of a single language and speaker community than the expansion approach.
## 8 Conclusions and Future Work
Our paper formally captures lexical diversity across languages and dialects by representing language- or dialect-specific concepts and linguistic gaps. It introduces a systematic method to produce such data in a human-based manner from
one semantic domain rather than from general domains, unlike the efforts at covering the WordNet domains (Magnini and Cavaglia, 2000) that were conducted in building wordnets such as the Mongolian (Batsuren et al., 2019) and Unified Scottish Gaelic (Bella et al., 2020) wordnets and MultiWordNet (Pianta et al., 2002).
The method is verified through two large-scale case studies on kinship terminology, a domain known to be diverse across languages and cultures: one case study deals with seven Arabic dialects, the other with three Indonesian languages. The experiments show that our method outperforms the existing methods in terms of the quantity of explored gaps and words and the quality of results. Overall, 1,619 gaps and 221 words were identified in 10 languages and dialects. Moreover, 22 new word meanings, with respect to the imported list of language-independent concepts from the UKC, were explored in this research.
In future work, we plan to automate the method presented in this paper and apply it to new languages, such as the remaining Arabic dialects and other Indonesian languages, as well as to new domains that are known to be diverse, such as body parts, food, color, or visual objects (Giunchiglia and Bagchi, 2021; Giunchiglia et al., 2023).
Finally, diversity-aware lexicons such as the UKC (which includes our produced datasets) provide essential information to cross-lingual applications, such as multilingual NLP tasks or cross-lingual language models. In the future, we plan to use this resource in implementing one such application, i.e., machine translation.
## Conflict of Interest Statement
The authors declare no conflict of interest.
## Author Contributions
FG and GB conceptualized and supervised the study. GB and HK imported and formatted the dataset of inputs. HK wrote the original manuscript draft and performed the Arabic experiments. AF and HK validated the collected Arabic data at the lexicon level. SD performed the Indonesian experiments and validated the results at the lexicon level. GB validated the identified diverse data at the concept level. FG, GB, AF, and HK analyzed the Arabic and Indonesian data. FG, GB, AF, SD, and HK reviewed and edited the manuscript. All authors contributed to the research and approved the submitted version.
## Acknowledgments
We thank the University of Trento and Palestine Technical University--Kadoori for their support.
## Data Availability Statement
The diversity-aware datasets of the kinship domain generated and analyzed for this study can be found in the GitHub repository ([https://github.com/HadiPTUK/kinship_dialect](https://github.com/HadiPTUK/kinship_dialect)).
|
2301.07528 | Quantum-inspired tensor network for Earth science | Deep Learning (DL) is one of many successful methodologies to extract
informative patterns and insights from ever increasing noisy large-scale
datasets (in our case, satellite images). However, DL models consist of a few
thousand to millions of training parameters, and these training parameters
require tremendous amount of electrical power for extracting informative
patterns from noisy large-scale datasets (e.g., computationally expensive).
Hence, we employ a quantum-inspired tensor network for compressing trainable
parameters of physics-informed neural networks (PINNs) in Earth science. PINNs
are DL models penalized by enforcing the law of physics; in particular, the law
of physics is embedded in DL models. In addition, we apply tensor decomposition
to HyperSpectral Images (HSIs) to improve their spectral resolution. A
quantum-inspired tensor network is also the native formulation to efficiently
represent and train quantum machine learning models on big datasets on GPU
tensor cores. Furthermore, the key contribution of this paper is twofold: (I)
we reduced a number of trainable parameters of PINNs by using a
quantum-inspired tensor network, and (II) we improved the spectral resolution
of remotely-sensed images by employing tensor decomposition. As a benchmark
PDE, we solved Burger's equation. As practical satellite data, we employed HSIs
of Indian Pine, USA and of Pavia University, Italy. | Soronzonbold Otgonbaatar, Dieter Kranzlmüller | 2023-01-15T08:35:37Z | http://arxiv.org/abs/2301.07528v1 | # Quantum-Inspired Tensor Network for Earth Science
###### Abstract
Deep Learning (DL) is one of many successful methodologies to extract informative patterns and insights from ever increasing noisy large-scale datasets (in our case, satellite images). However, DL models consist of a few thousand to millions of training parameters, and these training parameters require tremendous amount of electrical power for extracting informative patterns from noisy large-scale datasets (e.g., computationally expensive). Hence, we employ a quantum-inspired tensor network for compressing trainable parameters of physics-informed neural networks (PINNs) in Earth science. PINNs are DL models penalized by enforcing the law of physics; in particular, the law of physics is embedded in DL models. In addition, we apply tensor decomposition to HyperSpectral Images (HSIs) to improve their spectral resolution. A quantum-inspired tensor network is also the native formulation to efficiently represent and train quantum machine learning models on big datasets on GPU tensor cores. Furthermore, the key contribution of this paper is twofold: (I) we reduced a number of trainable parameters of PINNs by using a quantum-inspired tensor network, and (II) we improved the spectral resolution of remotely-sensed images by employing tensor decomposition. As a benchmark PDE, we solved Burger's equation. As practical satellite data, we employed HSIs of Indian Pine, USA and of Pavia University, Italy.
Soronzonbold Otgonbaatar, Dieter Kranzlmüller (German Aerospace Center, Ludwig-Maximilians-Universität Munich)
Tensor decomposition, quantum-inspired tensor decomposition, quantum-inspired machine learning.
## 1 Introduction
Deep Learning (DL) is a machinery for extracting most informative patterns, insights from large-scale data, and apply this knowledge to make predictions [1]. DL models currently have been outperforming conventional techniques and methods in science and engineering, even in remote sensing and Earth science [2, 3, 4]. However, DL models compose of a huge number of parameters, making their interpretation and predictions on large-scale data difficult. Their energy requirements also extremely limit their scalability (or computationally expensive) [5]. Hence, the authors of the articles [6, 7, 8] utilized a quantum-inspired tensor network to compress the parameters (e.g., hidden layers) of DL models and to decompose data tensors in very small factor matrices. Here, tensors are multidimensional arrays which can generalize vectors and matrices. A quantum-inspired tensor network can compress the training parameters of DL models and decompose data tensors in a small number of factor matrices. It is also widely used to represent quantum Machine Learning models as tensor-networks, which can be efficiently trained on big real-world datasets on GPU tensor cores [9].
Physics-Informed Neural Networks (PINNs) are DL models (e.g., Neural Networks), whose training parameters are penalized by enforcing the law of physics [10]; namely, the law of physics is embedded in Neural Networks (NNs). Moreover, PINNs can be utilized to compute and analyse computationally expensive Partial Differential Equations (PDEs) when data is of limited quantity and quality [11]. However, PINNs are still computationally expensive for obtaining solutions to PDEs in Earth science.
Remotely-sensed datasets are data tensors \(\mathcal{X}\in\mathbb{R}^{I_{1}\times\cdots\times I_{n}}\) which are so complex and diverse that they cannot be easily classified and analyzed even by using DL models. In particular, these datasets are characterized by not only volume but also another so-called "4V" features (Volume, Variety, Veracity, and Velocity) [12].
The key contribution of this paper is twofold. First, we reduced the number of trainable parameters of DL models (i.e., PINNs) by using the quantum-inspired tensor network; the compressed DL models can also be applied to analyse and classify big real-world datasets, as shown in the article [7]. Second, we improved the spectral resolution of remotely-sensed images by employing tensor decomposition. As practical satellite data, we employed HSIs of Indian Pine, USA and of Pavia University, Italy. As a PDE, we considered Burger's equation.
Figure 1: Satellite datasets: [Left] HSIs of Indian Pine, USA and [Right] of Pavia University, Italy
## 2 Our Datasets
We use practical satellite datasets and treat them as 3rd-order data tensors. In particular, the HSI of Indian Pine is a data tensor in \(\mathbb{R}^{240\times 240\times 200}\) with 16 classes, and the HSI of Pavia University is a data tensor in \(\mathbb{R}^{610\times 340\times 103}\) with 9 classes (see Fig. 1).
## 3 Our Methodology
Remotely sensed images can be viewed as 3rd-order data tensors \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\). The 3rd-order data tensors can be decomposed in factor matrices by using so-called CANDECOMP/PARAFAC (CP)-decomposition [6]:
\[\mathcal{X}=\sum_{r=1}^{R}\mathbf{a}_{r}\circ\mathbf{b}_{r}\circ\mathbf{c}_{r}, \tag{1}\]
where \(R\), called the rank, is a positive integer, "\(\circ\)" denotes the outer product, and \(\mathbf{a}_{r}\in\mathbb{R}^{I_{1}}\), \(\mathbf{b}_{r}\in\mathbb{R}^{I_{2}}\), and \(\mathbf{c}_{r}\in\mathbb{R}^{I_{3}}\) are the columns of the factor matrices (see Fig. 2 [Top]).
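For concreteness, the snippet below sketches how such a CP-decomposition and its reconstruction could be computed. The paper does not state which software was used; we assume the open-source TensorLy library here, and the tensor shape and rank are illustrative stand-ins rather than the values used for the HSIs.

```python
# Illustrative sketch only: TensorLy is our assumed library choice, and the tensor
# shape/rank below are toy values, not the HSI sizes or the rank used in the paper.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

X = np.random.rand(64, 64, 50)        # stand-in for a small HSI cube (rows x cols x bands)
cp = parafac(tl.tensor(X), rank=10)   # factor matrices A (64xR), B (64xR), C (50xR)
X_hat = tl.cp_to_tensor(cp)           # reconstruction from the rank-R CP model

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.4f}")
```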
Another commonly used quantum-inspired tensor network is Tensor Train (TT)-decomposition, called also Matrix Product State (MPS) in quantum physics [13]. TT-decomposition expresses a 3rd-order tensor as core tensors and factor matrices:
\[\mathcal{X}=\mathbf{A}\times_{3}^{1}\mathbf{G}^{(2)}\times_{3}^{1}\mathbf{B}, \tag{2}\]
where \(\mathbf{G}^{(2)}\in\mathbb{R}^{R_{1}\times I_{2}\times R_{2}}\) is a core tensor, \(\mathbf{A}\) and \(\mathbf{B}\) are factor matrices, and \(\times_{3}^{1}\) denotes a mode-\((k,l)\) contracted product (here with \(k=3\) and \(l=1\)).
TT-decomposition can compress DL models, and the compressed DL models can predict classes with similar accuracy to their non-compressed counterparts [7] (see Fig. 2 [Bottom]). In addition, TT-decomposition is widely employed to efficiently simulate quantum circuits on conventional computers. Hence, TT-decomposition has been applied to design and train quantum-inspired machine learning models on large-scale datasets on GPU tensor cores [14, 15, 16].
## 4 Our Experiment
### Contribution I: compressing PINNs
We represented a solution \(u=u(t,x)\) to 1D Burger's equation by an NN [10]. In mathematical form, 1D Burger's equation is
\[\begin{split}& u_{t}+uu_{x}-(0.01/\pi)u_{xx}=0,\quad t\in[0,1],\\ & u(0,x)=-\sin(\pi x),\\ & u(t,-1)=u(t,1)=0.\end{split} \tag{3}\]
Figure 3: A solution to Burger’s equation (blue is an exact solution, and red is a predicted solution): [Top] The original PINN, and [Bottom] The compressed PINN
Figure 2: The two contributions of this paper in pictorial representation: [Top] Quantum-inspired tensor network (decomposition) for improving spectral resolution of real-world noisy data tensor, and [Bottom] Quantum-inspired tensor network for compressing Physics-Informed Neural Networks, which can be efficiently simulated on GPU tensor cores.
When we used an NN with \(8\) hidden layers, where each layer comprises \(100\) neurons, its trainable parameters amount to \(71,101\). We reduced these \(71,101\) parameters to \(32,701\) parameters by compressing the odd-numbered hidden layers using the TT-decomposition (see Fig. 2 [Bottom]) [7]. We found a solution \(u\) to the Burger's equation using both the original and the compressed PINN. The compressed PINN generated a solution to the Burger's equation with accuracy comparable to that of the original PINN, while occupying a smaller parameter space (see Fig. 3). More importantly, the compressed NNs can also be utilized to analyse and classify real-world datasets, as shown in the articles [7, 17].
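The following self-contained NumPy sketch illustrates the underlying idea of compressing one dense hidden layer with a truncated TT (tensor-train) factorization. The reshaping into a 4th-order tensor and the TT rank below are our own illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch (assumptions: a 100x100 layer reshaped to 10x10x10x10, TT rank 8).
import numpy as np

def tt_svd(tensor, max_rank):
    """Sequential truncated SVDs producing TT cores of shape (r_{k-1}, n_k, r_k)."""
    shape = tensor.shape
    cores, r_prev = [], 1
    mat = np.asarray(tensor).reshape(-1)
    for k in range(len(shape) - 1):
        mat = mat.reshape(r_prev * shape[k], -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_k = min(max_rank, len(S))
        cores.append(U[:, :r_k].reshape(r_prev, shape[k], r_k))
        mat = S[:r_k, None] * Vt[:r_k]          # carry the remainder to the next step
        r_prev = r_k
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

W = np.random.randn(100, 100)                   # stand-in for one hidden-layer weight matrix
cores = tt_svd(W.reshape(10, 10, 10, 10), max_rank=8)
print("dense parameters:", W.size, "| TT parameters:", sum(c.size for c in cores))
```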
### Contribution II: decomposing real-world data tensors in factor matrices
We decompose the two practical HSIs shown in Fig. 1 into a very small number of factor matrices by using the CP-decomposition in Eq. (1) to improve their spectral resolution; we illustrate our method for decomposing these HSIs in Fig. 2 [Top]. In our experiment, we set the rank \(R\) of the CP-decomposition to 145. For the Indian Pine HSI, the decomposition time was \(0.1711\) seconds, the compression ratio was \(60\), and the R-squared value between the raw and the decomposed Indian Pine HSI was \(0.9959\). For the Pavia University HSI, the decomposition time was \(1.1013\) seconds, the compression ratio was \(140\), and the R-squared value between the raw and the decomposed Pavia University HSI was \(0.9450\). These results show that tensor decomposition improved the spectral resolution of the HSIs while, at the same time, allowing the HSIs to be stored efficiently on conventional storage devices. We present some visual examples of our findings in Fig. 4.
## 5 Conclusion
This paper focused on designing and applying a quantum-inspired tensor network to DL models and real-world data tensors. Our contribution is twofold: (I) We reduced the parameters of a DL model by compressing them with TT-decomposition. As a DL model, we utilized a physics-informed neural network for finding a solution to 1D Burger's equation. The compressed model generates solutions to 1D Burger's equation with accuracy comparable to that of its original counterpart. (II) We improved the spectral resolution of hyperspectral images (i.e., data tensors) by decomposing them into sparse factor matrices through CP-decomposition. The decomposed data tensors are represented by sparse tensors, while the decomposition time was extremely small (around 1 second). Additionally, we can store these decomposed images (i.e., sparse tensors) efficiently and securely on distributed storage devices thanks to their sparse factor matrices. As practical HSIs, we used the HSIs of Indian Pine, USA and of Pavia University, Italy.
As future and ongoing work, we plan to design quantum-inspired machine learning models for data-driven and model-driven practical problems. In addition, we plan to develop and analyse DL models supported by quantum tensor networks [14, 15, 11, 16].
|
2310.06177 | DockGame: Cooperative Games for Multimeric Rigid Protein Docking | Protein interactions and assembly formation are fundamental to most
biological processes. Predicting the assembly structure from constituent
proteins -- referred to as the protein docking task -- is thus a crucial step
in protein design applications. Most traditional and deep learning methods for
docking have focused mainly on binary docking, following either a search-based,
regression-based, or generative modeling paradigm. In this paper, we focus on
the less-studied multimeric (i.e., two or more proteins) docking problem. We
introduce DockGame, a novel game-theoretic framework for docking -- we view
protein docking as a cooperative game between proteins, where the final
assembly structure(s) constitute stable equilibria w.r.t. the underlying game
potential. Since we do not have access to the true potential, we consider two
approaches - i) learning a surrogate game potential guided by physics-based
energy functions and computing equilibria by simultaneous gradient updates, and
ii) sampling from the Gibbs distribution of the true potential by learning a
diffusion generative model over the action spaces (rotations and translations)
of all proteins. Empirically, on the Docking Benchmark 5.5 (DB5.5) dataset,
DockGame has much faster runtimes than traditional docking methods, can
generate multiple plausible assembly structures, and achieves comparable
performance to existing binary docking baselines, despite solving the harder
task of coordinating multiple protein chains. | Vignesh Ram Somnath, Pier Giuseppe Sessa, Maria Rodriguez Martinez, Andreas Krause | 2023-10-09T22:02:05Z | http://arxiv.org/abs/2310.06177v1 | # DockGame: Cooperative Games for
###### Abstract
Protein interactions and assembly formation are fundamental to most biological processes. Predicting the assembly structure from constituent proteins - referred to as the protein docking task - is thus a crucial step in protein design applications. Most traditional and deep learning methods for docking have focused mainly on binary docking, following either a search-based, regression-based, or generative modeling paradigm. In this paper, we focus on the less-studied _multimeric_ (i.e., two or more proteins) docking problem. We introduce DockGame, a novel game-theoretic framework for docking - we view protein docking as a _cooperative game_ between proteins, where the final assembly structure(s) constitute stable _equilibria_ w.r.t. the underlying game potential. Since we do not have access to the true potential, we consider two approaches - i) learning a surrogate game potential guided by physics-based energy functions and computing equilibria by simultaneous gradient updates, and ii) sampling from the Gibbs distribution of the true potential by learning a diffusion generative model over the action spaces (rotations and translations) of all proteins. Empirically, on the Docking Benchmark 5.5 (DB5.5) dataset, DockGame has much faster runtimes than traditional docking methods, can generate _multiple_ plausible assembly structures, and achieves comparable performance to existing binary docking baselines, despite solving the harder task of coordinating multiple protein chains.
## 1 Introduction
Protein function is often dictated by interactions with other proteins, forming assemblies that regulate most biological processes. Predicting these assembly structures from constituent proteins - referred to as the (multimeric) protein docking task - is crucial in protein engineering applications. Traditional and deep learning methods for protein docking have largely focused on binary docking (i.e., two proteins). These methods either use a scoring function to rank millions of candidate assemblies, or model assembly structure prediction as a regression (Ganea et al., 2022) or generative modeling problem (Ketata et al., 2023) over the space of rotations and translations.
Only a handful of approaches exist for the harder _multimeric_ docking task (i.e. involving more than two proteins). A key challenge here is that the space of possible actions - rotational, translational and _optionally_ conformational changes, grows combinatorially with the number of proteins. Traditional methods for this task therefore first generate pairwise docking candidates and then combine them using combinatorial optimization algorithms, making them very inefficient and slow. Deep learning approaches for multimeric docking, while faster, still utilize some aspect of pairwise decomposition in their architectures - either in generating multiple sequence alignments (Evans et al., 2021), or pairwise roto-translation predictions (Ji et al., 2023). Moreover, they are both limited to predicting a single assembly structure1 (while in fact there might be multiple plausible structures).
Footnote 1: AlphaFold-Multimer uses an ensemble of 5 models, each with a single assembly structure prediction.
In this work, we introduce DockGame, a novel game-theoretic framework for the rigid multimeric docking problem: we view docking as a game between protein chains or their relevant subassemblies (smaller assemblies made up of chains), where the final assembly structure(s) constitute stable _equilibria_. To the best of our knowledge, this is the first approach connecting game theory with protein docking. In particular, we model docking as a _cooperative_ game, where all proteins have aligned interests (e.g., improved assembly energetics) described through an underlying potential, and the action space for each protein corresponds to the continuous space of rotations and translations. Intuitively, this allows us to simultaneously model interactions between all proteins, and decouple the combinatorial action space into individual ones.
In practice however, we do not have access to the true underlying potential. To tackle this problem, we propose two approaches which we summarize in Figure 1. Our first approach employs _supervised learning_, learning a differentiable analogue of traditional (and black-box) physics-based scoring functions. This allows us to compute equilibria via a gradient-based scheme over the space of roto-translations w.r.t the learnt potential. While simple, efficient, and generalizable to various objectives, it crucially relies on the supervision signal, which may not accurately reflect the game structure underlying observed assemblies. Our second approach, instead is _self-supervised_, learning solely from assembly structures in the data by interpreting them as samples from the Gibbs distribution of the underlying potential. To this end, we then define a diffusion generative model (DGM) over the joint roto-translation spaces of all players (relative to a fixed player), which can then be trained with standard denoising score-matching objectives.
To summarize, we make the following contributions:
* We formulate (rigid) multimeric protein docking as a cooperative game between proteins, where the final assembly structure(s) constitute equilibria w.r.t. an underlying potential. To the best of our knowledge, this is the first work connecting game theory to protein docking. The game-theoretic framework also allows us to compute multiple equilibria.
* To compute plausible equilibria, we propose two different approaches. Our first approach utilizes supervision from scoring functions, e.g. physics-based ones, to learn a surrogate game potential and computes equilibria via gradient-based learning updates. Our second approach, instead, is self-supervised in that it only uses observed assembly data to train a DGM that can efficiently sample from the distribution of game equilibria.
* We evaluate DockGame on the Docking Benchmark 5.5 (DB5.5) dataset. While DB5.5 has been traditionally used for binary docking evaluation, many proteins in the dataset contain multiple chains, allowing us to evaluate multimeric docking. Despite the harder task, DockGame generates multiple plausible assemblies, and achieves comparable (or better) performance on the commonly used Complex-RMSD (C-RMSD) and TM-score metrics, while exhibiting orders of magnitude faster runtimes than traditional docking methods.
Figure 1: Overview of DockGame. We introduce a game-theoretic framework for rigid multimeric docking: we view assembly structures as equilibria w.r.t. the potential of an underlying cooperative game. In practice, however, we do not have access to such a potential function. We propose two approaches to tackle this. _Left_: We learn a surrogate potential using supervision from traditional physics-based scoring functions, and compute equilibria via a gradient-based scheme. _Right_: Viewing assembly structures as samples from the Gibbs distribution of the underlying potential, we learn a diffusion generative model over the roto-translation action spaces of all game agents.
## 2 Related Work
Protein DockingTraditional protein docking methods (Chen et al., 2003; Mashiach et al., 2010; Kozakov et al., 2017; de Vries et al., 2015; Yan et al., 2020) have largely focused on the binary docking setting. These methods typically generate a large number (possibly millions) of candidate complexes, which are then ranked using a scoring function. Finally, top-ranked candidates can undergo further refinement using other energetic models. Deep learning approaches for protein docking have also mainly focused on the rigid binary docking setting, learning the best roto-translation of one protein (ligand) relative to the other protein (receptor) (Ganea et al., 2022), or a distribution thereof (Ketata et al., 2023). Different from all these works, we focus on the less explored (rigid) multimeric docking setting, where the goal is to predict the assembly structure of \(\geq 2\) proteins.
A key challenge in the multimeric docking task, is to efficiently explore the combinatorial space of possible relative motions between proteins. Traditional approaches, such as Multi-LZerD (Esquivel-Rodriguez et al., 2012), achieve this by generating pairwise docking candidates, which are then combined with combinatorial assembly algorithms, with a similar algorithm in more recent works (Bryant et al., 2022). The only deep learning approaches for multimeric docking, AlphaFold-Multimer(Evans et al., 2021), and SynDock(Ji et al., 2023), also adopt pairwise decompositions, either in generating paired multiple sequence alignments, or in predicting pairwise roto-translations. In contrast, we propose a novel game-theoretic framework for multimeric docking based on cooperative games, and compute assembly structures (equilibria) efficiently using gradient-based schemes that decouple the search space over individual proteins. Additionally, game-theory offers a natural way to compute _multiple_ equilibria, while previous multimeric docking methods are limited to predicting a single assembly structure.
Diffusion Generative Models for Proteins and MoleculesDiffusion generative models have become increasingly popular in the molecular modeling community, used in tasks like conformer generation (Xu et al., 2022; Jing et al., 2022), molecular and protein docking (Qiao et al., 2022; Corso et al., 2023; Ketata et al., 2023), protein backbone generation (Watson et al., 2022; Yim et al., 2023), and generative structure prediction (Jing et al., 2023). The closest related works to ours are (Corso et al., 2023; Ketata et al., 2023), of which our diffusion generative model for the cooperative game can be interpreted as a multi-agent extension. To the best of our knowledge, this is the first approach of a "multi-agent" diffusion generative model in the context of multimeric docking (and cooperative games). In this regard, our score network is parameterized such that it can handle a varying number of proteins during training and inference - a fact of potentially more general interest.
## 3 Preliminaries and Background
In this section, we introduce relevant preliminaries and background that form the basis for the remainder of the paper. Throughout, we use the term protein to denote both individual protein chains, and any subassemblies made up of protein chains. We also use \([N]\) to denote the set \(\{1,2,\ldots,N\}\).
Problem Definition.In the multimeric protein docking task, the goal is to predict the protein assembly structure, given 3D structures of the constituent protein chains. The 3D structure of each protein chain \(i\in[N]\) is described by a point cloud \(\mathbf{X}_{i}\in\mathbb{R}^{n_{i}\times 3}\) where \(n_{i}\) is the number of chain's residues, and the residue position corresponds to the coordinates of its C\(\alpha\) atom.
Game-Theoretic Concepts.Game theory provides a natural framework to model and analyze systems composed of multiple self-interested agents. Formally, a game can be described by \(N\) agents, indexed \(i\in[N]\). Each agent \(i\) is equipped with an action set \(\mathcal{A}_{i}\), and a cost function \(f_{i}:\prod_{i=1}^{N}\mathcal{A}_{i}\rightarrow\mathbb{R}\), that maps the actions chosen by all agents to a scalar cost for agent \(i\). Solution concepts to games are defined by _equilibria_, where no agent has an incentive to deviate from their
equilibrium strategy. In the most general setting (general-sum games), agents can have differing objectives, i.e. \(f_{i}\) are not the same. In this work, we focus on cooperative games, in which case agents' interests are perfectly aligned towards a common goal i.e. \(f_{1}=f_{2}\cdots f_{N}=f\). In this case, \(f\) is also referred to as the game's _potential_(Monderer & Shapley, 1996).
Diffusion Generative Models.Diffusion generative models (DGMs) are a class of models that generate data by defining a diffusion process that transforms the data distribution into a tractable prior, and learning the time-reversal of this process. More formally, let the data distribution be \(p_{\text{data}}(x)\) that is transformed into a tractable prior \(p_{\text{prior}}(x)\) through a diffusion process \(d\mathbf{x}=f(\mathbf{x},t)dt+g(t)d\mathbf{w}\), where \(d\mathbf{w}\) denotes the Wiener process. The time-reversal of this process is described by \(d\mathbf{x}_{t}=\left[f(\mathbf{x}_{t},t)-g(t)^{2}\nabla_{\mathbf{x}_{t}}\log p (\mathbf{x}_{t})\right]dt+g(t)d\mathbf{w}\), where \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t})\) is the _score_ of the time-evolving distribution. This quantity is approximated by DGMs using neural networks \(s_{\theta}\) by first sampling from the transition kernel \(p(\mathbf{x}_{t}|\mathbf{x}_{0})\) defined by the forward diffusion process, computing the score \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t}|\mathbf{x}_{0})\) for \(\mathbf{x}_{0}\sim p_{\text{data}}\) and then regressing \(s_{\theta}(\mathbf{x}_{t})\) against \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t}|\mathbf{x}_{0})\).
## 4 Method
In this section, we first formalize our game-theoretic view on multimeric protein docking. We then present the proposed methods and discuss their limitations (Sections 4.2-4.3). Finally, we describe the used learning architectures (Section 4.4).
Multimeric Protein Docking as a Game.In multimeric protein docking, proteins interact with each other through a combination of relative motions and structural changes to form the assembly structure. Notably, the space of their relative motions grows combinatorially with the number of proteins, making it challenging to compute plausible docking structures. Previous methods tackle this by modeling the relative motions between _pairs_ of proteins followed by a global synchronization or a combinatorial assembly step. Instead, we argue that game theory provides a natural approach to _simultaneously_ model the associated interactions and differing objectives between proteins.
In this work, we model (rigid) protein docking as a _cooperative_ game, where agents' interests are aligned towards a common potential function \(f\). Accordingly, we view the resulting assembly structures as the underlying game equilibria, i.e. stable outcomes from which no agent has an incentive to deviate. We note that our game-theoretic framework is agnostic to whether the agents are protein chains or subassemblies, as long as agents' incentives are modeled appropriately.
### Overview
Modeling docking as a cooperative game requires three components - i) an appropriate definition of the action set \(\mathcal{A}_{i}\) for each protein \(i\), ii) the game potential function \(f\) (either specified or learnt) capturing relevant properties of the assembly, and iii) a strategy for computing equilibria. In the rigid docking setting, we can define the action set to be the continuous space of roto-translations. In principle, this allows us to _simultaneously_ steer the proteins to equilibria configurations and thus efficiently scale to a large number of proteins.
In practice however, we do not have access to the underlying game potential \(f\). To circumvent this difficulty, we describe two viable approaches in subsequent sections - i) learning an approximation of \(f\) via supervised learning, using traditional physics-based scoring functions in PyRosetta, and computing equilibria via simultaneous gradient descent (Section 4.2), and ii) viewing equilibria as samples from the Gibbs distribution of \(f\), and learning a diffusion generative model over the roto-translation action spaces of all players (Section 4.3). Intuitively, the former approach corresponds to learning a differentiable analogue of traditional scoring functions to facilitate gradient-based equilibrium computation, while the latter approach can be viewed as a multi-agent extension of the denoising score-matching paradigm. A key feature of our parameterization is that the learned (reverse) diffusion can handle varying number of proteins across training and inference.
Before describing our methods, we summarize the following useful notation. We define the action space for each protein \(i\) as \(\mathcal{A}_{i}=SO(3)\times\mathbb{T}(3)\), where \(SO(3)\) is the 3D rotation group, and \(\mathbb{T}(3)\) is the 3D translation group. The joint action space of all proteins \([N]\) is thus the product space \(\mathcal{A}=(SO(3)\times\mathbb{T}(3))^{N}\). These actions transform an input point cloud \(X\) as \(X^{\prime}=R(X-\bar{X})+\bar{X}+r\) for a given rotation \(R\) and translation \(r\), and \(\bar{X}\) denotes the (unweighted) center of mass of \(X\). This
transformation is consistent with the definition of \(\mathcal{A}_{i}\) as a _direct product_ of \(SO(3)\) and \(\mathbb{T}(3)\), and is different from the group of 3D rigid motions, \(SE(3)\), defined a _semi-direct product_ of \(SO(3)\) and \(\mathbb{T}(3)\) (More details in Appendix D). We use \(\mathbf{X}=[X_{1},\ldots,X_{N}]\in\mathbb{R}^{\sum_{i=1}^{N}n_{i}\times 3}\) to denote the protein point clouds, and \(\mathbf{H}=[H_{1},\ldots,H_{N}]\in\mathbb{R}^{\sum_{i=1}^{N}n_{i}\times n_{H}}\) for residue features.
### Gradient-based Learning with Surrogate Potentials
Learning the Potential FunctionFor each assembly structure in the training set, we generate a given number of _decoys_ by sampling random rotations and translations and applying these to each protein. We then score each decoy \(\mathbf{X}_{j}\) by its potential \(f(\mathbf{X}_{j},\mathbf{H})\) which in the context of this work we assume is represented by the commonly employed PyRosetta(Chaudhury et al., 2010) energy function.2 This constitutes our training dataset \(\mathcal{D}\) for learning a surrogate game potential. To back up our choice of using PyRosetta, we have observed assembly structures in the data to exhibit the lowest PyRosetta energy compared to the other structures in \(\mathcal{D}\) (see Figure 3 in Appendix B). To learn \(f\), we employ a parametrized architecture \(f_{\theta}\) which is invariant to permutations, translations and rotations of the input assembly, as discussed in Section 4.4.
Footnote 2: The approach is evidently more general and any scoring/ranking function can be applied. We stick to PyRosetta because it is the one we use in our experiments.
Each protein can have a different size, number and type of atoms, and thus, the range of energy values produced for different protein assemblies can vary widely. We observed that this fact hinders model generalization across different proteins when training \(f_{\theta}\) using a simple regression (e.g., \(\ell_{2}\) error norm) objective. For this reason, we propose to train \(f_{\theta}\) solely based on energy _comparisons_, employing the widely used ranking loss:
\[\mathcal{L}(\theta)=-\operatorname*{\mathbb{E}}_{\mathbf{X}_{l},\mathbf{X}_{h }\sim\mathcal{D}}\big{[}\log\sigma(f_{\theta}(\mathbf{X}_{h},\mathbf{H})-f_{ \theta}(\mathbf{X}_{l},\mathbf{H}))\big{]} \tag{1}\]
where \(\mathbf{X}_{h},\mathbf{X}_{l}\) are random pairs of decoys such that \(f(\mathbf{X}_{h},\mathbf{H})>f(\mathbf{X}_{l},\mathbf{H})\), i.e. \(\mathbf{X}_{l}\) has lower (true) energy than \(\mathbf{X}_{h}\), and \(\sigma\) is the logistic function. This is a widely-used loss in preference-based learning and corresponds to maximum likelihood under a Bradley-Terry reward model, see e.g. (Bradley and Terry, 1952; Rafailov et al., 2023).
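As a concrete illustration, a minimal PyTorch version of this loss could look as follows; `potential_net` is a placeholder standing in for the potential network of Section 4.4, and batching details are omitted.

```python
# Minimal sketch of the ranking loss in Eq. (1); `potential_net` is a placeholder.
import torch.nn.functional as F

def ranking_loss(potential_net, X_low, X_high, H):
    """X_low is the decoy with lower PyRosetta energy in each pair."""
    f_low = potential_net(X_low, H)
    f_high = potential_net(X_high, H)
    # -log sigmoid(f_high - f_low) == softplus(-(f_high - f_low))
    return F.softplus(-(f_high - f_low)).mean()
```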
Equilibrium Computation via Gradient-Based LearningOnce \(f_{\theta}\) has been trained, we can compute assembly structures by randomly initializing each protein and updating their roto-translation actions via simultaneous gradient descent. In a potential game, this ensures convergence to equilibria (which are local minima of \(f_{\theta}\)), but in the case of general-sum games more sophisticated update rules could be employed, see e.g., (Balduzzi et al., 2018). A caveat of PyRosetta scores (which is also inherited by \(f_{\theta}\)) is that assembly structures where constituent proteins are far away (in 3D space) from each other also have low energy, which is undesirable as a docking outcome. This was also observed empirically in our early experiments. To discourage this behavior, we thus add a distance-based penalty \(d(X_{i},X_{j}):=\text{ReLU}(d_{\text{res}}(X_{i},X_{j})-d_{\text{ths}})\), where \(d_{\text{res}}\) computes the minimum Euclidean distance between any pair of residues belonging to proteins \(i\) and \(j\), so that the agents are penalized only when this distance is greater than a threshold \(d_{\text{ths}}\) (in our experiments, this was 5 Å). Overall, the action updates for the proteins read as:
\[(R_{i}^{t+1},r_{i}^{t+1})=(R_{i}^{t},r_{i}^{t})+\eta^{t}\cdot\nabla_{(R_{i},r_ {i})}\Big{[}f_{\theta}(\mathbf{X}^{t},\mathbf{H})+\lambda\cdot\sum_{j\neq i} d(X_{i}^{t},X_{j}^{t})\Big{]},\quad i\in[N], \tag{2}\]
where \(\eta^{t}\) is a (decreasing) learning rate, \(\nabla_{(R_{i},r_{i})}\) is the Riemannian gradient (i.e. living in the tangent space of \(SO(3)\) and \(\mathbb{T}(3)\)) and \(\lambda\) is a tunable weight. In practice, the above updates need not necessarily be simultaneous as one could also update proteins sequentially in a round-robin fashion.
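A hedged sketch of these simultaneous updates is given below. For simplicity it parameterizes each rotation by an axis-angle vector and takes ordinary autograd steps in those coordinates, rather than computing the Riemannian gradients of Eq. (2) explicitly; `f_theta` and the inputs `X_list`, `H` are placeholders for the trained potential network and the data.

```python
# Sketch only: Euclidean axis-angle parameterization instead of explicit Riemannian
# gradients on SO(3); f_theta, X_list and H are assumed placeholders.
import torch

def rotation_from_axis_angle(w, eps=1e-8):
    # Rodrigues' formula, written so that it stays differentiable w.r.t. w.
    theta = torch.linalg.norm(w) + eps
    k = w / theta
    zero = torch.zeros((), dtype=w.dtype)
    K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                     torch.stack([k[2], zero, -k[0]]),
                     torch.stack([-k[1], k[0], zero])])
    return torch.eye(3, dtype=w.dtype) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def play_game(X_list, H, f_theta, steps=200, lr=1e-2, lam=1.0, d_ths=5.0):
    # One (axis-angle, translation) action per protein, randomly initialized.
    actions = [(0.1 * torch.randn(3)).requires_grad_() for _ in range(2 * len(X_list))]
    opt = torch.optim.SGD(actions, lr=lr)
    for _ in range(steps):
        moved = []
        for i, X in enumerate(X_list):
            R, r = rotation_from_axis_angle(actions[2 * i]), actions[2 * i + 1]
            com = X.mean(dim=0)
            moved.append((X - com) @ R.T + com + r)
        loss = f_theta(torch.cat(moved), H)
        for i in range(len(moved)):                       # pairwise distance penalty
            for j in range(i + 1, len(moved)):
                loss = loss + lam * torch.relu(torch.cdist(moved[i], moved[j]).min() - d_ths)
        opt.zero_grad(); loss.backward(); opt.step()
    return [m.detach() for m in moved]                    # positions from the final round
```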
DiscussionOn one hand, the above approach is quite general and flexible: the discussed gradient-based update scheme can be carried out with any potential function (also a fully specified one, not necessarily trained) as long as it is differentiable. Moreover, it can also be easily extended to the case where each protein has its own objective \(f_{i}\) by adding penalty terms that are specific to each protein. In addition, while not the focus of this work, \(f_{\theta}\) can also be used beyond equilibrium computation, e.g. to characterize the associated energy landscape (see Jin et al. (2023)). On the other hand,
though, it heavily relies on the supervision signal used to train \(f_{\theta}\), which may not accurately reflect the true underlying game structure. In the next section, we propose an alternative approach that circumvents the need to utilize this supervision and learns solely from the observed assemblies.
### Diffusion Generative Models over the Proteins' Action Spaces
Section 4.2 views assembly structures as local minima of \(f\) wrt roto-translation gradients of all proteins. An equivalent characterization of the assembly structures is interpreting them as sample(s) from the mode(s) of the Gibbs distribution \(p\propto\exp\left(-f\right)\). Furthermore, it is easy to see that for any protein \(i\), \(\nabla_{(R_{i},r_{i})}\log p=-\nabla_{(R_{i},r_{i})}f\) where \(\nabla_{(R_{i},r_{i})}\log p\in T_{R_{i}}SO(3)\oplus T_{r_{i}}\mathbb{T}(3)\) are the Riemannian gradients in the tangent space of \(SO(3)\) and \(\mathbb{T}(3)\) at \(R_{i},r_{i}\) respectively. This implies an equivalence between the roto-translation scores of \(p\) and the roto-translation gradients wrt \(f\), and connects equilibrium computation to sampling for cooperative games. We can thus frame rigid multimeric docking as a generative modeling problem over the joint roto-translation action space \(\mathcal{A}\) using the framework of diffusion generative models. Intuitively, this corresponds to learning \(f\) implicitly through the (perturbed) scores of \(p\).
DGMs over the joint action space \(\mathcal{A}\)De Bortoli et al. (2022) showed that the DGM framework for standard Euclidean spaces (Section 3) also holds for compact Riemannian manifolds with minor modifications, with the drift function \(f(\mathbf{x}_{t},t)\) and score \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t})\) being elements of the tangent space, and the reverse diffusion corresponding to a geodesic random walk on the manifold.
As the action space \(\mathcal{A}_{i}\) for each protein \(i\) is a product space - following the arguments in Corso et al. (2023); Rodola et al. (2019) - the forward diffusion process proceeds independently on each component manifold, and the tangent space \(T_{(R,r)}\mathcal{A}_{i}\) is a direct sum, i.e., \(T_{(R,r)}\mathcal{A}_{i}=T_{R_{i}}SO(3)\oplus T_{r_{i}}\mathbb{T}(3)\). The tangent space \(T_{(\mathbf{R},r)}\mathcal{A}\) is thus, \(T_{(\mathbf{R},r)}\mathcal{A}=\bigoplus_{i=1}^{N}T_{R_{i}}SO(3)\oplus T_{r_{i} }\mathbb{T}(3)\),where \(\mathbf{R}=(R_{1},R_{2},\cdots,R_{N})\) and \(\mathbf{r}=(r_{1},r_{2},\cdots,r_{N})\). Note that this is different from recent \(SE(3)\) diffusion models (Yim et al., 2023), where the semi-direct product (between \(SO(3)\) and \(\mathbb{T}(3)\) requires defining a tailored inner product so that the diffusion processes can be decomposed over each component. Our DGM can also be interpreted as a multi-body extension of Corso et al. (2023), emerging from the connections between cooperative games and sampling.
Having established the framework, we can now apply standard denoising score matching objectives (Song and Ermon, 2019; Song et al., 2020) independently for each component. Furthermore, to ensure (global) translation invariance, we keep one protein fixed (with its COM at the origin), and define the diffusion model only over the joint action space of the remaining \(N-1\) proteins.
Diffusion Processes over \(SO(3)\) and \(\mathbb{T}(3)\)For both \(SO(3)\) and \(\mathbb{T}(3)\), we use the Variance Exploding SDE (VE-SDE) formulation (Song et al., 2020) to define the forward noising process \(d\mathbf{x}_{t}=\sqrt{\frac{d[\sigma^{2}(t)]}{dt}}d\mathbf{w}\), where \(\sigma\) is defined as an exponential schedule \(\sigma(t)=\sigma_{\min}^{1-t}\sigma_{\max}^{t}\), with appropriate values for \(SO(3)\) and \(\mathbb{T}(3)\). The transition kernel on \(SO(3)\) is the \(IGSO(3)\) distribution (Nikolayev et al., 1997; Leach et al., 2022), which can be sampled in the axis-angle parameterization of \(SO(3)\) as described in (Corso et al., 2023; Yim et al., 2023), by sampling a unit vector \(\hat{\omega}\) uniformly and a \(\omega\) according to:
\[p(\omega)=\frac{1-\cos\omega}{\pi}f(w)\ \ \text{where}\ \ f(\omega)=\sum_{l=0}^{ \infty}(2l+1)\exp(-l(l+1)\sigma^{2})\frac{\sin\left((l+1/2)\omega\right)}{ \sin(\omega/2)} \tag{3}\]
The score of the transition kernel is given by \(\left(\frac{d}{d\omega}\log f(\omega)\right)\hat{\omega}\), which can be precomputed by truncating an infinite sum. Sampling from the transition kernel can be accomplished by interpolating the CDF of \(p(\omega)\). For \(\mathbb{T}(3)\cong\mathbb{R}^{3}\), the transition kernel is simply the standard Gaussian with variance \(\sigma^{2}(t)\), and the score of the transition kernel is simply \(-r_{t}/\sigma^{2}(t)\).
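Numerically, the truncated series in Eq. (3) and the inverse-CDF sampling can be sketched as below; the truncation level and grid resolution are our own illustrative choices rather than values used by the authors.

```python
# Sketch of Eq. (3): tabulate the truncated IGSO(3) angle density and sample by
# inverting its CDF. Truncation (l <= 200) and grid size are illustrative assumptions.
import numpy as np

def igso3_angle_pdf(omega, sigma, l_max=200):
    l = np.arange(l_max + 1)[:, None]
    f = ((2 * l + 1) * np.exp(-l * (l + 1) * sigma ** 2)
         * np.sin((l + 0.5) * omega) / np.sin(omega / 2.0)).sum(axis=0)
    return (1.0 - np.cos(omega)) / np.pi * f

def sample_igso3_angles(sigma, n_samples, n_grid=2048):
    omega = np.linspace(1e-4, np.pi, n_grid)              # avoid the endpoint at 0
    cdf = np.cumsum(igso3_angle_pdf(omega, sigma))
    cdf /= cdf[-1]
    return np.interp(np.random.rand(n_samples), cdf, omega)

angles = sample_igso3_angles(sigma=0.5, n_samples=4)
axes = np.random.randn(4, 3)
axes /= np.linalg.norm(axes, axis=1, keepdims=True)       # uniformly distributed rotation axes
```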
Computing EquilibriaUnder the DGM paradigm, computing equilibria is equivalent to sampling using reverse diffusion guided by the learnt score. The reverse diffusion, as mentioned above, corresponds to a discretized geodesic random walk on the joint action space \(\mathcal{A}\).
We note that connections between sampling, SDEs and games (Chen et al., 2021; Liu et al., 2022) have also been explored in previous works, assuming either an infinite-player setting or a homogeneous-player one. In contrast, our approach is designed for a finite-player setting and places no restriction on player types. Furthermore, as discussed in Section 4.4, the parameterization of our score network allows us to handle a varying number of players across training and inference.
### Architectures
In this section, we summarize the data representations and network architectures utilized by the proposed approaches. We defer more formal descriptions and additional details to Appendix A.
All our architectures take as inputs the protein point clouds \(\mathbf{X}\) and residue features \(\mathbf{H}\). The point clouds are then used to dynamically build graphs based on appropriate distance cutoffs between residues of the same and of different proteins. Residue representations are then learnt using message-passing. The potential network and the score network share similar architectures for the message-passing layers, with the only differences being in the output layers. The potential network predicts an \(SE(3)\)-invariant scalar, while the score network predicts an output in the tangent space \(T_{(\mathbf{R},\mathbf{r})}\mathcal{A}=\bigoplus_{i=1}^{N}T_{R_{i}}SO(3)\oplus T _{r_{i}}\mathbb{T}(3)\), where all outputs are \(SE(3)\)-equivariant vectors.
Residue RepresentationsResidue representations are learnt using message-passing layers based on tensor products. These message-passing layers are based on \(SE(3)\) convolutions using tensor products, as implemented in the e3nn library (Geiger and Smidt, 2022). We use separate message-passing layers for edges within the same protein and between different proteins. These messages are then aggregated to produce scalar and vector representations for each residue.
Potential NetworkThe output of the potential network \(f_{\theta}\) is a \(SE(3)\)-invariant scalar, which can be interpreted as the energy of the system. We first generate edge representations by concatenating the scalar components of the corresponding residue representations, which are then passed to fully connected layers followed by a mean-pooling step to compute edge energy contributions. The edge and residue contributions (computed similarly) are added up to predict the energy.
Score NetworkFor each protein \(i\) (except the fixed protein), the score network \(s_{\theta}\) takes the corresponding learnt residue representations as input and computes the rotational and translational scores (i.e. two \(SE(3)\)-equivariant outputs) via a tensor-product convolution with the COM of protein \(i\), as done in Corso et al. (2023). This parameterization allows the score network to handle varying number of agents across training and inference, and could be of independent interest.
## 5 Experiments
DatasetsWe use two datasets in our experiments: Docking Benchmark 5.5 (DB5.5) and the Database of Interacting Protein Structures (DIPS). DB5.5 is a standard dataset used in benchmarking docking methods and contains 253 assembly structures. DIPS is a larger dataset containing 42826 assembly structures, but only consists of single-chain proteins. While DB5.5 has been traditionally used in the context of binary protein docking, many examples consist of proteins made up of multiple chains, allowing us to evaluate DockGame for multimeric docking. We use the same splits for DIPS and DB5.5 datasets as Ganea et al. (2022) for our experiments.
Experimental SetupFollowing Ganea et al. (2022), we first train our models on the DIPS dataset, and finetune it on the DB5.5 dataset. Unlike Ganea et al. (2022), however, we train our models at the granularity of protein chains. For the DIPS dataset, this implies no difference as all examples comprise two single-chain proteins. However, on the DB5.5 dataset, the examples comprise proteins with 2-8 chains. To the best of our knowledge, this is the first usage of the DB5.5 dataset for multimeric docking experiments. More experimental details can be found in Appendix B, with code available at [https://github.com/vosmnath/dockgame/](https://github.com/vosmnath/dockgame/).
BaselinesWe compare DockGame against traditional binary docking methods Attract, ClusPro, PatchDock, the traditional multimeric docking method Multi-LZerD, and recent deep learning methods for binary docking, EquiDock and DiffDock-PP. All baselines except Multi-LZerd are binary docking methods. Among other multimeric docking methods, we do not include any comparisons to AlphaFold-Multimer and SynDock as the DB5.5 dataset is part
of AlphaFold-Multimer's training set, while SynDock has no open-source implementation available at the time of writing. More details regarding the baselines can be found in Appendix B.4.
EvaluationDB5.5 has traditionally been used for benchmarking binary docking methods, where the task is to predict the best relative orientation (or a set thereof) between two proteins (referred to as ligand and receptor). For such binary docking baselines, we utilize the evaluations as provided with code in Ganea et al. (2022). However, many examples in DB5.5 consist of proteins that are subassemblies of _multiple_ chains. This fact is neglected by the aforementioned binary docking methods, where the relative configuration between all chains in the subassembly is assumed constant and already specified. Here, we test DockGame on the significantly harder multimeric docking setting, where the relative configuration even between chains in the same subassembly needs to be inferred. Notably, this task has many more degrees of freedom than its binary counterpart, and, to the best of our knowledge, it has not been considered in the literature.
MetricsDocking methods are commonly evaluated by the Complex Root Mean Squared Deviation (C-RMSD) metric, which measures the deviation between the predicted and ground truth assembly structure after Kabsch alignment (details in Ganea et al. (2022)). We also utilize TM-score (Zhang and Skolnick, 2005) as an alternative metric to quantify structural alignment.
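For reference, a generic NumPy computation of C-RMSD (Kabsch superposition followed by RMSD over all Cα coordinates) could look as follows; this is a standard textbook implementation, not the exact evaluation script of Ganea et al. (2022).

```python
# Generic C-RMSD sketch: Kabsch-align the predicted complex onto the ground truth,
# then compute the RMSD over all Calpha coordinates.
import numpy as np

def c_rmsd(pred, true):
    """pred, true: (n_residues, 3) Calpha coordinates of the whole complex."""
    p = pred - pred.mean(axis=0)
    q = true - true.mean(axis=0)
    U, _, Vt = np.linalg.svd(p.T @ q)                 # 3x3 covariance -> SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # avoid improper rotations (reflections)
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T           # optimal rotation mapping p onto q
    return np.sqrt((((p @ R.T) - q) ** 2).sum(axis=1).mean())
```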
A fundamental evaluation problem for DockGame and DiffDock-PP is that they can both generate multiple plausible equilibria, while the test set of DB5.5 only has a single assembly structure per example. We thus compute summary statistics (as described by Mean, Median and Std. deviation of C-RMSD and TM-score) on a filtered set of the predicted assemblies, constructed in two ways:
| Method | Avg. Runtime (s) | C-RMSD Mean \(\downarrow\) | C-RMSD Median \(\downarrow\) | C-RMSD Std | TM-score Mean \(\uparrow\) | TM-score Median \(\uparrow\) | TM-score Std |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Binary Docking** | | | | | | | |
| Attract | 570 | 10.63 | 12.71 | 10.05 | 0.8317 | 0.8256 | 0.1668 |
| ClusPro | 15507 | 8.26 | 3.38 | 7.92 | 0.8318 | 0.8938 | 0.1535 |
| PatchDock | 3290 | 18.01 | 18.26 | 10.12 | 0.7270 | 0.7335 | 0.1237 |
| EquiDock | 5 | 14.72 | 14.1 | 5.3 | 0.7191 | 0.7107 | 0.1078 |
| DiffDock-PP (40, filtered by TM-score)\({}^{*}\) | 80 | 17.19 | 16.29 | 6.79 | 0.7086 | 0.7312 | 0.1142 |
| DiffDock-PP (40, best C-RMSD) | 80 | 12.81 | 11.79 | 4.61 | 0.7014 | 0.6773 | 0.1125 |
| **Multimer Docking** | | | | | | | |
| DockGame-E (20, filtered by TM-score) | 182 | 19.28 | 17.74 | 7.37 | 0.6714 | 0.6820 | 0.1543 |
| DockGame-E (20, best C-RMSD) | 182 | 14.18 | 11.54 | 9.24 | 0.6182 | 0.6257 | 0.1795 |
| DockGame-SM (40, filtered by TM-score) | 157 | 14.12 | 9.44 | 4.75 | 0.7173 | 0.7773 | 0.1419 |
| DockGame-SM (40, best C-RMSD) | 157 | 8.73 | 8.72 | 4.68 | 0.7246 | 0.7681 | 0.1578 |
| Multi-LZerD | 82753 | 21.23 | 20.89 | 6.82 | 0.6312 | 0.6266 | 0.1352 |
Table 1: **Assembly Structure Prediction on DB5.5 Dataset**. For the first five baselines, the inputs consist of two proteins, with the goal of predicting their assembly structure (binary docking). Instead, DockGame and Multi-LZerD take as input the constituent protein chains, and are faced with the harder _multimeric_ docking task. DockGame-E refers to the DockGame model with the learnt potential function (Section 4.2), while DockGame-SM refers to the DockGame model with the learnt score network (Section 4.3). We use both methods to compute multiple (20 and 40, resp.) assemblies for each complex. The rule “(X, filtered by TM-score)” implies that, among the X generated assemblies, we identify the one with the highest TM-score to the true assembly and consider all other generated assemblies within a 0.05 TM-score absolute difference. This copes with the fact that, while DockGame (and DiffDock-PP) can generate multiple plausible equilibria, the DB5.5 test set contains only a single assembly structure per example. The rule “(X, best C-RMSD)” implies that, among the X generated assemblies, we consider the one with the smallest C-RMSD.
* Identifying the predicted assembly with the highest TM-score (as an effective measure of structural alignment) to the ground truth, and considering all predicted assemblies within a TM-score (to ground truth) radius of 0.05 (we noticed similar results for a threshold of 0.1). We adopted this heuristic to extract predicted assemblies close to the equilibrium present in the data, and filter out different (but still potentially plausible) equilibria generated by DockGame.
* Identifying the predicted assembly with the lowest C-RMSD to the ground truth.
Results and DiscussionDespite the harder multimeric docking task, DockGame-SM, trained only with assembly structures, achieves comparable performance to binary docking methods on both the C-RMSD and TM-score metrics (Table 1). In particular, the median C-RMSD (more robust to outliers) for DockGame-SM across both filtered sets is better than all baselines except ClusPro (which however requires a significantly higher runtime). Furthermore, when compared with the traditional multimeric docking method Multi-LZerD, both DockGame-E and DockGame-SM achieve significantly better performance (both in terms of C-RMSD and TM-score), with almost 3 orders of magnitude faster runtimes.
To better assess the performance of DockGame-E, in Figure 2(a) we plot the energy differences predicted by the trained potential network, versus the respective differences computed by PyRosetta. We can see that a good approximation of the PyRosetta energy is achieved from our training set containing only \(10\) pairs of roto-translation perturbed structures per example. Furthermore, in Figure 2(b) we observe that DockGame-E reduces the PyRosetta energetics during gradient-based equilibrium computation as desired. This demonstrates the utility of DockGame-E as a faster, differentiable alternative for traditional scoring methods. However, the improved performance of DockGame-SM relative to DockGame-E highlights that PyRosetta might not offer the best supervision signal if the goal is predict assembly structures close to the ground truth. DockGame-SM might be the more preferred alternative, especially in data-rich settings.
Finally, we highlight that while DockGame can naturally predict different assembly structures (equilibria) for every example, we utilized the aforementioned filtering schemes to compare these to the only assembly structure present in the dataset. An exciting avenue for future work would be to explore ways to assess the plausibility of multiple generated equilibria, which is currently lacking.
## 6 Conclusion
In this work, we presented DockGame, a novel game-theoretic framework for rigid multimeric protein docking. We view protein docking as a cooperative game between protein chains, described by an underlying potential whose equilibria constitute the assembly structures. To learn the unknown game potential and compute equilibria, we propose two approaches - i) a supervised approach using traditional scoring functions to learn a surrogate potential function, ii) a self-supervised approach
Figure 2: **(a)** Learned vs. True PyRosetta energy differences, over 50 random decoy pairs for each complex in DB5.5 test. **(b)** PyRosetta energetics during game rounds. Median, \(25\)-\(75\)th percentiles, over 20 games for each complex in DB5.5 test.
based on diffusion generative models, that learns solely from observed assembly structures. Both approaches can efficiently decouple the combinatorial docking space into individual protein's action spaces. Moreover, our game-theoretic framework allows us to compute multiple assembly structures as plausible equilibria. We evaluated DockGame on the harder multimeric docking task (compared to binary docking baselines) using the DB5.5 dataset, and achieve comparable (or better) performance on C-RMSD and TM-score metrics, while exhibiting significantly faster runtimes.
While this work focused on cooperative games, the presented ideas are general and can be extended to general-sum games, allowing us to model, e.g., protein _flexibility_. Potential ways to do so would be to add protein structure-specific penalty terms, or to combine diffusion bridges (Holdijk et al., 2022; Somnath et al., 2023) with the proposed diffusion generative model. Other interesting avenues of future work include developing new metrics for evaluating multiple equilibria.
## 7 Acknowledgements
We thank Ya-Ping Hsieh, Mohammad Reza Karimi and Aditi Shenoy for useful discussions across different stages of the project. We also thank Scott Sussex for valuable feedback on multiple drafts of the paper. This publication was partly supported by the NCCR Catalysis (grant agreement No. 180544), a National Centre of Competence in Research funded by the Swiss National Science Foundation, European Union's Horizon 2020 research and innovation programme (grant agreement No. 826121), and ELSA (European Lighthouse on Secure and Safe AI) funded by the European Union (grant agreement No. 101070617).
|
2301.10054 | Constructing algebraic solutions of Painleve VI equation from $p$-adic
Hodge theory and Langlands Correspondence | We construct infinitely many non-isotrivial families of abelian varieties
over given four punctured projective lines. These families lead to algebraic
solutions of Painleve VI equation. Finally, based on a recent paper by
Lin-Sheng-Wang, we prove a complete characterization for the locus of motivic
Higgs bundles in the moduli space as fixed points of an ``additive'' self-map.
This is a note based on the lecture given by the second named author on 04 Nov.
2022 at Tsinghua University. | Jinbang Yang, Kang Zuo | 2023-01-24T14:59:15Z | http://arxiv.org/abs/2301.10054v1 | Constructing algebraic solutions of Painleve VI equation from \(p\)-adic Hodge theory and Langlands correspondence
###### Abstract.
We construct infinitely many non-isotrivial families of abelian varieties over given four punctured projective lines. These families lead to algebraic solutions of Painleve VI equation. Finally, based on a recent paper by Lin-Sheng-Wang, we prove a complete characterization for the locus of motivic Higgs bundles in the moduli space as fixed points of an "additive" self-map. This is a note based on the lecture given by the second named author on 04.Nov.2022 at Tsinghua University.
## 0. **Introduction**
Let \(R\) be a commutative ring with identity and \(X\) be a scheme over \(R\). An abelian scheme \(A\) over \(X\) together with a polarization \(\mu\) is called _of \(\operatorname{GL}_{2}\)-type_, if there exists a number field \(K\) of degree \(\dim_{X}A\) such that the ring of integers \(\mathcal{O}_{K}\) can be embedded into the endomorphism ring \(\operatorname{End}_{\mu}(A/X)\). We will call the abelian scheme _of \(\operatorname{GL}_{2}(K)\)-type_ if we want to emphasize the role of \(K\).
Let \(f\colon A\to X\) be an abelian scheme of \(\operatorname{GL}_{2}(K)\)-type. Let \(D\) denote the discriminant locus and let \(X^{0}\) denote the complement of \(D\) in \(X\). Let \(\Delta\subset A\) denote the inverse image of \(D\) under the structure morphism \(f\) and let \(A^{0}\) denote the complement of \(\Delta\) in \(A\). Then we obtain the smooth abelian scheme \(f^{0}\colon A^{0}\to X^{0}\).
For \(R=\mathbb{C}\) we consider the Betti-local system
\[\mathbb{V}=R^{1}_{\mathrm{B}}f^{0}_{*}\mathbb{Z}_{A^{0}}\]
attached to \(f^{0}\), which is a \(\mathbb{Z}\)-local system over the base \(X^{0}\). Since \(f\) is of \(\operatorname{GL}_{2}(K)\)-type, the action of \(\mathcal{O}_{K}\) on \(f\) induces an action of \(K\) on the \(\overline{\mathbb{Q}}\)-local system \(\mathbb{V}\otimes\overline{\mathbb{Q}}\). Taking the \(K\)-eigen sheaves decomposition
\[\mathbb{V}\otimes\overline{\mathbb{Q}}=\bigoplus_{i=1}^{g}\mathbb{L}_{i}.\]
These \(\mathbb{L}_{i}\)'s are of rank 2 over \(X^{0}\) and defined over the ring of integers of some number field. On the other hand, consider the logarithmic de Rham bundle attached to the abelian scheme \(f\) and denote
\[(V,\nabla)=R^{1}_{\mathrm{dR}}f_{*}\Big{(}\Omega^{*}_{A/X}(\log\Delta), \mathrm{d}\Big{)}.\]
On this de Rham bundle, there is a canonical filtration satisfying Griffiths transversality given by
\[E^{1,0}:=R^{0}f_{*}\Omega^{1}_{A/X}(\log\Delta)\subset V.\]
Taking the grading with respect to this filtration, one gets a logarithmic graded Higgs bundle, which is the so-called Kodaira-Spencer map attached to \(f\)
\[(E,\theta):=(E^{1,0}\oplus E^{0,1},\theta):=\operatorname{Gr}_{E^{1,0}}(V, \nabla)=\Big{(}R^{0}f_{*}\Omega^{1}_{A/X}(\log\Delta)\oplus R^{1}f_{*}\mathcal{ O}_{A},\operatorname{Gr}(\nabla)\Big{)}. \tag{0.2}\]
Since \(f\) is of \(\operatorname{GL}_{2}(K)\)-type, one also gets a \(K\)-eigen decomposition of the Higgs bundle
\[(E,\theta)=\bigoplus_{i=1}^{g}(E,\theta)_{i}. \tag{0.3}\]
Under Hitchin-Simpson's non-abelian Hodge theory, these eigensheaves \(\{(E,\theta)_{i}\}_{i=1,\cdots,g}\) are just those Higgs bundles correspond to the local systems \(\{\mathbb{L}_{i}\}_{i=1,\cdots,g}\).
Local systems and Higgs bundles arising as subquotients of local systems and Higgs bundles of families of varieties are called _motivic_. Sometimes they are also said to be _of geometric origin_. Simpson found a characterization for a rank-2 local system to be motivic:
**Theorem 0.1** (Simpson).: _A rank-2 local system \(\mathbb{L}\) is motivic if and only if the following two conditions hold:_
1. \(\mathbb{L}\) _is defined over the ring of integers of some number field, and_
2. _for each element_ \(\sigma\in\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\) _the Higgs bundle corresponding to the Galois conjugation_ \(\mathbb{L}^{\sigma}\) _is again graded._
**Conjecture 0.2** (Simpson).: _A rigid local system is motivic._
Simpson's conjecture has been proved for the case of rank-2 by Corlette-Simpson [10] and rank-3 by Langer-Simpson [10] for cohomologically rigid local systems. The conjecture predicts that any rigid local system \(\mathbb{L}\) shall enjoy all properties of motivic local systems. For example,
* its corresponding filtered de Rham bundle is isomorphic to the underlying filtered de Rham bundle of some Fontaine-Faltings modules at almost all places, and
* if \(\mathbb{L}\) is in addition cohomologically rigid, then it is defined over the ring of integers of some number field.
Those two properties have been verified by Esnault-Groechenig recently [1], [1].
In this note, we take \(X\) as the complex projective line \(\mathbb{P}^{1}\) and \(D\) as the 4 punctures \(\{0,1,\infty,\lambda\}\). Our goal is to find some motivic rank-2 logarithmic graded Higgs bundles over \((\mathbb{P}^{1},\{0,1,\infty,\lambda\})\), which come from \(\operatorname{GL}_{2}\)-type abelian schemes over \(\mathbb{P}^{1}\) with discriminant locus contained in \(\{0,1,\infty,\lambda\}\).
Beauville has shown that there exist exactly 6 non-isotrivial families of elliptic curves over \(\mathbb{P}^{1}\) with semistable reductions over \(\{0,1,\infty,\lambda_{i}\}\) for \(1\leq i\leq 6\). All of them are modular curves with respect to certain mixed level structures. Based on Beauville's result, Viehweg-Zuo have shown that there are no more non-isotrivial abelian schemes over \(\mathbb{P}^{1}\) of \(\mathrm{GL}_{2}(K)\)-type with \([K:\mathbb{Q}]\geq 2\) and with semistable reductions over \(\{0,1,\infty,\lambda\}\). So, except for Beauville's examples, any non-isotrivial smooth abelian scheme over \(\mathbb{P}^{1}\setminus\{0,1,\infty,\lambda\}\) of \(\mathrm{GL}_{2}(K)\)-type must have non-semistable reduction at some point in \(\{0,1,\infty,\lambda\}\). In this case, some eigenvalues of the local monodromies of the motivic local system must be roots of unity other than \(1\).
In this note, we consider the simplest situation: motivic rank-\(2\) local systems whose local monodromies around \(\{0,1,\lambda\}\) are unipotent and whose local monodromy around \(\infty\) is quasi-unipotent with eigenvalues \(\{-1,-1\}\). We call such local monodromy _of type-\((1/2)_{\infty}\)_, or just _of type-\((1/2)\)_.
**Theorem 0.3**.: _For given \(\lambda\in\mathbb{P}^{1}\setminus\{0,1,\infty\}\), there exist infinitely many non-isotrivial abelian schemes of \(\mathrm{GL}_{2}\)-type over \(\mathbb{P}^{1}\setminus\{0,1,\infty,\lambda\}\) with the associated rank-\(2\) eigen local systems being of type-\((1/2)_{\infty}\)._
Let \(M_{0,n}\) denote the moduli space of \(n\)-punctured projective lines and let \(S_{0,n}\) denote the total space of the universal family of \(n\)-punctured projective lines with structure morphism
\[p_{n}\colon S_{0,n}\to M_{0,n}.\]
Then \(M_{0,4}\simeq\mathbb{P}^{1}\setminus\{0,1,\infty\}\), and \(S_{0,4}=\bigcup\limits_{\lambda\in M_{0,4}}\left(\mathbb{P}^{1}\setminus\{0,1,\infty,\lambda\}\right)\) is an algebraic surface. Once we vary the parameter \(\lambda\), Theorem 0.3 implies the following result:
**Theorem 0.4**.: _There exist infinitely many non-isotrivial abelian schemes_
\[f:A\to\widetilde{S}_{0,4}\]
_of \(\mathrm{GL}_{2}\)-type over fiber products of finite etale base changes_
_and such that the local monodromies of \(f\) around \(\{0,1,\lambda\}\) are unipotent and around \(\{\infty\}\) is quasi-unipotent with all eigenvalues being \(-1\)._
**Corollary 0.5** (Corollary 4.).: _Let \(f:A\to\widetilde{S}_{0,4}\) be a family given in Theorem 0.4. Then all rank-\(2\) eigen local systems associated to the family \(f\) are algebraic solutions of Painleve VI equation of the type-\((1/2)_{\infty}\)._
For given \(\lambda\in\mathbb{P}^{1}\setminus\{0,1,\infty\}\), any family \(f_{\lambda}\colon A_{\lambda}\to\mathbb{P}^{1}\) in Theorem 0.3 has semistable reduction over \(\{0,1,\lambda\}\) and potentially semistable reduction over \(\infty\). Thus the eigen Higgs bundles \((E,\theta)_{i}\) associated to this family (constructed in 0.3) have the following form
\[E_{i}=\mathcal{O}\oplus\mathcal{O}(-1),\qquad\theta_{i}\colon\mathcal{O} \stackrel{{\neq 0}}{{\longrightarrow}}\mathcal{O}(-1)\otimes \Omega^{1}_{\mathbb{P}^{1}}(\log\{0,1,\infty,\lambda\}) \tag{0.4}\]
and are endowed with natural parabolic structures on the punctures \(\{0,1,\infty,\lambda\}\) of type-\((1/2)_{\infty}\). Here a parabolic structure of type-\((1/2)_{\infty}\) means that the parabolic structures at \(0\), \(1\) and \(\lambda\) are trivial and the parabolic filtration at \(\infty\) is
\[\left(E_{i}\mid_{\infty}\right)_{\alpha}=\left\{\begin{array}{cc}E_{i}\mid_ {\infty}&0\leq\alpha\leq 1/2,\\ 0&1/2<\alpha<1.\end{array}\right.\]
Let \(M^{gr^{\frac{1}{2}}}_{Hig\lambda}\) denote the moduli space of rank-2 semi-stable graded Higgs bundles over \(\mathbb{P}^{1}\) with the parabolic structure on \(\{0,1,\infty,\lambda\}\) of type-\((1/2)_{\infty}\). Then any Higgs bundle \((E,\theta)\in M^{gr^{\frac{1}{2}}}_{Hig\lambda}\) is parabolic stable and has the form as in (0.4).
In view of \(p\)-adic Hodge theory, a Higgs bundle \((E,\theta)\) over the Witt ring \(W(\mathbb{F}_{q})\) realized by an abelian scheme over \(W(\mathbb{F}_{q})\) of \(\operatorname{GL}_{2}(K)\)-type has to be the grading of a \(K\)-eigen sheaf of the Fontaine-Faltings module attached to the abelian scheme. Hence, by the Lan-Sheng-Zuo functor, \((E,\theta)\) is _periodic_ on \(M^{gr^{\frac{1}{2}}}_{Hig\lambda}\) over \(W(\mathbb{F}_{q})\).
One identifies
\[M^{gr^{\frac{1}{2}}}_{Hig\lambda}=\mathbb{P}^{1}\]
by sending \((E,\theta)\) to the zero locus of the Higgs map \((\theta)_{0}\in\mathbb{P}^{1}\), and takes then the elliptic curve \(C_{\lambda}\) of the Weierstrass form \(y^{2}=z(z-1)(z-\lambda)\) as the double cover
\[\pi:C_{\lambda}\to\mathbb{P}^{1}\]
ramified on \(\{0,1,\infty,\lambda\}\).
**Conjecture 0.6** (Sun-Yang-Zuo [17]).: _The self-map induced by Higgs-de Rham flow on \(M^{gr^{\frac{1}{2}}}_{Hig\lambda}\) over \(\mathbb{F}_{q}\) comes from the multiplication-by-\(p\) map on the associated elliptic curve over \(\mathbb{F}_{q}\)._
The conjecture implies two things:
1. a Higgs bundle \((E,\theta)\) is periodic if and only if \(\pi^{-1}(\theta)_{0}\) is a torsion point in \(C_{\lambda}\) and of order \(p^{f}-1\).
2. for a prime \(p>2\), if \(C_{\lambda}\) is supersingular, then \(\phi_{\lambda}(z)=z^{p^{2}}\). Hence, any Higgs bundle \((E,\theta)\in M^{gr^{\frac{1}{2}}}_{Hig\lambda}(\overline{\mathbb{F}}_{q})\) is periodic.
The conjecture has been checked by Sun-Yang-Zuo for \(p<50\). Very recently it has been proved by Lin-Sheng-Wang and is now a theorem.
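The supersingularity condition appearing above is easy to test in examples. The following minimal Python sketch (ours, not part of the paper; the function names are illustrative) checks whether the Legendre curve \(C_{\lambda}:y^{2}=z(z-1)(z-\lambda)\) is supersingular modulo a prime \(p\geq 5\) by point counting, using the standard criterion that supersingularity is equivalent to the vanishing of the trace of Frobenius.

```python
# Minimal sketch with hypothetical helper names; assumes p >= 5 and lam != 0, 1 mod p.

def legendre_symbol(a, p):
    """0 if p | a, +1 if a is a nonzero square mod p, -1 otherwise (Euler's criterion)."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def is_supersingular(lam, p):
    """True iff y^2 = z(z-1)(z-lam) is supersingular over F_p."""
    # Trace of Frobenius: a_p = -sum_z chi(z(z-1)(z-lam)); supersingular <=> a_p = 0.
    a_p = -sum(legendre_symbol(z * (z - 1) * (z - lam), p) for z in range(p))
    return a_p == 0

# lambda = -1 gives a CM curve, supersingular exactly at primes p = 3 (mod 4):
print([p for p in [5, 7, 11, 13, 17, 19, 23] if is_supersingular(-1, p)])  # [7, 11, 19, 23]
```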
**Theorem 0.7** (Lin-Sheng-Wang [16]).: _Conjecture 0.6 holds true._
The techniques used in Theorem 0.4, combined with Theorem 0.7, lead us to prove the following characterization of the motivic Higgs bundles contained in \(M^{gr^{\frac{1}{2}}}_{Hig\lambda}\).
**Theorem 0.8**.: _A Higgs bundle \((E,\theta)\in M^{gr^{\frac{1}{2}}}_{Hig\lambda}\) is an eigensheaf of the Kodaira-Spencer map attached to an abelian scheme of \(\operatorname{GL}_{2}\)-type if and only if \(\pi^{-1}(\theta)_{0}\) is a torsion point in \(C_{\lambda}\)._
## 1. Discussion on Theorem 0.3
The underlying principle behind Theorem 0.3 is very simple: the so-called isomonodromy deformation of motivic local systems in mixed characteristic. To illustrate the idea, let us first look at the situation over the complex numbers. We assume that there exists an abelian scheme \(f_{\lambda_{0}}:A_{\lambda_{0}}\to\mathbb{P}^{1}\) of \(\operatorname{GL}_{2}(K)\)-type over the complex numbers with bad reduction on \(\{0,\,1,\,\lambda_{0},\,\infty\}\) of type-\((1/2)\). Then the filtered logarithmic de Rham bundle decomposes into \(K\)-eigen sheaves
\[(V,\nabla,E^{1,0})=:(R^{1}_{\mathrm{dr}}f_{*}\Omega^{*}_{A_{\lambda_{0}}/ \mathbb{P}^{1}}(\log\Delta),d),\,R^{0}f_{*}\Omega^{1}_{A_{\lambda_{0}}/\mathbb{ P}^{1}}(\log\Delta))=\bigoplus_{i=1}^{g}(V,\nabla,E^{1,0})_{i},\]
where each eigen sheaf has the form
\[(V,\nabla,E^{1,0})_{i}\simeq(\mathcal{O}\oplus\mathcal{O}(-1),\nabla_{i}, \mathcal{O}).\]
Consider a family of 4-punctured projective lines
\[(\mathbb{P}^{1},\{0,\,1,\,\lambda,\,\infty\})_{\hat{U}_{\lambda_{0}}}\to\hat{U }_{\lambda_{0}}\]
over a formal neighborhood \(\hat{U}_{\lambda_{0}}\subset M_{0,4}\) of \(\lambda_{0}\). By forgetting the Hodge filtration, the de Rham bundle extends to a de Rham bundle \((V,\nabla)_{\hat{U}_{\lambda_{0}}}\) over \((\mathbb{P}^{1},\{0,\,1,\,\lambda,\,\infty\})_{\hat{U}_{\lambda_{0}}}\). It is known that the abelian scheme extends over \((\mathbb{P}^{1},\{0,\,1,\,\lambda,\,\infty\})_{\hat{U}_{\lambda_{0}}}\) if and only if the Hodge filtration \(E^{1,0}\) extends to a subbundle of the de Rham bundle \((V,\nabla)_{\hat{U}_{\lambda_{0}}}\). Using the \(K\)-eigen sheaf decomposition, we see that the obstruction to extending the Hodge filtration \(E^{1,0}=\bigoplus_{i=1}^{g}\mathcal{O}\) lies in \(\bigoplus_{i=1}^{g}H^{1}(\mathbb{P}^{1},\mathcal{O}(-1))=0\). Hence, the abelian scheme \(f_{\lambda_{0}}\) extends over the base \((\mathbb{P}^{1},\{0,\,1,\,\lambda,\,\infty\})_{\hat{U}_{\lambda_{0}}}\).
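For the vanishing used above we recall the standard cohomology of line bundles on the projective line (a well-known fact, recorded here only for the reader's convenience):
\[h^{0}(\mathbb{P}^{1},\mathcal{O}(d))=\max(0,d+1),\qquad h^{1}(\mathbb{P}^{1},\mathcal{O}(d))=\max(0,-d-1),\]
so in particular \(H^{1}(\mathbb{P}^{1},\mathcal{O}(-1))=0\).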
Back in the mixed-characteristic situation, we follow the diagram below. One
* starts with moduli space \(M^{gr^{\frac{1}{2}}}_{Hig\lambda}\) of rank-2 stable graded Higgs bundles on \(\mathbb{P}^{1}_{\mathbb{Z}_{q}}\) of parabolic type-\((1/2)_{\infty}\) on \(\{0,1,\lambda,\infty\}\),
* finds periodic Higgs bundles over \(\mathbb{P}^{1}_{\mathbb{F}_{q}}\)(i.e. fixed points of iterations of the self map on \(M^{gr^{\frac{1}{2}}}_{Hig\lambda}\otimes\mathbb{F}_{q}\) induced by Higgs-de Rham flow),
* gets Fontaine-Faltings modules via Lan-Sheng-Zuo functor from those periodic Higgs bundles and lifts these modules to \(\mathbb{P}^{1}_{\mathbb{Z}_{q}}\),
* obtains rank-2 \(\ell\)-adic local systems on \(\mathbb{P}^{1}-\{0,\,1,\,\lambda,\,\infty\}\) over \(\mathbb{F}_{q}\) by forgetting Hodge filtration in the Fontaine-Faltings module, tensoring with \(\mathbb{Q}_{p}\), and applying Deligne's \(p\)-\(\ell\) companion conjecture proven by Abe [1],
* finds \(\operatorname{GL}_{2}\)-type abelian schemes over \(\mathbb{P}^{1}_{\mathbb{F}_{q}}\) with bad reductions of type-\((1/2)_{\infty}\) realizing those \(\ell\)-adic local systems via Drinfeld's theorem on Langlands correspondence.
* modifies those \(\operatorname{GL}_{2}\)-type abelian schemes up to some \(p\)-isogeny and lifts these modified \(\operatorname{GL}_{2}\)-type abelian schemes to \(\mathbb{P}^{1}_{\mathbb{Z}_{q}}\).
\[M_{p\text{-adic}}^{\mathrm{cris}}/\chi\;\xleftrightarrow{\ \text{Fontaine-Faltings $p$-adic RH}\ }\;M_{\mathrm{dR}}^{FF}/\chi\;\xrightarrow{\ \text{forgetting Hodge filtration, $\otimes\mathbb{Q}_{p}$}\ }\;M^{F\text{-isoc}}/\chi\;\xrightarrow{\ \text{Deligne's $p$-$\ell$ companion by Abe}\ }\;M_{\ell\text{-adic}}\]
Finally, a boundedness and rigidity argument for classifying maps from the log base curve \((\mathbb{P}^{1},\{0,1,\lambda,\infty\})\) into the fine moduli space of polarized abelian varieties shows that the abelian scheme lifts to the complex numbers for any \(\lambda\in M_{0,4}(\mathbb{C})\).
Below we give more detailed explanations of the technical issues in each step:
### Constructing Fontaine-Faltings modules over \(\mathbb{Z}_{q}\) from semistable Higgs bundles over \(\mathbb{F}_{q}\) via Higgs-de Rham flow
Take a prime \(p\) and an element \(\lambda\in\mathbb{Z}_{q^{2}}\) such that the modulo \(p\) reduction of \(C_{\lambda}\) is supersingular. Consider the moduli space \(M_{Hig\lambda}^{gr\frac{1}{2}}\otimes\mathbb{F}_{q^{2}}\) of rank-2 stable logarithmic graded Higgs bundles over \((\mathbb{P}^{1},D)_{\mathbb{F}_{q^{2}}}\) with parabolic structures on \(D\) of type-\((1/2)\). Identifying \(M_{Hig\lambda}^{gr\frac{1}{2}}\otimes\mathbb{F}_{q^{2}}\) with the projective line \(\mathbb{P}^{1}_{\mathbb{F}_{q^{2}}}\), the self-map induced by Higgs-de Rham flow is given by \(z\mapsto z^{p^{2}}\). We show that, for any \(n\geq 1\), each \((\overline{E},\overline{\theta})\in M_{Hig\lambda}^{gr\frac{1}{2}}(\overline{\mathbb{F}}_{q^{2n}})\) is automatically periodic and lifts uniquely to a periodic Higgs bundle \((E,\theta)\in M_{Hig\lambda}^{gr\frac{1}{2}}(\mathbb{Z}_{q^{2n}})\). Hence, the Lan-Sheng-Zuo functor induces the following bijection
\[M_{Hig\lambda}^{gr\frac{1}{2}}(\mathbb{F}_{q^{2n}})\simeq M_{dR\,\lambda}^{FF \frac{1}{2}}(\mathbb{Z}_{q^{2n}})/\chi,\]
where the right hand side is \(M_{dR\,\lambda}^{FF\frac{1}{2}}(\mathbb{Z}_{q^{2n}})\) modulo an equivalence relation. Here \(M_{dR\,\lambda}^{FF\frac{1}{2}}(\mathbb{Z}_{q^{2n}})\) is the set of rank-2 Fontaine-Faltings modules over \((\mathbb{P}^{1},D)_{\mathbb{Z}_{q^{2n}}}\) whose monodromy is of type-\((1/2)\), and two Fontaine-Faltings modules in \(M_{dR\,\lambda}^{FF\frac{1}{2}}(\mathbb{Z}_{q^{2n}})\) are called equivalent if they differ by a constant rank-1 Fontaine-Faltings module.
### Deligne's \(p\)-\(\ell\) companion conjecture solved by Abe
Consider the functors defined by forgetting the Hodge filtration and tensoring with \(\mathbb{Q}_{p}\), which induce an injective map between the set of logarithmic Fontaine-Faltings modules with parabolic structure of type-\((\frac{1}{2})\) and a set of logarithmic \(F\)-isocrystals. These functors preserve the type of the parabolic structure, and the resulting logarithmic \(F\)-isocrystals have a fixed constant determinant, namely the \(F\)-isocrystal \(\mathcal{E}_{\text{cy}}\) associated to the cyclotomic character. The injectivity follows from the fact that the underlying log-parabolic de Rham bundles are parabolic stable mod \(p\).
We further apply the forgetful functor sending logarithmic \(F\)-isocrystals to overconvergent \(F\)-isocrystals, which is known to be injective by work of Kedlaya [2, Proposition 6.3.2]. Hence, putting everything together, we obtain an injective map
\[M_{Hig\lambda}^{gr\frac{1}{2}}(\mathbb{F}_{q^{2n}})\simeq M_{dR\,\lambda}^{FF \frac{1}{2}}(\mathbb{Z}_{q^{2n}})/\chi\hookrightarrow M_{\lambda}^{F-iso\frac{1 }{2}\,\dagger}(\mathbb{Q}_{q^{2n}})/\chi,\]
where \(M_{\lambda}^{F-iso\frac{1}{2}\dagger}(\mathbb{Q}_{q^{2n}})\) is the set of overconvergent F-isocrystals with parabolic structure of type-\((1/2)\) and determinant \(\mathcal{E}_{\text{\rm cy}}\).
We choose a prime \(\ell\neq p\) and fix an isomorphism \(\phi:\overline{\mathbb{Q}}_{p}\simeq\overline{\mathbb{Q}}_{\ell}\). Given an overconvergent \(F\)-isocrystal \((V,\nabla,\Phi)^{\dagger}\in M_{\lambda}^{F-iso\,\frac{1}{2}\,\dagger}(\mathbb{Q}_{q^{2n}})\), by applying Deligne's \(p\)-\(\ell\) companion, proved by Abe, to \((V,\nabla,\Phi)^{\dagger}\), we find a rank-2 irreducible \(\ell\)-adic local system \(\mathbb{L}\) with cyclotomic determinant corresponding to \((V,\nabla,\Phi)^{\dagger}\), in the sense that the characteristic polynomial of \(\Phi_{x}\) on \(V_{x}\) and the characteristic polynomial of \(\sigma_{x}\) on \(\mathbb{L}_{x}\) agree via \(\phi\) at every closed point \(x\) in the closed fiber. By the compatibility of the local-global Langlands correspondence, we find that the local monodromies of \(\mathbb{L}\) around \(\{0,1,\lambda\}\) are unipotent and the local monodromy around \(\infty\) is quasi-unipotent with eigenvalues \(\{-1,\,-1\}\). Hence, we obtain a bijective functor
\[M_{\lambda}^{F-iso\,\frac{1}{2}\,\dagger}(\mathbb{Q}_{q^{2n}})/\chi\simeq M_{\ell-\text{adic}\,\lambda}^{\frac{1}{2}}(\mathbb{F}_{q^{2n}})/\sim,\]
where \(M_{\ell-\text{adic}\,\lambda}^{\frac{1}{2}}(\mathbb{F}_{q^{2n}})\) is the set of equivalence classes of rank-2 \(\ell\)-adic local systems over \((\mathbb{P}^{1}-D)_{\mathbb{F}_{q^{2n}}}\) with cyclotomic determinant, unipotent local monodromies around \(\{0,1,\lambda\}\), and quasi-unipotent local monodromy around \(\infty\) with eigenvalues \(\{-1,-1\}\). The \(\sim\) stands for an equivalence relation: two local systems \(\mathbb{L}\) and \(\mathbb{L}^{\prime}\) are called equivalent if their restrictions to the geometric fundamental group are isomorphic. As we show that the restriction of \(\mathbb{L}\) to the geometric fundamental group is automatically irreducible, two local systems \(\mathbb{L}\) and \(\mathbb{L}^{\prime}\) are equivalent if and only if they differ by a character of the absolute Galois group of \(\mathbb{F}_{q^{2n}}\).
Composing the above functors together we obtain an injective functor
\[M_{Hig\lambda}^{gr\frac{1}{2}}(\mathbb{F}_{q^{2n}})\simeq M_{dR\,\lambda}^{ FF\,\frac{1}{2}}(\mathbb{Z}_{q^{2n}})/\chi\hookrightarrow M_{\ell-\text{adic}\, \lambda}^{\frac{1}{2}}(\mathbb{F}_{q^{2n}})/\sim.\]
Over any field \(E\) we have
\[M_{Hig\lambda}^{gr\frac{1}{2}}\otimes E\simeq\mathbb{P}_{E}^{1},\]
in particular,
\[\#M_{Hig\lambda}^{gr\frac{1}{2}}(\mathbb{F}_{q^{2n}})=\#\mathbb{P}^{1}( \mathbb{F}_{q^{2n}})=q^{2n}+1.\]
On the other hand, by applying a formula due to Hongjie Yu [Y22], which solved a conjecture of Deligne on counting the number of \(\ell\)-adic local systems on a punctured smooth projective curve \((C,D)/\mathbb{F}_{q}\) in terms of the number of semistable parabolic graded Higgs bundles on \((C,D)/\mathbb{F}_{q}\), we find
\[\#M_{\ell-\text{adic}\,\lambda}^{\frac{1}{2}}(\mathbb{F}_{q^{2n}})=q^{2n}+1.\]
This equality implies that the above injective functor is, in fact, bijective
\[M_{Hig\lambda}^{gr\frac{1}{2}}(\mathbb{F}_{q^{2n}})\simeq M_{dR\,\lambda}^{ FF\,\frac{1}{2}}(\mathbb{Z}_{q^{2n}})/\chi\simeq M_{\ell-\text{adic}\,\lambda}^{ \frac{1}{2}}(\mathbb{F}_{q^{2n}})/\sim. \tag{1.1}\]
### Constructing abelian schemes over \(\mathbb{F}_{q}\) via Langlands correspondence and lifting the Hodge filtration of relative differential 1-forms to characteristic zero
Given a local system \(\mathbb{L}\in M_{\ell-adic\,\lambda}^{\frac{1}{2}}(\mathbb{F}_{q^{2n}})\) with cyclotomic determinant, one shows that the restriction of \(\mathbb{L}\) to the geometric fundamental group is irreducible with infinite local monodromy around at least one puncture. By applying Drinfeld's theorem [Dri81] to \(\mathbb{L}\), we obtain an abelian scheme
\[f:A\to\mathbb{P}_{\mathbb{F}_{q^{2n}}}^{1}\]
with the bad reduction \(\Delta\) over \(\{0,1,\infty,\lambda\}\) and such that:
1. The abelian scheme is of \(\mathrm{GL}_{2}(K)\)-type, where \(K\) is the number field generated by traces of Frobenius on \((\mathbb{L})_{x}\) at all closed points \(x\in\mathbb{P}^{1}-D\).
2. Let \[\mathbb{V}:=R^{1}_{\mathrm{et}}f_{*}\overline{\mathbb{Q}}_{\ell\,A^{0}}=\bigoplus _{i=1}^{g}\mathbb{L}_{i}\] be the \(K\)-eigen decomposition of the \(\ell\)-adic local system attached to the family. Then the local system \(\mathbb{L}\) is isomorphic to a \(K\)-eigen sheaf, say \(\mathbb{L}_{1}\), and all eigen sheaves have determinant given by the cyclotomic character. Moreover, the local monodromy matrices of the eigen sheaves at any puncture \(x\in\{0,1,\infty,\lambda\}\) are of the same type.
3. Considering the realization of the \(\mathcal{O}_{K}\)-log Dieudonne crystal attached to \(f\) over \((\mathbb{P}^{1},\{0,1,\infty,\lambda\})_{\mathbb{Z}_{q^{2n}}}\), one gets \[(V,\nabla,\Phi,\mathcal{V})=\Big{(}R^{1}_{\mathrm{cry}}f_{*}\mathcal{O}_{A, crys}\Big{)}(\mathbb{P}^{1}_{\mathbb{Z}_{q^{2n}}}).\] The \(\mathrm{GL}_{2}\)-structure induces a \(K\)-eigen sheaf decomposition of \(F\)-isocrystals \[(V,\nabla,\Phi)\otimes\mathbb{Q}_{p}=\bigoplus_{i=1}^{g}(V,\nabla,\Phi)_{i\, \mathbb{Q}_{p}}\] where \((V,\nabla,\Phi)_{i,\mathbb{Q}_{p}}\) is a \(\sigma^{f}\)-log \(F\)-isocrystal with determinant \(\mathcal{E}_{cy}\). Under the \(p\)-\(\ell\) companion correspondence, the eigen sheaves correspond to local systems with cyclotomic determinant. The parabolic structure induced by the residue of the connection is of type-\((1/2)\).
According to the bijection (1.1), each \((V,\nabla,\Phi)_{i\,\mathbb{Q}_{p}}\) has an integral lattice, which underlies a Fontaine-Faltings module \((V,\nabla,Fil,\Phi)^{FF}_{i}\) over \(\mathbb{Z}_{q^{2n}}\). Consequently, the multiplication field \(K\) is unramified at each place above \(p\). This shows that the above decomposition can be extended over \(\mathbb{Z}_{q^{2n}}\). In other words, there exist \((V,\nabla,\Phi)_{i}\) such that
\[(V,\nabla,\Phi)=\bigoplus_{i=1}^{g}(V,\nabla,\Phi)_{i}\]
and
\[(V,\nabla,\Phi)_{i}\otimes\mathbb{Q}_{p}\simeq(V,\nabla,\Phi)^{FF}_{i}\otimes \mathbb{Q}_{p}.\]
Consider the new \(\mathcal{O}_{K}\)-lattice
\[(V,\nabla,\Phi)^{FF}:=\bigoplus_{i=1}^{g}(V,\nabla,\Phi)^{FF}_{i}\]
of \((V,\nabla,\Phi)\otimes\mathbb{Q}_{p}\) defined from the Fontaine-Faltings module
\[(V,\nabla,Fil,\Phi)^{FF}:=\bigoplus_{i=1}^{g}(V,\nabla,Fil,\Phi)^{FF}_{i}.\]
By extending the coefficient, one gets a Verschiebung on \((V,\nabla,\Phi)\otimes\mathbb{Q}_{p}\). By restricting onto the new lattice, one gets a Verschiebung structure \(\mathcal{V}\) on \((V,\nabla,Fil,\Phi)^{FF}\).
According to the equivalence given by the Dieudonne functor, from the new modified \(\mathcal{O}_{K}\)-log Dieudonne crystal \((V,\nabla,\Phi,\mathcal{V})^{FF}\) one gets a \(p\)-isogeny \(f^{\prime}:A^{\prime}\to\mathbb{P}^{1}\) of the original abelian scheme such that \((V,\nabla,\Phi,\mathcal{V})^{FF}\) is the Dieudonne crystal attached to \(f^{\prime}\).
From the family \(f^{\prime}\), by taking relative differential 1-forms one gets the natural Hodge filtration on \((V,\nabla)^{FF}\otimes\mathbb{F}_{q^{2n}}\) given by
\[E^{1,0}:=R^{0}f^{\prime}_{*}\Omega^{1}_{A^{\prime}/\mathbb{P}^{1}}(\log\Delta) \subset(V,\nabla)^{FF}\otimes\mathbb{F}_{q^{2n}}=R^{1}_{dR}f^{\prime}_{*}( \Omega^{\bullet}_{A^{\prime}/\mathbb{P}^{1}}(\log\Delta),d),\]
which is a rank-\(g\) subbundle. Since the relative Frobenius \(\Phi\) in the Fontaine-Faltings module satisfies the strong \(p\)-divisibility condition with respect to the filtration \(Fil\), the Hodge filtration coincides with the modulo \(p\) reduction of the filtration in the Fontaine-Faltings module. In other words, \(Fil\) is a filtration lifting the Hodge filtration of relative differential 1-forms attached to \(f^{\prime}\).
### Lifting abelian scheme from characteristic \(p\) to characteristic zero by Grothendieck-Messing-Kato logarithmic deformation theorem
By Zarhin's trick, the fiber product
\[f^{{}^{\prime}(4,4)}:A^{{}^{\prime}4,4}=(A^{\prime}\times A^{{}^{\prime}t})^{4 }\rightarrow\mathbb{P}^{1}\]
with the induced \(\mathcal{O}^{(4,4)}_{K}\)-multiplication carries an \(\mathcal{O}^{(4,4)}_{K}\)-principal polarization
\[\iota:A^{{}^{\prime}4,4}\simeq A^{{}^{\prime}4,4t}.\]
This \(\mathcal{O}^{4,4}_{K}\)-polarization induces an isomorphism between the \(\mathcal{O}^{4,4}_{K}\)-eigen sheaves of the Dieudonne module and its dual
\[\iota^{*}:(V,\nabla,\Phi,\mathcal{V})^{FF}=\bigoplus_{i=1}^{8g}(V,\nabla,\Phi,\mathcal{V})^{FF}_{i}\simeq\bigoplus_{i=1}^{8g}(V,\nabla,\Phi,\mathcal{V})^{ FF\vee}_{i}=(V,\nabla,\Phi,\mathcal{V})^{FF\vee},\]
which carries the Hodge filtration \(Fil\) and \(Fil^{\vee}\) as liftings of the Hodge bundles \(E^{1,0}_{f^{{}^{\prime}4,4}}\) and \(E^{1,0}_{f^{{}^{\prime}4,4t}}\). One checks that \(\operatorname{Gr}_{E^{1,0}_{f^{{}^{\prime}4,4}}}(V,\nabla)^{FF}\otimes \mathbb{F}_{q^{2n}}\) and \(\operatorname{Gr}_{E^{1,0}_{f^{{}^{\prime}4,4t}}}(V,\nabla)^{FF\vee}\otimes \mathbb{F}_{q^{2n}}\) are parabolic stable with respect to the \(\mathcal{O}^{4,4}_{K}\)-eigen sheaves decomposition. Hence, we obtain
\[\iota^{*}Fil=Fil^{\vee} \tag{1.2}\]
By a theorem of Faltings-Chai [10], the fine arithmetic moduli space \(\mathcal{A}_{8g,1,3}=:\mathcal{X}^{0}\) of principally polarized abelian varieties with level-3 structure exists over \(\mathbb{Z}[e^{\frac{2i\pi}{3}},\,1/3]\), is smooth, and carries a universal abelian scheme
\[\mathcal{A}^{0}\rightarrow\mathcal{X}^{0}.\]
Furthermore, there exists a smooth toroidal compactification \(\mathcal{X}\supset\mathcal{X}^{0}\) over \(\mathbb{Z}[e^{\frac{2i\pi}{3}},\,1/3]\) with a smooth compactification of the universal abelian scheme
\[f:\mathcal{A}\rightarrow\mathcal{X}\]
such that \(\mathcal{A}\setminus\mathcal{A}^{0}\) is a relative normal crossing divisor over \(\mathcal{X}\setminus\mathcal{X}^{0}=:\infty\). Now consider the \(\mathcal{O}^{(4,4)}_{K}\)-principally polarized abelian scheme \(f^{{}^{\prime}(4,4)}\) together with a lifting of the base \((\mathbb{P}^{1},D)_{\mathbb{Z}_{q^{2n}}}\).
In order to obtain a period map into the fine moduli space \(\mathcal{X}^{0}\) for \(f^{{}^{\prime}(4,4)}\) we take the base change
\[\chi:(\mathbb{P}^{1},D)_{\chi}\rightarrow(\mathbb{P}^{1},D)\]
defined by the 6-torsion subgroup of \(f^{{}^{\prime}(4,4)}\). The fiber product of the base change
\[f^{{}^{\prime}4,4}_{\chi}:A^{{}^{\prime}4,4}_{\chi}\rightarrow\mathbb{P}^{1}_ {\chi}\]
has semistable reduction on \(D_{\chi}\) and carries a level-6 structure. Hence, \(f^{{}^{\prime}(4,4)0}_{\chi}\) induces a log period map
\[\psi:(\mathbb{P}^{1},D)_{\chi}\rightarrow(\mathcal{X},\infty)\otimes k.\]
By Kato's log deformation theorem [12], the local liftings of \(\psi\) over \(W_{n}(k)\) define an obstruction cocycle in \(H^{1}(\mathbb{P}^{1}_{\chi}\otimes k,\psi^{*}\Theta_{\mathcal{X}/k}(\log\infty))\). The local liftings can be glued if and only if the obstruction class vanishes.
By Faltings-Chai [13], the universal Kodaira-Spencer map induces an identification
\[\Theta_{\mathcal{X}}(\log\infty)\simeq S^{2}E^{0,1}_{\mathcal{X}}\]
and we may identify the obstruction to lifting \(\psi\) with the obstruction to lifting the Hodge bundle \(E^{1,0}_{f^{\prime}{}^{4,4}}\) satisfying (1.2). As \(E^{1,0}_{f^{\prime}{}^{4,4}}\) lifts, we show that \(\psi\) lifts, which corresponds to a lifting of the abelian scheme
\[(f^{{}^{\prime}4,4}:A^{\prime}_{\chi}{}^{4,4}\to\mathbb{P}^{1}_{\chi})_{\mathbb{ Z}_{q^{2n}}}\]
with a lifting of the \(\mathcal{O}^{4,4}_{K}\)-action, as the Dieudonne crystal with \(Fil\) carries an \(\mathcal{O}^{4,4}_{K}\)-action.
As \(f^{{}^{\prime}4,4}\) over \(\mathbb{P}^{1}_{\chi}/\mathbb{F}_{q^{2n}}\) descends to \(\mathbb{P}^{1}/\mathbb{F}_{q^{2n}}\), and the Fontaine-Faltings module attached to \(f^{\prime}{}^{4,4}\) over \(\mathbb{P}^{1}_{\chi}/W(\mathbb{F}_{q^{2n}})\) descends to \(\mathbb{P}^{1}/W(\mathbb{F}_{q^{2n}})\), we obtain that \(f^{\prime}{}^{4,4}\) descends to \(\mathbb{P}^{1}/W(\mathbb{F}_{q^{2n}})\).
## 2. Sketch of the proof of Theorem 0.8
Let \(g:C\to\mathbb{P}^{1}\) be the Legendre family. Then one may identify the smooth locus of \(g\) with \(M_{0,4}\), the moduli space of projective lines with 4 punctures, by sending \(\lambda\) to the projective line with punctures at \(\{0,1,\lambda,\infty\}\). For any \(\lambda\neq 0,1,\infty\), the fiber of \(g\) at \(\lambda\) is just the elliptic curve given by the double cover \(\pi_{\lambda}:C_{\lambda}\to\mathbb{P}^{1}\) ramified on \(\{0,1,\lambda,\infty\}\).
For \(\lambda_{0}\in M_{0,4}\), take a Higgs bundle \((E,\theta)\) in \(M^{gr\frac{1}{2}}_{Hig\,\lambda_{0}}\). Then Theorem 0.8 claims that \((E,\theta)\) is a motivic Higgs bundle if and only if \((\theta)_{0}\in\mathbb{P}^{1}\) is a torsion point with respect to \(\lambda_{0}\) (i.e. the preimages \(\pi_{\lambda_{0}}^{-1}((\theta)_{0})\subset C_{\lambda_{0}}\) are torsion points). In the following, we give a sketch of the proof of Theorem 0.8.
Assume \((E,\theta)\) is motivic. Then the modulo \(\mathfrak{p}\) reduction of \((E,\theta)\) is periodic for almost all places \(\mathfrak{p}\). According to Theorem 0.7, the modulo \(\mathfrak{p}\) reduction of \((\theta)_{0}\) is torsion. By a theorem of Pink [14], \((\theta)_{0}\) itself is torsion.
Conversely, assume \((\theta)_{0}\) is a torsion point with order \(m\), in the following we show \((E,\theta)\) is motivic. We first show a very special case:
**Case 1.** Assume \(\lambda_{0}\) takes a value in \(\mathcal{O}_{K}\), the ring of integers of some number field, such that \(C_{\lambda_{0}}\) is an elliptic curve with complex multiplication. Choose a sufficiently large place \(\mathfrak{p}\) such that the reduction of \(C_{\lambda_{0}}\) at \(\mathfrak{p}\) is supersingular and \(\mathfrak{p}\nmid m\).
The modulo \(\mathfrak{p}\) reduction \((\bar{E},\bar{\theta})\) of the Higgs bundle \((E,\theta)\) is also torsion of order \(m\) with \(\mathfrak{p}\nmid m\), so by Theorem 0.7 the reduction \((\bar{E},\bar{\theta})\) is periodic. According to the bijection in (1.1), the modulo \(\mathfrak{p}\) reduction \((\bar{E},\bar{\theta})\) lifts to a periodic Higgs bundle \((E,\theta)^{per}\) on \((\mathbb{P}^{1},\{0,1,\lambda_{0},\infty\})/W(\mathbb{F}_{q_{0}^{2n}})\). In Section 1.4 we show that there exists an abelian scheme \(f_{\lambda_{0}}:A\to\mathbb{P}^{1}\) of \(\operatorname{GL}_{2}(K)\)-type in characteristic 0 with bad reduction on \(\{0,\,1,\,\lambda_{0},\,\infty\}\) of type-\((1/2)\) such that \((E,\theta)^{per}\) is a \(K\)-eigen Higgs bundle attached to \(f_{\lambda_{0}}\). Hence, \((E,\theta)^{per}\) is motivic. In particular, \((\theta^{per})_{0}\) is also torsion and has the same modulo \(\mathfrak{p}\) reduction as \((\theta)_{0}\).
We claim that \((E,\theta)\simeq(E,\theta)^{per}\); in particular, \((E,\theta)\) is motivic. Since \(C_{\lambda_{0}}\) has complex multiplication, the field generated by the \(p\)-torsion points must be ramified above \(p\). In particular, the order of \((\theta^{per})_{0}\) is not divisible by \(p\). Together with the fact that \((\theta)_{0}\equiv(\theta^{per})_{0}\pmod{p}\), one gets \((E,\theta)\simeq(E,\theta)^{per}\).
**Case 2.** Let \(\Sigma_{m}\subset C\) be the \(m\)-torsion (multiple) section, \(T_{m}=\pi(\Sigma_{m})\subset M_{0,4}.\) Then \(T_{m}\) is etale over \(M_{0,4}\). Let \(T_{m}^{\prime}\) be the irreducible component of \(T_{m}\) containing \((\theta)_{0}\).
Recall the fact that the set of elliptic curves with complex multiplication is dense in the moduli space. We can find a subset of infinitely many \(\{\lambda_{i}\}\) with the same modulo \(p\) reduction such that all \(C_{\lambda_{i}}\) are elliptic curves with complex multiplication. Choose a point \(z_{i}\) in the intersection of \(T_{m}^{\prime}\) and the fiber above \(\lambda_{i}\) for each \(i\).
With the method of Case 1, we find abelian schemes \(f_{\lambda_{i}}:A_{\lambda_{i}}\to\mathbb{P}^{1}\) of \(\operatorname{GL}_{2}(K)\)-type with bad reduction on \(\{0,\,1,\,\lambda_{i},\,\infty\}\), such that \((E,\theta)_{z_{i}}\) is a \(K\)-eigen Higgs bundle attached to \(f_{\lambda_{i}}\). One shows that infinitely many of the \(\{f_{\lambda_{i}}\}\) glue together into an abelian scheme \(f:A\to M_{0,4}\). Consequently, the points \(\{z_{i}\}\), where \(z_{i}=(\theta_{z_{i}})_{0}\), glue into a component \(Z_{m}\) of \(\Sigma_{m}\) such that for any \(z_{\lambda}\in Z_{m}\cap(\mathbb{P}^{1},\{0,\,1,\,\lambda,\,\infty\})\) the Higgs bundle \((E,\theta)_{z_{\lambda}}\) with \((\theta_{z_{\lambda}})_{0}=z_{\lambda}\) is a motivic Higgs bundle on \((\mathbb{P}^{1},\{0,\,1,\,\lambda,\,\infty\})\).
By construction, both \(Z_{m}\) and \(T_{m}^{\prime}\) are irreducible, of relative dimension one over the base, and their intersection is infinite. Hence \(Z_{m}=T_{m}^{\prime}\). In particular, \((E,\theta)\) is motivic.
**Acknowledgement**.: _The authors warmly thank Raju Krishnamoorthy, Mao Sheng, Carlos Simpson, Rui-Ran Sun, Hong-Jie Yu and Shing-Tung Yau for helpful discussions._
|
2310.01618 | Operator Learning Meets Numerical Analysis: Improving Neural Networks
through Iterative Methods | Deep neural networks, despite their success in numerous applications, often
function without established theoretical foundations. In this paper, we bridge
this gap by drawing parallels between deep learning and classical numerical
analysis. By framing neural networks as operators with fixed points
representing desired solutions, we develop a theoretical framework grounded in
iterative methods for operator equations. Under defined conditions, we present
convergence proofs based on fixed point theory. We demonstrate that popular
architectures, such as diffusion models and AlphaFold, inherently employ
iterative operator learning. Empirical assessments highlight that performing
iterations through network operators improves performance. We also introduce an
iterative graph neural network, PIGN, that further demonstrates benefits of
iterations. Our work aims to enhance the understanding of deep learning by
merging insights from numerical analysis, potentially guiding the design of
future networks with clearer theoretical underpinnings and improved
performance. | Emanuele Zappala, Daniel Levine, Sizhuang He, Syed Rizvi, Sacha Levy, David van Dijk | 2023-10-02T20:25:36Z | http://arxiv.org/abs/2310.01618v1 | # Operator Learning Meets Numerical Analysis: Improving Neural Networks through Iterative Methods
###### Abstract
Deep neural networks, despite their success in numerous applications, often function without established theoretical foundations. In this paper, we bridge this gap by drawing parallels between deep learning and classical numerical analysis. By framing neural networks as operators with fixed points representing desired solutions, we develop a theoretical framework grounded in iterative methods for operator equations. Under defined conditions, we present convergence proofs based on fixed point theory. We demonstrate that popular architectures, such as diffusion models and AlphaFold, inherently employ iterative operator learning. Empirical assessments highlight that performing iterations through network operators improves performance. We also introduce an iterative graph neural network, PIGN, that further demonstrates benefits of iterations. Our work aims to enhance the understanding of deep learning by merging insights from numerical analysis, potentially guiding the design of future networks with clearer theoretical underpinnings and improved performance.
## 1 Introduction
Deep neural networks have become essential tools in domains such as computer vision, natural language processing, and physical system simulations, consistently delivering impressive empirical results. However, a deeper theoretical understanding of these networks remains an open challenge. This study seeks to bridge this gap by examining the connections between deep learning and classical numerical analysis.
By interpreting neural networks as operators that transform input functions to output functions, discretized on some grid, we establish parallels with numerical methods designed for operator equations. This approach facilitates a new iterative learning framework for neural networks, inspired by established techniques like the Picard iteration.
Our findings indicate that certain prominent architectures, including diffusion models, AlphaFold, and Graph Neural Networks (GNNs), inherently utilize iterative operator learning (see Figure 1). Empirical evaluations show that adopting a more explicit iterative approach in these models can enhance performance. Building on this, we introduce the Picard Iterative Graph Neural Network (PIGN), an iterative GNN model, demonstrating its effectiveness in node classification tasks.
In summary, our work:
* Explores the relationship between deep learning and numerical analysis from an operator perspective.
* Introduces an iterative learning framework for neural networks, supported by theoretical convergence proofs.
* Evaluates the advantages of explicit iterations in widely-used models.
* Presents PIGN and its performance metrics in relevant tasks.
* Provides insights that may inform the design of future neural networks with a stronger theoretical foundation.
The remainder of this manuscript is organized as follows: We begin by delving into the background and related work to provide the foundational understanding for our contributions. This is followed by an introduction to our theoretical framework for neural operator learning. Subsequently, we delve into a theoretical exploration of how various prominent deep learning frameworks undertake operator learning. We conclude with empirical results underscoring the advantages of our proposed framework.
## 2 Background and Related Work
**Numerical Analysis.** Numerical analysis is rich with algorithms designed for approximating solutions to mathematical problems. Among these, the Banach-Caccioppoli theorem is notable, used for iteratively solving operator equations in Banach spaces. The iterations, often called fixed point iterations or Picard iterations, allow one to solve an operator equation approximately, in an iterative manner. Given an operator \(T\), this approach seeks a function \(u\) such that \(T(u)=u\), called a fixed point, starting with an initial guess and refining it iteratively.
The use of iterative methods has a long history in numerical analysis for approximate solutions of intractable equations, for instance involving nonlinear operators. For example, integral equations, e.g. of Urysohn and Hammerstein type, arise frequently in physics and engineering applications and their study has long been treated as a fixed point problem [13, 14, 15].

Figure 1: Overview of iterative framework. (A) Popular architectures which incorporate iterative components in their framework. (B) Convergence behavior of an iterative solver. (C) Behavior of iterative solver converging to a fixed point in the data manifold.
Convergence to fixed points can be guaranteed under contractivity assumptions by the Banach-Caccioppoli fixed point theorem [1]. Iterative solvers have also been crucial for partial differential equations and many other operator equations [16].
**Operator Learning.** Operator learning is a class of deep learning methods where the objective of optimization is to learn an operator between function spaces. Examples and an extended literature can be found in [17, 18]. The interest of such an approach is that, by mapping functions to functions, we can model dynamics datasets and leverage the theory of operators. When the learned operator is defined through an equation, e.g. an integral equation as in [19], along with the training procedure we also need a way of solving said equation, i.e. we need a solver. For highly nonlinear problems, when deep learning is not involved, these solvers often utilize some iterative procedure as in [16]. Our approach here brings the generality of iterative approaches into deep learning by allowing one to learn operators between function spaces through the iterative procedures used in solving nonlinear operator equations.
**Transformers.** Transformers ([23, 17, 18]), originally proposed for natural language processing tasks, have recently achieved state-of-the-art results in a variety of computer vision applications ([19, 18, 16, 17, 20, 2]). Their self-attention mechanisms make them well-suited for tasks beyond just sequence modeling. Notably, transformers have been applied in an iterative manner in some contexts, such as the "recycling" technique used in AlphaFold2 [15].
**AlphaFold.** DeepMind's AlphaFold [19] is a protein structure prediction model, which was significantly improved in [15] with the introduction of AlphaFold2 and further extended to protein complex modeling in AlphaFold-Multimer [14]. AlphaFold2 employs an iterative refinement technique called "recycling", which recycles the predicted structure through its entire network. The number of iterations was increased from 3 to 20 in AF2Complex [1], where improvement was observed. An analysis of DockQ scores with increased iterations can be found in [1]. We only look at monomer targets, where DockQ scores do not apply, and focus on global distance test (GDT) scores and root-mean-square deviation (RMSD).
**Diffusion Models.** Diffusion models were first introduced in [13] and were shown to have strong generative capabilities in [19] and [20]. They are motivated by diffusion processes in non-equilibrium thermodynamics [14] related to Langevin dynamics and the corresponding Kolmogorov forward and backward equations. Their connection to stochastic differential equations and numerical solvers is highlighted in [13, 14, 15, 16]. We focus on the performance of diffusion models at different numbers of timesteps used during training, including an analysis of FID [12] scores.
**Graph Neural Networks (GNNs).** GNNs are designed to process graph-structured data through iterative mechanisms. Through a process called message passing, they repeatedly aggregate and update node information, refining their representations. The iterative nature of GNNs was explored in [12], where the method combined repeated applications of the same GNN layer using confidence scores. Although this shares similarities with iterative techniques, our method distinctly leverages fixed-point theory, offering specific guarantees and enhanced performance, as detailed in Section 5.1.
## 3 Iterative Methods for Solving Operator Equations
In the realm of deep learning and neural network models, direct solutions to operator equations often become computationally intractable. This section offers a perspective that is applicable to
machine learning, emphasizing the promise of iterative methods for addressing such challenges in operator learning. We particularly focus on how the iterative numerical methods converge and their application to neural network operator learning. These results will be used in the Appendix to derive theoretical convergence guarantees for iterations on GNNs and Transformer architectures, see Appendix A.
### Setting and Problem Statement
Consider a Banach space \(X\). Let \(T:X\longrightarrow X\) be a continuous operator. Our goal is to find solutions to the following equation:
\[\lambda T(x)+f=x, \tag{1}\]
where \(f\in X\) and \(\lambda\in\mathbb{R}-\{0\}\) is a nontrivial scalar. A solution to this equation is a fixed point \(x^{*}\) for the operator \(P=\lambda T+f\):
\[\lambda T(x^{*})+f=x^{*}. \tag{2}\]
### Iterative Techniques
It is clear that for arbitrary nonlinear operators, solving Equation (1) is not feasible. Iterative techniques such as Picard or Newton-Kantorovich iterations become pivotal. These iterations utilize a function \(g\) and progress as:
\[x_{n+1}=g(T,x_{n}). \tag{3}\]
Central to our discussion is the interplay between iterative techniques and neural network operator learning. We highlight the major contribution of this work: By using network operators iteratively during training, convergence to network fixed points can be ensured. This approach uniquely relates deep learning with classical numerical techniques.
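As a concrete illustration of the iteration (3) in the Picard form \(x_{n+1}=f+\lambda T(x_{n})\), the following self-contained Python sketch (ours; the operator and function names are illustrative, not the paper's implementation) shows the scheme converging to a fixed point for a contractive choice of \(T\) and \(\lambda\).

```python
import numpy as np

def picard_solve(T, f, lam, x0, tol=1e-10, max_iter=1000):
    """Iterate x <- f + lam * T(x) until successive iterates are tol-close."""
    x = x0
    for n in range(max_iter):
        x_next = f + lam * T(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next, n + 1
        x = x_next
    return x, max_iter

T = np.tanh                              # Lipschitz with constant k = 1
f = np.array([0.5, -1.0, 2.0])
lam = 0.5                                # |lam| * k < 1, so the iteration contracts
x_star, n_iters = picard_solve(T, f, lam, x0=f.copy())
print(n_iters, np.linalg.norm(lam * T(x_star) + f - x_star))  # residual is ~ 0
```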
### Convergence of Iterations and Their Application
A particular case of great interest is when the operator \(T\) takes an integral form and \(X\) represents a function space; in this case, our framework captures the essence of an integral equation (IE). By introducing \(P_{\lambda}(x)=\lambda T(x)+f\), we can rephrase our problem as a search for fixed points.
We now consider the problem of approximating a fixed point of a nonlinear operator. The results of this section are applied to various deep learning settings in Appendix A to obtain theoretical guarantees for the iterative approaches.
**Theorem 1**.: _Let \(\epsilon>0\) be fixed, and suppose that \(T\) is Lipschitz with constant \(k\). Then, for all \(\lambda\) such that \(|\lambda k|<1\), we can find \(y\in X\) such that \(||\lambda T(y)+f-y||<\epsilon\) for any choice of \(\lambda\), independently of the choice of \(f\)._
Proof.: Let us set \(y_{0}:=f\) and \(y_{n+1}=f+\lambda T(y_{n})\) and consider the term \(||y_{1}-y_{0}||\). We have
\[||y_{1}-y_{0}||=||\lambda T(y_{0})||=|\lambda|||T(y_{0})||.\]
For an arbitrary \(n>1\) we have
\[||y_{n+1}-y_{n}||=||\lambda T(y_{n})-\lambda T(y_{n-1})||\leq k|\lambda|||y_{ n}-y_{n-1}||.\]
Therefore, applying the same procedure to \(y_{n}-y_{n-1}=T(y_{n-1})-T(y_{n-2})\) until we reach \(y_{1}-y_{0}\), we obtain the inequality
\[||y_{n+1}-y_{n}||\leq|\lambda|^{n}k^{n}||T(y_{0})||.\]
Since \(|\lambda|k<1\), the term \(|\lambda|^{n}k^{n}||T(y_{0})||\) is eventually smaller than \(\epsilon\), for all \(n\geq\nu\) for some choice of \(\nu\). Defining \(y:=y_{\nu}\) for such \(\nu\) gives the following
\[||\lambda T(y_{\nu})+f-y_{\nu}||=||y_{\nu+1}-y_{\nu}||<\epsilon.\]
The following now follows easily.
**Corollary 1**.: _Consider the same hypotheses as above. Then Equation 1 admits a solution for any choice of \(\lambda\) such that \(|\lambda|k<1\)._
Proof.: From the proof of Theorem 1 it follows that the sequence \(y_{n}\) is a Cauchy sequence. Since \(X\) is Banach, then \(y_{n}\) converges to \(y\in X\). By continuity of \(T\), \(y\) is a solution to Equation 1.
Recall that for nonlinear operators, continuity and boundedness are not equivalent conditions.
**Corollary 2**.: _If in the same situation above \(T\) is also bounded, then the choice of \(\nu\) of the iteration can be chosen uniformly with respect to \(f\), for a fixed choice of \(\lambda\)._
Proof.: From the proof of Theorem 1, we have that
\[||y_{n+1}-y_{n}||\leq|\lambda|^{n}k^{n}||T(y_{0})||=|\lambda|^{n}k^{n}||T(f)||.\]
If \(T\) is bounded by \(M\), then the previous inequality is independent of the element \(f\in X\). Let us choose \(\nu\) such that \(|\lambda|^{n}k^{n}<\epsilon/M\). Then, suppose \(f\) is an arbitrary element of \(X\). Initializing \(y_{0}=f\), \(y_{\nu}\) will satisfy \(||\lambda T(y_{\nu})+f-y_{\nu}||<\epsilon\), for any given choice of \(\epsilon\).
The following result is classic, and its proof can be found in several sources. See for instance Chapter 5 in [1].
**Theorem 2**.: _(Banach-Caccioppoli fixed point theorem) Let \(X\) be a Banach space, and let \(T:X\longrightarrow X\) be contractive mapping with contractivity constant \(0<k<1\). Then, \(T\) has a unique fixed point, i.e. the equation \(T(x)=x\) has a unique solution \(u\) in \(X\). Moreover, for any choice of \(u_{0}\), \(u_{n}=T^{n}(u_{0})\) converges to the solution with rate of convergence_
\[||u_{n}-u|| < \frac{k^{n}}{1-k}||u_{0}-u_{1}||, \tag{4}\] \[||u_{n}-u|| < \frac{k}{1-k}||u_{n-1}-u_{n}||. \tag{5}\]
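The a priori bound (4) can be checked numerically on a toy contraction; the snippet below (ours, purely illustrative) verifies that the error of the Picard iterates stays below the predicted \(\frac{k^{n}}{1-k}\,\|u_{0}-u_{1}\|\) envelope.

```python
import numpy as np

k, c = 0.5, 1.0
T = lambda u: k * np.cos(u) + c           # contraction on R with constant k = 0.5

u0 = 0.0
u1 = T(u0)
u_star = u0
for _ in range(200):                      # run long enough to treat this as the fixed point
    u_star = T(u_star)

ok, un = True, u0
for n in range(1, 30):
    un = T(un)
    ok &= abs(un - u_star) <= k**n / (1 - k) * abs(u0 - u1) + 1e-12  # bound (4)
print(ok)                                 # True
```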
The possibility of solving Equation 1 with different choices of \(f\) is particularly important in the applications that we intend to consider, as \(f\) is interpreted as the initialization of the model. While various models employ iterative procedures for operator learning tasks implicitly, they lack a general theoretical perspective that justifies their approach. Several other models can be modified using iterative approaches to produce better performance with a lower number of parameters. We will give experimental results in this regard to validate the practical benefit of our theoretical framework.
While the iterations considered so far follow a fixed procedure which is identical at each step, more general iterative procedures where the step changes between iterations are also widespread, and this can also be done adaptively.
### Applications
**Significance and Implications.** Our results underscore the existence of a solution for Equation 1 under certain conditions. Moreover, when the operator \(T\) is bounded, our iterative method showcases uniform convergence. It follows that, by ensuring that the operators approximated by deep neural network architectures are contractive, we can introduce an iterative procedure that will allow us to converge to the fixed point as in Equation 2.
**Iterative Methods in Modern Deep Learning.** In contemporary deep learning architectures, especially those like Transformers, Stable Diffusion, AlphaFold, and Neural Integral Equations, the importance of operator learning is growing. However, these models, despite employing iterative techniques, often lack the foundational theoretical perspective that our framework provides. We will subsequently present experimental results that vouch for the efficacy and practical advantages of our theoretical insights.
**Beyond Basic Iterations.** While we have discussed iterations with fixed procedures, it is imperative to highlight that more general iterative procedures exist, and they can adapt dynamically. Further, there exist methods to enhance the rate of convergence of iterative procedures, and our framework is compatible with them.
## 4 Neural Network Architectures as Iterative Operator Equations
In this section, we explore how various popular neural network architectures align with the framework of iterative operator learning. By emphasizing this operator-centric view, we unveil new avenues for model enhancements. Notably, shifting from implicit to explicit iterations can enhance model efficacy, e.g. through shared parameters across layers. A detailed discussion of the various methodologies given in this section is reported in Appendix B. In the appendix, we investigate architectures such as neural integral equations, transformers, AlphaFold for protein structure prediction, diffusion models, graph neural networks, autoregressive models, and variational autoencoders. We highlight the iterative numerical techniques underpinning these models, emphasizing potential advancements via methods like warm restarts and adaptive solvers. Empirical results substantiate the benefits of this unified perspective in terms of accuracy and convergence speed.
**Diffusion models.** Diffusion models, especially denoising diffusion probabilistic models (DDPMs), capture a noise process and its reverse (denoising) trajectory. While score matching with Langevin dynamics models (SMLDs) is relevant, our focus is primarily on DDPMs for their simpler setup. These models transition from complex pixel-space distributions to more tractable Gaussian distributions. Notably, increasing iterations can enhance the generative quality of DDPMs, a connection we wish to deepen. This procedure can be seen as instantiating an iteration procedure, where iterations are modified as in the methods found in [20]. This operator setting and iterative interpretation is described in detail in Appendix B.1.
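To make the iterative reading explicit, the following Python sketch (ours; `eps_model` is only a placeholder for the trained denoising network, and the schedule is the standard linear one) writes down the DDPM reverse process as a loop that repeatedly applies the noise-prediction operator.

```python
import numpy as np

T_steps = 1000
betas = np.linspace(1e-4, 0.02, T_steps)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_model(x, t):                      # stand-in for the trained UNet noise predictor
    return np.zeros_like(x)

def ddpm_sample(shape, rng=np.random.default_rng(0)):
    x = rng.standard_normal(shape)        # x_T ~ N(0, I)
    for t in reversed(range(T_steps)):    # one denoising iteration per timestep
        z = rng.standard_normal(shape) if t > 0 else 0.0
        coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps_model(x, t)) / np.sqrt(alphas[t]) + np.sqrt(betas[t]) * z
    return x

sample = ddpm_sample((3, 32, 32))         # a CIFAR-10-sized sample
```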
To empirically explore convergence with iterations in diffusion models, we train 10 different DDPMs with 100-1000 iterations and analyze their training dynamics and perceptual quality. Figure 2 reveals that increasing timesteps improves FID scores of generated images. Additionally, Figure 2 demonstrates a consistent decrease in both training and test loss with more time steps, attributed to the diminished area under the expected KL divergence curve over time (Figure 8). Notably, FID scores decline beyond the point of test loss convergence, stabilizing after approximately 150,000 steps (Figure 8). This behavior indicates robust convergence with increasing iterations.
**AlphaFold.** AlphaFold, a revolutionary protein structure prediction model, takes amino acid sequences and predicts their three-dimensional structure. While the model's intricacies can be found in [17], our primary interest lies in mapping AlphaFold within the operator learning context. Considering an input amino acid sequence, it undergoes processing to yield a multiple sequence alignment (MSA) and a pairwise feature representation. These data are subsequently fed into Evoformers and Structure Modules, iteratively refining the protein's predicted structure. We can think of the output of the Evoformer model as a pair of functions lying in some discretized Banach space, while the Structure Modules of AlphaFold can be thought of as being operators over a space of matrices. This is described in detail in Appendix B.2.
To empirically explore the convergence behavior of AlphaFold as a function of iterations, we applied AlphaFold-Multimer across a range of 0-20 recycles on each of the 29 monomers using ground truth targets from CASP15. Figure 3 presents the summarized results, which show that, while on average the GDT scores and RMSD improve with additional recycles, not all individual targets consistently converge, as depicted in Figures 4 and 5. Given that AlphaFold lacks a convergence constraint in its training, its predictions can exhibit variability across iterations.
**Graph Neural Networks.** Graph neural networks (GNNs) excel in managing graph-structured data by harnessing a differentiable message-passing mechanism. This enables the network to assimilate information from neighboring nodes to enhance their representations. We can think of the feature spaces as being Banach spaces of functions, which are discretized according to some grid. The GNN architecture can be thought of as being an operator acting on the direct sum of the Banach spaces, where the underlying geometric structure of the graph determines how the operator combines information through the topological information of the graph. A detailed description is given in Appendix A.3, where theoretical guarantees for the convergence of the iterations are given, and Appendix B.3.
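A minimal sketch of this operator view (ours, not the exact PIGN implementation described in Section 5.1) treats one message-passing step as an operator \(T\) on node features and applies it with Picard iterations and shared weights:

```python
import numpy as np

def gnn_operator(A_hat, W):
    """One message-passing step: aggregate neighbors through A_hat, transform, apply nonlinearity."""
    return lambda H: np.tanh(A_hat @ H @ W)

def picard_message_passing(A_hat, H0, W, lam=0.5, n_iters=10):
    f, H, T = H0, H0, gnn_operator(A_hat, W)
    for _ in range(n_iters):              # the same (shared-weight) operator at every iteration
        H = f + lam * T(H)                # contractive when |lam| * ||A_hat|| * ||W|| < 1
    return H

# Toy path graph with 4 nodes and 2-dimensional features.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
deg = A.sum(1)
A_hat = A / np.sqrt(np.outer(deg, deg))   # symmetric normalization D^{-1/2} A D^{-1/2}
rng = np.random.default_rng(0)
H0, W = rng.standard_normal((4, 2)), 0.5 * rng.standard_normal((2, 2))
print(picard_message_passing(A_hat, H0, W).shape)  # (4, 2)
```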
**Neural Integral Equations.** Neural Integral Equations (NIEs), and their variant Attentional Neural Integral Equations (ANIEs), draw inspiration from integral equations. Here, an integral operator, determined by a neural network, plays a pivotal role.
Denoting the integrand of the integral operator as \(G_{\theta}\) within an NIE, the equation becomes:
\[\mathbf{y}=f(\mathbf{y},\mathbf{x},t)+\int_{\Omega\times[0,1]}G_{\theta}( \mathbf{y},\mathbf{x},\mathbf{z},t,s)d\mathbf{z}ds\]
To solve such integral equations, one very often uses iterative methods, as done in [23], and the training of the NIE model consists of finding the parameters \(\theta\) such that the solutions of the corresponding integral equations model the given data. A more detailed discussion of this model is given in Appendix B.4.
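A small numerical sketch (ours; the learned integrand \(G_{\theta}\) is replaced by a fixed smooth kernel, and the domain is one-dimensional) shows how such an integral equation can be solved by Picard iteration combined with a quadrature rule:

```python
import numpy as np

n = 64
xs = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                        # uniform quadrature weights
K = np.exp(-np.abs(xs[:, None] - xs[None, :])) # stand-in kernel G(x, z)
f = np.sin(2 * np.pi * xs)
lam = 0.3                                      # small enough for contractivity

y = f.copy()
for _ in range(50):                            # Picard iterations
    y = f + lam * (K * w) @ np.tanh(y)         # y(x) <- f(x) + lam * int K(x,z) tanh(y(z)) dz

residual = np.max(np.abs(y - (f + lam * (K * w) @ np.tanh(y))))
print(residual)                                # ~ 0 once the iteration has converged
```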
## 5 Experiments
In this section, we showcase experiments highlighting the advantages of explicit iterations. We introduce a new GNN architecture based on Picard iteration and enhance vision transformers with Picard iteration.

Figure 2: **Left and Middle**: Losses always decrease with more iterations in DDPMs. Training is stable and overfitting never occurs. EMA smoothing with \(\alpha=0.1\) is used for the loss curves to make the differences clearer. **Right**: DDPMs show that FID and loss improve with an increased number of iterations on CIFAR-10. The number of iterations represents the denoising steps during training and inference. All diffusion models, UNets of identical architecture, are trained on CIFAR-10's training dataset with 64-image batches.
### PIGN: Picard Iterative Graph Neural Network
To showcase the benefits of explicit iterations in GNNs, we developed **P**icard **I**teration **G**raph neural **N**etwork (PIGN), a GNN that applies Picard iterations for message passing. We evaluate PIGN against state-of-the-art GNN methods and another iterative approach called IterGNN [THG\({}^{+}\)20] on node classification tasks.
GNNs can suffer from over-smoothing and over-squashing, limiting their ability to capture long-range dependencies in graphs [NHN\({}^{+}\)23]. We assess model performance on noisy citation graphs (Cora and CiteSeer) with added drop-in noise. Drop-in noise involves increasing a percentage \(p\) of the bag-of-words feature values, hindering classification. We also evaluate on a long-range benchmark (LRGB) for graph learning [DRG\({}^{+}\)22].
Table 5 shows PIGN substantially improves accuracy over baselines on noisy citation graphs. The explicit iterative process enhances robustness. Table 1 illustrates PIGN outperforms prior iterative and non-iterative GNNs on the long-range LRGB benchmark, using various standard architectures. Applying Picard iterations enables modeling longer-range interactions.
The PIGN experiments demonstrate the benefits of explicit iterative operator learning. Targeting weaknesses of standard GNN training, PIGN effectively handles noise and long-range dependencies. A theoretical study of convergence guarantees is given in Appendix A.3.
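For concreteness, the sketch below shows one hypothetical way to wrap a single dense GCN-style propagation step in the damped Picard update discussed above. It is not the PIGN implementation (which is specified in the appendix); the layer, the toy graph, and the hyper-parameters are stand-ins chosen only to make the update rule explicit.

```python
import torch

class PicardGCNLayer(torch.nn.Module):
    """Damped fixed-point message passing: h_{k+1} = (1 - lam) * T(h_k) + lam * h_k."""
    def __init__(self, dim, n_iter=4, lam=0.5):
        super().__init__()
        self.lin = torch.nn.Linear(dim, dim)
        self.n_iter, self.lam = n_iter, lam

    def forward(self, h, A_hat):
        for _ in range(self.n_iter):
            t = torch.relu(A_hat @ self.lin(h))       # one propagation step T(h)
            h = (1.0 - self.lam) * t + self.lam * h   # Picard-style damping
        return h

# Toy usage: a 4-node path graph with 8-dimensional node features.
A = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]) + torch.eye(4)
deg = A.sum(1)
A_hat = A / torch.sqrt(deg[:, None] * deg[None, :])   # D^{-1/2} (A + I) D^{-1/2}
out = PicardGCNLayer(8)(torch.randn(4, 8), A_hat)
print(out.shape)  # torch.Size([4, 8])
```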
### Enhancing Transformers with Picard Iteration
We hypothesize that many neural network frameworks can benefit from Picard iterations. Here, we empirically explore adding iterations to Vision Transformers. Specifically, we demonstrate the benefits of explicit Picard iteration in transformer models on the task of solving the Navier-Stokes partial differential equation (PDE) as well as self-supervised masked prediction of images. We evaluate various Vision Transformer (ViT) architectures [1] along with Attentional Neural Integral Equations (ANIE) [13].

\begin{table}
\begin{tabular}{l c c c} \hline \hline & **GCN**[15] & **GAT**[15] & **GraphSAGE**[15] \\ \hline w/o iterations & \(0.1510\pm 0.0029\) & \(0.1204\pm 0.0127\) & \(0.3015\pm 0.0032\) \\ IterGNN & \(0.1736\pm 0.0311\) & \(0.1099\pm 0.0459\) & \(0.1816\pm 0.0014\) \\ PIGN (Ours) & \(\mathbf{0.1831\pm 0.0038}\) & \(\mathbf{0.1706\pm 0.0046}\) & \(\mathbf{0.3560\pm 0.0037}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: F1 scores of different models on the standard test split of the LRGB PascalVOC-SP dataset. Rows refer to model frameworks and columns are GNN backbone layers. A budget of 500k trainable parameters is set for each model. Each model is run on the same set of 5 random seeds. The mean and standard deviation are reported. For IterGNN with the GAT backbone, two of the runs kept producing exploding losses, so the reported statistics only include three runs.

Figure 3: On average, additional iterations enhance AlphaFold-Multimer’s performance, though the model does not invariably converge with more iterations. Target-specific trends can be seen in Figures 4 and 5.
For each model, we perform training and evaluation with different numbers of Picard iterations as described in Section 3.2. We empirically observe improved performance with more iterations for all models, since additional steps help better approximate solutions to the operator equations.
Table 2 shows lower mean squared error on the PDE task for Vision Transformers when using up to three iterations compared to the standard single-pass models. Table 3 shows a similar trend for self-supervised masked prediction of images. Finally, Table 4 illustrates that higher numbers of iterations in ANIE solvers consistently reduce error. We observe in our experiments across several transformer-based models and datasets that, generally, more iterations improve performance.
Overall, these experiments highlight the benefits of explicit iterative operator learning. For transformer-based architectures, repeating model application enhances convergence to desired solutions. Our unified perspective enables analyzing and improving networks from across domains. A theoretical study of the convergence guarantees of the iterations is given in Appendix A.2.
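As a rough illustration of the inference-time iteration used for the masked-prediction experiments (Table 3), the snippet below applies a reconstruction model \(T\) with the update \(x_{i+1}=(1-\lambda)T(x_i)+\lambda x_i\). The convolutional model is only a stand-in for the actual ViT, and the masking ratio mirrors the 75% pixel dropout described in the caption.

```python
import torch

def iterative_reconstruction(model, x_masked, n_iter=2, lam=0.5):
    """Inference-time Picard iterations: x_{i+1} = (1 - lam) * model(x_i) + lam * x_i."""
    x = x_masked
    with torch.no_grad():
        for _ in range(n_iter):
            x = (1.0 - lam) * model(x) + lam * x
    return x

# Toy stand-in for the masked-prediction ViT: any module mapping images to images works here.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
img = torch.rand(1, 3, 32, 32)
mask = (torch.rand(1, 1, 32, 32) > 0.75).float()   # keep ~25% of pixels, black out the rest
print(iterative_reconstruction(model, img * mask).shape)   # torch.Size([1, 3, 32, 32])
```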
## 6 Discussion
We introduced an iterative operator learning framework in neural networks, drawing connections between deep learning and numerical analysis. Viewing networks as operators and employing techniques like Picard iteration, we established convergence guarantees. Our empirical results, exemplified by PIGN--an iterative GNN, as well as an iterative vision transformer, underscore the benefits of explicit iterations in modern architectures.
For future work, a deeper analysis is crucial to pinpoint the conditions necessary for convergence and stability within our iterative paradigm. There remain unanswered theoretical elements about dynamics and generalization. Designing network architectures inherently tailored for iterative processes might allow for a more effective utilization of insights from numerical analysis. We are also intrigued by the potential of adaptive solvers that modify the operator during training, as these could offer notable advantages in both efficiency and flexibility.

\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & \(N_{\text{iter}}=1\) & \(N_{\text{iter}}=2\) & \(N_{\text{iter}}=3\) \\ \hline ViT & \(0.2472\pm 0.0026\) & \(0.2121\pm 0.0063\) & \(\mathbf{0.0691\pm 0.0024}\) \\ ViTsmall & \(0.2471\pm 0.0025\) & \(0.1672\pm 0.0087\) & \(\mathbf{0.0648\pm 0.0022}\) \\ ViTparallel & \(0.2474\pm 0.0027\) & \(0.2172\pm 0.0066\) & \(\mathbf{0.2079\pm 0.0194}\) \\ ViT3D & \(0.2512\pm 0.0082\) & \(\mathbf{0.2237\pm 0.0196}\) & \(0.2529\pm 0.0079\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: ViT models used to solve a PDE (Navier-Stokes). The mean squared error is reported for each model as the number of iterations varies. A single iteration indicates the baseline ViT model. Higher iterations perform better than the regular ViT (\(N_{\text{iter}}=1\)).
In summation, this work shines a light on the synergies between deep learning and numerical analysis, suggesting that the operator-centric viewpoint could foster future innovations in the theory and practical applications of deep neural networks.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Model size** & \(N_{\text{iter}}=1\) & \(N_{\text{iter}}=2\) & \(N_{\text{iter}}=4\) & \(N_{\text{iter}}=6\) & \(N_{\text{iter}}=8\) \\ \hline \(1H|1B\) & \(0.0564\pm 0.0070\) & \(0.0474\pm 0.0065\) & \(0.0448\pm 0.0062\) & \(0.0446\pm 0.0065\) & \(\mathbf{0.0442\pm 0.0065}\) \\ \(4H|1B\) & \(0.0610\pm 0.0078\) & \(0.0516\pm 0.0083\) & \(0.0512\pm 0.0070\) & \(0.0480\pm 0.0066\) & \(\mathbf{0.0478\pm 0.0066}\) \\ \(2H|2B\) & \(0.0476\pm 0.0065\) & \(0.0465\pm 0.0067\) & \(0.0458\pm 0.0067\) & \(0.0451\pm 0.0064\) & \(\mathbf{0.0439\pm 0.0062}\) \\ \(4H|4B\) & \(0.0458\pm 0.0062\) & \(0.0461\pm 0.0065\) & \(0.0453\pm 0.0063\) & \(0.0453\pm 0.0061\) & \(\mathbf{0.0445\pm 0.0059}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of ANIE on a PDE (Navier-Stokes) as the number of iterations of the integral equation solver varies, and for different sizes of architecture. Here \(H\) indicates the number of heads and \(B\) indicates the number of blocks (layers). A single iteration means that the integral operator is applied once. As the number of iterations of the solver increases, the performance of the model in terms of mean squared error improves.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & \(N_{\text{iter}}=1\) & \(N_{\text{iter}}=2\) & \(N_{\text{iter}}=3\) \\ \hline ViT (MSE) & \(0.0126\pm 0.0006\) & \(\mathbf{0.0121\pm 0.0006}\) & \(0.0122\pm 0.0006\) \\ ViT (FID) & \(20.0433\) & \(20.0212\) & \(\mathbf{19.2956}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: ViT models trained with a pixel dropout reconstruction objective on CIFAR-10. The ViT architecture contains 12 encoder layers, 4 decoder layers, 3 attention heads in both the encoder and decoder. The embedding dimension and patch size are 192 and 2. The employed loss is \(\text{MSE}((1-\lambda)T(x_{i})+\lambda x_{i},y)\), computed on the final iteration \(N_{\text{iter}}=i\). Images are altered by blacking out 75% of pixels. During inference, iterative solutions are defined as \(x_{i+1}=(1-\lambda)T(x_{i})+\lambda x_{i}\), for \(i\in\{0,1,\ldots N\}\). Here, \(N=2\) and \(\lambda=1/2\). |
2302.01409 | Hyperbolic Contrastive Learning | Learning good image representations that are beneficial to downstream tasks
is a challenging task in computer vision. As such, a wide variety of
self-supervised learning approaches have been proposed. Among them, contrastive
learning has shown competitive performance on several benchmark datasets. The
embeddings of contrastive learning are arranged on a hypersphere that results
in using the inner (dot) product as a distance measurement in Euclidean space.
However, the underlying structure of many scientific fields like social
networks, brain imaging, and computer graphics data exhibit highly
non-Euclidean latent geometry. We propose a novel contrastive learning
framework to learn semantic relationships in the hyperbolic space. Hyperbolic
space is a continuous version of trees that naturally owns the ability to model
hierarchical structures and is thus beneficial for efficient contrastive
representation learning. We also extend the proposed Hyperbolic Contrastive
Learning (HCL) to the supervised domain and studied the adversarial robustness
of HCL. The comprehensive experiments show that our proposed method achieves
better results on self-supervised pretraining, supervised classification, and
higher robust accuracy than baseline methods. | Yun Yue, Fangzhou Lin, Kazunori D Yamada, Ziming Zhang | 2023-02-02T20:47:45Z | http://arxiv.org/abs/2302.01409v1 | # Hyperbolic Contrastive Learning
###### Abstract
Learning good image representations that are beneficial to downstream tasks is a challenging task in computer vision. As such, a wide variety of self-supervised learning approaches have been proposed. Among them, contrastive learning has shown competitive performance on several benchmark datasets. The embeddings of contrastive learning are arranged on a hypersphere that results in using the inner (dot) product as a distance measurement in Euclidean space. However, the underlying structure of many scientific fields like social networks, brain imaging, and computer graphics data exhibit highly non-Euclidean latent geometry. We propose a novel contrastive learning framework to learn semantic relationships in the hyperbolic space. Hyperbolic space is a continuous version of trees that naturally owns the ability to model hierarchical structures and is thus beneficial for efficient contrastive representation learning. We also extend the proposed Hyperbolic Contrastive Learning (HCL) to the supervised domain and study the adversarial robustness of HCL. The comprehensive experiments show that our proposed method achieves better results on self-supervised pretraining, supervised classification, and higher robust accuracy than baseline methods.
## 1 Introduction
In computer vision, downstream tasks could be fine-tuned efficiently and effectively with good image representations. However, learning a good image representation remains a challenging task [25, 29, 41, 54, 65, 73, 85]. The "pretext" self-supervised learning [9, 28, 62, 78] relies on heuristic hand-crafted task design to learn representations. Recently, contrastive learning [17, 36] has become the dominant method in self-supervised learning and has shown competitive performance over its supervised counterpart on several downstream tasks such as classification, object detection, and segmentation [18, 21, 24, 32, 38, 39, 44, 52, 56, 64].
Typically, given an anchor point \(\mathbf{x}\), contrastive learning takes augmented views of the same data as positive pairs \((\mathbf{x},\mathbf{x}^{+})\), and other data in the same batch as negative pairs \((\mathbf{x},\mathbf{x}^{-})\). Since the similarity in the embedding space reflects the similarity of semantics, contrastive representation learning attempts to pull the embeddings of positive pairs closer and push the embeddings of negative pairs away in the latent space by optimizing an objective such as the InfoNCE loss [12, 64].
Despite the promising results shown by current contrastive learning literature, it suffers from a fundamental limitation that has been encountered by many embedding methods: the ability to model complex patterns is inherently bounded by the dimensionality of the embedding space [60]. The embeddings of contrastive learning are arranged on a hypersphere that results in using the inner (dot) product as a distance measurement. However, the underlying structure of many scientific fields like social networks, brain imaging, and computer graphics data is hierarchical [6]. In this paper, we attempt to build an efficient learning framework by introducing hyperbolic space.
Different from the Euclidean space \(\mathbb{R}^{n}\) that has polynomial volume growth w.r.t. the radius, the hyperbolic space \(\mathbb{H}^{n}\) has exponential growth that is suitable for tree-like structure data. The representation power of hyperbolic space has been demonstrated in NLP [60, 61] as well as image segmentation [3, 84], few-shot [48] and zero-shot learning [53], and metric learning equipped with vision transformers [20]. To better reveal the underlying hierarchical structure of data, we explore the potential of hyperbolic space, where the curvature is a negative constant, in contrastive learning. Instead of computing the feature similarity in Euclidean space, we project data to hyperbolic space for distance measurement. Similar to the general tree structure, the hyperbolic space is a continuous version of trees that naturally owns the ability to model hierarchical structures and is thus beneficial for efficient contrastive learning.
* In this paper, we propose _Hyperbolic Contrastive Learning (HCL)_, a new contrastive learning framework for self-supervised image pretraining that leverages the representation power of hyperbolic space.
* We further propose _Supervised Hyperbolic Contrastive Learning (SHCL)_, a variant of general supervised contrastive loss that yields even better performance on supervised image classification tasks.
Though the feature consistency w.r.t. data augmentations introduced by the contrastive loss is effective for standard generalization of CNNs and unsupervised learning [43, 45, 87, 88, 92, 46], deep learning models also exhibit adversarial fragility [14]. Considering that pretrained self-supervised models are usually used in downstream tasks for faster fine-tuning or better accuracy, it is natural to explore whether they play a similar role for adversarial training as they have for standard training [14]. The connection between self-supervised contrastive learning and adversarial training was not established until recently [45, 49].
* To develop label-efficient and robust models, we further investigate _Robust Hyperbolic Contrastive Learning (RHCL)_ and verify that hyperbolic space is more suitable for contrastive learning and more robust to adversarial attacks.
Our approach is based on the Poincare model, a particular model of hyperbolic space that is well-suited for gradient-based optimization [60]. To the best of our knowledge, we are the first to investigate contrastive learning, its supervised counterpart, as well as its adversarial robustness in the hyperbolic space. Empirically, we show that the proposed _HCL_ and _SHCL_ are more appropriate for capturing the underlying relationships of image data and thus result in better classification performance on several benchmark datasets. We also demonstrate that _RHCL_ is more robust to adversarial perturbations compared with general contrastive learning and other baseline methods.
## 2 Related Work
**Self-supervised Contrastive Learning.** The idea of pulling the representations of similar data (positive pairs) in the embedding space while pushing away representations of dissimilar data (negative pairs) has long been the classic idea in metric learning [59]. One of the fundamental losses in metric learning is contrastive loss [36]. A wide variety of its variants has since been proposed [5, 30, 63, 74, 83]. Recently, learning representations from unlabeled data in a contrastive way [36, 17] has been one of the most competitive research fields [4, 11, 12, 13, 16, 38, 42, 44, 51, 56, 64, 77, 86]. Popular model structures like SimCLR [12] and Moco [38] apply the commonly used loss function InfoNCE [64] to learn a latent representation that is beneficial to downstream tasks. Several theoretical studies show that self-supervised contrastive loss optimizes data representations by aligning the same image's two views (positive pairs) while pushing different images (negative pairs) away on the hypersphere [15, 82, 2, 81]. Though these pair-based methods in self-supervised contrastive learning do not require labels, they rely heavily on calculating Euclidean distance between data embeddings. In addition, hierarchical semantic structures naturally exist in image datasets. While other work like the Hierarchical Contrastive Selective Coding (HCSC) [34] learns a set of hierarchical prototypes to represent the hierarchical semantic structures underlying the data in the latent space explicitly, our work proposes to learn the hierarchy structure in a different space with hyperbolic embedding without defining extra prototypes.
**Adversarial Robustness of Contrastive Learning.** Deep Neural Networks (DNNs) for computer vision are vulnerable to small image perturbations [8, 31, 75]. For example, small perturbations to the visual input can result in large feature variations and crucial challenges in safety-critical applications [8, 22, 31, 58, 66, 75]. Adversarial defense algorithms have been proposed in response to the adversarial threat [55, 70, 91]. One of the most popular approaches is adversarial training (AT) [55], which trains the neural network with the worst-case adversarial examples. Although existing contrastive learning literature has shown boosted performance on the standard generalization, its connection with adversarial robustness has not been studied until recently [45, 49]. For a detailed review, please refer to [69]. Essentially, SimCLR encourages feature consistency to specified data augmentations. Coincidentally, enforcing consistency during training w.r.t. perturbations has been shown to smooth the feature space near samples and thus immediately help adversarial robustness [10, 90, 1]. Several closely relevant works have investigated improving the adversarial robustness via self-supervised contrastive pretraining [14, 35, 49, 89]. We draw upon the insights in these works and seek to investigate the adversarial robustness of contrastive learning in a new embedding space.
**Supervised Contrastive Learning.** The cross-entropy loss has been the most widely used loss function for supervised learning of deep classification models for years. With the development of contrastive learning, the approach has been extended to the fully-supervised setting [47]. Since the label information is known in the supervised setting, instances from the same class naturally form a positive data pool, and data from different classes are negatives. Technically, each anchor has many positives as opposed to self-supervised contrastive learning which uses only a single augmentation as a positive. The supervised contrastive (SupCon) loss has been shown to consistently outperform cross-entropy on several image classification tasks [47]. Our work investigates the hyperbolic space application in both the self-supervised domain and its supervised extension, which will provide preliminary results for some future studies in this direction.
**Hyperbolic Embeddings.** The Euclidean space has been widely used by the machine learning community for representation learning as this space is a natural generalization of our intuition-friendly, visual three-dimensional space, and
the easy measurement of distance with inner-product in this space [26, 67]. However, the Euclidean embedding is not a suitable choice for some complex tree-like data fields such as Biology, Network Science, Computer Graphics, or Computer Vision that exhibit highly non-Euclidean latent geometry [6, 26]. This encourages the research community to develop deep neural networks in non-Euclidean space such as the hyperbolic space, which is a Riemannian manifold of constant negative curvature. Recently, the gap between the hyperbolic embeddings and the Euclidean embeddings has been narrowed by deriving the essential components of deep neural networks in hyperbolic geometry (_e.g_., multinomial logistic regression, fully-connected layers, and recurrent neural networks, _etc_.) [26, 72]. In the computer vision domain, the hyperbolic space has been found well-suited for image segmentation [3, 84], zero-shot recognition [23, 53], few-shot image classification [23, 27, 48] as well as point cloud classification [57]. The work of [33] revealed the vanishing gradients issue of Hyperbolic Neural Networks (HNNs) when applied to classification benchmarks that may not exhibit hierarchies and showed that clipped HNNs are more robust to adversarial attacks. Concurrently, [20] mapped the output of image representations encoded by a vision transformer and a fully-connected layer to a hyperbolic space to group the representations of similar objects in the embedding space. Their goal is to investigate the pairwise cross-entropy loss with the hyperbolic distance function in metric learning while we focus on investigating the self-supervised and supervised contrastive learning in hyperbolic space.
## 3 Method
We briefly introduce the hyperbolic geometry and Poincare embedding in Section 3.1. We then discuss the proposed frameworks HCL and SHCL in Section 3.2 and 3.3. We describe the adversarial robustness of HCL in Section 3.4.
### Hyperbolic Geometry & Embedding
Unlike the Euclidean geometry where the circle length (\(2\pi r\)) and disc area (\(\pi r^{2}\)) grow only linearly and quadratically with regard to \(r\), hyperbolic disc area and circle length grow exponentially with their radius [60]. In hyperbolic geometry, only two dimensions are needed to represent a regular tree with branching factor \(b\) with \((b+1)b^{\ell-1}\) nodes at level \(\ell\) and \(((b+1)b^{\ell}-2)/(b-1)\) nodes on a level less than or equal to \(\ell\). Thus the space is a natural choice for complex data with hierarchical structure.
Following the assumption in general contrastive learning, we are interested in finding an embedding that the distance in the latent space can reflect semantic similarity. In addition, we are interested in embedding the latent hierarchy efficiently. In the meantime, we do not assume we have access to the hierarchy information of the data. Among several isometric models [7] of hyperbolic space, similar to [20] and [60], we stick to the Poincare ball model [71] that is well-suited for gradient-based optimization (_i.e_. the distance function is differentiable). The embedding is fully unsupervised in our proposed HCL. In particular, the model (\(\mathbb{M}_{c}^{n},g^{\mathbb{M}}\)) is defined by the manifold \(\mathbb{M}^{n}=\{x\in\mathbb{R}^{n}:c\|x\|^{2}<1,c\geq 0\}\) equipped with the Riemannian metric \(g^{\mathbb{M}}=\lambda_{c}^{2}g^{E}\), where \(c\) is the curvature parameter, \(\lambda_{c}=\frac{2}{1-c\|x\|^{2}}\) is conformal factor that scales the local distances and \(g^{E}=\mathbf{I}_{n}\) denotes the Euclidean metric tensor.
The framework of _gyrovector spaces_ provides an elegant non-associative algebraic formalism for hyperbolic geometry just as vector spaces provide the algebraic setting for Euclidean geometry [7, 26, 79, 80]. For two vectors \(\mathbf{x},\mathbf{y}\in\mathbb{M}_{c}^{n}\), their addition is defined as
\[\mathbf{x}\oplus_{c}\mathbf{y}=\frac{(1+2c\langle\mathbf{x},\mathbf{y}\rangle+c\|\mathbf{y}\|^{2})\mathbf{x}+(1-c\|\mathbf{x}\|^{2})\mathbf{y}}{1+2c\langle\mathbf{x},\mathbf{y}\rangle+c^{2}\|\mathbf{x}\|^{2}\|\mathbf{y}\|^{2}}. \tag{1}\]
The hyperbolic distance between \(\mathbf{x},\mathbf{y}\in\mathbb{M}_{c}^{n}\) is defined as:
\[D_{hyp}(\mathbf{x},\mathbf{y})=\frac{2}{\sqrt{c}}\mathrm{arctanh}(\sqrt{c}\| -\mathbf{x}\oplus_{c}\mathbf{y}\|). \tag{2}\]
In particular, when \(c=0\), the Eq. 1 is the Euclidean addition of two vectors in \(\mathbb{R}^{n}\) and Eq. 2 recovers Euclidean geometry: \(\lim_{c\to 0}D_{hyp}(\mathbf{x},\mathbf{y})=2\|\mathbf{x}-\mathbf{y}\|\). For an open \(n\)-dimensional unit ball, the geodesics of the Poincare disk are then circles that are orthogonal to the boundary of the ball. See Fig. 1 for an illustration.
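A minimal NumPy sketch of Eqs. 1-2 may help fix the notation. It assumes the inputs already lie inside the ball of radius \(1/\sqrt{c}\), and the last two lines illustrate the \(c\to 0\) Euclidean limit mentioned above; it is an illustration only, not the authors' code.

```python
import numpy as np

def mobius_add(x, y, c):
    """Mobius addition on the Poincare ball with curvature parameter c (Eq. 1)."""
    xy = np.dot(x, y)
    x2, y2 = np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def hyp_dist(x, y, c):
    """Hyperbolic distance on the Poincare ball (Eq. 2)."""
    return 2.0 / np.sqrt(c) * np.arctanh(np.sqrt(c) * np.linalg.norm(mobius_add(-x, y, c)))

x = np.array([0.1, 0.2])
y = np.array([-0.3, 0.05])
print(hyp_dist(x, y, c=0.1))        # hyperbolic distance between x and y
print(2 * np.linalg.norm(x - y))     # Euclidean limit 2 * ||x - y||
print(hyp_dist(x, y, c=1e-9))        # recovered as c -> 0, close to the line above
```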
Before performing operations in the hyperbolic space, a bijective map from \(\mathbb{R}^{n}\) to \(\mathbb{M}_{c}^{n}\) that maps Euclidean vectors to the hyperbolic space is necessary. Such a map is termed the _exponential_ map when mapping from Euclidean space to the Poincare model of hyperbolic geometry, and its inverse is called the _logarithmic_ map [48].

Figure 1: The geodesics (red curve) of two points (\(A,B\)) of the Poincaré disk. The blue line is the straight line between \(A,B\) that is no longer the shortest distance. The relative size of objects in this disk is getting smaller while the distance of points increases exponentially (relative to their Euclidean distance) when they are getting closer to the boundary.
The _exponential_ map is defined as:
\[\exp_{\mathbf{x}}^{c}(\mathbf{v})=\mathbf{x}\oplus_{c}\bigg{(}\tanh\bigg{(} \sqrt{c}\frac{\lambda_{\mathbf{x}}^{c}\|\mathbf{v}\|}{2}\bigg{)}\frac{\mathbf{ v}}{\sqrt{c}\|\mathbf{v}\|}\bigg{)}. \tag{3}\]
In practice, we follow the setting of [48] and [20] with the base point \(\mathbf{x}=\mathbf{0}\), which makes the formulas less cumbersome and empirically has little impact on the obtained results.
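With the base point \(\mathbf{x}=\mathbf{0}\), Eq. 3 reduces to a simple rescaling, as in the following sketch (a hypothetical standalone helper, not the authors' code); note that the image always lands strictly inside the ball of radius \(1/\sqrt{c}\).

```python
import numpy as np

def expmap0(v, c, eps=1e-15):
    """Exponential map at the origin of the Poincare ball (Eq. 3 with x = 0):
    tanh(sqrt(c) * ||v||) * v / (sqrt(c) * ||v||)."""
    norm = np.maximum(np.linalg.norm(v), eps)
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

v = np.array([3.0, -4.0])                    # an unconstrained Euclidean embedding
z = expmap0(v, c=0.1)
print(np.linalg.norm(z), 1 / np.sqrt(0.1))   # the image stays inside the ball of radius 1/sqrt(c)
```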
### Hyperbolic Contrastive Learning
Fig. 2 shows our proposed framework Hyperbolic Contrastive Learning. We follow the name convention of [47] in the following sections. Similar to the general contrastive learning like SimCLR, the proposed method contains the following components:
* _Data Augmentation_ module [76, 12, 42]. For a batch of data with size \(N\), the general operation of contrastive learning is to generate multiview of transformed \(t(\mathbf{x})\) with stochastic data augmentations \(t\sim\mathcal{T}\). In our work, for each input sample, \(\mathbf{x}\), we generate two random augmentations \(\mathbf{\tilde{x}}=t(\mathbf{x})\) from the original view of the data.
* _Encoder Network_\(f(\cdot)\) that maps \(\mathbf{\tilde{x}}\) to a lower dimension. The encoder is shared by all views. In our case, we work on ResNet-18 and ResNet-50.
* _Projection Network_\(g(\cdot)\) that maps \(f(\mathbf{\tilde{x}})\) to a latent space \(\mathbf{z}=g(f(\mathbf{\tilde{x}}))\). The \(g(\cdot)\) could be either a multi-layer perceptron [37] or just a single linear layer. During the downstream linear evaluation, this layer can be removed and replaced with a classification head.
* _exponential mapping_ that maps Euclidean vectors to the hyperbolic space.
Figure 2: Overview of the proposed _HCL_. Different image augmentations are applied to the same image first. After going through the encoder network, the vectors are then projected into a latent space with a fully connected (FC) layer. Different from the general contrastive learning (dashed line) that maps the data to the unit sphere in Euclidean space, our method (solid arrow) maps the data to hyperbolic space. We illustrate the tree embedding in a two-dimensional unit ball in the hyperbolic space. Due to the tree structure representation power of hyperbolic space, our method tends to capture the hierarchy of data while traditional contrastive learning might push the same class of data away since they are treated as negatives. Note class labels are not available during self-supervised pretraining.

Given the proposed HCL framework, for a set of \(N\) samples, \(\{\mathbf{x}_{k}\}_{k=1...N}\), in a batch, we augment each sample to generate two views. This will result in \(2N\) samples in a batch. Let \(i\in I\equiv\{1...2N\}\) be the index of an arbitrary augmented instance, and let \(j(i)\) be the index of the other augmented view originating from the same source sample. The self-supervised contrastive learning (e.g., [12, 42, 76]) takes the following loss form to pull positive pairs together and push negatives away from the anchor in the latent space.
\[\mathcal{L}^{self}=\sum_{i\in I}\mathcal{L}_{i}^{self}=-\sum_{i\in I}\log\frac{ \text{exp}\left(\mathbf{z}_{i}\cdot\mathbf{z}_{j(i)}/\tau\right)}{\sum\limits_{a \in A(i)}\text{exp}\left(\mathbf{z}_{i}\cdot\mathbf{z}_{a}/\tau\right)} \tag{4}\]
In the above equation, \(\mathbf{z}=g(f(\mathbf{\tilde{x}}))\). Usually, \(\mathbf{z}\) is normalized before the loss calculation so that features lie on a unit hypersphere. The \(\cdot\) symbol denotes the inner (dot) product, \(\tau\in\mathcal{R}^{+}\) is a scalar temperature parameter, and \(A(i)\equiv I\setminus\{i\}\). The index \(i\) indicates the anchor, index \(j(i)\) is its positive pair, and the other \(2(N-1)\) indices (\(\{k\in A(i)\setminus\{j(i)\}\}\) indicate the negatives of the anchor. For each anchor \(i\), there is \(1\) positive pair and \(2N-2\) negative pairs. The denominator has a total of \(2N-1\) terms (the positive and negatives).
A distance could be defined with the cosine similarity implemented with a squared Euclidean distance between normalized vectors as follows
\[D_{cos}(\mathbf{z}_{i},\mathbf{z}_{j})=\left\|\frac{\mathbf{z}_{i}}{\left\| \mathbf{z}_{i}\right\|_{2}}-\frac{\mathbf{z}_{j}}{\left\|\mathbf{z}_{j}\right\| _{2}}\right\|_{2}^{2}=2-2\frac{\mathbf{z}_{i}\cdot\mathbf{z}_{j}}{\left\| \mathbf{z}_{i}\right\|_{2}\cdot\left\|\mathbf{z}_{j}\right\|_{2}} \tag{5}\]
In our proposed HCL, the loss function is defined as
\[\mathcal{L}_{hyp}^{self}=\sum_{i\in I}\mathcal{L}_{hyp_{i}}^{self}=-\sum_{i\in I }\log\frac{\text{exp}\left(-D(\mathbf{z}_{i},\mathbf{z}_{j(i)})/\tau\right)}{ \sum\limits_{a\in A(i)}\text{exp}\left(-D(\mathbf{z}_{i},\mathbf{z}_{a})/\tau \right)} \tag{6}\]
where \(D\) is the distance measurement like \(D_{cos}\) or \(D_{hyp}\). In our case we project \(\mathbf{z}\) to the hyperbolic space and use the pre-defined hyperbolic distance \(D_{hyp}\) for distance measurement.
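A compact PyTorch sketch of Eq. 6 is given below for reference. It is not the authors' implementation: the batch layout (rows \(i\) and \(i+N\) are the two views of image \(i\)), the normalization before the exponential map, and the values of \(\tau\) and \(c\) are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def mobius_add(x, y, c):
    # Batched Mobius addition (Eq. 1); the last dimension is the feature dimension.
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def hyp_dist(x, y, c):
    # Hyperbolic distance on the Poincare ball (Eq. 2), clamped for numerical safety.
    arg = (c ** 0.5) * mobius_add(-x, y, c).norm(dim=-1)
    return 2.0 / (c ** 0.5) * torch.atanh(arg.clamp(max=1 - 1e-5))

def expmap0(v, c):
    # Exponential map at the origin (Eq. 3 with x = 0): send Euclidean vectors into the ball.
    n = v.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    return torch.tanh((c ** 0.5) * n) * v / ((c ** 0.5) * n)

def hcl_loss(z, c=0.1, tau=0.5):
    """Hyperbolic InfoNCE (Eq. 6). z has shape (2N, d); rows i and i+N are two views of image i."""
    z = expmap0(F.normalize(z, dim=-1), c)
    n2 = z.shape[0]
    d = hyp_dist(z.unsqueeze(1), z.unsqueeze(0), c)                 # (2N, 2N) pairwise distances
    logits = (-d / tau).masked_fill(torch.eye(n2, dtype=torch.bool), float("-inf"))
    pos = torch.arange(n2).roll(n2 // 2)                            # index of each anchor's positive
    return F.cross_entropy(logits, pos)

print(hcl_loss(torch.randn(8, 16)).item())
```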
### Supervised Hyperbolic Contrastive Learning
In self-supervised pretraining, class labels are unknown. For supervised contrastive learning, the contrastive loss was generalized to handle more positives and negatives with information of class labels. For a given dataset \(\{\mathbf{x}_{k},\mathbf{y}_{k}\}_{k=1\dots N}\), the supervised contrastive (SupCon) loss proposed by [47] is
\[\mathcal{L}^{sup}=\sum_{i\in I}\mathcal{L}_{i}^{sup}=\sum_{i\in I}\frac{-1}{ |P(i)|}\sum_{p\in P(i)}\log\frac{\text{exp}\left(\mathbf{z}_{i}\cdot\mathbf{z }_{p}/\tau\right)}{\sum\limits_{a\in A(i)}\text{exp}\left(\mathbf{z}_{i}\cdot \mathbf{z}_{a}/\tau\right)} \tag{7}\]
where \(P(i)\equiv\{p\in A(i):\mathbf{y}_{p}=\mathbf{y}_{i}\}\) is the set of indices of all positives in a batch distinct from \(i\) (i.e., the augmentation of \(\mathbf{x}_{i}\) as well as any of the remaining samples with the same label), and \(|P(i)|\) is its cardinality. The summation over negatives in the contrastive denominator of Eq. 4 is also preserved to improve the ability to discriminate between signal and noise (negatives).
Following the construction of HCL, the Supervised Hyperbolic Contrastive loss could be easily constructed as
\[\mathcal{L}_{hyp}^{sup}=\sum_{i\in I}\frac{-1}{|P(i)|}\sum_{p\in P(i)}\log \frac{\text{exp}\left(-D(\mathbf{z}_{i},\mathbf{z}_{p})/\tau\right)}{\sum \limits_{a\in A(i)}\text{exp}\left(-D(\mathbf{z}_{i},\mathbf{z}_{a})/\tau \right)} \tag{8}\]
### Adversarial Robustness of HCL
One of the most popular approaches to mitigate the effect of adversarial perturbation is adversarial training (AT) [55], which trains the neural network with worst-case adversarial examples. Very recently, the connection between self-supervised contrastive learning and adversarial training has been built to develop label-efficient and robust models [45, 49].
The work of Robust Contrastive Learning (RoCL) [49] performs instance-wise adversarial attack with
\[\mathbf{\tilde{x}}^{i+1}=\Pi_{B(\mathbf{\tilde{x}},\epsilon)}(\mathbf{\tilde{ x}}^{i}+\alpha\texttt{sign}(\nabla_{\mathbf{\tilde{x}}^{i}}\mathcal{L}^{self}( \mathbf{\tilde{x}},\mathbf{\tilde{x}}^{+},\{\mathbf{\tilde{x}}^{-}\}))) \tag{9}\]
where \(\mathbf{\tilde{x}}\) is the augmented anchor point, and \(\mathbf{\tilde{x}}^{+}\) and \(\mathbf{\tilde{x}}^{-}\) are its positive and negative pairs. \(B(\mathbf{\tilde{x}},\epsilon)\) is the \(\ell_{\infty}\) norm-ball around \(\mathbf{\tilde{x}}\) with radius \(\epsilon\), and \(\Pi\) is the projection function onto the norm-ball. To learn robust representations via self-supervised contrastive learning, the adversarial learning objective for an instance-wise attack following the min-max formulation is
\[\operatorname*{arg\,min}_{\theta}\mathbb{E}_{(\mathbf{\tilde{x}})\sim\mathbb{D }}[\max_{\delta\in B(\mathbf{\tilde{x}},\epsilon)}\mathcal{L}^{self}(\mathbf{ \tilde{x}}+\delta,\mathbf{\tilde{x}}^{+},\{\mathbf{\tilde{x}}^{-}\})] \tag{10}\]
where \(\theta\) denotes the model parameters and \(\mathbb{D}\) the dataset, and \(\mathbf{\tilde{x}}+\delta\) is the adversarial image \(\mathbf{\tilde{x}}^{adv}\) generated by _instance-wise_ attacks (Eq. 9). After generating label-free adversarial examples using instance-wise adversarial attacks, the contrastive learning objective Eq. 4 is used to maximize the similarity between clean examples and their instance-wise perturbations. The final loss of RoCL is a combination of \(\mathcal{L}^{self}(\mathbf{\tilde{x}},\{\mathbf{\tilde{x}}^{+},\mathbf{\tilde{x}}^{adv}\},\{\mathbf{\tilde{x}}^{-}\})\) and \(\mathcal{L}^{self}(\mathbf{\tilde{x}}^{adv},\mathbf{\tilde{x}}^{+},\{\mathbf{\tilde{x}}^{-}\})\), where the first term has an extra \(\mathbf{\tilde{x}}^{adv}\) as a positive pair for the anchor \(\mathbf{\tilde{x}}\) and the second term uses \(\mathbf{\tilde{x}}^{adv}\) as the anchor.
The RoCL could be easily extended to Robust Hyperbolic Contrastive loss where the output of the network is projected to hyperbolic space for better semantic relationship representation. The objective \(\mathcal{L}_{hyp}^{RHCL}\) is defined as
\[\mathcal{L}_{hyp}^{self}(\mathbf{\tilde{x}},\{\mathbf{\tilde{x}}^{+},\mathbf{ \tilde{x}}^{adv}\},\{\mathbf{\tilde{x}}^{-}\})+\lambda\mathcal{L}_{hyp}^{self}( \mathbf{\tilde{x}}^{adv},\mathbf{\tilde{x}}^{+},\{\mathbf{\tilde{x}}^{-}\}) \tag{11}\]
where \(\lambda\) is a regularization parameter.
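The instance-wise attack of Eq. 9 is essentially \(\ell_{\infty}\) PGD against a contrastive loss; a hedged sketch is shown below. The function `loss_fn` is a placeholder for the (hyperbolic) contrastive objective, the step size and number of steps are assumptions, and the toy loss at the end only exercises the mechanics.

```python
import torch

def instance_wise_attack(loss_fn, x, x_pos, x_negs, eps=8/255, alpha=2/255, steps=10):
    """l_inf PGD form of the instance-wise attack (Eq. 9): ascend the contrastive
    loss w.r.t. the anchor view and project back onto the eps-ball around it."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(x_adv, x_pos, x_negs)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # projection onto B(x, eps)
            x_adv = x_adv.clamp(0.0, 1.0)              # stay in the valid pixel range
    return x_adv.detach()

# Toy check with a stand-in loss (the real attack would use the contrastive loss, e.g. Eq. 6).
toy_loss = lambda a, p, n: ((a - p) ** 2).mean() - ((a - n) ** 2).mean()
x, x_pos, x_neg = torch.rand(1, 3, 8, 8), torch.rand(1, 3, 8, 8), torch.rand(1, 3, 8, 8)
x_adv = instance_wise_attack(toy_loss, x, x_pos, x_neg)
print((x_adv - x).abs().max().item() <= 8/255 + 1e-6)   # perturbation respects the budget
```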
## 4 Experiments and Results
We conduct comprehensive experiments to cover hyperbolic contrastive learning in three directions: self-supervised domain, supervised domain, and adversarial robustness evaluation. To demonstrate the effectiveness and generality of
our method, we verify each proposed method on a variety of datasets. We first introduce the datasets and contrastive methods in Section 4.1. The implementation details are given in Section 4.2. We present the experiment results in Section 4.3 and finally, we show the ablation study results in Section 4.4.
### Datasets & Baseline Approaches
_Self-supervised Learning_ We perform the evaluation of our method on a wide range of datasets, including **CIFAR-10/CIFAR-100**[50], **Tiny ImageNet**, and **ImageNet**[19] with different numbers of classes [77]. Our proposed HCL is plugged into SimCLR. We compare the linear evaluation result with SimCLR [12].
_Supervised Learning_ As an extension of HCL, SHCL uses label information to distinguish positive and negative pairs. We compare our proposed SHCL with SupCon [47] and cross-entropy by measuring classification accuracy on **CIFAR-10/CIFAR-100**.
_Adversarial Robust Learning_ We conduct adversarial attack with our proposed RHCL on **CIFAR-10** and compare the result with SimCLR, HCL, and the recent fully self-supervised robustness learning work RoCL [49].
### Implementation Details
_Self-supervised Learning_ The proposed HCL aims at exploring the data representation power of hyperbolic space for contrastive learning. Thus, the method is agnostic to training components like backbone networks, losses, and optimizers. For our method and baseline training, we keep the same training settings when making comparisons. Larger gains are possible with further hyper-parameter tuning.
We follow the code implementation and hyper-parameter of [68]. For small datasets (_i.e_., CIFAR-10/100 and Tiny ImageNet), we use the same training setup in _all_ experiments. At the pretrain stage, we train ResNet-18 [40] for 200 epochs with a batch size of 512 and a cosine-annealed learning rate of 0.5. The linear classifier is trained for 100 epochs with an initial learning rate of 10.0 multiplied by 0.1 at 60_th_ and 80_th_ epochs.
For experiments on ImageNet, we use ResNet-50 as the backbone. Since the data size is larger, we train SimCLR and our method with the batch size of 1024 and cosine-annealed learning rate of 0.6 for faster convergence. We adopt the same setting as in [68] for training the linear classifier.
When projecting the data from Euclidean space to hyperbolic space, we set the curvature \(c=0.1\), except for CIFAR-10, where \(c=0.6\). All the experiments are conducted on 4 GPUs. We use the SGD optimizer with momentum of 0.9, and weight decay of \(10^{-4}\) and \(0\) for pre-training and linear evaluation, respectively.
_Supervised Learning_ Following the work of [47], we experiment with ResNet-50 for supervised classification training. We pretrain 200 epochs of SupCon and our SHCL with the same hyper-parameters we used in self-supervised learning. We then freeze the pretrained model but retrain a classification head using the same hyper-parameters for both models. We tune the curvature \(c\) with \(0.1,0.2\). We also tune the baseline cross entropy case a little bit so that it could reach a comparable result on both CIFAR-10 and CIFAR-100.
_Adversarial Robust Learning_ We use the code structure and evaluation procedure of the first work that explored the adversarial robustness of self-supervised contrastive learning, RoCL [49]. Since our purpose is to explore the robustness of the proposed RHCL, we pretrain all the methods for 200 epochs with the same hyper-parameters used in self-supervised learning. Similar to RoCL, we use ResNet-18 trained on CIFAR-10. For all baselines and our method, we train with \(\ell_{\infty}\) attacks with the same attack strength of \(\epsilon=8/255\). We perform two kinds of evaluations. 1) General self-supervised learning involves two steps, pretraining and linear evaluation: after training our proposed RHCL with adversarial perturbations, we perform linear evaluation with adversarial examples after fixing the pretrained backbone. The linear evaluation for _all_ experiments was trained for 150 epochs following the RoCL paper; all the other hyper-parameters are exactly the same as in the literature [49]. 2) The whole pretrained network could be finetuned with adversarial examples. We perform supervised adversarial finetuning [55]. As with linear evaluation, the finetuning also trains the network for 150 epochs. All other parameters, including \(\lambda\), are exactly the same as in the work of [49] for evaluation.
### Results
**Self-supervised Learning Results**
In this section, we verify our method with linear probe following the common self-supervised contrastive learning procedure. We freeze pre-trained weights of the encoder and train a supervised linear classifier on top of it. We then report the Top-1 classification accuracy on the validation set.
Our results on CIFAR-10/CIFAR-100, Tiny ImageNet (Tiny IN), ImageNet (IN-1K) and subsets of ImageNet with 100/200 classes (IN-100, IN-200) are shown in Tab. 1. With the exact same training process for _all_ experiments, _HCL_ consistently improves the baseline SimCLR by at least \(0.44\) (IN-100). For CIFAR-100, the gain is \(6.98\) without any heavy parameter tuning. For the largest dataset IN-1K, the linear classification improvement is \(1.12\) with our method. Note that we did not make much effort to tune the hyper-parameters, including the curvature \(c\). In the ablation study we show that when \(c=0.6\) we got the best result on CIFAR-10 compared with other \(c\) values under the same condition. We report the CIFAR-10 best result after searching \(c\) in Tab. 1. For all other datasets, we use the default \(c=0.1\) without
searching. It is shown that the proposed HCL is an effective and easy to plug-in method for general contrastive learning and does not require heavy parameter tuning.
**Supervised Classification**
We plug in our method of projecting the embeddings to hyperbolic space and calculate the proposed supervised objective in the new space. Table 2 shows that our SHCL could generalize better than SupCon [47] and cross-entropy (CE) on CIFAR-10 and CIFAR-100 when training ResNet-50. Both SupCon and SHCL are trained with the same hyper-parameters. For CE, we tune the model a little and train 300 epochs in total for CIFAR-10 so it can reach a comparable result with others. Our SHCL could reach \(95.1\) accuracy with 200 epochs of pretraining on CIFAR-10 compared with SupCon. Both experiments show that our method is slightly superior to SupCon in the supervised classification domain.
General contrastive learning involves normalizing the output of the network before calculating the loss. In our proposed method, we need to project the output to hyperbolic space. We investigate whether normalizing the embedding before mapping to the Poincare ball affects the model performance. We evaluate HCL on CIFAR-10 and CIFAR-100 with and without normalization. Our experiment shows that the accuracy on CIFAR-10 drops from \(87.98\) to \(85.37\) without normalization. The performance on CIFAR-100 drops from \(55.89\) to \(46.44\). The normalization layer is vital in our model. A similar scenario has been observed in [47].
## 5 Discussion
The hyperbolic space owns better tree-structure representation power than Euclidean space. Inspired by the recent success of hyperbolic embedding in NLP and other computer vision domains, in this paper, we explore contrastive learning in hyperbolic space. We propose a new contrastive learning framework for self-supervised image representation learning. The proposed method is evaluated on different small to large-scale datasets and shows promising results. We further extend hyperbolic contrastive learning to supervised contrastive learning and demonstrate its superior performance on different classification tasks. Self-supervised representation learning usually involves learning a pretrained model so that downstream tasks could be fine-tuned faster or gain higher accuracy. Lately, research attempts have been made to combine adversarial training with self-supervision for robust pretrained models that can be rapidly used by downstream tasks. We explore the adversarial robustness of the proposed hyperbolic contrastive learning. To the best of our knowledge, this is the first work that attempts to build contrastive models in hyperbolic space and explore the self-supervised adversarial robustness in this space. In our study, we show some preliminary results in this direction to give some insights for future studies. There are some other questions that have not been explored such as whether the hyperbolic space representation will benefit other downstream tasks. Could we explicitly guide the model with hierarchy information when labels are available? We leave these questions for future studies.
|
2308.14739 | Sharper dimension-free bounds on the Frobenius distance between sample
covariance and its expectation | We study properties of a sample covariance estimate $\widehat \Sigma$ given a
finite sample of $n$ i.i.d. centered random elements in $\R^d$ with the
covariance matrix $\Sigma$. We derive dimension-free bounds on the squared
Frobenius norm of $(\widehat\Sigma - \Sigma)$ under reasonable assumptions. For
instance, we show that $\smash{\|\widehat\Sigma - \Sigma\|_{\rm F}^2}$ differs
from its expectation by at most $\smash{\mathcal O({\rm{Tr}}(\Sigma^2) / n)}$
with overwhelming probability, which is a significant improvement over the
existing results. This allows us to establish the concentration phenomenon for
the squared Frobenius distance between the covariance and its empirical
counterpart in the case of moderately large effective rank of $\Sigma$. | Nikita Puchkin, Fedor Noskov, Vladimir Spokoiny | 2023-08-28T17:41:13Z | http://arxiv.org/abs/2308.14739v2 | Sharper dimension-free bounds on the Frobenius distance between sample covariance and its expectation
###### Abstract
We study properties of a sample covariance estimate \(\widehat{\Sigma}=(\mathbf{X}_{1}\mathbf{X}_{1}^{\top}+\ldots+\mathbf{X}_{n}\mathbf{X}_{n}^{\top})/n\), where \(\mathbf{X}_{1},\ldots,\mathbf{X}_{n}\) are i.i.d. random elements in \(\mathbb{R}^{d}\) with \(\mathbb{E}\mathbf{X}_{1}=\mathbf{0}\), \(\mathbb{E}\mathbf{X}_{1}\mathbf{X}_{1}^{\top}=\Sigma\). We derive dimension-free bounds on the squared Frobenius norm of \((\widehat{\Sigma}-\Sigma)\) under reasonable assumptions. For instance, we show that \(|\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}-\mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}|=\mathcal{O}(\mathrm{Tr}(\Sigma^{2})/n)\) with overwhelming probability, which is a significant improvement over the existing results. This leads to a bound on the ratio \(\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}/\mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\) with a sharp leading constant when the effective rank \(\mathrm{r}(\Sigma)=\mathrm{Tr}(\Sigma)/\|\Sigma\|\) and \(n/\mathrm{r}(\Sigma)^{6}\) tend to infinity: \(\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}/\mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}=1+\mathcal{O}(1/\mathrm{r}(\Sigma))\).
## 1 Introduction
Covariance estimation is one of classical topics in multivariate statistics with applications in plenty of areas, including signal processing (Krim and Viberg, 1996; Haghighatshoar and Caire, 2018), bioinformatics (Xie and Bentler, 2003; Schafer and Strimmer, 2005; Hero and Rajaratnam, 2012), image analysis (Dahmen et al., 2000; Zhang and Schneider, 2010), and finance (Ledoit and Wolf, 2003; Holtz, 2010; Bai and Shi, 2011; Fan et al., 2015). Let \(\mathbf{X},\mathbf{X}_{1},\ldots,\mathbf{X}_{n}\) be i.i.d. centered random vectors in \(\mathbb{R}^{d}\) with a covariance matrix \(\mathbb{E}\mathbf{X}\mathbf{X}^{\top}=\Sigma\). In the present paper, we study properties of the most natural estimator of \(\Sigma\), namely, the sample covariance, which is defined by the formula
\[\widehat{\Sigma}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{X}_{i}\mathbf{X}_{i}^{\top}.\]
The question of how well \(\widehat{\Sigma}\) approximates \(\Sigma\) was extensively studied in the last decades. For instance, it arose in the paper of Kannan, Lovasz, and Simonovits (1997) in the context of volume estimation. The authors used the sample covariance to bring a convex body \(\mathcal{K}\subset\mathbb{R}^{d}\) into a nearly isotropic position on one of intermediate steps of their algorithm. Assuming that \(\mathbf{X}_{1},\ldots,\mathbf{X}_{n}\) are uniformly distributed on \(\mathcal{K}\), Kannan, Lovasz, and Simonovits (1997) showed that, for any \(\delta\in(0,1)\), the operator norm of \((\widehat{\Sigma}-\Sigma)\) is of order \(\mathcal{O}(d/(\delta\sqrt{n}))\) with probability at least \((1-\delta)\) (see their proof of Theorem 5.11). This result was improved in a series of works (Bourgain, 1999; Rudelson, 1999; Giannopoulos et al., 2005; Paouris, 2006; Adamczak et al., 2010) until Adamczak, Litvak, Pajor, and Tomczak-Jaegermann (2011) obtained that, if
the sample size \(n\) is large enough, then
\[\left\|\widehat{\Sigma}-\Sigma\right\|\lesssim\sqrt{\frac{d}{n}}\]
with overwhelming probability \((1-\exp\{-\mathcal{O}(\sqrt{d})\})\) for a large class of distributions, including the log-concave ones and the measures, satisfying the Poincare inequality. Here and further in this paper, the relation \(\lesssim\) stands for inequality up to a positive multiplicative constant (see the notation section below). A bit sharper bound can be derived for random vectors with lighter tails. For example, in (Vershynin, 2012b, Corollary 5.50), the author showed that, if \(\mathbf{X}_{1},\ldots,\mathbf{X}_{n}\) are sub-Gaussian, then
\[\left\|\widehat{\Sigma}-\Sigma\right\|\lesssim\sqrt{\frac{d+\log(1/\delta)}{n} }\vee\frac{d+\log(1/\delta)}{n} \tag{1}\]
with probability at least \((1-\delta)\). Here and further in this paper, \((a\lor b)\) stands for \(\max\{a,b\}\). The inequality (1), in particular, yields that the Frobenius norm of \((\widehat{\Sigma}-\Sigma)\) is of order \(\mathcal{O}(d/\sqrt{n})\) with high probability.
The bound (1) cannot be improved in the worst-case scenario. If the population covariance \(\Sigma\) has \(d\) large eigenvalues, then the number of parameters to be estimated is \(\Omega(d^{2})\). This means that the performance of \(\widehat{\Sigma}\) suffers from the curse of dimensionality. In (Tropp, 2012; Vershynin, 2012b), the authors noticed that if the distribution of \(\mathbf{X}_{1},\ldots,\mathbf{X}_{n}\) is supported on a Euclidean ball of radius \(R\) centered at the origin, then the following inequality holds with high probability:
\[\left\|\widehat{\Sigma}-\Sigma\right\|\lesssim\sqrt{\frac{R\|\Sigma\|\log(d/ \delta)}{n}}\vee\frac{R\log(d/\delta)}{n}. \tag{2}\]
We see that the finite support assumption almost eliminates the dependence on \(d\). However, we still face problems when the dimension becomes exponentially large compared to the sample size \(n\). Fortunately, Vershynin (2012b, Remark 5.53) noticed that if the data lies near a low-dimensional subspace (which is often the case in high-dimensional tasks), then the rate of convergence of the sample covariance matrix depends on the intrinsic dimension, characterized by the effective rank
\[\mathtt{r}(\Sigma)=\frac{\operatorname{Tr}(\Sigma)}{\|\Sigma\|}.\]
In (Adamczak, 2015) and (Koltchinskii and Lounici, 2017), the authors proved a dimension-free version of (1) in the Gaussian setup:
\[\left\|\widehat{\Sigma}-\Sigma\right\|\lesssim\|\Sigma\|\left(\sqrt{\frac{ \mathtt{r}(\Sigma)+\log(1/\delta)}{n}}\vee\frac{\mathtt{r}(\Sigma)+\log(1/ \delta)}{n}\right). \tag{3}\]
As mentioned in (Koltchinskii and Lounici, 2017, Theorem 9), a similar inequality holds for a broad class of sub-Gaussian distributions, satisfying the \(\psi_{2}\)-\(L_{2}\)-equivalence condition. The term "dimension-free" means that the right-hand side of (3) depends only on the operator norm and effective rank of \(\Sigma\). Hence, the bound (3) still makes sense even if the ambient dimension \(d\) is huge. Later, this result was extended in the papers of Vershynin (2018, Theorem 9.2.4 and Exercise 9.2.5), Zhivotovskiy (2021, Theorem 1), and Han (2022). In (Zhivotovskiy, 2021, Theorem 3), the author also obtained a dimension-free version of (Adamczak et al., 2010, Theorem 4.1) for the log-concave case. A bit earlier, Bunea and Xiao (2015) studied the behaviour
of the Frobenius norm of \((\widehat{\Sigma}-\Sigma)\) under the same \(\psi_{2}\)-\(L_{2}\)-equivalence assumption. They proved that (see (Bunea and Xiao, 2015, Proposition A.3))
\[\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}\lesssim\|\Sigma\|\,\mathtt{r}(\Sigma)\left(\sqrt{\frac{\log(2/\delta)}{n}}\vee\frac{\log(2/\delta)}{n}\right) \tag{4}\]
with probability at least \((1-\delta)\). We would like to note that the low effective rank assumption is not the only way to escape the curse of dimensionality. There are a lot of papers devoted to the problem of covariance estimation under structural assumptions (for example, sparsity (Bickel and Levina, 2008; Cai and Zhou, 2012; Fan et al., 2015), Kronecker product structure (Tsiligkaridis and Hero, 2013; Leng and Pan, 2018), bandable (Bickel and Levina, 2008; Cai et al., 2010) or Toeplitz (Xiao and Wu, 2012; Cai et al., 2013) matrix \(\Sigma\) to mention a few). This goes beyond the scope of the present paper. A reader is referred to the comprehensive survey (Cai et al., 2016) on this subject. We consider the situation when \(\Sigma\) has no other properties but a small effective rank.
The works we referred to illustrate how rapidly the topic of covariance estimation evolved in the last decade. Recently developed advanced techniques allowed statisticians to examine more challenging setups, like those with missing observations (Lounici, 2014; Lounici and Pacreau, 2023; Abdalla, 2023), heavy tails (Vershynin, 2012; Srivastava and Vershynin, 2013; Youssef, 2013; Tikhomirov, 2018; Mendelson and Zhivotovskiy, 2020; Abdalla and Zhivotovskiy, 2022) and adversarial contamination (Abdalla and Zhivotovskiy, 2022; Minasyan and Zhivotovskiy, 2023). We do not pursue the goal of pushing these results even further. In contrast, we use our technical findings to discover subtle effects that have not been noticed in the classical sub-Gaussian setting so far.
Contribution. In the present paper, we study properties of the squared Frobenius distance between the sample covariance and its expectation, which has received less attention compared to the operator norm of \((\widehat{\Sigma}-\Sigma)\). The state-of-the-art result (4) of Bunea and Xiao (2015) implies that
\[\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}\lesssim\frac{\|\Sigma \|^{2}\mathbf{r}(\Sigma)^{2}\log(2/\delta)}{n}\]
with high probability in the case of moderate confidence level \(\delta\) (that is, \(\log(1/\delta)\lesssim n\)). This bound exhibits the correct dependence on the operator norm and on the effective rank of \(\Sigma\). For instance, if \(\mathbf{X}_{1},\ldots,\mathbf{X}_{n}\) have the Gaussian distribution \(\mathcal{N}(\mathbf{0},\Sigma)\), then
\[\mathbb{E}\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}=\frac{( \mathrm{Tr}(\Sigma))^{2}+\mathrm{Tr}(\Sigma^{2})}{n},\]
and, according to the Paley-Zygmund inequality, \(\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\) is not smaller than \(\big{[}(\mathrm{Tr}(\Sigma))^{2}+\mathrm{Tr}(\Sigma^{2})\big{]}/(2n)\) on an event of positive probability. However, the upper bound \(\mathcal{O}((\mathrm{Tr}(\Sigma))^{2}/n)\) becomes suboptimal for some distributions, if we are speaking of the difference \(\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}-\mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\). A thorough analysis of higher-order derivatives of the cumulant generating function
\[\varphi(U)=\log\mathbb{E}e^{\boldsymbol{\xi}^{\top}U\boldsymbol{\xi}},\quad \text{where}\quad\boldsymbol{\xi}=\Sigma^{-1/2}\mathbf{X}, \tag{5}\]
allows us to establish a dimension-free high-probability upper bound
\[\left|\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}-\mathbb{E} \left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}\right|\lesssim\frac{ \|\Sigma\|^{2}}{n}\max\left\{\mathbf{r}(\Sigma^{2})\sqrt{\log(7/\delta)},\log (7/\delta)\right\},\quad\text{where}\quad\mathbf{r}(\Sigma^{2})=\frac{ \mathrm{Tr}(\Sigma^{2})}{\|\Sigma\|^{2}},\]
which holds for a large subclass of sub-Gaussian distributions, arbitrary \(\delta\in(0,1)\), such that \(\log(1/\delta)\lesssim n/\mathtt{r}(\Sigma)^{2}\) and \(n\gtrsim\mathtt{r}(\Sigma)^{6}\) (see Theorem 2.3 and Theorem 2.5). We would like to note that \(\mathtt{r}(\Sigma^{2})\), also referred to as stable rank of \(\Sigma\), is always not greater than \(\mathtt{r}(\Sigma)^{2}\) and sometimes it can be significantly smaller than the squared effective rank. As a consequence, we obtain the following bound on \(\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\) with a sharp constant in a high-dimensional setting:
\[\left|\frac{\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}}{\mathbb{E}\|\widehat {\Sigma}-\Sigma\|_{\mathrm{F}}^{2}}-1\right|\lesssim\frac{\sqrt{\log(7/\delta) }}{\mathtt{r}(\Sigma)-1}=\mathcal{O}\left(\frac{1}{\mathtt{r}(\Sigma)}\right), \quad\mathtt{r}(\Sigma),\,n/\mathtt{r}(\Sigma)^{6}\to\infty.\]
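The scale of these quantities is easy to probe numerically. The sketch below (with a hypothetical diagonal \(\Sigma\)) compares the Monte Carlo mean of \(\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\) with the Gaussian identity quoted above and reports the observed fluctuations next to the \(\mathrm{Tr}(\Sigma^{2})/n\) scale appearing in the bound; it is an illustration only, not part of the proofs.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 50, 5000, 100

# Diagonal covariance with a harmonic spectrum; only r(Sigma) and r(Sigma^2) matter here.
eigs = 1.0 / np.arange(1, d + 1)
tr, tr2 = eigs.sum(), (eigs ** 2).sum()
print("r(Sigma) =", tr / eigs.max(), " r(Sigma^2) =", tr2 / eigs.max() ** 2)

frob2 = np.empty(trials)
for t in range(trials):
    X = rng.standard_normal((n, d)) * np.sqrt(eigs)   # rows are N(0, Sigma) samples
    S_hat = X.T @ X / n                               # sample covariance
    frob2[t] = np.sum((S_hat - np.diag(eigs)) ** 2)   # ||S_hat - Sigma||_F^2

print("Monte Carlo mean :", frob2.mean())
print("Gaussian formula :", (tr ** 2 + tr2) / n)      # ((Tr Sigma)^2 + Tr(Sigma^2)) / n
print("observed std     :", frob2.std())
print("Tr(Sigma^2)/n    :", tr2 / n)                  # fluctuation scale in the bound above
```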
Notation. Throughout the paper, \(\|A\|_{\mathrm{F}}\) and \(\|A\|\) stand for the Frobenius and the operator norm of \(A\), respectively. If the matrix \(A\) is symmetric positive semidefinite and \(A\neq O\), where \(O\) is the matrix with zero entries, we denote its effective rank by
\[\mathtt{r}(A)=\mathrm{Tr}(A)/\|A\|.\]
We use the standard notation \((A\otimes B)\) for the Kronecker product of matrices \(A\) and \(B\). Its definition and some useful properties are listed in Appendix A. For any matrix \(U\in\mathbb{R}^{p\times q}\) with columns \(\mathbf{u}_{1},\ldots,\mathbf{u}_{q}\), the vectorization operator is given by \(\mathbf{vec}(U)=(\mathbf{u}_{1}^{\top},\ldots,\mathbf{u}_{q}^{\top})^{\top} \in\mathbb{R}^{pq}\). Here and further in the paper, the bold font is reserved for vectors, while matrices and scalars are displayed in regular font. For a random variable \(\eta\), its Orlicz \(\psi_{s}\)-norm, \(s\geqslant 1\), is defined as
\[\|\eta\|_{\psi_{s}}=\inf\left\{t>0:\mathbb{E}e^{|\eta|^{s}/t^{s}}\leqslant 2 \right\}.\]
The expressions \((a\lor b)\) and \((a\wedge b)\) denote \(\max\{a,b\}\) and \(\min\{a,b\}\), respectively. Sometimes, instead of the standard \(\mathcal{O}\) notation, we use \(f\lesssim g\) or \(g\gtrsim f\), which means that there is a universal constant \(c>0\), such that \(f\leqslant cg\). Finally, \(\mathcal{B}(\mathbf{x},R)\) is the Euclidean ball of radius \(R\) centered at \(\mathbf{x}\).
Paper structure. The rest of the paper is organized as follows. In Section 2, we present our main results and discuss their implications. After that, we illustrate them with numerical simulations in Section 3. Sections 4, 5, and 6 are devoted to the proofs of the statements from Section 2; some technical details are deferred to the Appendix.
## 2 Main results
In this section, we present rigorous statements of the main results of the paper. Let us start with a couple of auxiliary definitions. Throughout the paper, \(\boldsymbol{\xi}=\Sigma^{-1/2}\mathbf{X}\) stands for the whitened vector. We assume that the covariance matrix \(\Sigma\) is invertible for simplicity. In a general situation, one can always reduce the problem of covariance estimation to the non-degenerate case, adding tiny isotropic Gaussian noise to the observations. For a matrix \(U\in\mathbb{R}^{d\times d}\), such that \(\mathbb{E}e^{\boldsymbol{\xi}^{\top}U\boldsymbol{\xi}}<\infty\), let \(\mathsf{P}_{U}\) denote a probability measure, defined as
\[\mathrm{d}\mathsf{P}_{U}(\mathbf{x})=\frac{e^{\mathbf{x}^{\top}U\mathbf{x}}}{\mathbb{E}e^{\boldsymbol{\xi}^{\top}U\boldsymbol{\xi}}}\mathrm{d}\mathbb{P}_{\boldsymbol{\xi}}(\mathbf{x}),\quad\text{where}\quad\boldsymbol{\xi}=\Sigma^{-1/2}\mathbf{X}.\]
For any Borel function \(f:\mathbb{R}^{d}\times\mathbb{R}^{d\times d}\to\mathbb{R}\), the expression \(\mathsf{P}_{U}f(\boldsymbol{\xi},U)\) stands for the expectation of \(f(\boldsymbol{\xi},U)\) with respect to the measure \(\mathsf{P}_{U}\):
\[\mathsf{P}_{U}f(\boldsymbol{\xi},U)=\frac{\mathbb{E}f(\boldsymbol{\xi},U)e^{\boldsymbol{\xi}^{\top}U\boldsymbol{\xi}}}{\mathbb{E}e^{\boldsymbol{\xi}^{\top}U\boldsymbol{\xi}}}.\]
As we mentioned in the introduction, our proof is based on the study of derivatives of the cumulant generating function \(\varphi(U)\), defined in (5). They admit a nice representation in terms of the introduced functional \(\mathsf{P}_{U}\). The only property we require is the regularity of the third and the fourth derivatives of \(\varphi(U)\), which is guaranteed by the following assumption.
**Assumption 2.1**.: _There exist positive numbers \(\tau\) and \(\rho_{\max}\), such that the vector \(\mathbf{\xi}=\Sigma^{-1/2}\mathbf{X}\) satisfies the inequality_
\[\mathsf{P}_{U}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathsf{P}_{U}\mathbf{\xi}^{\top}V\bm {\xi}\right)^{4}\leqslant\tau^{2}\ \|V\|_{\mathrm{F}}^{4}\]
_for all \(U\in\mathbb{R}^{d\times d}\), such that \(\|U\|_{\mathrm{F}}\leqslant\rho_{\max}\), and all \(V\in\mathbb{R}^{d\times d}\)._
To our knowledge, Assumption 2.1 has not been used in the literature on covariance estimation, so a natural question is which random vectors \(\mathbf{\xi}\) satisfy this condition. We claim that Assumption 2.1 holds for a broad class of distributions and support our assertion with the next proposition.
**Proposition 2.2**.: _Assume that there exists \(\omega>0\), such that_
\[\left\|\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\right\|_{\psi_{1}}\leqslant \omega\|V\|_{\mathrm{F}}\quad\text{for all }V\in\mathbb{R}^{d\times d}. \tag{6}\]
_Then the random vector \(\mathbf{\xi}\) satisfies Assumption 2.1 with \(\tau=64\omega^{2}\) and \(\rho_{\max}=(6\omega)^{-1}\)._
Proposition 2.2 immediately yields that Assumption 2.1 is fulfilled for all random vectors \(\mathbf{\xi}\), satisfying the Hanson-Wright inequality. For instance, this is the case for sub-Gaussian random vectors with independent entries (see, e.g., (Hanson and Wright, 1971, Rudelson and Vershynin, 2013)). Less trivial examples are the distributions with the convex concentration property (Adamczak, 2015). The random vector \(\mathbf{\xi}\) is said to have the convex concentration property if there exists a positive constant \(K\), such that
\[\mathbb{P}\left(|g(\mathbf{\xi})-\mathbb{E}g(\mathbf{\xi})|\geqslant t\right)\leqslant 2 e^{-t^{2}/K^{2}}\]
for any convex \(1\)-Lipschitz function \(g\) and any \(t>0\). We would like to note that random vectors with a strongly log-concave density or satisfying the log-Sobolev inequality have this property. It is also easy to verify that distributions satisfying the Hanson-Wright inequality have the \(\psi_{2}\)-\(L_{2}\)-equivalence property. Thus, (6) is a stronger condition than the one considered in (Bunea and Xiao, 2015). However, it is not clear whether Assumption 2.1 implies the \(\psi_{2}\)-\(L_{2}\)-equivalence of \(\mathbf{\xi}\).
We proceed with a high-probability upper bound on the Frobenius norm of \((\widehat{\Sigma}-\Sigma)\).
**Theorem 2.3**.: _Grant Assumption 2.1. Fix any \(\delta\in(0,1)\) and let_
\[\mathtt{R}(\Sigma,\delta)=2\mathtt{r}(\Sigma)+\frac{1}{2}\sqrt{\tau\mathtt{ r}(\Sigma^{2})}+2\sqrt{e\log(2/\delta)}.\]
_Assume that the sample size \(n\) satisfies the inequalities_
\[\left\|\Sigma\right\|\mathtt{R}(\Sigma,\delta)\leqslant\rho_{\max}\sqrt{n} \quad\text{and}\quad\mathtt{r}(\Sigma)^{2}\mathtt{R}(\Sigma,\delta)^{2}\left( \frac{\tau^{3}\mathtt{R}(\Sigma,\delta)^{2}}{4}+3\tau^{2}\right)\leqslant 36n. \tag{7}\]
_Then, with probability at least \((1-\delta)\), it holds that_
\[\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}-\mathbb{E}\left\| \widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}<\frac{4\|\Sigma\|^{2}}{n} \max\left\{\sqrt{2\big{(}\tau\mathtt{r}(\Sigma^{2})^{2}+\tau^{2}\mathtt{r}( \Sigma^{4})\big{)}\log(2/\delta)},\ 4e\tau\log(2/\delta)\right\}. \tag{8}\]
According to Lemma 5.5 below, under Assumption 2.1, it holds that
\[n\,\mathbb{E}\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}\leqslant \left(\mathrm{Tr}(\Sigma)\right)^{2}+(\tau-1)\,\mathrm{Tr}(\Sigma^{2}).\]
Hence, Theorem 2.3 yields that
\[\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}\lesssim\frac{\|\Sigma \|^{2}\left(\mathtt{r}(\Sigma)^{2}+\log(2/\delta)\right)}{n}\]
with probability at least \((1-\delta)\), resembling the result of Bunea and Xiao (2015) with slightly better dependence on \(\delta\) (we have \(\mathtt{r}(\Sigma)^{2}+\log(2/\delta)\) instead of \(\mathtt{r}(\Sigma)^{2}\log(1/\delta)\), cf. (4)). However, as we discuss after Theorem 2.5, the expression in the right-hand side of (8) may be much smaller than the expectation of \(\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\).
The result of Theorem 2.3 is also related to concentration of quadratic forms, because the squared Frobenius norm of \((\widehat{\Sigma}-\Sigma)\) can be naturally represented as
\[n\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}=\mathbf{vec}(H)^{ \top}\left(\Sigma\otimes\Sigma\right)\mathbf{vec}(H),\]
where \(\otimes\) stands for the Kronecker product of matrices and \(\mathbf{vec}(H)\) is a random vector, obtained by reshaping of
\[H=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(\boldsymbol{\xi}_{i}\boldsymbol{\xi} _{i}^{\top}-I_{d}\right).\]
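This representation is easy to verify numerically. The sketch below (ours; the Gaussian data and the particular \(\Sigma\) are arbitrary illustrative choices, and the sample covariance is taken without centering, consistent with the formulas above) checks that \(n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\) coincides with \(\mathbf{vec}(H)^{\top}(\Sigma\otimes\Sigma)\mathbf{vec}(H)\), where \(H\) is built from the symmetric square root of \(\Sigma\).

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 6, 40

G = rng.standard_normal((d, d))
Sigma = G @ G.T / d + np.eye(d)                 # an invertible covariance, chosen for illustration

w, V = np.linalg.eigh(Sigma)                    # symmetric square root and its inverse
Sigma_half = V @ np.diag(np.sqrt(w)) @ V.T
Sigma_inv_half = V @ np.diag(w ** -0.5) @ V.T

X = rng.standard_normal((n, d)) @ Sigma_half    # zero-mean Gaussian sample with covariance Sigma
Sigma_hat = X.T @ X / n                         # sample covariance without centering

H = np.sqrt(n) * Sigma_inv_half @ (Sigma_hat - Sigma) @ Sigma_inv_half
vecH = H.reshape(-1, order="F")                 # column-stacking vectorization vec(H)

lhs = n * np.linalg.norm(Sigma_hat - Sigma, "fro") ** 2
rhs = vecH @ np.kron(Sigma, Sigma) @ vecH
print(np.isclose(lhs, rhs))                     # the two representations coincide (up to rounding)
```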
It is easy to observe that the matrix \(H\) may have sub-exponential dependent entries under Assumption 2.1. At the same time, most papers studying quadratic forms work with sub-Gaussian random vectors (see, for instance, (Laurent and Massart, 2000; Hsu et al., 2012; Klochkov and Zhivotovskiy, 2020; Spokoiny, 2023) and the previously mentioned (Hanson and Wright, 1971; Rudelson and Vershynin, 2013; Adamczak, 2015)). The only exception we are aware of is the paper of Sambale (2023), where the author considers quadratic forms of sub-exponential random vectors with independent components. One may also consider \(\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\) as a degree-\(4\) polynomial of a sub-Gaussian random vector. Unfortunately, the latest results on concentration of sub-Gaussian polynomials (see, for instance, Schudy and Sviridenko (2012); Adamczak and Wolff (2015); Gotze et al. (2021)) cannot recover the upper bound (8).
The final remark on Theorem 2.3 we want to make is that \(\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\) should behave like a square of a sub-exponential random variable. However, in the right-hand side of (8), we have the standard sub-exponential tail \(\sqrt{\log(2/\delta)}\vee\log(2/\delta)\), rather than \((\log(2/\delta))^{2}\). This is because the statement of Theorem 2.3 holds not for all \(\delta\) from the interval \((0,1)\) but only for \(\delta\gtrsim e^{-\mathcal{O}(n)}\), excluding the case of extremely high confidence.
We proceed with a complementary lower bound on \(\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\). In contrast to the upper bound, we do not need the random vector \(\boldsymbol{\xi}\) to be sub-Gaussian anymore. The following will be sufficient for our purposes.
**Assumption 2.4**.: _There exists \(\alpha>0\), such that_
\[\mathbb{E}\left(\boldsymbol{\xi}^{\top}V\boldsymbol{\xi}-\mathbb{E}\, \boldsymbol{\xi}^{\top}V\boldsymbol{\xi}\right)^{4}\leqslant\alpha^{2}\,\,\|V \|_{\mathrm{F}}^{4}\quad\text{for all }V\in\mathbb{R}^{d\times d}.\]
The reason for the significant relaxation of Assumption 2.1 is that in the proof of lower bounds we have to study the exponential moment \(\mathbb{E}\exp\{-\lambda\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\}\), which is finite for all \(\lambda>0\) whatever the distribution of \(\boldsymbol{\xi}\) is. As a result, the weaker Assumption 2.4 does not lead to worse dependence on the parameter \(\delta\). A similar effect was observed in (Oliveira, 2016), where the author studied lower tails of the sample covariance.
**Theorem 2.5**.: _Grant Assumption 2.4 and let the sample size \(n\) be sufficiently large in a sense that_
\[7\alpha^{2}\mathtt{r}(\Sigma^{2})^{2}\left(\mathtt{r}(\Sigma)^{2}+\alpha \mathtt{r}(\Sigma^{2})\right)\leqslant n\quad\text{and}\quad 96\left(1+\sqrt{ \alpha}\right)^{2}\mathtt{r}(\Sigma^{2})\left(\mathtt{r}(\Sigma)^{2}+\alpha \mathtt{r}(\Sigma^{2})\right)\leqslant n. \tag{9}\]
_Then, for any \(\delta\in(0,1)\), with probability at least \((1-\delta)\), it holds that_
\[\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}-\mathbb{E}\left\| \widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}>-\frac{4\|\Sigma\|^{2}}{n} \left(2\log(5/\delta)\vee\sqrt{2\mathfrak{R}(\Sigma)\log(5/\delta)}\right), \tag{10}\]
_where_
\[\mathfrak{R}(\Sigma)=\alpha\mathtt{r}(\Sigma^{2})^{2}+\alpha^{2}\mathtt{r}( \Sigma^{4})+\frac{15\alpha^{2}\mathtt{r}(\Sigma)^{2}\mathtt{r}(\Sigma^{2}) \left(\mathtt{r}(\Sigma)^{2}+\alpha\mathtt{r}(\Sigma^{2})\right)}{4n}.\]
Summing up the results of Theorem 2.3 and Theorem 2.5 and using the union bound, we obtain that, under Assumption 2.1,
\[\left|\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}-\mathbb{E}\left\| \widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}\right|\lesssim\frac{\|\Sigma \|^{2}}{n}\max\left\{\mathtt{r}(\Sigma^{2})\sqrt{\log(7/\delta)},\log(7/\delta )\right\} \tag{11}\]
with probability at least \((1-\delta)\), provided that the sample size \(n\) is large enough, that is \(n\gtrsim\mathtt{r}(\Sigma)^{6}+\mathtt{r}(\Sigma)^{2}(\log(2/\delta))^{2}\). The inequality (11) implies that the ratio \(\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}/\mathbb{E}\|\widehat{\Sigma}- \Sigma\|_{\mathrm{F}}^{2}\) is close to \(1\), when the effective rank of \(\Sigma\) is large. Indeed, according to Lemma 5.5 below, it holds that
\[n\,\mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2} \geqslant\left(\mathrm{Tr}(\Sigma)\right)^{2}-\mathrm{Tr}(\Sigma^{2})=\| \Sigma\|^{2}\left(\mathtt{r}(\Sigma)^{2}-\mathtt{r}(\Sigma^{2})\right).\]
This yields that
\[\left|\frac{\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}}{\mathbb{ E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}}-1\right| \lesssim\max\left\{\frac{\mathtt{r}(\Sigma^{2})\sqrt{\log(7/ \delta)}}{\mathtt{r}(\Sigma)^{2}-\mathtt{r}(\Sigma^{2})},\frac{\log(7/ \delta)}{\mathtt{r}(\Sigma)^{2}-\mathtt{r}(\Sigma^{2})}\right\}\] \[\lesssim\max\left\{\frac{\sqrt{\log(7/\delta)}}{\mathtt{r}(\Sigma )-1},\frac{\log(7/\delta)}{\mathtt{r}(\Sigma)\left(\mathtt{r}(\Sigma)-1 \right)}\right\},\]
where the last inequality is due to the fact that \(1\leqslant\mathtt{r}(\Sigma^{2})\leqslant\mathtt{r}(\Sigma)\). In other words, we obtain that
\[\frac{\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}}{\mathbb{E}\|\widehat{ \Sigma}-\Sigma\|_{\mathrm{F}}^{2}}=1+\mathcal{O}\left(\frac{1}{\mathtt{r}( \Sigma)}\right)\quad\text{almost surely when}\quad\mathtt{r}(\Sigma),\,n/ \mathtt{r}(\Sigma)^{6}\to\infty. \tag{12}\]
## 3 Experiments
In this section, we illustrate the concentration phenomenon discussed in Section 2 with numerical simulations. Our goal is to observe that the ratio \(\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}/\mathbb{E}\|\widehat{\Sigma}- \Sigma\|_{\mathrm{F}}^{2}\) concentrates around \(1\) when \(\mathtt{r}(\Sigma)\) tends to infinity, as guaranteed by (12).
Let us describe the process of population covariance generation. We set the ambient dimension \(d\) equal to \(50\). Then, for each \(t\in[0,1]\), we define the covariance matrix \(\Sigma_{t}=U\Lambda_{t}U^{\top}\), where \(U\) is drawn from the uniform distribution on the orthogonal group \(\mathbb{O}(d)\) and \(\Lambda_{t}\) is defined as follows:
\[\Lambda_{t}=\begin{cases}\mathrm{diag}\left(1,2t\left(1-\frac{1}{d}\right),2t \left(1-\frac{2}{d}\right),\ldots,\frac{2t}{d}\right)&\text{if }t\in[0,0.5];\\ \mathrm{diag}\left(1,(1-1/d)^{2(1-t)},(1-2/d)^{2(1-t)},\ldots,(1/d)^{2(1-t)} \right),&\text{otherwise}.\end{cases}\]
In our simulations, we take \(t\) from the grid \(\mathcal{T}=\{0,1/69,2/69,\ldots,1\}\), so the effective ranks \(\mathtt{r}(\Sigma_{t})\), \(t\in\mathcal{T}\), are spread regularly over the segment \([1,d]\).
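For completeness, here is a short sketch of this construction (ours; the Haar-distributed \(U\) is produced by a sign-corrected QR decomposition of a Gaussian matrix, which is one standard way of sampling from \(\mathbb{O}(d)\)).

```python
import numpy as np

rng = np.random.default_rng(2)
d = 50

def haar_orthogonal(rng, d):
    """U drawn uniformly from O(d) via QR of a Gaussian matrix with sign correction."""
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    return Q * np.sign(np.diag(R))

def lambda_t(t, d):
    """The diagonal spectrum Lambda_t defined above."""
    j = np.arange(1, d)
    if t <= 0.5:
        tail = 2 * t * (1 - j / d)
    else:
        tail = (1 - j / d) ** (2 * (1 - t))
    return np.concatenate(([1.0], tail))

U = haar_orthogonal(rng, d)
for t in (0.0, 1 / 3, 2 / 3, 1.0):                       # a few points of the grid
    lam = lambda_t(t, d)
    Sigma_t = (U * lam) @ U.T                            # U diag(Lambda_t) U^T
    print(round(t, 2), round(lam.sum() / lam.max(), 1))  # effective rank r(Sigma_t)
```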
The simulation process goes as follows. For each \(\Sigma=U\Lambda U^{\top}\in\{\Sigma_{t}:t\in\mathcal{T}\}\), we consider two different distribution laws of \(\mathbf{X}_{1},\ldots,\mathbf{X}_{n}\).
1. The first variant is \(\mathbf{X}=U\Lambda^{1/2}\boldsymbol{\xi}/\sigma\), where \(\boldsymbol{\xi}=(\xi_{1},\ldots,\xi_{d})^{\top}\) is a random vector with independent entries drawn from the _truncated Laplace_ distribution with the density \[\mathsf{p}(x)=\frac{1}{Z}e^{-|x|}\mathbb{1}\,(|x|\leqslant 6),\] where \(Z\) is the normalizing constant. The constant \(\sigma>0\) is chosen in a way to ensure that \(\mathbb{E}\mathbf{X}\mathbf{X}^{\top}=\Sigma\): \[\sigma^{2}=\int\limits_{-6}^{6}x^{2}\mathsf{p}(x)\mathrm{d}x.\]
2. The second case we consider is \(\mathbf{X}=\sqrt{d}U\Lambda^{1/2}\boldsymbol{\xi}\), where \(\boldsymbol{\xi}\) has the uniform distribution on the unit sphere \(\mathbb{S}^{d-1}\).
Using numerical integration, we calculate \(\mathbb{E}\|\widehat{\Sigma}-\Sigma_{t}\|_{\mathrm{F}}^{2}\) for the truncated Laplace distribution with a reasonable accuracy:
\[n\mathbb{E}\left\|\widehat{\Sigma}-\Sigma_{t}\right\|_{\mathrm{F}}^{2}= \mathbb{E}\left\|\mathbf{X}\mathbf{X}^{\top}-\Sigma_{t}\right\|_{\mathrm{F}}^ {2}=(\operatorname{Tr}(\Lambda_{t}))^{2}+(K-2)\operatorname{Tr}(\Lambda_{t}^ {2}),\]
where \(K\) is the kurtosis of the truncated Laplace distribution, defined as
\[K=\frac{1}{\sigma^{4}}\int\limits_{-6}^{6}x^{4}\mathsf{p}(x)\mathrm{d}x.\]
Similarly, we compute \(\mathbb{E}\|\widehat{\Sigma}-\Sigma_{t}\|_{\mathrm{F}}^{2}\) for the uniform distribution on the unit sphere:
\[n\mathbb{E}\left\|\widehat{\Sigma}-\Sigma_{t}\right\|_{\mathrm{F}}^{2}= \mathbb{E}\left\|\mathbf{X}\mathbf{X}^{\top}-\Sigma_{t}\right\|_{\mathrm{F}}^ {2}=\frac{d}{d+2}\left(\operatorname{Tr}(\Lambda_{t})\right)^{2}+\frac{d-2}{ d+2}\operatorname{Tr}(\Lambda_{t}^{2}).\]
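Both closed-form expressions are easy to evaluate. The sketch below (ours; the kurtosis \(K\) and the variance \(\sigma^{2}\) are obtained by a crude Riemann sum rather than exact integration) implements them for an arbitrary spectrum \(\Lambda_{t}\).

```python
import numpy as np

def truncated_laplace_moments(num_points=240001):
    """sigma^2 and kurtosis K of the Laplace density truncated to [-6, 6],
    computed by a plain Riemann sum (crude, but sufficient here)."""
    x = np.linspace(-6.0, 6.0, num_points)
    w = np.exp(-np.abs(x))
    dx = x[1] - x[0]
    Z = w.sum() * dx                               # normalizing constant
    sigma2 = (x ** 2 * w).sum() * dx / Z
    K = ((x ** 4 * w).sum() * dx / Z) / sigma2 ** 2
    return sigma2, K

def expected_error_laplace(lam, K):
    """n * E||Sigma_hat - Sigma_t||_F^2 for the truncated Laplace design."""
    return lam.sum() ** 2 + (K - 2) * (lam ** 2).sum()

def expected_error_sphere(lam, d):
    """n * E||Sigma_hat - Sigma_t||_F^2 for the uniform-on-the-sphere design."""
    return d / (d + 2) * lam.sum() ** 2 + (d - 2) / (d + 2) * (lam ** 2).sum()

sigma2, K = truncated_laplace_moments()
print(sigma2, K)     # truncation pulls K noticeably below 6, the kurtosis of the full Laplace law
```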
Note that both distributions fulfill the condition of Proposition 2.2: in the former case \(\mathbf{\xi}\) is a vector of independent bounded random variables, and in the latter case \(\mathbf{\xi}\) satisfies the logarithmic Sobolev inequality (see, for instance, [1, Theorem 5.7.4]). Consequently, they admit the concentration bounds provided in Theorem 2.3 and Theorem 2.5.
For each \(n\in\{10,50,100,1000\}\), \(t\in\mathcal{T}=\{0,1/69,2/69,\ldots,1\}\) and both distributions, we generate \(5000\) samples \((\mathbf{X}_{1}^{j,t},\ldots,\mathbf{X}_{n}^{j,t})\), \(j=1,\ldots,5000\), of size \(n\). Next, we compute
\[a_{j,t}:=\frac{\|\widehat{\Sigma}_{j,t}-\Sigma_{t}\|_{\mathrm{F}}^{2}}{\mathbb{ E}\|\widehat{\Sigma}_{j,t}-\Sigma_{t}\|_{\mathrm{F}}^{2}},\]
where \(\widehat{\Sigma}_{j,t}\) is the empirical covariance matrix based on \(\mathbf{X}_{1}^{j,t},\ldots,\mathbf{X}_{n}^{j,t}\). For a fixed \(t\in\mathcal{T}\), we calculate the width \(w_{t}\) of the empirical \(0.95\)-confidence interval for \(\{a_{j,t}:1\leqslant j\leqslant 5000\}\).
Finally, we plot the dependence of \(\log_{2}(w_{t})\) on \(\mathtt{r}(\Sigma_{t})\). The results are displayed in Figure 1. We observe that the width of the interval goes to zero as \(\mathtt{r}(\Sigma_{t})\) increases.
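A compact version of this experiment could look as follows (a sketch only: the sample size, the number of repetitions, and the selected values of \(t\) are our own smaller choices, and the helpers `haar_orthogonal`, `lambda_t`, `truncated_laplace_moments`, and `expected_error_laplace` come from the sketches above). It reports the width of the empirical \(0.95\)-confidence interval of the ratios, which shrinks as \(\mathtt{r}(\Sigma_{t})\) grows.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, reps = 50, 100, 500        # deliberately smaller than in the paper, to keep the sketch fast

def truncated_laplace(rng, size):
    """Rejection sampling from the standard Laplace law restricted to [-6, 6]."""
    out = rng.laplace(0.0, 1.0, size=size)
    bad = np.abs(out) > 6
    while bad.any():
        out[bad] = rng.laplace(0.0, 1.0, size=int(bad.sum()))
        bad = np.abs(out) > 6
    return out

sigma2, K = truncated_laplace_moments()          # from the sketch above
U = haar_orthogonal(rng, d)                      # from the sketch above

for t in (0.1, 0.5, 0.9):                        # a few points of the grid
    lam = lambda_t(t, d)                         # from the sketch above
    Sigma_t = (U * lam) @ U.T
    expected = expected_error_laplace(lam, K) / n
    ratios = np.empty(reps)
    for j in range(reps):
        xi = truncated_laplace(rng, (n, d)) / np.sqrt(sigma2)
        X = (xi * np.sqrt(lam)) @ U.T            # rows X_i = U Lambda_t^{1/2} xi_i / sigma
        Sigma_hat = X.T @ X / n
        ratios[j] = np.linalg.norm(Sigma_hat - Sigma_t, "fro") ** 2 / expected
    lo, hi = np.quantile(ratios, [0.025, 0.975])
    print(f"t={t:.1f}  r(Sigma_t)={lam.sum() / lam.max():5.1f}  0.95-CI width={hi - lo:.3f}")
```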
## 4 Proof of Proposition 2.2
The key step of our proof is an upper bound the Orlicz norm of \((\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V))\) with respect to the measure \(\mathsf{P}_{U}\). Similarly to the standard Orlicz \(\psi_{1}\)-norm, we define
\[\|f(\mathbf{\xi})\|_{\psi_{1}(\mathsf{P}_{U})}=\inf\left\{t>0:\mathsf{P}_{U}e^{|f( \mathbf{\xi})|/t}\leqslant 2\right\}\]
for any function \(f:\mathbb{R}^{d}\to\mathbb{R}\) and any \(U\in\mathbb{R}^{d\times d}\), such that \(\mathbb{E}\exp\{\mathbf{\xi}^{\top}U\mathbf{\xi}\}<\infty\). The \(\psi_{1}(\mathsf{P}_{U})\)-norm is just the Orlicz norm with respect to a different probability measure, and hence, it inherits all the properties of the usual \(\psi_{1}\)-norm. For instance, analogously to the bound on moments of sub-exponential random variables (see, e.g., [1, proof of Lemma 2, eq. (10)]), we have
\[\mathsf{P}_{U}|f(\mathbf{\xi})|^{k}\leqslant 2\,\Gamma(k+1)\,\|f(\mathbf{\xi})\|_{\psi _{1}(\mathsf{P}_{U})}^{k}\quad\text{for all }k\in\mathbb{N}. \tag{13}\]
If we manage to obtain an upper bound on
\[\left\|\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\right\|_{\psi_{1}(\mathsf{P}_ {U})},\]
then it is straightforward to bound
\[\mathsf{P}_{U}\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\quad\text{and}\quad \mathsf{P}_{U}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\right)^{4}.\]
After that, the claim of the proposition follows immediately from the triangle inequality:
\[\left[\mathsf{P}_{U}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathsf{P}_{U}\mathbf{\xi}^{ \top}V\mathbf{\xi}\right)^{4}\right]^{1/4}\leqslant\left[\mathsf{P}_{U}\left(\bm {\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\right)^{4}\right]^{1/4}+\left|\mathsf{P}_ {U}\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\right|.\]
For convenience, we split the rest of the proof into several steps.
Step 1: a bound on the Orlicz norm. We are going to show that
\[\left\|\boldsymbol{\xi}^{\top}V\boldsymbol{\xi}-\mathrm{Tr}(V)\right\|_{\psi_{1} (\mathsf{P}_{U})}\leqslant\frac{2\omega\|V\|_{\mathrm{F}}}{1-\omega\|U\|_{ \mathrm{F}}}. \tag{14}\]
For this purpose, it is enough to prove that
\[\mathsf{P}_{U}\exp\left\{\frac{|\boldsymbol{\xi}^{\top}V\boldsymbol{\xi}- \mathrm{Tr}(V)|}{t}\right\}\leqslant 2,\quad\text{where}\quad t=\frac{2 \omega\|V\|_{\mathrm{F}}}{1-\omega\|U\|_{\mathrm{F}}}. \tag{15}\]
According to Jensen's inequality, we have
\[\mathbb{E}e^{\boldsymbol{\xi}^{\top}U\boldsymbol{\xi}}\geqslant e^{\mathbb{E }\boldsymbol{\xi}^{\top}U\boldsymbol{\xi}}=e^{\mathrm{Tr}(U)}.\]
Then it holds that
\[\mathsf{P}_{U}\exp\left\{\frac{|\boldsymbol{\xi}^{\top}V\boldsymbol {\xi}-\mathrm{Tr}(V)|}{t}\right\} \leqslant\left(\mathsf{P}_{U}\exp\left\{\frac{2|\boldsymbol{\xi} ^{\top}V\boldsymbol{\xi}-\mathrm{Tr}(V)|}{t}\right\}\right)^{1/2}\] \[\leqslant\left(\mathbb{E}\exp\left\{\frac{2|\boldsymbol{\xi}^{ \top}V\boldsymbol{\xi}-\mathrm{Tr}(V)|}{t}+\boldsymbol{\xi}^{\top}U \boldsymbol{\xi}-\mathrm{Tr}(U)\right\}\right)^{1/2}.\]
Since \(|\boldsymbol{\xi}^{\top}V\boldsymbol{\xi}-\mathrm{Tr}(V)|=\max\{\boldsymbol {\xi}^{\top}V\boldsymbol{\xi}-\mathrm{Tr}(V),\mathrm{Tr}(V)-\boldsymbol{\xi} ^{\top}V\boldsymbol{\xi}\}\), the expression in the right-hand side is not greater than
\[\left(\sum_{\varepsilon\in\{-1,1\}}\mathbb{E}\exp\left\{\boldsymbol{\xi}^{\top}\left(U+2\varepsilon V/t\right)\boldsymbol{\xi}-\mathrm{Tr}(U+2\varepsilon V/t)\right\}\right)^{1/2}.\]
Note that, due to the triangle inequality and (6), for any \(\varepsilon\in\{-1,1\}\), we have
\[\left\|\boldsymbol{\xi}^{\top}\left(U+2\varepsilon V/t\right) \boldsymbol{\xi}-\mathrm{Tr}(U+2\varepsilon V/t)\right\|_{\psi_{1}} \leqslant\left\|\boldsymbol{\xi}^{\top}U\boldsymbol{\xi}- \mathrm{Tr}(U)\right\|_{\psi_{1}}+\frac{2}{t}\left\|\boldsymbol{\xi}^{\top}V \boldsymbol{\xi}-\mathrm{Tr}(V)\right\|_{\psi_{1}}\] \[\leqslant\omega\|U\|_{\mathrm{F}}+\frac{2\omega\|V\|_{\mathrm{F} }}{t}=1,\]
where \(\|\cdot\|_{\psi_{1}}\) stands for the standard \(\psi_{1}\)-norm (see the notations in Section 1). Then the Holder inequality implies that
\[\mathbb{E}\exp\left\{\boldsymbol{\xi}^{\top}\left(U+2\varepsilon V/t\right) \boldsymbol{\xi}-\mathrm{Tr}(U+2\varepsilon V/t)\right\}\leqslant 2^{\left\| \boldsymbol{\xi}^{\top}\left(U+2\varepsilon V/t\right)\boldsymbol{\xi}- \mathrm{Tr}(U+2\varepsilon V/t)\right\|_{\psi_{1}}}\leqslant 2.\]
Hence, we obtain that
\[\mathsf{P}_{U}\exp\left\{\frac{|\boldsymbol{\xi}^{\top}V\boldsymbol{\xi}-\mathrm{Tr}(V)|}{t}\right\} \leqslant\left(\sum_{\varepsilon\in\{-1,1\}}\mathbb{E}\exp\left\{\boldsymbol{\xi}^{\top}\left(U+2\varepsilon V/t\right)\boldsymbol{\xi}-\mathrm{Tr}(U+2\varepsilon V/t)\right\}\right)^{1/2}\] \[\leqslant\left(\sum_{\varepsilon\in\{-1,1\}}2\right)^{1/2}=2,\]
which yields (14).
Step 2: bounds on \(\mathsf{P}_{U}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\right)^{4}\) and \(\left|\mathsf{P}_{U}\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\right|\). Applying the inequality (13), we obtain that
\[\mathsf{P}_{U}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\right)^{4}\leqslant 48\left\|\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\right\|_{\psi_{1}(\mathsf{P}_{U})}^{4}\leqslant 48\left(\frac{2\omega\|V\|_{\mathrm{F}}}{1-\omega\|U\|_{\mathrm{F}}}\right)^{4}. \tag{16}\]
Concerning \(\left|\mathsf{P}_{U}\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\right|\), we will prove a tighter bound than in (13). Let \(t>0\) be as defined in (15). Then it holds that
\[\left|\mathsf{P}_{U}\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\right|\leqslant t\log\mathsf{P}_{U}\exp\left\{\frac{\left|\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathrm{Tr}(V)\right|}{t}\right\}\leqslant t\log 2=\frac{2\omega\|V\|_{\mathrm{F}}}{1-\omega\|U\|_{\mathrm{F}}}\cdot\log 2. \tag{17}\]
Step 3: final bound. Summing up the inequalities (16) and (17) and using the triangle inequality, we obtain that
\[\left[\mathsf{P}_{U}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathsf{P}_{U}\mathbf{\xi}^{ \top}V\mathbf{\xi}\right)^{4}\right]^{1/4}\leqslant\frac{2\omega\|V\|_{\mathrm{F} }}{1-\omega\|U\|_{\mathrm{F}}}\left(\log 2+48^{1/4}\right).\]
If \(\omega\|U\|_{\mathrm{F}}\leqslant 1/6\), then
\[\left[\mathsf{P}_{U}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathsf{P}_{U}\mathbf{\xi}^{ \top}V\mathbf{\xi}\right)^{4}\right]^{1/4}\leqslant\frac{12}{5}\left(\log 2+48^{1/4} \right)\omega\|V\|_{\mathrm{F}}\leqslant 8\omega\|V\|_{\mathrm{F}}.\]
This implies that, for any \(U\in\mathbb{R}^{d\times d}\), satisfying the inequality \(6\omega\|U\|_{\mathrm{F}}\leqslant 1\), it holds that
\[\mathsf{P}_{U}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathsf{P}_{U}\mathbf{\xi}^{\top}V\bm {\xi}\right)^{4}\leqslant\left(64\omega^{2}\right)^{2}\|V\|_{\mathrm{F}}^{4}.\]
## 5 Proof of Theorem 2.3
This section is devoted to the proof of Theorem 2.3. Let us denote
\[H=\sqrt{n}\ \Sigma^{-1/2}\left(\widehat{\Sigma}-\Sigma\right)\Sigma^{-1/2}= \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left((\Sigma^{-1/2}\mathbf{X}_{i})(\Sigma^{- 1/2}\mathbf{X}_{i})^{\top}-I\right)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(\bm {\xi}_{i}\mathbf{\xi}_{i}^{\top}-I\right).\]
Then
\[\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}=\frac{1}{n}\left\| \Sigma^{1/2}H\Sigma^{1/2}\right\|_{\mathrm{F}}^{2},\]
and it is enough to show that
\[\left\|\Sigma^{1/2}H\Sigma^{1/2}\right\|_{\mathrm{F}}^{2}-\mathbb{E}\left\|\Sigma^{1/2}H\Sigma^{1/2}\right\|_{\mathrm{F}}^{2}\] \[\leqslant 4\|\Sigma\|^{2}\max\left\{\sqrt{2\big{(}\tau\mathtt{r}(\Sigma^{2})^{2}+\tau^{2}\mathtt{r}(\Sigma^{4})\big{)}\log(2/\delta)},\ 4e\tau\log(2/\delta)\right\}\]
with probability at least \((1-\delta)\). In order to do this, we first prove that
\[\left\|\Sigma^{1/2}H\Sigma^{1/2}\right\|_{\mathrm{F}}^{2}-\mathbb{E}\left\|\Sigma^{1/2}H\Sigma^{1/2}\right\|_{\mathrm{F}}^{2}\leqslant\max\left\{4\mathsf{v}\sqrt{2\log(2/\delta)},\ 16e(\kappa\vee\|\Sigma\|^{2})\log(2/\delta)\right\} \tag{18}\]
with high probability, where
\[\mathsf{v}^{2} =\left\|\mathbb{E}\mathbf{vec}(\mathbf{XX}^{\top}-\Sigma)\mathbf{vec}(\mathbf{XX}^{\top}-\Sigma)^{\top}\right\|_{\mathrm{F}}^{2},\] \[\kappa =\sup_{\|U\|_{\mathrm{F}}=1}\mathbb{E}\left[\mathbf{X}^{\top}U\mathbf{X}-\mathrm{Tr}(U^{\top}\Sigma)\right]^{2}, \tag{19}\]
and then use Lemma 5.6 to derive the final bound. In the rest of the proof we study exponential moments of \(\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}}^{2}\) and derive the large deviation bound (18). Since the proof is quite technical, we split it into several steps to improve readability of the paper. The proofs of some auxiliary results are moved to Appendix.
Step 1: linearization. We start with the linearization trick. Let \(\Gamma\in\mathbb{R}^{d\times d}\) be a random matrix with i.i.d. standard Gaussian entries. Then it is straightforward to check that
\[\mathbb{E}\exp\left\{\lambda\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}}^{2}/2 \right\}=\mathbb{E}\exp\left\{\sqrt{\lambda}\mathrm{Tr}(\Gamma^{\top}\Sigma^{ 1/2}H\Sigma^{1/2})\right\}. \tag{20}\]
The idea is that the power of the exponent in the right-hand side is linear in \(H\), and the analysis of linear statistics is more convenient. Unfortunately, the linearization trick in the form (20) will not bring us to the desired result, because the elements of \(H\) are sub-exponential, and hence, the conditional moment
\[\mathbb{E}_{H}\exp\left\{\sqrt{\lambda}\mathrm{Tr}(\Gamma^{\top}\Sigma^{1/2}H \Sigma^{1/2})\right\}\equiv\mathbb{E}\left[\exp\left\{\sqrt{\lambda}\mathrm{ Tr}(\Gamma^{\top}\Sigma^{1/2}H\Sigma^{1/2})\right\}\ \Big{|}\,\Gamma\right]\]
may fail to exist for some \(\Gamma\). Nevertheless, the idea of linearization is still useful, and we just have to tailor the right-hand side of (20) to our setup. This brings us to the following lemma.
**Lemma 5.1**.: _Let \(\mathfrak{z}\) and \(\lambda\) be any positive numbers, and let \(\Gamma\) be a random matrix with i.i.d. standard Gaussian entries. Then, for any matrix \(A\in\mathbb{R}^{d\times d}\), satisfying the inequality_
\[\|\Sigma A\Sigma\|_{\mathrm{F}}\leqslant\mathfrak{z},\]
_it holds that_
\[\exp\left\{\lambda\|\Sigma^{1/2}A\Sigma^{1/2}\|_{\mathrm{F}}^{2} /2\right\}\] \[\leqslant 2\mathbb{E}_{\Gamma}\,\exp\left\{\sqrt{\lambda}\mathrm{Tr} (\Gamma^{\top}\Sigma^{1/2}A\Sigma^{1/2})\right\}\mathbbm{1}\left(\|\Sigma^{1/2 }\Gamma\Sigma^{1/2}\|_{\mathrm{F}}\leqslant\mathfrak{z}\sqrt{\lambda}+\sqrt{ 2}\mathrm{Tr}(\Sigma)\right).\]
The proof of Lemma 5.1 is moved to Appendix B. For any deterministic matrix \(U\in\mathbb{R}^{d\times d}\), let us denote
\[\Phi(U)=\log\mathbb{E}e^{\mathrm{Tr}(H^{\top}U)}=\sum_{i=1}^{n}\log\mathbb{E }e^{\boldsymbol{\xi}_{i}^{\top}U\boldsymbol{\xi}_{i}/\sqrt{n}}. \tag{21}\]
Applying Lemma 5.1 and using the Fubini theorem, we obtain that
\[\mathbb{E}e^{\lambda\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}}^{2}/2} \mathbb{1}\left(\sqrt{\lambda}\|\Sigma H\Sigma\|_{\mathrm{F}}\leqslant\mathfrak{z }\right)\] \[\leqslant 2\mathbb{E}e^{\sqrt{\lambda}\mathrm{Tr}(\Gamma^{\top} \Sigma^{1/2}H\Sigma^{1/2})}\mathbb{1}\left(\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_ {\mathrm{F}}\leqslant\mathfrak{z}\sqrt{\lambda}+\sqrt{2}\mathrm{Tr}(\Sigma)\right)\] \[=2\mathbb{E}e^{\Phi(\sqrt{\lambda}\Sigma^{1/2}\Gamma\Sigma^{1/2})} \mathbb{1}\left(\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}\leqslant \mathfrak{z}\sqrt{\lambda}+\sqrt{2}\mathrm{Tr}(\Sigma)\right).\]
Step 2: Taylor's expansion. It is easy to observe that the function \(\Phi\) from (21) can be expressed through the function \(\varphi\), defined in (5):
\[\Phi(U)=\sum_{i=1}^{n}\log\mathbb{E}e^{\boldsymbol{\xi}_{i}^{\top}U\boldsymbol {\xi}_{i}/\sqrt{n}}=n\varphi(U/\sqrt{n}). \tag{22}\]
On this step, we use the smoothness of \(\varphi(U)\), guaranteed by Assumption 2.1, to derive an upper bound on
\[\mathbb{E}e^{\Phi(\sqrt{\lambda}\Sigma^{1/2}\Gamma\Sigma^{1/2})}\mathbb{1} \left(\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}\leqslant\mathfrak{z} \sqrt{\lambda}+\sqrt{2}\mathrm{Tr}(\Sigma)\right)\]
for a fixed \(\mathfrak{z}>0\). The next lemma helps to bound the derivatives of \(\varphi\).
**Lemma 5.2**.: _Suppose that the random vector \(\boldsymbol{\xi}=\Sigma^{-1/2}\mathbf{X}\) satisfies Assumption 2.1. Then, for any \(U\in\mathbb{R}^{d\times d}\), such that \(\|U\|_{\mathrm{F}}\leqslant\rho_{\max}\), the derivatives of the cumulant generating function_
\[\varphi(U)=\log\mathbb{E}\exp\left\{\boldsymbol{\xi}^{\top}U\boldsymbol{\xi}\right\}\]
_satisfy the inequalities_
\[0\leqslant\left\langle\nabla^{2}\varphi(U),V^{\otimes 2}\right\rangle= \mathsf{P}_{U}\left(\boldsymbol{\xi}^{\top}V\boldsymbol{\xi}-\mathsf{P}_{U} \boldsymbol{\xi}^{\top}V\boldsymbol{\xi}\right)^{2}\leqslant\tau\|V\|_{ \mathrm{F}}^{2},\]
\[-\tau^{3/2}\|V\|_{\mathrm{F}}^{3}\leqslant\left\langle\nabla^{3}\varphi(U),V ^{\otimes 3}\right\rangle=\mathsf{P}_{U}\left(\boldsymbol{\xi}^{\top}V \boldsymbol{\xi}-\mathsf{P}_{U}\boldsymbol{\xi}^{\top}V\boldsymbol{\xi} \right)^{3}\leqslant\tau^{3/2}\|V\|_{\mathrm{F}}^{3},\]
_and_
\[-2\tau^{2}\|V\|_{\mathrm{F}}^{4}\leqslant\left\langle\nabla^{4}\varphi(U),V^ {\otimes 4}\right\rangle\leqslant\tau^{2}\|V\|_{\mathrm{F}}^{4}.\]
**Remark 5.3**.: _Throughout the paper, we use the standard notation_
\[\langle\nabla\varphi(U),V\rangle\equiv\mathrm{Tr}\left(\nabla\varphi(U)^{\top }V\right),\]
_where \(\nabla\varphi(U)\) is the gradient of the scalar function \(\varphi:\mathbb{R}^{d\times d}\to\mathbb{R}\) with respect to the matrix \(U\in\mathbb{R}^{d\times d}\). For \(k\geqslant 2\), the notation \(\left\langle\nabla^{k}\varphi(U),V^{\otimes k}\right\rangle\) is defined recursively:_
\[\left\langle\nabla^{k}\varphi(U),V^{\otimes k}\right\rangle\equiv\left\langle \nabla\left\langle\nabla^{k-1}\varphi(U),V^{\otimes(k-1)}\right\rangle,V \right\rangle.\]
The proof of Lemma 5.2 is moved to Appendix C. Due to (22), we immediately obtain that
\[\left\langle\nabla^{2}\Phi(O),U^{\otimes 2}\right\rangle =\left\langle\nabla^{2}\varphi(O),U^{\otimes 2}\right\rangle=\mathbb{E}\left(\mathbf{\xi}^{\top}U\mathbf{\xi}-\mathbb{E}\mathbf{\xi}^{\top}U\mathbf{\xi}\right)^{2}=\mathbb{E}\left[\mathrm{Tr}\left((\mathbf{\xi}\mathbf{\xi}^{\top}-I)U\right)\right]^{2},\] \[\left|\left\langle\nabla^{3}\Phi(O),U^{\otimes 3}\right\rangle\right| =\frac{1}{\sqrt{n}}\left|\left\langle\nabla^{3}\varphi(O),U^{\otimes 3}\right\rangle\right|=\frac{1}{\sqrt{n}}\left|\mathbb{E}\left[\mathrm{Tr}\left((\mathbf{\xi}\mathbf{\xi}^{\top}-I)U\right)\right]^{3}\right|\leqslant\frac{\tau^{3/2}}{\sqrt{n}}\|U\|_{\mathrm{F}}^{3}, \tag{23}\] \[\left|\left\langle\nabla^{4}\Phi(V),U^{\otimes 4}\right\rangle\right| =\frac{1}{n}\left|\left\langle\nabla^{4}\varphi(V),U^{\otimes 4}\right\rangle\right|\leqslant\frac{\tau^{2}}{n}\|U\|_{\mathrm{F}}^{4}\quad\text{for all }\|V\|_{\mathrm{F}}\leqslant\rho_{\max}.\]
Here \(O\in\mathbb{R}^{d\times d}\) denotes the matrix with zero entries. Let us introduce
\[\mathfrak{m}=n\ \mathbb{E}\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}=\mathbb{E}\left\|\Sigma^{1/2}H\Sigma^{1/2}\right\|_{\mathrm{F}}^{2} \tag{24}\]
for brevity. Using the inequalities (23), we prove the following result.
**Lemma 5.4**.: _Grant Assumption 2.1 and let \(\lambda\) and \(\mathfrak{z}\) be any positive numbers, satisfying the inequalities_
\[\mathfrak{z}\sqrt{\lambda}+\sqrt{2}\mathrm{Tr}(\Sigma)\leqslant\rho_{\max} \sqrt{n} \tag{25}\]
_and_
\[\lambda\kappa+G(\lambda,\mathfrak{z})\|\Sigma\|^{2}\leqslant\frac{1}{2}, \tag{26}\]
_where_
\[G(\lambda,\mathfrak{z})=\frac{\lambda(\mathfrak{z}\lambda+\sqrt{2\lambda} \mathrm{Tr}(\Sigma))^{2}}{36n}\left(\tau^{3}\big{(}\mathfrak{z}\lambda+\sqrt{2 \lambda}\mathrm{Tr}(\Sigma)\big{)}^{2}+3\tau^{2}\right). \tag{27}\]
_Then it holds that_
\[\mathbb{E}e^{\Phi(\sqrt{\lambda}\Sigma^{1/2}\Gamma\Sigma^{1/2})} \mathds{1}\left(\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}\leqslant\mathfrak{z}\sqrt{\lambda}+\sqrt{2}\mathrm{Tr}(\Sigma)\right)\] \[\leqslant\exp\left\{\frac{\lambda\mathfrak{m}}{2}+2\lambda^{2}\mathsf{v}^{2}+\frac{1}{2}G(\lambda,\mathfrak{z})\mathrm{Tr}(\Sigma)^{2}+2G(\lambda,\mathfrak{z})^{2}\left\|\Sigma\right\|_{\mathrm{F}}^{4}\right\},\]
_where \(\mathfrak{m}\), \(\mathsf{v}\) and \(\kappa\) are defined in (24) and (19)._
The proof of Lemma 5.4 is deferred to Appendix D.
Step 3: peeling argument. In this step, we transform the restricted exponential moment bound from Lemma 5.4 into a large deviation bound on \(\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}}\). Let \(\mathfrak{z}\) and \(\lambda\) be positive numbers to be specified later. Then it holds that
\[\mathbb{P}\left(\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}}\geqslant\mathfrak{z}\right) \leqslant\sum_{k=1}^{\infty}\mathbb{P}\left(e^{k-1}\mathfrak{z}\leqslant\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}}<e^{k}\mathfrak{z}\right)\] \[\leqslant\sum_{k=1}^{\infty}\mathbb{P}\left(\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}}\geqslant e^{k-1}\mathfrak{z}\text{ and }\|\Sigma H\Sigma\|_{\mathrm{F}}\leqslant e^{k}\mathfrak{z}\|\Sigma\|\right).\]
Obviously, for any \(k\geqslant 1\), on the event \(\{\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}}\geqslant e^{k-1}\mathfrak{z}\}\), we have
\[\mathds{1}\left(\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}}\geqslant e^{k-1} \mathfrak{z}\right)\leqslant\exp\left\{-\frac{e^{2k-2}\lambda\mathfrak{z}^{2} }{2\cdot e^{k}}+\frac{\lambda\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}}^{2}} {2\cdot e^{k}}\right\}\]
This yields the following version of the Markov inequality:
\[\sum_{k=1}^{\infty}\mathbb{P}\left(\|\Sigma^{1/2}H\Sigma^{1/2}\|_{ \mathrm{F}}\geqslant e^{k-1}\mathfrak{z}\text{ and }\|\Sigma H\Sigma\|_{\mathrm{F}}\leqslant e^{k} \mathfrak{z}\|\Sigma\|\right)\] \[\leqslant\sum_{k=1}^{\infty}\mathbb{E}\exp\left\{-\frac{e^{k-2} \lambda\mathfrak{z}^{2}}{2}+\frac{\lambda\|\Sigma^{1/2}H\Sigma^{1/2}\|_{ \mathrm{F}}^{2}}{2\cdot e^{k}}\right\}\mathbbm{1}\big{(}\|\Sigma H\Sigma\|_{ \mathrm{F}}\leqslant e^{k}\mathfrak{z}\|\Sigma\|\big{)}.\]
Lemma 5.1 and Lemma 5.4 imply that
\[\sum_{k=1}^{\infty}\mathbb{E}\exp\left\{-\frac{e^{k-2}\lambda \mathfrak{z}^{2}}{2}+\frac{\lambda\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}}^{ 2}}{2\cdot e^{k}}\right\}\mathbbm{1}\big{(}\|\Sigma H\Sigma\|_{\mathrm{F}} \leqslant e^{k}\mathfrak{z}\|\Sigma\|\big{)}\] \[\leqslant\sum_{k=1}^{\infty}\exp\left\{-\frac{e^{k-2}\lambda \mathfrak{z}^{2}}{2}+\frac{\lambda\mathfrak{m}}{2\cdot e^{k}}+\frac{2\lambda^ {2}\mathsf{v}^{2}}{e^{2k}}+\frac{1}{2}G\left(e^{-k}\lambda,e^{k}\mathfrak{z} \|\Sigma\|\right)\mathrm{Tr}(\Sigma)^{2}+2G\left(e^{-k}\lambda,e^{k}\mathfrak{ z}\|\Sigma\|\right)^{2}\|\Sigma\|_{\mathrm{F}}^{4}\right\}.\]
It is straightforward to check that, by the definition of \(G(\lambda,\mathfrak{z})\) (see (27)), we have
\[G\left(e^{-k}\lambda,e^{k}\mathfrak{z}\|\Sigma\|\right)\leqslant G\left( \lambda,\mathfrak{z}\|\Sigma\|\right).\]
Hence,
\[\leqslant\exp\left\{-\frac{\lambda(\mathfrak{z}^{2}-\mathfrak{m })}{2e}+\frac{2\lambda^{2}\mathsf{v}^{2}}{e^{2}}+\frac{1}{2}G\left(\lambda, \mathfrak{z}\|\Sigma\|\right)\mathrm{Tr}(\Sigma)^{2}+2G\left(\lambda, \mathfrak{z}\|\Sigma\|\right)^{2}\|\Sigma\|_{\mathrm{F}}^{4}\right\}\] \[\quad+\sum_{k=2}^{\infty}\exp\left\{-\frac{(e^{k-1}-1)\lambda \mathfrak{z}^{2}}{2e}-\frac{\lambda(\mathfrak{z}^{2}-\mathfrak{m})}{2e}+\frac {2\lambda^{2}\mathsf{v}^{2}}{e^{2}}+\frac{1}{2}G\left(\lambda,\mathfrak{z}\| \Sigma\|\right)\mathrm{Tr}(\Sigma)^{2}+2G\left(\lambda,\mathfrak{z}\|\Sigma\| \right)^{2}\|\Sigma\|_{\mathrm{F}}^{4}\right\}.\]
Introduce a function \(g:\mathbb{R}_{+}\to\mathbb{R}_{+}\), defined by
\[g(u)=1+\sum_{k=1}^{\infty}\exp\left\{-\frac{(e^{k}-1)u}{2e}\right\}=\sum_{k=0 }^{\infty}\exp\left\{-\frac{(e^{k}-1)u}{2e}\right\}. \tag{28}\]
Then
\[\sum_{k=1}^{\infty}\exp\left\{-\frac{e^{k-2}\lambda\mathfrak{z}^{ 2}}{2}+\frac{\lambda\mathfrak{m}}{2\cdot e^{k}}+\frac{2\lambda^{2}\mathsf{v}^ {2}}{e^{2k}}+\frac{1}{2}G\left(e^{-k}\lambda,e^{k}\mathfrak{z}\|\Sigma\| \right)\mathrm{Tr}(\Sigma)^{2}+2G\left(e^{-k}\lambda,e^{k}\mathfrak{z}\| \Sigma\|\right)^{2}\|\Sigma\|_{\mathrm{F}}^{4}\right\}\] \[\leqslant g(\lambda\mathfrak{z}^{2})\exp\left\{-\frac{\lambda( \mathfrak{z}^{2}-\mathfrak{m})}{2e}+\frac{2\lambda^{2}\mathsf{v}^{2}}{e^{2}}+ \frac{1}{2}G\left(\lambda,\mathfrak{z}\|\Sigma\|\right)\mathrm{Tr}(\Sigma)^{2 }+2G\left(\lambda,\mathfrak{z}\|\Sigma\|\right)^{2}\|\Sigma\|_{\mathrm{F}}^{4 }\right\},\]
and thus,
\[\mathbb{P}\left(\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}} \geqslant\mathfrak{z}\right)\] \[\leqslant g(\lambda\mathfrak{z}^{2})\exp\left\{-\frac{\lambda( \mathfrak{z}^{2}-\mathfrak{m})}{2e}+\frac{2\lambda^{2}\mathsf{v}^{2}}{e^{2}}+ \frac{1}{2}G\left(\lambda,\mathfrak{z}\|\Sigma\|\right)\mathrm{Tr}(\Sigma)^{2 }+2G\left(\lambda,\mathfrak{z}\|\Sigma\|\right)^{2}\|\Sigma\|_{\mathrm{F}}^{4 }\right\}. \tag{29}\]
Step 4: choosing \(\lambda\) and \(\mathfrak{z}\). In this step, we specify \(\lambda\) and \(\mathfrak{z}\) and ensure that they satisfy (25) and (26) under the conditions of the theorem. Let us take
\[\lambda=\frac{e(\mathfrak{z}^{2}-\mathfrak{m})}{8\mathsf{v}^{2}}\wedge\frac{1}{ 4(\kappa\vee\|\Sigma\|^{2})}\quad\text{and}\quad\mathfrak{z}^{2}=\mathfrak{m }+\max\left\{4\mathsf{v}\sqrt{2\log(2/\delta)},16e(\kappa\vee\|\Sigma\|^{2}) \log(2/\delta)\right\}. \tag{30}\]
Such \(\lambda\) minimizes the expression
\[-\frac{\lambda(\mathfrak{z}^{2}-\mathfrak{m})}{2e}+\frac{2\lambda^{2}\mathsf{ v}^{2}}{e^{2}}\quad\text{over}\quad\lambda\in\left[0,\frac{1}{4(\kappa\vee\| \Sigma\|^{2})}\right].\]
With \(\lambda\) and \(\mathfrak{z}\), defined in (30), we have
\[-\frac{\lambda(\mathfrak{z}^{2}-\mathfrak{m})}{2e}+\frac{2\lambda^{2}\mathsf{v}^{2}}{e^{2}}\leqslant-\min\left\{\frac{(\mathfrak{z}^{2}-\mathfrak{m})^{2}}{32\mathsf{v}^{2}},\frac{\mathfrak{z}^{2}-\mathfrak{m}}{16e(\kappa\vee\|\Sigma\|^{2})}\right\} \tag{31}\]
and
\[\mathfrak{z}^{2}\lambda=\mathfrak{m}\lambda+(\mathfrak{z}^{2}-\mathfrak{m}) \lambda=\mathfrak{m}\lambda+4e\log(2/\delta)\geqslant 4e\log 2.\]
As a consequence, we obtain that
\[g(\mathfrak{z}^{2}\lambda)\leqslant g\left(4e\log 2\right)\leqslant 1.1, \tag{32}\]
because, according to the definition of \(g\), it is a non-increasing function on \((0,+\infty)\).
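The numeric constant in (32) is easy to confirm by summing the series defining \(g\) directly; a tiny check (ours) is given below.

```python
import numpy as np

u = 4 * np.e * np.log(2)                             # the argument 4e*log(2) appearing in (32)
k = np.arange(30)
g = np.exp(-(np.exp(k) - 1) * u / (2 * np.e)).sum()  # direct summation of the series defining g
print(g)                                             # approximately 1.093, indeed below 1.1
```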
Let us show that the inequalities (25) and (26) are fulfilled. Since \(\mathfrak{m}\lambda\leqslant 0.25\,\mathfrak{m}/(\kappa\vee\|\Sigma\|^{2})\), we have
\[\mathfrak{z}\sqrt{\lambda}\|\Sigma\|+\sqrt{2}\mathrm{Tr}(\Sigma) \leqslant\|\Sigma\|\left(\sqrt{\frac{\mathfrak{m}}{4(\kappa\vee\| \Sigma\|^{2})}+4e\log(2/\delta)}+\sqrt{2}\mathbf{r}(\Sigma)\right)\] \[\leqslant\|\Sigma\|\left(\frac{1}{2}\sqrt{\frac{\mathfrak{m}}{ \kappa\vee\|\Sigma\|^{2}}}+2\sqrt{e\log(2/\delta)}+\sqrt{2}\mathbf{r}(\Sigma) \right).\]
The following lemma allows us to simplify the expression in the right-hand side.
**Lemma 5.5**.: _Suppose that Assumption 2.4 holds with some \(\alpha>0\). Then \(\mathfrak{m}=n\,\,\mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\) satisfies the inequalities_
\[(\mathrm{Tr}(\Sigma))^{2}-\mathrm{Tr}(\Sigma^{2})\leqslant\mathfrak{m} \leqslant(\mathrm{Tr}(\Sigma))^{2}+(\alpha-1)\,\,\mathrm{Tr}(\Sigma^{2}).\]
A reader can find the proof of Lemma 5.5 in Appendix E. We would like to note that Assumption 2.4 is weaker than Assumption 2.1 and, under the conditions of Theorem 2.3, we have that Assumption 2.4 is fulfilled with \(\alpha=\tau\), which yields
\[\mathfrak{m}\leqslant(\mathrm{Tr}(\Sigma))^{2}+(\tau-1)\,\,\mathrm{Tr}( \Sigma^{2}).\]
Applying Lemma 5.5, we obtain that
\[\mathfrak{z}\sqrt{\lambda}\|\Sigma\|+\sqrt{2}\mathrm{Tr}(\Sigma) \leqslant\|\Sigma\|\left(\frac{1}{2}\sqrt{\frac{\mathfrak{m}}{\kappa\vee\|\Sigma\|^{2}}}+2\sqrt{e\log(2/\delta)}+\sqrt{2}\mathtt{r}(\Sigma)\right)\] \[\leqslant\|\Sigma\|\left(\frac{1}{2}\sqrt{\mathtt{r}(\Sigma)^{2}+\tau\mathtt{r}(\Sigma^{2})}+2\sqrt{e\log(2/\delta)}+\sqrt{2}\mathtt{r}(\Sigma)\right)\] \[\leqslant\|\Sigma\|\left(\frac{1}{2}\mathtt{r}(\Sigma)+\frac{1}{2}\sqrt{\tau\mathtt{r}(\Sigma^{2})}+2\sqrt{e\log(2/\delta)}+\sqrt{2}\mathtt{r}(\Sigma)\right)\] \[\leqslant\|\Sigma\|\left(2\mathtt{r}(\Sigma)+\frac{1}{2}\sqrt{\tau\mathtt{r}(\Sigma^{2})}+2\sqrt{e\log(2/\delta)}\right)=\|\Sigma\|\,\,\mathtt{R}(\Sigma,\delta),\]
and then (25) is fulfilled due to (7). Moreover, the inequality (7) implies that
\[G(\lambda,\mathfrak{z}\|\Sigma\|)\operatorname{Tr}(\Sigma)^{2} =\frac{\lambda^{2}\|\Sigma\|^{4}\operatorname{\mathbf{r}}(\Sigma) ^{2}\operatorname{\mathtt{R}}(\Sigma,\delta)^{2}}{36n}\left(\tau^{3}\lambda\| \Sigma\|^{2}\operatorname{\mathtt{R}}(\Sigma,\delta)^{2}+3\tau^{2}\right)\] \[\leqslant\frac{\|\Sigma\|^{4}\operatorname{\mathbf{r}}(\Sigma) ^{2}\operatorname{\mathtt{R}}(\Sigma,\delta)^{2}}{144n(\kappa^{2}\vee\| \Sigma\|^{4})}\left(\frac{\tau^{3}\|\Sigma\|^{2}\operatorname{\mathtt{R}}( \Sigma,\delta)^{2}}{4(\kappa\vee\|\Sigma\|^{2})}+3\tau^{2}\right) \tag{33}\] \[\leqslant\frac{\operatorname{\mathbf{r}}(\Sigma)^{2} \operatorname{\mathtt{R}}(\Sigma,\delta)^{2}}{144n}\left(\frac{\tau^{3} \operatorname{\mathtt{R}}(\Sigma,\delta)^{2}}{4}+3\tau^{2}\right)\leqslant \frac{1}{4}.\]
Similarly, it holds that
\[G(\lambda,\mathfrak{z}\|\Sigma\|)\left\|\Sigma\right\|_{\mathrm{F}}^{2}=G( \lambda,\mathfrak{z}\|\Sigma\|)\operatorname{Tr}(\Sigma^{2})\leqslant G( \lambda,\mathfrak{z}\|\Sigma\|)\operatorname{Tr}(\Sigma)^{2}\leqslant\frac{1}{4} \tag{34}\]
and
\[G(\lambda,\mathfrak{z}\|\Sigma\|)\left\|\Sigma\right\|^{2}\leqslant\frac{1}{4}.\]
The last inequality means that (26) is fulfilled, because
\[\lambda\kappa+G(\lambda,\mathfrak{z}\|\Sigma\|)\|\Sigma\|^{2}=\frac{\kappa}{4( \kappa\vee\|\Sigma\|^{2})}+G(\lambda,\mathfrak{z}\|\Sigma\|)\|\Sigma\|^{2} \leqslant\frac{1}{4}+\frac{1}{4}=\frac{1}{2}.\]
Finally, taking into account (29), (32), (33), and (34), we obtain that
\[\mathbb{P}\left(\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}}\geqslant \mathfrak{z}\right)\] \[\leqslant g(\lambda\mathfrak{z}^{2})\exp\left\{-\frac{\lambda( \mathfrak{z}^{2}-\mathfrak{m})}{2e}+\frac{2\lambda^{2}\mathsf{v}^{2}}{e^{2}}+ \frac{1}{2}G\left(\lambda,\mathfrak{z}\|\Sigma\|\right)\operatorname{Tr}( \Sigma)^{2}+2G\left(\lambda,\mathfrak{z}\|\Sigma\|\right)^{2}\|\Sigma\|_{ \mathrm{F}}^{4}\right\}\] \[\leqslant 1.1\exp\left\{-\frac{\lambda(\mathfrak{z}^{2}-\mathfrak{ m})}{2e}+\frac{2\lambda^{2}\mathsf{v}^{2}}{e^{2}}+\frac{1}{2}\cdot\frac{1}{4}+2 \cdot\frac{1}{16}\right\}\] \[<2\exp\left\{-\frac{\lambda(\mathfrak{z}^{2}-\mathfrak{m})}{2e} +\frac{2\lambda^{2}\mathsf{v}^{2}}{e^{2}}\right\},\]
and then the choice of \(\lambda\) and \(\mathfrak{z}\) (see (30) and (31)) ensures that
\[\mathbb{P}\left(\|\Sigma^{1/2}H\Sigma^{1/2}\|_{\mathrm{F}} \geqslant\mathfrak{z}\right) <2\exp\left\{-\frac{\lambda(\mathfrak{z}^{2}-\mathfrak{m})}{2e} +\frac{2\lambda^{2}\mathsf{v}^{2}}{e^{2}}\right\}\] \[\leqslant 2\exp\left\{-\min\left\{\frac{(\mathfrak{z}^{2}- \mathfrak{m})^{2}}{32\mathsf{v}^{2}},\frac{\mathfrak{z}^{2}-\mathfrak{m}}{16e (\kappa\vee\|\Sigma\|^{2})}\right\}\right\}=\delta. \tag{35}\]
Step 5: a bound on \(\mathsf{v}^{2}\) and \(\kappa\). The inequality (35) yields that
\[n\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}-n\ \mathbb{E}\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2} =\left\|\Sigma^{1/2}H\Sigma^{1/2}\right\|_{\mathrm{F}}^{2}- \mathbb{E}\left\|\Sigma^{1/2}H\Sigma^{1/2}\right\|_{\mathrm{F}}^{2}\] \[=\left\|\Sigma^{1/2}H\Sigma^{1/2}\right\|_{\mathrm{F}}^{2}- \mathfrak{m}\] \[\leqslant\max\left\{4\mathsf{v}\sqrt{2\log(2/\delta)},\ 16e(\kappa \vee\|\Sigma\|^{2})\log(2/\delta)\right\}\]
with probability at least \((1-\delta)\). Here we used the definition of \(\mathfrak{m}\) (see eq. (24)). The goal of the final step is to obtain upper bounds on \(\mathsf{v}\) and \(\kappa\), using Assumption 2.1. In Appendix F, we prove the following result.
**Lemma 5.6**.: _Let Assumption 2.4 be satisfied with some \(\alpha>0\). Then it holds that_
\[\mathsf{v}^{2}=\left\|\mathbb{E}\mathbf{vec}(\mathbf{XX}^{\top}-\Sigma)\mathbf{ vec}(\mathbf{XX}^{\top}-\Sigma)^{\top}\right\|_{\mathrm{F}}^{2}\leqslant\alpha \left(\mathrm{Tr}(\Sigma^{2})\right)^{2}+(\alpha^{2}-\alpha)\;\mathrm{Tr}( \Sigma^{4})\]
_and_
\[\kappa=\sup_{\|U\|_{\mathrm{F}}=1}\mathbb{E}\left[\mathbf{X}^{\top}U\mathbf{X }-\mathrm{Tr}(U^{\top}\Sigma)\right]^{2}\leqslant\alpha\,\|\Sigma\|^{2}.\]
Let us recall that, under the conditions of Theorem 2.3, Assumption 2.4 is fulfilled with \(\alpha=\tau\). Hence, we have
\[\mathsf{v}^{2}\leqslant\tau\left(\mathrm{Tr}(\Sigma^{2})\right)^{2}+(\tau^{2}- \tau)\;\mathrm{Tr}(\Sigma^{4})\quad\text{and}\quad\kappa\leqslant\tau\,\| \Sigma\|^{2}.\]
Applying Lemma 5.6, we get the assertion of the theorem: with probability at least \((1-\delta)\) it holds that
\[n\left\|\widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}-n\;\mathbb{E}\left\| \widehat{\Sigma}-\Sigma\right\|_{\mathrm{F}}^{2}<4\|\Sigma\|^{2}\max\left\{ \sqrt{2\big{(}\tau\mathtt{r}(\Sigma^{2})^{2}+\tau^{2}\mathtt{r}(\Sigma^{4}) \big{)}\log(2/\delta)},\;4e\tau\log(2/\delta)\right\}.\]
The proof is finished.
## 6 Proof of Theorem 2.5
Since, for any \(t>0\) and any \(\lambda>0\), it holds that
\[\mathbb{P}\left(n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}-n \,\mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\leqslant-t\right)\] \[\leqslant\exp\left\{-\frac{\lambda t}{2}+\frac{\lambda n\, \mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}}{2}\right\}\cdot \mathbb{E}\exp\left\{-\frac{\lambda n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}} ^{2}}{2}\right\} \tag{36}\]
due to the Markov inequality, we are interested in upper bounds on the exponential moment
\[\mathbb{E}\exp\left\{-\frac{\lambda n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F} }^{2}}{2}\right\}.\]
We would like to note that, in contrast to Theorem 2.3, the exponential moment of interest exists for all \(\lambda>0\) in this case. This slightly simplifies the proof. In what follows, we will show that, under the conditions of Theorem 2.5, for any \(t>0\), it holds that
\[\mathbb{P}\left(n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}-n\,\mathbb{E}\| \widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\leqslant-t\right)\leqslant 5\exp \left\{-\left(\frac{t^{2}}{8\mathsf{s}^{2}}\wedge\frac{t}{8\|\Sigma\|^{2}} \right)\right\}\]
with
\[\mathsf{s}^{2}=4\mathsf{v}^{2}+\frac{15\alpha^{2}\|\Sigma\|_{\mathrm{F}}^{2}\mathrm{Tr}(\Sigma)^{2}\left[\mathtt{r}(\Sigma)^{2}+\alpha\mathtt{r}(\Sigma^{2})\right]}{n},\]
where \(\mathsf{v}\) is defined in (19). This will yield the desired high probability bound (10). Similarly to the proof of Theorem 2.3, we split our derivations in several steps for the ease of presentation.
Step 1: linearization. As before, let us denote
\[H=\sqrt{n}\ \Sigma^{-1/2}\left(\widehat{\Sigma}-\Sigma\right)\Sigma^{-1/2}= \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(\boldsymbol{\xi}_{i}\boldsymbol{\xi}_{i }^{\top}-I\right),\]
where \(\boldsymbol{\xi}_{i}=\Sigma^{-1/2}\mathbf{X}_{i}\) for all \(i\in\{1,\ldots,n\}\) are the standardized vectors. Let \(\Gamma\in\mathbb{R}^{d\times d}\) be a random matrix with i.i.d. standard Gaussian entries. Applying the same linearization trick as in the proof of Theorem 2.3 (Step 1), we obtain that
\[\mathbb{E}\exp\left\{-\frac{\lambda n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}} ^{2}}{2}\right\}=\mathbb{E}\exp\left\{-\frac{\lambda\|\Sigma^{1/2}H\Sigma^{1/ 2}\|_{\mathrm{F}}^{2}}{2}\right\}=\mathbb{E}\exp\left\{\mathtt{i}\sqrt{ \lambda}\ \mathrm{Tr}(\Gamma^{\top}\Sigma^{1/2}H\Sigma^{1/2})\right\}.\]
Hence, we have to focus on the properties of the characteristic function of \(H\). For any \(U\in\mathbb{R}^{d\times d}\), let
\[\psi(U)=\log\mathbb{E}e^{\mathtt{i}\boldsymbol{\xi}^{\top}U\boldsymbol{\xi}}\]
be the logarithm of the Fourier transform of \(\boldsymbol{\xi}^{\top}U\boldsymbol{\xi}\). Applying the Fubini theorem, we get that
\[\mathbb{E}\exp\left\{\mathtt{i}\sqrt{\lambda}\ \mathrm{Tr}( \Gamma^{\top}\Sigma^{1/2}H\Sigma^{1/2})\right\} =\mathbb{E}\exp\left\{\mathtt{i}\sqrt{\frac{\lambda}{n}}\ \sum_{j=1}^{n}\mathrm{Tr}\left(\Gamma^{\top}\Sigma^{1/2}\left( \boldsymbol{\xi}_{j}\boldsymbol{\xi}_{j}^{\top}-I\right)\Sigma^{1/2}\right)\right\}\] \[=\mathbb{E}_{\Gamma}\exp\left\{n\psi\left(\sqrt{\frac{\lambda}{n} }\ \Sigma^{1/2}\Gamma\Sigma^{1/2}\right)-\mathtt{i}\sqrt{n\lambda}\ \mathrm{Tr}(\Sigma^{1/2}\Gamma \Sigma^{1/2})\right\}.\]
Thus, we proved that
\[\mathbb{E}\exp\left\{-\frac{\lambda n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}} ^{2}}{2}\right\}=\mathbb{E}_{\Gamma}\exp\left\{n\psi\left(\sqrt{\frac{\lambda} {n}}\ \Sigma^{1/2}\Gamma\Sigma^{1/2}\right)-\mathtt{i}\sqrt{n\lambda}\ \mathrm{Tr}(\Sigma^{1/2}\Gamma \Sigma^{1/2})\right\}.\]
On the other hand, the absolute value of \(\mathbb{E}\exp\left\{-\lambda n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}/2\right\}\) is equal to the exponential moment itself. Then, due to the Jensen inequality, we have
\[\mathbb{E}\exp\left\{-\frac{\lambda n\|\widehat{\Sigma}-\Sigma\| _{\mathrm{F}}^{2}}{2}\right\} =\left|\mathbb{E}\exp\left\{-\frac{\lambda n\|\widehat{\Sigma}- \Sigma\|_{\mathrm{F}}^{2}}{2}\right\}\right|\] \[=\left|\mathbb{E}_{\Gamma}\exp\left\{n\psi\left(\sqrt{\frac{ \lambda}{n}}\ \Sigma^{1/2}\Gamma\Sigma^{1/2}\right)-\mathtt{i}\sqrt{n\lambda}\ \mathrm{Tr}(\Sigma^{1/2}\Gamma \Sigma^{1/2})\right\}\right|\] \[\leqslant\mathbb{E}_{\Gamma}\left|\exp\left\{n\psi\left(\sqrt{ \frac{\lambda}{n}}\ \Sigma^{1/2}\Gamma\Sigma^{1/2}\right)-\mathtt{i}\sqrt{n\lambda}\ \mathrm{Tr}(\Sigma^{1/2}\Gamma \Sigma^{1/2})\right\}\right|.\]
Let us introduce a positive number \(\rho>0\), such that
\[\rho^{2}=3\|\Sigma\|_{\mathrm{F}}^{2}\left(\mathtt{r}(\Sigma)^{2}+\alpha \mathtt{r}(\Sigma^{2})\right), \tag{37}\]
and define an event \(\mathcal{E}_{\rho}\) as follows:
\[\mathcal{E}_{\rho}=\left\{|\mathrm{Tr}(\Gamma\Sigma)|\leqslant\rho\ \text{and}\ \left\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\right\|_{\mathrm{F}}\leqslant\rho\right\}.\]
Then it holds that
\[\mathbb{E}_{\Gamma}\left|\exp\left\{n\psi\left(\sqrt{\frac{\lambda}{n} }\ \Sigma^{1/2}\Gamma\Sigma^{1/2}\right)-\mathtt{i}\sqrt{n\lambda}\ \mathrm{Tr}(\Sigma^{1/2}\Gamma\Sigma^{1/2})\right\}\right|\] \[=\mathbb{E}_{\Gamma}\left[\left|\exp\left\{n\psi\left(\sqrt{ \frac{\lambda}{n}}\ \Sigma^{1/2}\Gamma\Sigma^{1/2}\right)-\mathtt{i}\sqrt{n\lambda}\ \mathrm{Tr}(\Sigma^{1/2}\Gamma\Sigma^{1/2})\right\}\right| \mathtt{1}\left(\mathcal{E}_{\rho}\right)\right]\] \[\quad+\mathbb{E}_{\Gamma}\left[\left|\exp\left\{n\psi\left(\sqrt{ \frac{\lambda}{n}}\ \Sigma^{1/2}\Gamma\Sigma^{1/2}\right)-\mathtt{i}\sqrt{n\lambda}\ \mathrm{Tr}(\Sigma^{1/2}\Gamma\Sigma^{1/2})\right\}\right| \mathtt{1}\left(\mathcal{E}_{\rho}^{c}\right)\right],\]
where \(\mathcal{E}_{\rho}^{c}\) stands for the complement of \(\mathcal{E}_{\rho}\). On \(\mathcal{E}_{\rho}^{c}\), we apply the bound
\[\mathbb{E}_{\Gamma}\left[\left|\exp\left\{n\psi\left(\sqrt{\frac{ \lambda}{n}}\ \Sigma^{1/2}\Gamma\Sigma^{1/2}\right)-\mathtt{i}\sqrt{n\lambda}\ \mathrm{Tr}(\Sigma^{1/2}\Gamma\Sigma^{1/2})\right\}\right| \mathtt{1}\left(\mathcal{E}_{\rho}^{c}\right)\right]\] \[=\mathbb{E}_{\Gamma}\left[\left|\mathbb{E}_{\xi}\exp\left\{ \mathtt{i}\sqrt{\lambda}\ \mathrm{Tr}(\Gamma^{\top}\Sigma^{1/2}H\Sigma^{1/2})\right\}\right| \mathtt{1}\left(\mathcal{E}_{\rho}^{c}\right)\right]\] \[\leqslant\mathbb{E}_{\Gamma}\left[1\cdot\mathtt{1}\left(\mathcal{ E}_{\rho}^{c}\right)\right]=\mathbb{P}\left(\mathcal{E}_{\rho}^{c}\right).\]
On the other hand, on \(\mathcal{E}_{\rho}\) we use the inequality
\[\mathbb{E}_{\Gamma}\left[\left|\exp\left\{n\psi\left(\sqrt{\frac{ \lambda}{n}}\ \Sigma^{1/2}\Gamma\Sigma^{1/2}\right)-\mathtt{i}\sqrt{n\lambda}\ \mathrm{Tr}(\Sigma^{1/2}\Gamma\Sigma^{1/2})\right\}\right| \mathtt{1}\left(\mathcal{E}_{\rho}\right)\right]\] \[\leqslant\mathbb{E}_{\Gamma}\left[\exp\left\{n\ \mathrm{Re}\left[\psi\left(\sqrt{\frac{ \lambda}{n}}\ \Sigma^{1/2}\Gamma\Sigma^{1/2}\right)\right]\right\}\mathtt{1}\left( \mathcal{E}_{\rho}\right)\right].\]
Hence, we showed that
\[\mathbb{E}\exp\left\{-\frac{\lambda n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}} ^{2}}{2}\right\}\leqslant\mathbb{E}_{\Gamma}\left[\exp\left\{n\ \mathrm{Re}\left[\psi\left(\sqrt{\frac{\lambda}{n}}\ \Sigma^{1/2}\Gamma\Sigma^{1/2}\right)\right]\right\}\mathtt{1}\left( \mathcal{E}_{\rho}\right)\right]+\mathbb{P}\left(\mathcal{E}_{\rho}^{c}\right). \tag{38}\]
In what follows, we bound the terms in the right-hand side of (38) one by one, starting with the second summand.
Step 2: a bound on the remainder term. According to the definition of \(\mathcal{E}_{\rho}\), it is enough to bound the probabilities \(\mathbb{P}\left(|\mathrm{Tr}(\Gamma\Sigma)|>\rho\right)\) and \(\mathbb{P}\left(\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}>\rho\right)\). Note that \(\mathrm{Tr}(\Gamma\Sigma)\) is a centered Gaussian random variable with variance \(\|\Sigma\|_{\mathrm{F}}^{2}\). Then, due to the Hoeffding inequality, it holds that
\[\mathbb{P}\left(|\mathrm{Tr}(\Gamma\Sigma)|>\rho\right)\leqslant 2\exp\left\{- \frac{\rho^{2}}{2\|\Sigma\|_{\mathrm{F}}^{2}}\right\}. \tag{39}\]
The bound on \(\mathbb{P}\left(\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}>\rho\right)\) follows from the standard results on large deviations of Gaussian quadratic forms. Let us represent the squared Frobenius norm of \(\Sigma^{1/2}\Gamma\Sigma^{1/2}\) in the following form, using (47):
\[\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}^{2}=\mathrm{Tr}\left(\Sigma^{1 /2}\Gamma^{\top}\Sigma\Gamma\Sigma^{1/2}\right)=\mathrm{Tr}\left(\Gamma^{ \top}\Sigma\Gamma\Sigma\right)=\mathbf{vec}(\Gamma)^{\top}\left(\Sigma\otimes \Sigma\right)\mathbf{vec}(\Gamma).\]
Then, according to Laurent and Massart (2000) (see the proof of Lemma 1), for any \(\mu\in\left(0,1/(2\|\Sigma\|^{2})\right)\), it holds that
\[\mathbb{P}\left(\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}>\rho\right) =\mathbb{P}\left(\mathbf{vec}(\Gamma)^{\top}\left(\Sigma\otimes \Sigma\right)\mathbf{vec}(\Gamma)>\rho^{2}\right)\] \[\leqslant\exp\left\{-\mu\rho^{2}+\mu\mathrm{Tr}(\Sigma\otimes \Sigma)+\frac{\|\Sigma\otimes\Sigma\|_{\mathrm{F}}^{2}\ \mu^{2}}{1-2\|\Sigma\otimes\Sigma\|\,\mu}\right\}.\]
Due to the properties of the Kronecker product (see (45)), we have
\[\mathrm{Tr}(\Sigma\otimes\Sigma)=\left(\mathrm{Tr}(\Sigma)\right)^{2},\quad \|\Sigma\otimes\Sigma\|_{\mathrm{F}}=\|\Sigma\|_{\mathrm{F}}^{2}\quad\text{and} \quad\|\Sigma\otimes\Sigma\|=\|\Sigma\|^{2}.\]
Hence,
\[\mathbb{P}\left(\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}>\rho\right)\leqslant\exp\left\{-\mu\left(\rho^{2}-\left[\mathrm{Tr}(\Sigma)\right]^{2}\right)+\frac{\|\Sigma\|_{\mathrm{F}}^{4}\ \mu^{2}}{1-2\|\Sigma\|^{2}\mu}\right\}\quad\text{for all }\mu\in\left(0,1/(2\|\Sigma\|^{2})\right). \tag{40}\]
Let us take \(\mu\in\left(0,1/(2\|\Sigma\|^{2})\right)\), satisfying the condition
\[1-2\|\Sigma\|^{2}\mu=\frac{\|\Sigma\|_{\mathrm{F}}^{4}}{\|\Sigma\|_{\mathrm{F }}^{4}+\|\Sigma\|^{2}(\rho^{2}-\left[\mathrm{Tr}(\Sigma)\right]^{2})},\quad \text{that is,}\quad\mu=\frac{(\rho^{2}-\left[\mathrm{Tr}(\Sigma)\right]^{2})/ 2}{\|\Sigma\|_{\mathrm{F}}^{4}+\|\Sigma\|^{2}(\rho^{2}-\left[\mathrm{Tr}( \Sigma)\right]^{2})}.\]
Then it is straightforward to check that
\[-\mu\left(\rho^{2}-\left[\mathrm{Tr}(\Sigma)\right]^{2}\right)+\frac{\|\Sigma\|_{\mathrm{F}}^{4}\ \mu^{2}}{1-2\|\Sigma\|^{2}\mu} =-\mu\left(\rho^{2}-\left[\mathrm{Tr}(\Sigma)\right]^{2}\right)+\mu^{2}\left(\|\Sigma\|_{\mathrm{F}}^{4}+\|\Sigma\|^{2}\left(\rho^{2}-\left[\mathrm{Tr}(\Sigma)\right]^{2}\right)\right)\] \[=-\frac{(\rho^{2}-\left[\mathrm{Tr}(\Sigma)\right]^{2})^{2}/4}{\|\Sigma\|_{\mathrm{F}}^{4}+\|\Sigma\|^{2}(\rho^{2}-\left[\mathrm{Tr}(\Sigma)\right]^{2})}.\]
Substituting this equality into (40), we finally obtain that
\[\mathbb{P}\left(\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}>\rho\right) \leqslant\exp\left\{-\frac{(\rho^{2}-\left[\mathrm{Tr}(\Sigma)\right]^{2})^{2 }/4}{\|\Sigma\|_{\mathrm{F}}^{4}+\|\Sigma\|^{2}(\rho^{2}-\left[\mathrm{Tr}( \Sigma)\right]^{2})}\right\}.\]
This inequality and (39) immediately yield that
\[\mathbb{P}\left(\mathcal{E}_{\rho}^{c}\right) \leqslant\mathbb{P}\left(|\mathrm{Tr}(\Gamma\Sigma)|>\rho\right)+ \mathbb{P}\left(\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}>\rho\right)\] \[\leqslant 2\exp\left\{-\frac{\rho^{2}}{2\|\Sigma\|_{\mathrm{F}}^{2} }\right\}+\exp\left\{-\frac{(\rho^{2}-\left[\mathrm{Tr}(\Sigma)\right]^{2})^{ 2}/4}{\|\Sigma\|_{\mathrm{F}}^{4}+\|\Sigma\|^{2}(\rho^{2}-\left[\mathrm{Tr}( \Sigma)\right]^{2})}\right\}. \tag{41}\]
Step 3: Taylor's expansion. Next, we elaborate on the first term in the right-hand side of (38). For any \(U\in\mathbb{R}^{d\times d}\), whenever \(\psi(U)\) is well-defined, the gradient \(\nabla\psi(U)\) is given by
\[\langle\nabla\psi(U),V\rangle=\frac{\mathtt{i}\,\mathbb{E}\left[\boldsymbol{ \xi}^{\top}V\boldsymbol{\xi}\ e^{\mathtt{i}\boldsymbol{\xi}^{\top}U \boldsymbol{\xi}}\right]}{\mathbb{E}e^{\mathtt{i}\boldsymbol{\xi}^{\top}U \boldsymbol{\xi}}}=\mathtt{i}\,\mathbb{P}_{\mathtt{i}U}\,\boldsymbol{\xi}^{ \top}V\boldsymbol{\xi}.\]
Here we extended the notation \(\mathsf{P}_{U}f(\mathbf{\xi},U)\) (see Section 2) to complex-valued matrices. Note that the extension is formal and \(\mathsf{P}_{\mathtt{1}U}\) is not a probability measure anymore. Nevertheless, we can still compute the higher-order derivatives of \(\psi(U)\) in a similar way as in the proof of Lemma 5.2:
\[\left\langle\nabla^{2}\psi(O),V^{\otimes 2}\right\rangle =-\mathbb{E}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathbb{E}\mathbf{\xi}^{ \top}V\mathbf{\xi}\right)^{2},\] \[\left\langle\nabla^{3}\psi(O),V^{\otimes 3}\right\rangle =-\mathtt{i}\ \mathbb{E}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathbb{E}\mathbf{\xi}^{ \top}V\mathbf{\xi}\right)^{3},\] \[\left\langle\nabla^{4}\psi(U),V^{\otimes 4}\right\rangle =\mathsf{P}_{\mathtt{1}U}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathsf{ P}_{\mathtt{1}U}\mathbf{\xi}^{\top}V\mathbf{\xi}\right)^{4}-3\left(\mathsf{P}_{ \mathtt{1}U}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathsf{P}_{\mathtt{1}U}\mathbf{\xi}^{ \top}V\mathbf{\xi}\right)^{2}\right)^{2}.\]
As before, \(O\in\mathbb{R}^{d\times d}\) stands for the matrix with zero entries. Applying Taylor's formula with the Lagrange remainder term, we obtain that
\[\psi\left(\sqrt{\frac{\lambda}{n}}\ \Sigma^{1/2}\Gamma\Sigma^{1/2}\right) =\sqrt{\frac{\lambda}{n}}\left\langle\nabla\psi(O),\Sigma^{1/2} \Gamma\Sigma^{1/2}\right\rangle+\frac{\lambda}{2n}\left\langle\nabla^{2}\psi(O ),\left(\Sigma^{1/2}\Gamma\Sigma^{1/2}\right)^{\otimes 2}\right\rangle\] \[+\frac{1}{6}\left(\frac{\lambda}{n}\right)^{3/2}\left\langle \nabla^{3}\psi(O),\left(\Sigma^{1/2}\Gamma\Sigma^{1/2}\right)^{\otimes 3}\right\rangle+ \frac{\lambda^{2}}{24n^{2}}\left\langle\nabla^{4}\psi(\Theta),\left(\Sigma^{1/ 2}\Gamma\Sigma^{1/2}\right)^{\otimes 4}\right\rangle\]
for some \(\Theta\), such that
\[\|\Theta\|_{\mathrm{F}}\leqslant\sqrt{\frac{\lambda}{n}}\ \left\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\right\|_{ \mathrm{F}}\leqslant\rho\sqrt{\frac{\lambda}{n}}\quad\text{on }\mathcal{E}_{\rho}.\]
Taking into account that \(\langle\nabla\psi(O),\Sigma^{1/2}\Gamma\Sigma^{1/2}\rangle\) and \(\langle\nabla^{3}\psi(O),(\Sigma^{1/2}\Gamma\Sigma^{1/2})^{\otimes 3}\rangle\) are imaginary, we immediately obtain
\[\mathrm{Re}\left[\psi\left(\sqrt{\frac{\lambda}{n}}\ \Sigma^{1/2}\Gamma \Sigma^{1/2}\right)\right]=\frac{\lambda}{2n}\left\langle\nabla^{2}\psi(O), \left(\Sigma^{1/2}\Gamma\Sigma^{1/2}\right)^{\otimes 2}\right\rangle+\frac{ \lambda^{2}}{24n^{2}}\left\langle\nabla^{4}\psi(\Theta),\left(\Sigma^{1/2} \Gamma\Sigma^{1/2}\right)^{\otimes 4}\right\rangle.\]
Thus, our next goal is to bound the fourth derivative of \(\psi\). To do so, we use the following lemma.
**Lemma 6.1**.: _Grant Assumption 2.4 and suppose that \(|\mathbb{E}e^{\mathtt{i}\mathbf{\xi}^{\top}U\mathbf{\xi}}|\geqslant 1/\beta\) for some \(\beta>1\). Then, for any \(k\in[1,4]\), it holds that_
\[\left|\mathsf{P}_{\mathtt{1}U}\left(\mathbf{\xi}^{\top}V\mathbf{\xi}-\mathsf{P}_{ \mathtt{1}U}\mathbf{\xi}^{\top}V\mathbf{\xi}\right)^{k}\right|\leqslant 2^{k-1}\beta \left(1+\beta^{k}\right)\alpha^{k/2}\|V\|_{\mathrm{F}}^{k}\quad\text{for all }V\in\mathbb{R}^{d\times d}.\]
The proof of Lemma 6.1 is deferred to Appendix G. Note that on the event \(\mathcal{E}_{\rho}\), we have
\[\left|\mathbb{E}e^{\mathtt{i}\mathbf{\xi}^{\top}\Theta\mathbf{\xi}}-1\right| =2\left|\mathbb{E}\sin\left(\mathbf{\xi}^{\top}\Theta\mathbf{\xi}/2\right) e^{\mathtt{i}\mathbf{\xi}^{\top}\Theta\mathbf{\xi}/2}\right|\leqslant\mathbb{E}\left|\mathbf{\xi}^{ \top}\Theta\mathbf{\xi}\right|\] \[\leqslant|\mathrm{Tr}(\Theta)|+\sqrt{\alpha}\|\Theta\|_{\mathrm{ F}}\leqslant\rho\left(1+\sqrt{\alpha}\right)\sqrt{\frac{\lambda}{n}}.\]
In what follows, we will choose \(\lambda\leqslant 0.5/\|\Sigma\|^{2}\). Invoking the definition of \(\rho\) (see eq. (37)) and using the condition (9), we observe that
\[\rho^{2}\left(1+\sqrt{\alpha}\right)^{2}\frac{\lambda}{n} \leqslant\frac{3\left(1+\sqrt{\alpha}\right)^{2}\|\Sigma\|_{ \mathrm{F}}^{2}\left(\mathtt{r}(\Sigma)^{2}+\alpha\mathtt{r}(\Sigma^{2})\right) }{2\|\Sigma\|^{2}n}\] \[=\frac{3\left(1+\sqrt{\alpha}\right)^{2}\mathtt{r}(\Sigma^{2}) \left(\mathtt{r}(\Sigma)^{2}+\alpha\mathtt{r}(\Sigma^{2})\right)}{2n}\leqslant \frac{1}{64}.\]
Hence, the conditions of Lemma 6.1 are fulfilled with
\[\beta=\left(1-\rho\left(1+\sqrt{\alpha}\right)\sqrt{\frac{\lambda}{n}}\right)^{-1 }\leqslant(1-1/8)^{-1}=\frac{8}{7}<2^{1/5},\]
and consequently, on \(\mathcal{E}_{\rho}\), it holds that
\[\left\langle\nabla^{4}\psi(\Theta),\left(\Sigma^{1/2}\Gamma\Sigma ^{1/2}\right)^{\otimes 4}\right\rangle \leqslant\left(8\beta(1+\beta^{4})+12\beta^{2}\left(1+\beta^{2} \right)^{2}\right)\alpha^{2}\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}^{4}\] \[<111\alpha^{2}\rho^{2}\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{ \mathrm{F}}^{2}.\]
This implies that
\[\mathrm{Re}\left[\psi\left(\sqrt{\frac{\lambda}{n}}\;\Sigma^{1/2 }\Gamma\Sigma^{1/2}\right)\right] =\frac{\lambda}{2n}\left\langle\nabla^{2}\psi(O),\left(\Sigma^ {1/2}\Gamma\Sigma^{1/2}\right)^{\otimes 2}\right\rangle+\frac{\lambda^{2}}{24n^{2}} \left\langle\nabla^{4}\psi(\Theta),\left(\Sigma^{1/2}\Gamma\Sigma^{1/2} \right)^{\otimes 4}\right\rangle\] \[\leqslant-\frac{\lambda}{2n}\mathbb{E}_{\mathbf{\xi}}\left( \mathbf{\xi}^{\top}\Sigma^{1/2}\Gamma\Sigma^{1/2}\mathbf{\xi}-\mathrm{Tr}( \Gamma\Sigma)\right)^{2}+\frac{111\alpha^{2}\lambda^{2}}{24n^{2}}\left\|\Sigma^ {1/2}\Gamma\Sigma^{1/2}\right\|_{\mathrm{F}}^{4}\] \[\leqslant-\frac{\lambda}{2n}\mathbb{E}_{\mathbf{X}}\left( \mathbf{X}^{\top}\Gamma\mathbf{X}-\mathrm{Tr}(\Gamma\Sigma)\right)^{2}+\frac{5 \alpha^{2}\lambda^{2}\rho^{2}}{n^{2}}\left\|\Sigma^{1/2}\Gamma\Sigma^{1/2} \right\|_{\mathrm{F}}^{2}\]
on the same event. Hence, it holds that
\[\exp\left\{n\;\mathrm{Re}\left[\psi\left(\sqrt{\frac{\lambda}{n}} \;\Sigma^{1/2}\Gamma\Sigma^{1/2}\right)\right]\right\}\mathbbm{1}\left(\| \Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}\leqslant\rho\right)\] \[\leqslant\exp\left\{-\frac{\lambda}{2}\;\mathbb{E}_{\mathbf{X}} \left(\mathbf{X}^{\top}\Gamma\mathbf{X}-\mathrm{Tr}(\Gamma\Sigma)\right)^{2}+ \frac{5\alpha^{2}\lambda^{2}\rho^{2}}{n}\left\|\Sigma^{1/2}\Gamma\Sigma^{1/2} \right\|_{\mathrm{F}}^{2}\right\}.\]
The logarithm of the right-hand side is quadratic in \(\Gamma\), and thus, we can easily bound its exponential moment. The precise statement is given in the following lemma.
**Lemma 6.2**.: _Assume that_
\[\frac{10\alpha^{2}\lambda^{2}\rho^{2}\|\Sigma\|^{2}}{n}\leqslant\frac{1}{2}.\]
_Then it holds that_
\[\mathbb{E}_{\Gamma}\exp\left\{-\frac{\lambda}{2}\;\mathbb{E}_{ \mathbf{X}}\left(\mathbf{X}^{\top}\Gamma\mathbf{X}-\mathrm{Tr}(\Gamma\Sigma) \right)^{2}+\frac{5\alpha^{2}\lambda^{2}\rho^{2}}{n}\left\|\Sigma^{1/2}\Gamma \Sigma^{1/2}\right\|_{\mathrm{F}}^{2}\right\}\] \[\leqslant\exp\left\{-\frac{\lambda\mathfrak{m}}{2}+2\lambda^{2} \mathsf{v}^{2}+\frac{5\alpha^{2}\lambda^{2}\rho^{2}}{n}\left(\mathrm{Tr}( \Sigma)\right)^{2}+\frac{200\alpha^{4}\lambda^{4}\rho^{4}\|\Sigma\|_{\mathrm{F }}^{4}}{n^{2}}\right\}.\]
We present the proof of Lemma 6.2 in Appendix H. Hence, at the third step, we have proved that
\[\mathbb{E}_{\Gamma}\exp\left\{n\,\mathrm{Re}\left[\psi\left(\sqrt{\frac{\lambda}{n}}\;\Sigma^{1/2}\Gamma\Sigma^{1/2}\right)\right]\right\}\mathbbm{1}\left(\|\Sigma^{1/2}\Gamma\Sigma^{1/2}\|_{\mathrm{F}}\leqslant\rho\right)\leqslant\exp\left\{-\frac{\lambda\mathfrak{m}}{2}+2\lambda^{2}\mathsf{v}^{2}+\frac{5\alpha^{2}\lambda^{2}\rho^{2}}{n}\left(\mathrm{Tr}(\Sigma)\right)^{2}+\frac{200\alpha^{4}\lambda^{4}\rho^{4}\|\Sigma\|_{\mathrm{F}}^{4}}{n^{2}}\right\}. \tag{42}\]
**Step 4: final bound.** Summing up the inequalities (36), (38), (41) and (42), we obtain that the following bound holds for any positive \(t\) and \(\lambda\):
\[\mathbb{P}\left(n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}-n \,\mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\leqslant-t\right) \tag{43}\] \[\leqslant\exp\left\{-\frac{\lambda t}{2}+2\lambda^{2}\mathbbm{v} ^{2}+\frac{5\alpha^{2}\lambda^{2}\rho^{2}}{n}\left(\mathrm{Tr}(\Sigma)\right) ^{2}+\frac{200\alpha^{4}\lambda^{4}\rho^{4}\|\Sigma\|_{\mathrm{F}}^{4}}{n^{2}}\right\}\] \[\quad+e^{-\lambda t/2+\lambda\mathbbm{m}/2}\left(2\exp\left\{- \frac{\rho^{2}}{2\|\Sigma\|_{\mathrm{F}}^{2}}\right\}+\exp\left\{-\frac{(\rho ^{2}-[\mathrm{Tr}(\Sigma)]^{2})^{2}/4}{\|\Sigma\|_{\mathrm{F}}^{4}+\|\Sigma\| ^{2}(\rho^{2}-[\mathrm{Tr}(\Sigma)]^{2})}\right\}\right).\]
In what follows, we will take \(\lambda\) from \([0,0.5/\|\Sigma\|^{2}]\). In view of Lemma 5.5, this implies that
\[\frac{\lambda\mathbbm{m}}{2}\leqslant\frac{1}{4}\left(\mathtt{r}(\Sigma)^{2}+ (\alpha-1)\mathtt{r}(\Sigma^{2})\right).\]
The next lemma helps us to simplify the inequality (43) substantially.
**Lemma 6.3**.: _Let \(\rho>0\) be as defined in (37). Then it holds that_
\[\frac{2\rho^{2}}{\|\Sigma\|_{\mathrm{F}}^{2}}\wedge\frac{(\rho^{2}-[\mathrm{ Tr}(\Sigma)]^{2})^{2}}{\|\Sigma\|_{\mathrm{F}}^{4}+\|\Sigma\|^{2}(\rho^{2}-[ \mathrm{Tr}(\Sigma)]^{2})}\geqslant\mathtt{r}(\Sigma)^{2}+(\alpha-1)\mathtt{ r}(\Sigma^{2}).\]
We defer the proof of Lemma 6.3 to Appendix I. With this lemma at hand, we can conclude that
\[\mathbb{P}\left(n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}-n \,\mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\leqslant-t\right)\] \[\leqslant e^{-\lambda t/2}\left(3+\exp\left\{2\lambda^{2}\mathbbm{ v}^{2}+\frac{5\alpha^{2}\lambda^{2}\rho^{2}}{n}\left(\mathrm{Tr}(\Sigma) \right)^{2}+\frac{200\alpha^{4}\lambda^{4}\rho^{4}\|\Sigma\|_{\mathrm{F}}^{4}} {n^{2}}\right\}\right).\]
This expression can be simplified even further, because
\[\frac{10\alpha^{2}\lambda^{2}\rho^{2}\|\Sigma\|_{\mathrm{F}}^{2}} {n}\leqslant\frac{10\alpha^{2}\cdot 0.25\|\Sigma\|^{-4}\cdot 3\|\Sigma\|_{ \mathrm{F}}^{2}\left(\mathtt{r}(\Sigma)^{2}+\alpha\mathtt{r}(\Sigma^{2}) \right)\cdot\|\Sigma\|_{\mathrm{F}}^{2}}{n}\] \[=\frac{7\alpha^{2}\mathtt{r}(\Sigma^{2})^{2}\left(\mathtt{r}( \Sigma)^{2}+\alpha\mathtt{r}(\Sigma^{2})\right)}{2n}\leqslant\frac{1}{2}< \sqrt{\frac{\log 2}{2}}\]
due to the condition of the theorem. Hence,
\[\mathbb{P}\left(n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}-n \,\mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\leqslant-t\right) \leqslant e^{-\lambda t/2}\left(3+2\exp\left\{2\lambda^{2} \mathbbm{v}^{2}+\frac{5\alpha^{2}\lambda^{2}\rho^{2}}{n}\left(\mathrm{Tr}( \Sigma)\right)^{2}\right\}\right)\] \[\leqslant 5\exp\left\{-\frac{\lambda t}{2}+2\lambda^{2}\mathbbm{v}^{2} +\frac{5\alpha^{2}\lambda^{2}\rho^{2}}{n}\left(\mathrm{Tr}(\Sigma)\right)^{2} \right\}.\]
It remains to minimize the expression
\[-\frac{\lambda t}{2}+2\lambda^{2}\mathbbm{v}^{2}+\frac{5\alpha^{2}\lambda^{2 }\rho^{2}}{n}\left(\mathrm{Tr}(\Sigma)\right)^{2}\quad\text{over }\lambda\in[0,0.5/\|\Sigma\|^{2}].\]
For this purpose, we take
\[\lambda=\frac{t}{4\mathbbm{v}^{2}+10\alpha^{2}\rho^{2}\mathrm{Tr}(\Sigma)^{2} /n}\wedge\frac{1}{2\|\Sigma\|^{2}}.\]
It is straightforward to check that, with such value of \(\lambda\), we have
\[-\frac{\lambda t}{2}+2\lambda^{2}\mathsf{v}^{2}+\frac{5\alpha^{2}\lambda^{2} \rho^{2}}{n}\left(\mathrm{Tr}(\Sigma)\right)^{2}=-\min\left\{\frac{t^{2}}{32 \mathsf{v}^{2}+40\alpha^{2}\rho^{2}\mathrm{Tr}(\Sigma)^{2}/n},\frac{t}{8\| \Sigma\|^{2}}\right\}.\]
Thus, we proved that
\[\mathbb{P}\left(n\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}-n\,\mathbb{E}\| \widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}\leqslant-t\right)\leqslant 5\exp \left\{-\left(\frac{t^{2}}{32\mathsf{v}^{2}+40\alpha^{2}\rho^{2}\mathrm{Tr}( \Sigma)^{2}/n}\wedge\frac{t}{8\|\Sigma\|^{2}}\right)\right\},\]
as we announced in the beginning of the proof. In other words, for any \(\delta>0\), with probability at least \((1-\delta)\), it holds that
\[n\,\mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}-n\|\widehat{\Sigma} -\Sigma\|_{\mathrm{F}}^{2}<\left(8\|\Sigma\|^{2}\log(5/\delta)\right)\vee \sqrt{\left(32\mathsf{v}^{2}+\frac{40\alpha^{2}\rho^{2}\mathrm{Tr}(\Sigma)^{2 }}{n}\right)\log(5/\delta)}.\]
Taking into account that
\[\mathsf{v}^{2}\leqslant\alpha\left(\mathrm{Tr}(\Sigma^{2})\right)^{2}+( \alpha^{2}-\alpha)\,\,\mathrm{Tr}(\Sigma^{4})\]
due to Lemma 5.6 and substituting \(\rho\) according to (37), we get the desired bound:
\[\mathbb{E}\|\widehat{\Sigma}-\Sigma\|_{\mathrm{F}}^{2}-\|\widehat{\Sigma}- \Sigma\|_{\mathrm{F}}^{2}<\frac{4\|\Sigma\|^{2}}{n}\left(2\log(5/\delta)\lor \sqrt{2\mathfrak{R}(\Sigma)\log(5/\delta)}\right),\]
where
\[\mathfrak{R}(\Sigma)=\alpha\mathsf{r}(\Sigma^{2})^{2}+\alpha^{2}\mathsf{r}( \Sigma^{4})+\frac{15\alpha^{2}\mathsf{r}(\Sigma)^{2}\mathsf{r}(\Sigma^{2}) \left(\mathsf{r}(\Sigma)^{2}+\alpha\mathsf{r}(\Sigma^{2})\right)}{4n}.\]
|
2310.19731 | ViR: Towards Efficient Vision Retention Backbones | Vision Transformers (ViTs) have attracted a lot of popularity in recent
years, due to their exceptional capabilities in modeling long-range spatial
dependencies and scalability for large scale training. Although the training
parallelism of self-attention mechanism plays an important role in retaining
great performance, its quadratic complexity baffles the application of ViTs in
many scenarios which demand fast inference. This effect is even more pronounced
in applications in which autoregressive modeling of input features is required.
In Natural Language Processing (NLP), a new stream of efforts has proposed
parallelizable models with recurrent formulation that allows for efficient
inference in generative applications. Inspired by this trend, we propose a new
class of computer vision models, dubbed Vision Retention Networks (ViR), with
dual parallel and recurrent formulations, which strike an optimal balance
between fast inference and parallel training with competitive performance. In
particular, ViR scales favorably for image throughput and memory consumption in
tasks that require higher-resolution images due to its flexible formulation in
processing large sequence lengths. The ViR is the first attempt to realize dual
parallel and recurrent equivalency in a general vision backbone for recognition
tasks. We have validated the effectiveness of ViR through extensive experiments
with different dataset sizes and various image resolutions and achieved
competitive performance. Code: https://github.com/NVlabs/ViR | Ali Hatamizadeh, Michael Ranzinger, Shiyi Lan, Jose M. Alvarez, Sanja Fidler, Jan Kautz | 2023-10-30T16:55:50Z | http://arxiv.org/abs/2310.19731v2 | # ViR: Vision Retention Networks
###### Abstract
Vision Transformers (ViTs) have attracted a lot of popularity in recent years, due to their exceptional capabilities in modeling long-range spatial dependencies and scalability for large scale training. Although the training parallelism of self-attention mechanism plays an important role in retaining great performance, its quadratic complexity baffles the application of ViTs in many scenarios which demand fast inference. This effect is even more pronounced in applications in which autoregressive modeling of input features is required. In Natural Language Processing (NLP), a new stream of efforts have proposed parallelizable models with recurrent formulation that allows for efficient inference in generative applications. Inspired by this trend, we propose a new class of computer vision models, dubbed Vision Retention Networks (ViR), with dual parallel and recurrent formulations, which strike an optimal balance between fast inference and parallel training with competitive performance. In particular, ViR scales favorably for image throughput and memory consumption in tasks that require higher-resolution images due to its flexible formulation in processing large sequence lengths. The ViR is the first attempt to realize dual parallel and recurrent equivalency in a general vision backbone for recognition tasks. We have validated the effectiveness of ViR through extensive experiments with different dataset sizes and various image resolutions and achieved competitive performance. Our code and pretrained models will be made publicly available.
## 1 Introduction
In recent years, Transformers (Vaswani et al., 2017) and their variants (Devlin et al., 2019; Dosovitskiy et al., 2020) have shown competitive performance across multiple domains such as Natural Language Processing (NLP) and computer vision. The main building block of Transformers is self-attention which allows for cross interaction among all input sequence tokens with each other. This scheme is effective in capturing both short and long-range spatial dependencies but also imposes time and space quadratic complexity in terms of the input sequence length. The training parallelism of Transformers allows for competitive performance. However, the inference is slow and expensive due to the computational complexity.
Recently, Retentive Network (RetNet) (Sun et al., 2023) and Receptance Weighted Key Value (RWKV) (Peng et al., 2023) independently proposed novel model architectures that include the training parallelism of transformers and fast recurrent inference. The RWKV model uses a linear channel-wise attention to relax the pairwise dot product bottleneck of vanilla self-attention. RetNet, on the other hand, proposes the concept of retention with dual-form parallel and recurrent representations. It is noteworthy that both RWKV and RetNet models are primarily proposed for autoregressive text generation.
Although Convolutional Neural Networks (CNNs) have been commonly used as the de-facto architecture for various applications, the introduction of Vision Transformers (Dosovitskiy et al., 2020) (ViT) demonstrated the possibility of achieving State-of-the-Art (SOTA) performance with a similar model to the Transformers for NLP applications. As opposed to the autoregressive formulation in which tokens from left to right are processed at each step to predict the next value, ViT uses the entire token representations.
In the case of long token sequences (_e.g._ high-resolution images), processing the entire tokens may create a bottleneck due to the quadratic complexity of the self-attention layers. As a result, despite the competitive performance of ViT models, this limits their usage for applications that require real-time processing of high-resolution images (_e.g._ autonomous vehicles).
In this work, inspired by the success of RetNet, we explore the possibility of leveraging the duality of parallel and recurrent formulations to enable fast and memory efficient deployment while maintaining the training parallelism with a competitive performance.
In particular, the combination of parallel and recurrent modes, referred to as chunk-wise formulation, enables optimal combination of both modes based on specific run-time hyper-parameters (_e.g._ batch size) and hardware requirements. Due to this formulation, the memory consumption in ViR model can then be decoupled from the sequence length, hence making it easier to process high-resolution images in an efficient manner.
In order to improve the efficiency, we have redesigned the retention mechanism by removing the gated function. In addition, the proposed retention formulation is also generic and does not rely on any specific relative position embedding formulations (_e.g._ xPos (Sun et al., 2022)) as in RetNet. Our proposed ViR is the first attempt beyond generative applications for leveraging autoregressive vision-friendly retentive networks for recognition tasks (_e.g._ image classification).
The summary of our specific contributions in this work is as follows:
* We introduce ViR, which is the first attempt in leveraging autoregressive retentive network with dual parallel and recurrent formulations for vision recognition tasks. We demonstrate that ViR can scale favorably to larger image resolutions in terms of image throughput and memory consumption.
* We propose a general vision backbone with a redesigned retention mechanism. The new retention mechanism is free of any gating function and does not rely on any specific relative position embedding formulations.
* We have validated the effectiveness of ViR by pretraining and finetuning on both ImageNet-21K and ImageNet-1K datasets for different models sizes to demonstrate the scalability of our proposed model as a general computer vision backbone.
## 2 Related Work
**Vision Transformers.** ViT (Dosovitskiy et al., 2020) introduced a new paradigm to move away from the convolutional inductive biases towards a simpler model with minimal priors. The effectiveness of self-attention in modeling long-range spatial dependencies and scalability of ViTs make them a great candidate as a backbone model for various vision tasks. However, the quadratic complexity of self-attention creates a bottleneck for fast deployment, especially for high-resolution images with longer sequence lengths. Swin Transformers (Liu et al., 2021) proposed to compute self-attention in smaller partitioned windows to address this problem.
Although this scheme improves the efficiency, the limited cross-region interactions across local windows may impact the performance. Independently, Pyramid Vision Transformer (PVT) (Wang et al., 2021) introduced a hierarchical architecture, similar to Swin Transformer, that employ a patch embedding layer at the beginning of each stage and reduces the spatial dimension to improve the computational efficiency.
On the other hand, Twins Transformer (Chu et al., 2021) introduced a spatially separable self-attention mechanism that consisted of global sub-sampling and locally-grouped modules that can model both short and long-range interactions in an efficient manner. Several follow up efforts proposed to address this issue by introducing global (Hatamizadeh et al., 2023) or carrier (Hatamizadeh et al., 2023) tokens and multi-axis grid attention (Tu et al., 2022).
In addition to these works, a stream of hybrid models (_i.e._ CNN and ViT) (Graham et al., 2021; Wu et al., 2021; Yuan et al., 2021) were proposed to improve the data efficiency and achieve competitive performance without considerably larger model sizes. Convolutional vision Transformer (CvT) (Wu et al., 2021) proposes the concept of convolutional token embedding layer which is integrated with a Transformer block in a hierarchical architecture to improve the data efficiency and performance of the ViT models. In addition, Tokens-To-Token Vision Transformer (T2T-ViT) (Yuan et al., 2021)
introduced a tailored transformation layer for aggregating nearby tokens which can then be used as image priors for leveraging spatial correlations.
Cross-covariance Image Transformer (XCiT) (Ali et al., 2021) proposed a transposed self-attention block for capturing the token interactions in feature channels space. In addition, by conditioning the position encoding on localized patch tokens, Conditional Position encoding Vision Transformer (CPVT) (Chu et al., 2021) achieved better performance on different recognition tasks such as image classification and object detection. Our proposed contributions in this work are orthogonal to these recent advances as ViR can benefit from a hybrid architecture as well as a window-based retention. Please see Sec. 5.3 for discussion on the effect of hybrid architectures on the performance of ViR models.
**Autoregressive Models.** Deep autoregressive models Greff et al. (2016); Van Den Oord et al. (2016); Van den Oord et al. (2016); Chen et al. (2018); Radford et al. (2018) have primarily been used for generative applications and achieved great success in this domain. Most notably, PixelCNN (Van den Oord et al., 2016) and PixelRNN (Van Den Oord et al., 2016) demonstrated that sequential pixel-by-pixel prediction can be effective in learning the explicit probability distribution for both discrete and continuous data while having better training stability compared to Generative Adversarial Networks (GANs) (Goodfellow et al., 2014). With the emergence of Transformers (Vaswani et al., 2017), several efforts (Parmar et al., 2018; Chen et al., 2020; Cao et al., 2021; Chang et al., 2022) demonstrated the capability of autoregressive modeling at scale. However, the sequential nature of autoregressive decoding, which requires access to previously generated tokens for future predictions, hinders the efficiency of such models.
**Self-attention Alternatives.** To address the quadratic computation complexity of self-attention, many efforts have proposed various approaches such as approximation of the \(\mathrm{softmax}\) activation function (Joulin et al., 2017; Gao et al., 2020), linear attention by using other kernels (Wang et al., 2020; Katharopoulos et al., 2020) to estimate the attention scores or computing the attention in the channel feature space (Ali et al., 2021). However, the improved efficiency negatively impacts the performance of the model. Other efforts (Zhai et al., 2021; Gu et al., 2021) have also proposed to entirely replace the self-attention with other mechanisms.
In particular, recently in NLP, RWKV (Peng et al., 2023) and RetNet (Sun et al., 2023) proposed to redefine the Transformers to leverage the duality of parallel and recurrent formulation for training and inference. RWKV follows an attention-free formulation (Zhai et al., 2021) but employs an exponential decay to enable the recurrent formulation. RetNet proposes to use multi-scale gated retention to maintain the expressivity of the contextual information and achieve competitive performance. Although our work is inspired by RetNet, it is aimed for computer vision, in particular recognition, and has a tailored retention mechanism and architecture redesign for optimal performance.
## 3 Methodology
### Retention Mechanism
In this section, we discuss the retention mechanism and its different formulations (Sun et al., 2023). Consider an input sequence \(\mathbf{X}\in\mathbb{R}^{|X|\times D}\) that will be encoded in an autoregressive manner. Given the query (\(\mathbf{q_{n}}\)), key (\(\mathbf{k_{n}}\)) and value (\(\mathbf{v_{n}}\)) in state \(\mathbf{s_{n}}\), this sequence-to-sequence mapping can be written as
\[\mathbf{s_{n}}=\alpha\mathbf{s_{n-1}}+\mathbf{k_{n}}^{\top}\mathbf{v_{n}}, \tag{1}\] \[\mathrm{Ret}(\mathbf{X_{n}})=\mathbf{q_{n}}\mathbf{s_{n}},\]
where \(\mathrm{Ret}\) and \(\alpha\) denote retention and decay mask, respectively. In essence, \(\mathbf{s_{n}}\) conveniently maintains the previous internal states. As shown in (Sun et al., 2023), retention can also be defined in a parallel formulation
\[\mathrm{Ret}(\mathbf{X})=(\mathbf{q}\mathbf{k}^{\top}\odot\mathbf{M})\mathbf{ v}, \tag{2}\]
where \(\mathrm{M}\) denotes a mask with a decay factor \(\alpha\), as in
\[\mathbf{M_{ij}}=\begin{cases}\alpha^{i-j},&i\geqslant j\\ 0,&i<j\end{cases} \tag{3}\]
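The equivalence of the recurrent form (1) and the parallel form (2)-(3) can be checked directly on toy data. The sketch below is an illustrative single-head NumPy example with made-up sizes and decay (it is not the authors' implementation): it evaluates retention both ways and confirms the outputs coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 8, 16                       # sequence length and head dimension (illustrative)
alpha = 0.9                        # decay factor (illustrative)
q, k, v = (rng.standard_normal((N, D)) for _ in range(3))

# Parallel mode: Ret(X) = (q k^T (.) M) v with M_ij = alpha^(i-j) for i >= j, 0 otherwise
i, j = np.indices((N, N))
M = np.where(i >= j, alpha ** (i - j), 0.0)
ret_parallel = (q @ k.T * M) @ v

# Recurrent mode: s_n = alpha * s_{n-1} + k_n^T v_n,  Ret(X_n) = q_n s_n
s = np.zeros((D, D))
ret_recurrent = np.zeros((N, D))
for n in range(N):
    s = alpha * s + np.outer(k[n], v[n])
    ret_recurrent[n] = q[n] @ s

assert np.allclose(ret_parallel, ret_recurrent)
```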
This dual representation of the retention in parallel and recurrent modes enables many desired properties such as training parallelism and fast inference. For longer sequences the recurrent mode can become inefficient. As a result, a hybrid approach, referred to as chunkwise, which combines the recurrent and parallel formulations, is desired. Specifically, the input \(\mathbf{X}\) is split into smaller sequences with chunk size \(C\), in which \(\mathbf{x}_{[m]}=[\mathbf{x}_{(m-1)C+1},\cdots,\mathbf{x}_{mC}]\) represents the \(m\)-th chunk. The chunkwise query, key and values can be defined as
\[\mathbf{q}_{[m]}=\mathbf{q}_{Cm:C(m+1)},\quad\mathbf{k}_{[m]}=\mathbf{k}_{Cm: C(m+1)},\quad\mathbf{v}_{[m]}=\mathbf{v}_{Cm:C(m+1)}, \tag{4}\]
The chunkwise retention formulation is as follows
\[\begin{split}\mathbf{R}_{m}&=\mathbf{k}_{[m]}^{\top}(\mathbf{v}_{[m]}\odot\zeta)+\alpha^{C}\mathbf{R}_{m-1},\quad\zeta_{i}=\alpha^{C-i-1}\\ \mathrm{Ret}(\mathbf{X}_{[m]})&=(\mathbf{q}_{[m]}\mathbf{k}_{[m]}^{\top}\odot\mathbf{M})\mathbf{v}_{[m]}+(\mathbf{q}_{[m]}\mathbf{R}_{m-1})\odot\xi,\quad\xi_{i}=\alpha^{i+1}\end{split} \tag{5}\]
where \(i\) is the (zero-based) position index within a chunk.
The underlying motivation of the chunkwise formulation is to employ the parallel mode within each chunk, while processing cross-chunk representations in the recurrent mode. For high resolution images with long sequences, the chunkwise formulation allows for faster processing of tokens and decouples the memory footprint from the sequence length. In Sec. 5.2, we demonstrate how ViRs compare more favorably to ViTs due to the chunkwise formulation for efficient processing of longer sequences.
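Continuing the toy example above (and therefore assuming the variables q, k, v, alpha, N, D and ret_parallel from that sketch), the chunkwise mode can be written as a loop over chunks that applies the parallel form inside each chunk and carries a recurrent cross-chunk state, with the decay factors of Eq. (5). This is an illustrative sketch only, with a chunk size chosen to divide the sequence length.

```python
C = 4                                             # chunk size (illustrative; divides N)
ic, jc = np.indices((C, C))
M_c = np.where(ic >= jc, alpha ** (ic - jc), 0.0) # within-chunk decay mask

R = np.zeros((D, D))                              # carried cross-chunk state
ret_chunkwise = np.zeros((N, D))
for m in range(N // C):
    sl = slice(m * C, (m + 1) * C)
    q_m, k_m, v_m = q[sl], k[sl], v[sl]
    inner = (q_m @ k_m.T * M_c) @ v_m             # parallel part within the chunk
    xi = alpha ** (np.arange(C) + 1)              # xi_i = alpha^(i+1): decay of the carried state
    ret_chunkwise[sl] = inner + (q_m @ R) * xi[:, None]
    zeta = alpha ** (C - np.arange(C) - 1)        # zeta_i = alpha^(C-i-1)
    R = k_m.T @ (v_m * zeta[:, None]) + (alpha ** C) * R

assert np.allclose(ret_chunkwise, ret_parallel)   # chunkwise reproduces the parallel output
```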
### ViR Model
In the following, we discuss the components of ViR in more details. Fig. 1 illustrates an overview of our proposed model. Given an input image \(\mathbf{X}\in\mathbb{R}^{H\times W\times C}\) with height \(H\) and width \(W\), it is partitioned into patches and flattened into a sequence of tokens. This is similar to the tokenization scheme which was previously proposed by ViT (Dosovitskiy et al., 2020). The tokenized patches are then projected into a patch embedding \(Z=[\mathbf{z}_{1},\cdots,\mathbf{z}_{[z]}]\in\mathbb{R}^{[z]\times D}\) with dimension \(D\). Different from ViT, we first add the position embedding to the patch embedding and then append a [class] token (\(\mathbf{Z}_{n}^{0}=\mathbf{X}_{\text{class}}\)).
The output of the ViR encoder with \(L\) layers (\(\mathbf{Z}_{L}^{n}\)) is used in a classification Multi-Layer Perceptron (MLP) head during both pre-training and finetuning. Due to the autoregressive nature of the ViR model, the position of the [class] token plays an important role: appending it to the end of the embedding sequence lets it act as a summary of all the previous tokens.
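A minimal sketch of this tokenization step is given below; it is illustrative PyTorch with hypothetical names and default sizes, not the released ViR code. Patches are linearly projected, the position embedding is added first, and the [class] token is appended at the end of the sequence instead of being prepended as in ViT.

```python
import torch
import torch.nn as nn

class ViRTokenizer(nn.Module):
    """Patch embedding; position embedding added before appending the [class] token."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_emb = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, x):                                  # x: (B, C, H, W)
        z = self.proj(x).flatten(2).transpose(1, 2)        # (B, N, dim) patch embeddings
        z = z + self.pos_emb                               # position embedding added first
        cls = self.cls_token.expand(z.shape[0], -1, -1)
        return torch.cat([z, cls], dim=1)                  # [class] token appended at the end

tokens = ViRTokenizer()(torch.randn(2, 3, 224, 224))       # -> (2, 197, 768)
```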
In lieu of self-attention, we use retention to enforce a recurrent formulation via masking. However, our formulation does not depend on gated retention or specific relative position embeddings (_e.g._ xPos (Sun et al., 2022) or RoPE (Su et al., 2021)) and achieves numerical equivalency between parallel, recurrent and hybrid (_i.e._ mixture of local recurrent and global parallel) formulations. Specifically, the parallel retention formulation solely depends on the query \(\mathbf{q}\), key \(\mathbf{k}\), value \(\mathbf{v}\) and a decay mask \(\mathbf{M}\), and is defined according to
\[\mathbf{q},\mathbf{k},\mathbf{v}=\mathbf{z}\mathbf{A}_{qkv} \tag{6}\]
\[\mathrm{Ret}(\mathbf{z})=(\frac{\mathbf{q}\mathbf{k}^{\top}}{\sqrt{D_{h}}} \odot\mathbf{M})\mathbf{v} \tag{7}\]
where \(\mathrm{Ret}\) represents retention and \(D_{h}\) is a scaling factor to balance the compute and parameter counts. Note that the retention formulation is free of the softmax activation function which is commonly used in self-attention to improve performance and maintain training stability at the cost of reduced efficiency. In addition, the original retention formulation, as proposed in RetNet (Sun et al., 2023), increases the number of parameters due to the addition of the learnable gated function, and as a result decreases the image throughput under the same network layout.
The retention (\(\mathrm{Ret}\)) is further extended to Multi-Head Retention (MHR). The retention is computed across each head with a constant decay factor and normalized with LayerNorm (Ba et al., 2016) (LN) according to
\[\mathbf{Y}=\mathrm{LN}([\mathrm{Ret}_{1}(\mathbf{z});\mathrm{Ret}_{2}( \mathbf{z});\cdots\mathrm{Ret}_{k}(\mathbf{z})]) \tag{8}\]
A \(\mathrm{GELU}\) activation function is then employed on the concatenated outputs before projecting them with a linear layer
\[\mathrm{MHR}(\mathbf{z})=\mathrm{GELU}(\mathbf{Y})\mathbf{A}_{mhr} \tag{9}\]
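Taken together, Eqs. (6)-(9) describe a drop-in replacement for multi-head self-attention. The sketch below is an illustrative PyTorch module (with hypothetical names and assumed placeholder decay values per head; it is not the authors' released code): each head applies parallel retention with its own causal decay mask, the concatenated heads are normalized with LayerNorm, and a GELU is applied before the output projection.

```python
import torch
import torch.nn as nn

class MultiHeadRetention(nn.Module):
    def __init__(self, dim, num_heads, decays=None):
        super().__init__()
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        # one fixed decay per head; these particular values are placeholders, not the paper's
        self.register_buffer("decays", torch.as_tensor(
            decays if decays is not None else [1 - 2.0 ** (-5 - h) for h in range(num_heads)]))
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)     # plays the role of A_qkv in Eq. (6)
        self.norm = nn.LayerNorm(dim)                      # LN in Eq. (8)
        self.act = nn.GELU()                               # GELU in Eq. (9)
        self.proj = nn.Linear(dim, dim)                    # plays the role of A_mhr in Eq. (9)

    def forward(self, z):                                  # z: (B, N, dim)
        B, N, _ = z.shape
        q, k, v = self.qkv(z).chunk(3, dim=-1)
        q, k, v = (t.view(B, N, self.num_heads, self.head_dim).transpose(1, 2) for t in (q, k, v))
        idx = torch.arange(N, device=z.device)
        i, j = idx.view(-1, 1), idx.view(1, -1)
        dist = (i - j).clamp(min=0).to(z.dtype)
        mask = (self.decays.view(-1, 1, 1) ** dist) * (i >= j)              # per-head causal decay
        ret = (q @ k.transpose(-2, -1) / self.head_dim ** 0.5 * mask) @ v   # Eq. (7), per head
        y = self.norm(ret.transpose(1, 2).reshape(B, N, -1))                # concat heads + LN, Eq. (8)
        return self.proj(self.act(y))                                       # Eq. (9)
```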
We use alternating MHR and MLP blocks with LayerNorm (LN) and residual connections as the building blocks of the encoder according to
\[\begin{split}\mathbf{Z^{\prime}}^{l}&=\mathrm{MHR}(\mathrm{LN}(\mathbf{Z}^{l-1}))+\mathbf{Z}^{l-1}\\ \mathbf{Z}^{l}&=\mathrm{MLP}(\mathrm{LN}(\mathbf{Z^{\prime}}^{l}))+\mathbf{Z^{\prime}}^{l}\end{split} \tag{10}\]
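A corresponding encoder block, following Eq. (10), could then look like the sketch below (illustrative PyTorch, reusing the imports and the MultiHeadRetention module sketched above, with an assumed MLP expansion ratio of 4).

```python
class ViRBlock(nn.Module):
    def __init__(self, dim, num_heads, mlp_ratio=4):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mhr = MultiHeadRetention(dim, num_heads)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim), nn.GELU(), nn.Linear(mlp_ratio * dim, dim))

    def forward(self, z):
        z = self.mhr(self.norm1(z)) + z      # Z'^l = MHR(LN(Z^{l-1})) + Z^{l-1}
        return self.mlp(self.norm2(z)) + z   # Z^l  = MLP(LN(Z'^l))  + Z'^l
```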
## 4 Experiments
### Setup
We trained all ViR model variants on ImageNet-1K dataset (Deng et al., 2009) except for ViR-L/14. This model was first pre-trained on ImageNet-21K dataset on \(224\times 224\) resolution. The pretraining was conducted for 90 epochs with a global batch size of \(4096\) and an initial learning rate of \(1e^{-3}\) with a cosine decay learning rate scheduler. The model was subsequently finetuned on both \(224\times 224\) and \(448\times 448\) resolutions with a learning rate of \(5e^{-5}\). In addition, the models on ImageNet-1K were trained for 600 epochs with a learning rate of \(3e^{-3}\), weight decay of \(5e^{-2}\) and global batch size of \(4096\).
We used moderate data augmentation techniques such as mixup and cutmix. For Hybrid ViR models, we used a 4-stage hierarchical architecture in which the first 2 stages comprise residual CNN-based blocks, while the remaining stages contain ViR-based blocks. In between each stage, the resolution is decreased by a factor of two with strided CNN layers.
### Image Classification
We present image classification benchmarks for all models in Table 1. The ViR models demonstrate competitive performance across different model variants. Specifically, ViR variants outperform ViT counterparts by considerable margins across different models, validating the effectiveness of our proposed approach. The ViR-L/14 model also achieves competitive performance when pretrained and finetuned on ImageNet-21K and ImageNet-1K datasets, respectively.
Figure 1: Overview of the architecture of the ViR model. Similar to ViT, flattened patches are linearly projected into a patch embedding. The position embedding is then added to the patch embedding and a class token is appended to this sequence. The retention encoder comprises alternating Multi-Head Retention and MLP blocks. The MHR blocks use a causal decay mask. Best viewed in color.
Increasing the image resolution from \(224\times 224\) to \(448\times 448\) during the finetuning results in a considerable +1.1% improvement in terms of Top-1 accuracy. Hence, these benchmarks demonstrate the scalability of ViR models to larger training datasets and higher image resolutions.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Method & Param (M) & FLOPs (G) & Image Size & Top-1 (\%) \\ \hline ResMLP-S12 (Touvron et al., 2021a) & 15 & 3.0 & 224\({}^{2}\) & 76.6 \\ PVT-v2-B1 (Wang et al., 2022) & 13 & 2.1 & 224\({}^{2}\) & 78.7 \\ GC VitT-XXT (Hatamizadeh et al., 2023b) & 12 & 2.1 & 224\({}^{2}\) & 79.8 \\ \hline DeiT-Small/16 (Touvron et al., 2021b) & 22 & 4.6 & 224\({}^{2}\) & 79.9 \\ T2T-ViT-14 (Yuan et al., 2021) & 22 & 5.2 & 224\({}^{2}\) & 81.5 \\ CPVT-Small-GAP (Chun et al., 2021b) & 23 & 4.6 & 224\({}^{2}\) & 81.5 \\ \hline ResNet50 (He et al., 2016) & 25 & 4.1 & 224\({}^{2}\) & 76.1 \\ CrossViT-S (Chen et al., 2021) & 26 & 5.6 & 224\({}^{2}\) & 81.0 \\ PVT-Small (Wang et al., 2021) & 24 & 3.8 & 224\({}^{2}\) & 79.8 \\ Twins-PCPVT-S (Chun et al., 2021) & 24 & 3.8 & 224\({}^{2}\) & 81.2 \\ Swin-T (Liu et al., 2021) & 29 & 4.5 & 224\({}^{2}\) & 81.3 \\ CoAtNet-0 (Dai et al., 2021) & 25 & 4.2 & 224\({}^{2}\) & 81.6 \\ PVT-v2-B2 (Wang et al., 2022) & 25 & 4.0 & 224\({}^{2}\) & 82.0 \\ ConvNeXt-T (Liu et al., 2022b) & 29 & 4.5 & 224\({}^{2}\) & 82.1 \\ Focal-T (Yang et al., 2021) & 29 & 4.9 & 224\({}^{2}\) & 82.2 \\ CSwin-T (Dong et al., 2022) & 23 & 4.3 & 224\({}^{2}\) & 82.7 \\ \hline ResNet-101 (He et al., 2016) & 44 & 7.9 & 224\({}^{2}\) & 77.4 \\ ResMLP-S24 (Touvron et al., 2021a) & 30 & 6.0 & 224\({}^{2}\) & 79.4 \\ PVT-Medium (Wang et al., 2021) & 44 & 6.7 & 224\({}^{2}\) & 81.2 \\ T2T-ViT-19 (Yuan et al., 2021) & 39 & 8.9 & 224\({}^{2}\) & 81.9 \\ Twins-PCPVT-B (Chu et al., 2021a) & 44 & 6.7 & 224\({}^{2}\) & 82.7 \\ Swin-S (Liu et al., 2021) & 50 & 8.7 & 224\({}^{2}\) & 83.0 \\ ConvNeXt-S (Liu et al., 2022b) & 50 & 8.7 & 224\({}^{2}\) & 83.1 \\ PVT-v2-B3 (Wang et al., 2022) & 45 & 6.9 & 224\({}^{2}\) & 83.2 \\ \hline ViT-L/32 (Dosovitskiy et al., 2020) & 328 & 15.3 & 224\({}^{2}\) & 71.2 \\ ViT-B/32 (Dosovitskiy et al., 2020) & 88 & 4.4 & 224\({}^{2}\) & 73.4 \\ ViT-L/16 (Dosovitskiy et al., 2020) & 304 & 59.7 & 224\({}^{2}\) & 76.5 \\ ResNet-152 (He et al., 2016) & 60 & 11.6 & 224\({}^{2}\) & 78.3 \\ ViT-B/16 (Dosovitskiy et al., 2020) & 86 & 17.6 & 224\({}^{2}\) & 77.9 \\ ResMLP-B24 (Touvron et al., 2021a) & 116 & 23.0 & 224\({}^{2}\) & 81.0 \\ PVT-Large (Wang et al., 2021) & 61 & 9.8 & 224\({}^{2}\) & 81.7 \\ DeiT-Base16 (Touvron et al., 2021b) & 86 & 17.6 & 224\({}^{2}\) & 81.8 \\ CrossViT-B (Chen et al., 2021) & 104 & 21.2 & 224\({}^{2}\) & 82.2 \\ T2T-ViT-24 (Yuan et al., 2021) & 64 & 14.1 & 224\({}^{2}\) & 82.3 \\ CVPV-B (Chu et al., 2021b) & 88 & 17.6 & 224\({}^{2}\) & 82.3 \\ Twins-PCPVT-L (Chu et al., 2021a) & 61 & 9.8 & 224\({}^{2}\) & 83.1 \\ Swin-B (Liu et al., 2021) & 88 & 15.4 & 224\({}^{2}\) & 83.3 \\ PVT-v2-B4 (Wang et al., 2022) & 62 & 10.1 & 224\({}^{2}\) & 83.6 \\ Twins-SVT-L (Chu et al., 2021a) & 99 & 15.1 & 224\({}^{2}\) & 83.7 \\ ConvNeXt-B (Liu et al., 2022b) & 89 & 15.4 & 224\({}^{2}\) & 83.8 \\ ViT-L/16\({}^{4}\) (Dosovitskiy et al., 2020) & 86 & 17.6 & 224\({}^{2}\) & 85.1 \\ PVT-v2-B5 (Wang et al., 2022) & 82 & 11.8 & 224\({}^{2}\) & 83.8 \\ \hline
**ViR-B/32** & 88 & 4.3 & 224\({}^{2}\) & 75.7 \\
**ViR-S/16** & 22 & 4.2 & 224\({}^{2}\) & 78.3 \\
**Hybrid ViR-S/16** & 31 & 3.3 & 224\({}^{2}\) & 80.3 \\
**ViR-B/16** & 86 & 16.8 & 224\({}^{2}\) & 81.3 \\
**Hybrid ViR-B/16** & 75.8 & 8.8 & 224\({}^{2}\) & 82.4 \\
**ViR-L/14\({}^{\ddagger}\)** & 304 & 77.8 & 224\({}^{2}\) & 84.9 \\
**ViR-L/14\({}^{\ddagger}\)** & 304 & 310.3 & 448\({}^{2}\) & 86.0 \\
\end{table}
Table 1: Image classification benchmarks on **ImageNet-1K**(Deng et al., 2009) validate set. Models with \({}^{\ddagger}\) are pretrained on ImageNet-21K dataset.
## 5 Ablation
### Component Study
In this section, we study the effect of different component design choices on the overall performance by examining the Top-1 and throughput trade-off. As the base model, we use a ViR-B/16 with a Top-1 accuracy of 81.3% on ImageNet-1K dataset.
First, we studied the effect of the [class] token by removing it and using a global average pooling layer before the classification head. In this case, the Top-1 accuracy decreases by 0.4%. As discussed in Sec. 3.2, the [class] token plays an important role as it encapsulates global information from the preceding tokens that can be useful for the task of image classification. In addition, the throughput decreased by 1.90%.
We also investigated the effect of adding a gated function to the retention. For fair comparison, we reduced the number of layers to match the same number of parameters as the base model. However, this configuration decreased the image throughput and Top-1 accuracy by 2.91% and 0.3% respectively. Furthermore, we replaced the proposed GELU activation function with a Swish activation function, as originally proposed in RetNet. This configuration slightly decreased the image throughput by 1.04% while also lowering the Top-1 accuracy by 0.2%.
We also investigated the effect of scaling the key tensor, in lieu of the query. Although image throughput and Top-1 accuracy remain roughly unchanged, we observed some instabilities with sudden changes in loss values during training. In addition, as opposed to an autoregressive formulation, we also studied the possibility of using multipass encoding by providing both left and right token orders.
Our results show that although Top-1 accuracy is slightly improved by +0.1%, the throughput is severely impacted and reduced by half. Hence, multipass encoding does not provide an optimal performance vs. efficiency tradeoff in our case.
### Throughput Analysis
The primary motivation behind ViR is to find an attention formulation that allows for high inference throughput without sacrificing model quality. In (Sun et al., 2023) the authors provide a brief overview of attention methods, comparing scaling complexity, memory complexity, and resulting model quality, to arrive at the conclusion that the RetNet formulation achieves the best results in the "impossible triangle" of (1) inference cost, (2) training parallelism, and (3) model quality.
Related to computer vision, the sequence length \(N\) is derived from the input height \(H\), width \(W\), and patch size \(P\) ((Dosovitskiy et al., 2020)), forming \(N=\frac{HW}{P^{2}}\). Of note, because compute and memory complexity scales quadratically with sequence length, for regular attention, we see a scaling rule of \(O\left(\frac{H^{2}W^{2}}{P^{4}}\right)\), which strongly inhibits pursuing higher resolution image processing.
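To make this scaling rule concrete, the few lines below (simple illustrative arithmetic, not a benchmark) compare token counts and the relative cost of pairwise attention for two input resolutions at a fixed patch size of 16.

```python
def num_tokens(h, w, p=16):
    # N = H * W / P^2 tokens for an H x W image with patch size P
    return (h * w) // (p * p)

n_small, n_large = num_tokens(224, 224), num_tokens(1024, 1024)
print(n_small, n_large)                 # 196 vs 4096 tokens
print((n_large / n_small) ** 2)         # ~437x more work for quadratic (N^2) attention
```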
Typical methods for working around this involve eliminating global attention in favor of local attention ((Liu et al., 2022), (Li et al., 2022)), approximating the attention matrix ((Choromanski et al., 2021), (Wang et al., 2020), (Kitaev et al., 2020)), or choosing a different formulation of attention that has better scaling behavior ((Katharopoulos et al., 2020), (Bolya et al., 2022), (Sun et al., 2023)).
Adopting the RetNet formulation allows us to understand the inference cost in three different modes: Recurrent, Chunkwise, and Parallel. Because the recurrent formulation only depends on the previous token to compute the next, the compute complexity wrt the input is \(O(N)\). Parallel mode can process all tokens simultaneously, but comes with the quadratic scaling complexity of \(O(N^{2})\).
\begin{table}
\begin{tabular}{l c c} \hline \hline Design Component & Throughput (im/sec) & Top-1 (\%) \\ \hline No class token & 1525 & 80.9 \\ Gated retention & 1516 & 81.0 \\ Swish activation & 1538 & 81.1 \\ Key (k) scaling & 1550 & 81.2 \\ Multipass encoding & 774 & 81.4 \\
**Base Model** & 1554 & 81.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study on the effect of different design choices on ImageNet Top-1 accuracy vs throughput performance tradeoff. The throughput is measured on an A100 80GB NVIDIA GPU with a batch size of 128. The base model is ViR-B/16.
Chunkwise is a hybrid mode where one chunk only depends on the previous chunk, and within a chunk, we adopt the parallel formulation. Let \(C\) be the chunk size, then the number of chunks is \(\left[\frac{N}{C}\right]\), the per-chunk complexity is \(O(C^{2})\), resulting in an overall complexity of \(O\left(\frac{N}{C}C^{2}\right)=O(NC)\).
Since modern inference hardware is able to simultaneously perform numerous math operations, the chunkwise formulation is compelling because it allows us to trade-off saturating the compute hardware (larger \(C\)) with computational complexity (smaller \(C\)). In addition to throughput improvements, recurrent and chunkwise also adopt desirable memory properties. If the downstream application doesn't require patch features (e.g. a classification task), then the memory complexity for recurrent is \(O(1)\) and for chunkwise is \(O(C^{2})\). If patch features are required, then it becomes \(O(N)\) and \(O(N+C^{2})\) respectively.
It can be seen in figure 2 how throughput varies between ViT-B and ViR-B at different image sizes, and particularly how ViR shows better scaling characteristics as resolution increases. At very high resolution, only ViR-chunkwise is able to run on an A100 80GB NVIDIA GPU, as the parallel variants run out of memory.
Due to the different compute complexity scaling rules between parallel and chunkwise, it is apparent how chunkwise eventually matches parallel throughput, and then surpasses it at high resolution. Refer to appendix A.1 and figure S.1 for how scaling works for ViT-L and ViR-L variants. Unsurprisingly, parallel mode runs out of memory at lower resolution (768) whereas chunkwise is able to operate under all settings.
### Hybrid Architecture
Due to the lack of inductive biases such as locality or weight sharing in CNNs, ViTs often require more training data or comprehensive data augmentation to achieve the same accuracy in relatively small to medium-sized datasets (_e.g._ ImageNet-1K). The proposed ViR also faces the same challenges in such benchmarks. As a result, we have presented Hybrid ViR-S/16 and ViR-B/16 variants to demonstrate the feasibility of integrating CNN-based encoders with ViR.
As presented in Table 1, Hybrid ViR-S/16 (80.3%) outperforms the counterpart ViR-S/16 (78.3%) by a considerable +2.0% margin. Similarly, Hybrid ViR-B/16 (82.4%) surpasses the ViR-B/16 (81.3%) by +1.1% in terms of Top-1 accuracy. These results confirm the possibility of achieving highly competitive performance in small-scale data regimes by combining CNN and ViR models. We leave investigation of more advanced hybrid architectures to future efforts.
Figure 2: Comparison of image throughput for ViT-B and ViR-B networks. Throughput is measured on an A100 80GB NVIDIA GPU with batch sizes of 16 and 128. With batch size of 128, the parallel mode went OOM for both ViT and ViR. At the 768 image size, chunkwise matches the throughput of parallel mode, and is also the only configuration capable of processing 128 batch size at 1024 resolution.
## 6 What Does Retention See?
In Fig. 3, we illustrate retention maps that are obtained from an ImageNet-1K pretrained ViR-S/16 model. Specifically, the retention maps are extracted from the last layer of the encoder, without using any post-processing or normalization layers. We observe that high-intensity response regions correspond to salient image features. For elongated objects, the long-range spatial dependencies have been effectively captured. We observe similar trends in other ViR variants that are trained on both ImageNet-1K and ImageNet-21K datasets.
In addition, in Fig. 4, we show the relationship between a patch (red border) and all of the other patches it is allowed to attend to. Because of the auto-regressive nature of retention, we can see how the receptive field can only attend to previously encountered patches within the image. Additionally, the strength of the connection between two patches is decayed based on the distance between them. Since we read out images as scanlines, the distance is based on the number of patches processed, and not on any concept of two-dimensional distance.
## 7 Outlook
In this work, we demonstrated the first attempt in leveraging autoregressive vision transformers, with dual parallel and recurrent representations, for image recognition tasks. We believe that the proposed ViR can be further explored for other applications such as dense prediction tasks in which ViTs struggle with high-resolution images due to the quadratic complexity of its self-attention layers. Other tasks such as autoregressive image generation can also benefit from this new formulation that allows for fast inference of considerably longer token sequences.
Figure 4: In (a) we visualize the set of patches that the red-border cell is able to attend to. In (b) we visualize the corresponding row of the retention mask for the highlighted cell. Cell opacity is based on the decay strength given the distance between the highlighted cell and each of the colored in cells. A black cell means no attention.
Figure 3: Visualization of : (a) input images (b) retention maps. Salient image features are localized in the retention maps. In addition, both short and long-range spatial dependencies have been captured effectively.
## 8 Conclusion
In this work, we introduced a new class of computer vision models, referred to as Vision Retention Networks (ViR), with dual parallel and recurrent formulations. The equivalency of these formulations allow for desired properties such as training parallelism and fast inference while maintaining a great performance. In addition, a hybrid formulation, denoted as chunkwise, enables processing of longer sequences with considerably more efficient time and space complexities. We have trained and tested the proposed ViR on ImageNet-1K and ImageNet-21K datasets with different resolutions and achieved competitive performance. Hence, this validates the effectiveness of the proposed ViR in different data regimes and image resolutions. We believe the proposed ViR could be the foundation of a new class of efficient vision-friendly models that offer training and inference flexibility for a variety of applications.
|
2302.06400 | On the first test of the Weak Equivalence Principle in low Earth orbit | The Weak Equivalence Principle is the founding pillar of General Relativity
and as such should be verified as precisely as possible. The Microscope
experiment tested it in low Earth orbit, finding that Pt and Ti test masses
fall toward Earth with the same acceleration to about 1e-15, an improvement of
about two orders of magnitude over ground tests. Space missions, even if small,
are expensive and hard to replicate; yet, the essence of physics is
repeatability. This work is an assessment of the Microscope results based on
the laws of physics and knowledge from previous experiments, focusing on the
limiting thermal noise and the treatment of acceleration outliers. Thermal
noise reveals anomalies that we explain by stray sub-microVolt potentials
caused by patch charges, giving rise to an unstable zero. The measurements were
affected by numerous acceleration spikes occurring at the synodic frequencies
relative to the Earth (the signal frequency) and the Sun, which we interpret as
evidence of a thermal origin. In Microscope authors' analysis, the spikes were
removed and the resulting gaps replaced with artificial data (up to 35, 40 per
cent of the sessions data), which retain memory of the gaps and may simulate or
cancel an effect (signal or systematic). An alternative approach based
exclusively on real measured data would avoid any ambiguity. The lessons of
Microscope are crucial to any futures improved mission. | Anna M. Nobili, Alberto Anselmi | 2023-02-13T14:37:46Z | http://arxiv.org/abs/2302.06400v5 | # Anomalies and open issues of the MICROSCOPE Space Test
###### Abstract
MICROSCOPE's final results report no violation of the Weak Equivalence Principle (Universality of Free Fall) for Pt and Ti test masses quantified by an Eotvos parameter \(\eta\simeq 10^{-15}\), an improvement by about two orders of magnitude over the best ground tests. The measurement is limited by random noise with \(1/\sqrt{\nu}\) frequency dependence attributed to thermal noise from internal damping occurring in the grounding wires. From information available and the physics of internal damping we calculate the differential acceleration noise spectral density at the signal frequency, and show it varies widely between experiment sessions. Such large variations are inexplicable if translated into physical quantities such as the quality factor. While calibrations interspersed with measurement sessions may cause some such changes, they cannot explain jumps between consecutive sessions without recalibration. A potential explanation is conjectured related to a fluctuating zero depending on measurement initialization errors. The experiment was severely affected by "glitches" -anomalous acceleration spikes related to radiation from the Earth- injecting significant power at the signal frequency and its harmonics. The procedure used to deal with the glitches depends on introducing artificial data and leaves spurious effects potentially mimicking a violation signal or canceling a real one. An alternative procedure, relying only on real measured data, is proposed, already used in ground tests of the Weak Equivalence Principle by the _Eot-Wash_ group. Future experiments aiming to exploit the full potential of space must resolve these issues, rely solely on measured data, and, more generally, readdress the experiment design.
## I Introduction
MICROSCOPE is the first experiment on the Weak Equivalence Principle (WEP) performed in low Earth orbit. A potential violation of the WEP is quantified by the Eotvos ratio \(\eta\), the fractional differential acceleration between two test masses of different composition as they fall in the gravitational field of a source body, the Earth in this case.
The instrument complement included two sensor units (SU), one with test masses (TM) of different composition for the WEP test (SUEP), the other with equal composition TM's for control (SUREF). Each SU includes two test masses configured as coaxial hollow cylinders, the common axis being the sensitive axis. Each TM forms part of an independent accelerometer. A set of electrodes is used for both measuring the position of the TM and applying the voltages that maintain it "motionless" with respect to its cage. A thin gold wire provides electric grounding and polarization of the electrostatically levitated mass. TM motion is detected by capacitive sensing (displacements induce capacitance variations). In orbit, the axis of symmetry of the SU's is in the orbit plane and a putative violation signal is an Earth pointing vector whose size oscillates at the orbit frequency of \(1.6818\times 10^{-4}\,\mathrm{Hz}\) plus or minus the satellite spin rate. In the valid experiment sessions, spin rates opposite the orbit motion were used, resulting in signal frequencies of \(\nu_{{}_{E{}_{PV2}}}=0.92499\times 10^{-3}\,\mathrm{Hz}\) and \(\nu_{{}_{E{PV3}}}=3.11133\times 10^{-3}\,\mathrm{Hz}\), the synodic frequencies relative to the Earth in V2 and V3 spin mode respectively.
The measurements were taken in sessions of 120 orbits (8.26 days) or less, the upper limit being driven by periodic correction of clock drift. The experiment was affected by anomalous acceleration peaks (named "glitches") occurring simultaneously in the four test masses. The issue arose in past geodesy missions and was solved in GOCE, mostly thanks to a stiffer solution for the multi-layer insulation. In MICROSCOPE glitches are found to produce large accelerations at the same frequency as the signal and its harmonics. In order to cope with glitches large portions of data were removed and replaced with artificial data.
Early results published in 2017 [1] were based, for the SUEP sensor, on the analysis of one measurement session (#218) lasting 8.26 days and reported a fractional difference of \(\simeq 10^{-14}\) in the accelerations of Pt and Ti test bodies. A complete analysis of 94 days in 19 sessions reports no violation at the level of \(\eta\simeq 10^{-15}\)[2; 3].
Establishing the nature and source of the noise limiting the measurement is crucial for any experiment and a prerequisite for future improvements. The early report [1] showed plots of the square root of the power spectral density (PSD) of the differential acceleration measured in two sessions, and the comment explicitly pointed out, in the frequency region of interest, a \(1/\sqrt{\nu}\) trend attributed to random thermal noise from internal damping occurring in the gold wires. Two years later [4], an improved analysis of the same two sessions was interpreted in the same way. The final report [2] does not mention the limiting noise and no longer shows any PSD plot. In the batch of companion papers [3] only a couple of plots are provided [5], which appear to confirm the \(1/\sqrt{\nu}\) trend. In one case [6] a potential minor contribution from elec
trostatic patch noise is conjectured, with unknown frequency dependence. In general [3], wire damping is mentioned as the most likely cause but the authors seem to refrain from drawing any firm conclusions as to the nature of the noise. Lacking any contrary information, we assume the wire damping hypothesis in our analysis.
Systematic effects that were a matter of concern [7] turned out to be dominated by temperature variations; after calibration and a posteriori correction of all sessions, residual systematic errors were much smaller than random errors (SUEP) or close to them (SUREF) [5; 8].
The stiffness of the gold wires, whose damping gives rise to the dominant thermal noise, exceeds expectations based on ground tests by two orders of magnitude, in addition to showing unexplained features (Sec. II).
We first focus on the thermal noise and the physical parameters it depends upon: the stiffness and the quality factor of the wires. They are properties of the physical plant that must remain constant unless specific actions are undertaken which alter the instrument setup. Alterations may occur during calibrations, since they may offset the TM and affect the stiffness (Sec. II). However, an anomalous behavior of thermal noise occurs also in some sequential sessions not affected by calibrations, and even within the same session. We conjecture an explanation in terms of initialization errors (Sec. III).
Glitches are very numerous, between 14000 and 40000 in 120 orbits. They produce large effects at the frequency of the signal and its harmonics. In the procedure used to deal with glitches, many data points are removed and replaced with artificially reconstructed data. Such a procedure introduces new potential sources of error and is not mandatory. The WEP ground tests by the _Eot-Wash_ group [9] provide a successful example of data analysis in the presence of missing data carried out without introducing artificial data, exploiting the fact that the frequency and phase of the target signal are known. We suggest that such an analysis is the most appropriate and would definitively settle the experiment result (Sec. IV).
Conclusions are drawn in Sec. V.
## II Thermal noise from internal damping with a hundredfold higher stiffness
Once systematics have been taken care of -and if readout noise is not an issue- the measurement is ultimately limited by random thermal noise. It sets the length of the integration time required for the signal to emerge. A thermal noise 10 times larger requires an integration time 100 times longer to detect the same signal [10].
In both sensors each TM constitutes an independent accelerometer. Unlike common mode effects, random noise is not reduced by taking the difference of the individual accelerations. Assuming that they are uncorrelated, the differential acceleration noise is obtained by adding them in quadrature.
In the case of internal damping, each test cylinder of mass \(m\) is subjected to a thermal acceleration with a PSD whose square root is [11]:
\[S_{a_{th}}^{1/2}(\nu_{{}_{EP}})=\sqrt{4k_{{}_{B}}T\cdot\frac{k}{m^{2}Q}}\cdot\frac{1}{\sqrt{2\pi\nu_{{}_{EP}}}} \tag{1}\]
with \(k_{{}_{B}}T\) (\(k_{{}_{B}}\) the Boltzmann constant) the random disordered energy of thermal equilibrium at temperature \(T\), \(k\) the stiffness and \(Q\) the quality factor of the wire, \(\nu_{{}_{EP}}\) the frequency of the target WEP violation signal.
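For concreteness, Eq. (1) can be evaluated numerically. The minimal Python sketch below uses the measured order of magnitude of the stiffness (\(k\sim 10^{-3}\) N/m); the temperature, test-mass value, quality factor and signal frequency in the example call are placeholder values for illustration only, not mission values.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def thermal_accel_asd(nu, T, k, m, Q):
    """Square root of the PSD of thermal acceleration noise from
    internal (wire) damping, Eq. (1)."""
    return np.sqrt(4.0 * K_B * T * k / (m**2 * Q)) / np.sqrt(2.0 * np.pi * nu)

# k ~ 1e-3 N/m is the order of magnitude measured in orbit; T, m, Q and nu
# below are illustrative placeholders only.
print(thermal_accel_asd(nu=3.1e-3, T=300.0, k=1e-3, m=0.3, Q=100.0))
# A stiffness larger by a factor lambda raises this noise by sqrt(lambda),
# hence requires an integration time lambda times longer for the same signal.
```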
WEP tests with rotating torsion balances operate close to the thermal noise limit due to dissipation in the suspension wire with quality factor \(Q\simeq 6000\)[12]. In [13] various sources of thermal noise have been investigated for the GG proposed space test of the WEP, finding that gas damping (frequency independent) would contribute more than internal damping because the frequency of the signal is upconverted from the orbital frequency to the much higher spin frequency (1 Hz) in order to exploit the \(1/\sqrt{\nu}\) dependence of internal damping noise [14].
Along the symmetry axis of MICROSCOPE test cylinders (the only one sensitive to WEP violation) displacements of the TM from the "zero" position are measured with the "area variation" principle rather than the "gap variation" principle used for the other two linear axes [15]. In the former case there should be no electrostatic stiffness, except for small boundary effects depending on the gap and larger for larger gaps. Instead, with "gap variation" the "zero" is a point of unstable equilibrium, the (negative) electrostatic stiffness exceeding the (positive) stiffness of the wire by far. A stiffness of \(5\times 10^{-5}\) Nm\({}^{-1}\) was measured for the "area variation" CAESAR accelerometer [15].
For each TM the total stiffness was measured in orbit by applying a large displacement signal and measuring the acceleration in response to it [16]. The acceleration over displacement ratio is \(k_{tot}/m\) and yields the total stiffness \(k_{tot}>0\), showing that the system behaves like a harmonic oscillator with natural frequency of oscillation \(\omega_{\circ}=\sqrt{k_{tot}/m}\) forced at \(\omega<\omega_{\circ}\). The stiffness measured is attributed entirely to the wire because it is found unaffected by the electric voltages [4]. Alas, it is two orders of magnitude bigger than in ground measurements [17].
An _ad-hoc_ electrostatic torsion pendulum measured the stiffness of a wire by applying a force perpendicular to it, the theoretical prediction being \(k_{\perp}=3\pi Er^{4}/\ell^{3}\) for a beam of radius \(r\) and length \(\ell\) with \(E=7.85\times 10^{10}\) Nm\({}^{-2}\) the elastic Young modulus of gold. The agreement was good and on that basis the expectation for the similar wire of the space experiment (3.5 \(\mu\)m radius, 2.5 cm length) was \(k_{\perp}=7\times 10^{-6}\) Nm\({}^{-1}\), while the measured value is \(\simeq 10^{-3}\) Nm\({}^{-1}\). Furthermore, although the four wires are all made of gold and all have the same geometry, their measured stiffnesses differ up to 7 times.
If the wire is attached parallel to the cylinder's \(X\) axis -as we guess- only the radial stiffness (in any direction of the plane perpendicular to it) obeys the \(k_{\perp}\) formula, contributing a positive stiffness which is negligible if compared to the much larger electrostatic negative stiffness, as is confirmed by the measurements along the \(Y,\,Z\) axes [16]. Instead, for the sensitive \(X\) axis what matters is the stiffness under the effect of a force along the wire. For a beam with section \(\pi r^{2}\) the theoretical stiffness under the effect of a longitudinal force is \(k_{\parallel}=\pi\,r^{2}E/\ell\), which in this case would be a huge factor \(1.7\times 10^{7}\) bigger than the expected \(k_{\perp}\). This is not the case, hinting that the wire is, to some extent, slack.
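The two stiffness formulas can be checked numerically with the quoted wire geometry and Young modulus of gold; the short sketch below reproduces the expected \(k_{\perp}\simeq 7\times 10^{-6}\) N/m and the \(\simeq 1.7\times 10^{7}\) ratio between \(k_{\parallel}\) and \(k_{\perp}\).

```python
import numpy as np

E = 7.85e10   # Young's modulus of gold [N/m^2]
r = 3.5e-6    # wire radius [m]
l = 2.5e-2    # wire length [m]

k_perp = 3 * np.pi * E * r**4 / l**3   # transverse (bending) stiffness
k_par  = np.pi * r**2 * E / l          # longitudinal (stretching) stiffness

print(f"k_perp = {k_perp:.2e} N/m")     # ~7e-6 N/m, as expected from ground tests
print(f"k_par  = {k_par:.2e} N/m")      # ~1.2e2 N/m
print(f"ratio  = {k_par / k_perp:.2e}") # ~1.7e7
```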
It is argued [4] that the very large measured values may be due to a contribution from the stiffness along the axis of the wire, which comes into play if the wire is pushed or pulled along its axis. However, there is no indication as to how large this contribution would be and why it can vary so much. In [17] a sketch shows the wire under test as if it were slack, but the effect is not discussed. The fact is that the pendulum measures \(k_{\perp}\), while what is needed is \(k_{\parallel}\). Its dependence on the wire being more or less slack/taut is likely to make it vary between sessions if a calibration occurs in between, because in the calibration the quadratic acceleration term must be kept sufficiently small, which is achieved by offsetting the TM to a new "zero", hence affecting \(k_{\parallel}\). Since losses mostly occur at the two ends of the wire in correspondence of the glued points of attachment, they are unlikely to undergo substantial changes after launch, hence we expect the quality factor not to vary with calibrations.
Within the mission team, measurements [16] have been questioned by [18]. However, their model is incorrect: for a wire along the \(X\) axis it gives zero stiffness in the \(Y,Z\) directions, while it is \(k_{\perp}=3\pi Er^{4}/\ell^{3}\).
A higher stiffness by \(\lambda\simeq 100\) yields a higher thermal acceleration noise by \(\sqrt{\lambda}\) (Eq. (1)), hence an integration time \(\lambda\) times longer to detect the same signal. The acceleration of the signal and systematics are unchanged, but produce \(\lambda\) times smaller displacements, the displacement being the physical quantity measured by each capacitance sensor. With the measured positive stiffness each electrostatically levitated TM is a harmonic oscillator with natural frequency of \(\simeq 10^{-2}\,\)Hz.
## III Analysis of thermal noise and observed anomalies
For each sensor with TM's \(m_{1},m_{2}\), stiffnesses \(k_{1},k_{2}\), and the same \(Q\) for both masses (if not, the lower dominates), the square root of the PSD of the differential acceleration thermal noise is:
\[S_{\Delta a_{th}}^{{}^{1/2}}(\nu)=f_{{}_{TkQ}}\cdot\frac{1}{\sqrt{2\pi\nu}} \tag{2}\]
with:
\[f_{{}_{TkQ}}=\sqrt{4k_{{}_{B}}T\cdot\left(\frac{k_{1}}{m_{1}^{2}}+\frac{k_{2} }{m_{2}^{2}}\right)\cdot\frac{1}{Q}}\qquad. \tag{3}\]
For a mechanical suspension the value of \(Q\) is known to be frequency dependent (higher at higher frequencies). However, the frequencies of interest in V2 or V3 spin mode are too close to expect any relevant difference in \(Q\), and we therefore take \(f_{{}_{TkQ}}\) as frequency independent. Hence, Eq.(2) represents, in a typical log-log plot (e.g. Fig. 2 of [1]), a straight line with negative 1/2 slope whose position along the vertical axis is set by the factor \(f_{{}_{TkQ}}\) (in ms\({}^{-2}\)).
For a given value of \(f_{{}_{TkQ}}\) the measured noise is fitted by the same straight line. If the signal frequency increases, this noise decreases (as \(1/\sqrt{\nu}\)), but once the oscillator has been set up and launched, \(k\)'s and \(Q\)'s are fixed, and (except for a mild dependence on \(T\)) the value of \(f_{{}_{TkQ}}\) is fixed and so is the straight line (2). However, after a calibration we cannot exclude a variation of the stiffness (Sec. II), hence of \(f_{{}_{TkQ}}\) and of the spectral density of differential accelerations. To the contrary, in sequential sessions, or within the same session analyzed with equally valid methods, we expect no such variations.
In order to verify that the measured thermal noise does not change whenever it is expected not to, we need -for all measurement sessions- the spectral density of differential accelerations at the frequency of the signal. Since this quantity is not available in papers [2; 3] we derive it indirectly as follows.
At the frequency of the signal \(S_{\Delta a_{th}}^{{}^{1/2}}(\nu_{{}_{EP}})\) given by (2) also obeys the equation [13]:
\[S_{\Delta a_{th}}^{{}^{1/2}}(\nu_{{}_{EP}})=\sqrt{t_{int}}\cdot\delta\cdot g_{ {}_{drive}} \tag{4}\]
where \(g_{{}_{drive}}=7.9\,\)m s\({}^{-2}\) is the driving signal of WEP violation at the satellite orbit, \(t_{int}\) is the session duration and \(\delta\cdot g_{{}_{drive}}\) is a "nominal" violation signal with signal to noise ratio SNR=1 that would not be detected because of thermal fluctuations (\(\sigma\)), which decrease with the session duration as more data are available. For a given \(\delta\), the lower \(\sigma\), the lower the integration time for a violation signal to emerge and be detected.
Tables 6, 7 in [5] give \(\delta\) (and its \(\sigma\)) for all sessions based on two different methods of data analysis, named \(M\) and \(A\) (discussed below). In all sessions, except for SUREF sessions #294 and #380, the value of \(\delta\) is smaller than \(2\sigma\)[5]. With \(\delta\) and the duration of the session [19], Eq. (4) gives \(S_{\Delta a_{th}}^{{}^{1/2}}(\nu_{{}_{EP}})\); this value, using (2) at \(\nu=\nu_{{}_{EP}}\), yields \(f_{{}_{TkQ}}\), from which -given the equilibrium temperature [5], the masses and the measured stiffnesses- the quality factor \(Q\) is inferred. An extensive series of ground measurements is available for comparison and validation [17]. Since \(k_{1}\,,k_{2}\) may vary because of calibrations we compute also \(k/Q\) assuming the same unknown \(k\) for both TM's. These quantities are listed in Table 1 for SUEP and in Table 2 for SUREF, along with the percentage \(G\) of artificial data that have been introduced in each session after the elimination of glitches.
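The inference chain from \(\delta\) to \(Q\) can be summarized in a few lines of code. The sketch below implements Eqs. (2)-(4); all numbers in the example call are placeholders (not the published session values), and an orbital period of about 5947 s is assumed from 120 orbits = 8.26 days.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant [J/K]
G_DRIVE = 7.9        # WEP-violation driving signal at the satellite orbit [m/s^2]

def q_from_delta(delta, t_int, nu_ep, T, k1, m1, k2, m2):
    """Infer the wire quality factor Q from a session's 'nominal' delta (SNR = 1),
    following Eqs. (2)-(4)."""
    s_asd = np.sqrt(t_int) * delta * G_DRIVE        # Eq. (4): ASD at nu_EP
    f_tkq = s_asd * np.sqrt(2.0 * np.pi * nu_ep)    # invert Eq. (2)
    q = 4.0 * K_B * T * (k1 / m1**2 + k2 / m2**2) / f_tkq**2   # invert Eq. (3)
    return s_asd, q

# Placeholder inputs for illustration only; one orbit is taken as ~5947 s.
print(q_from_delta(delta=2e-15, t_int=120 * 5947.0, nu_ep=3.1e-3,
                   T=300.0, k1=1e-3, m1=0.4, k2=1e-3, m2=0.4))
```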
Tables 1 and 2 show that, contrary to expectations, large jumps in \(S_{\Delta a_{th}}^{{}^{1/2}}(\nu_{{}_{EP}})\), \(Q\) and \(k/Q\) occur between sequential sessions (sessions are numbered by even numbers) in three cases for SUEP and in three cases for SUREF.
Values of \(Q<1\) and close to critical damping observed in two SUEP sessions are unrealistic.
Two methods of analysis, \(M\)-\(ECM\) (\(M\)) and \(ADAM\) (\(A\)), have been employed to estimate the experiment parameters, including \(\delta\) and \(\sigma\). Both methods use artificially reconstructed data to account for the missing data resulting from the elimination of glitches.
Method \(M\) estimates in the time domain the missing data and then computes the least-squares estimate of the regression parameters, maximizing the likelihood conditional on the observed data. An estimation of the PSD is also produced in the process. Method \(A\) performs the parameter estimation in the frequency domain. As such it requires an uninterrupted, regularly spaced time series, which is obtained by filling the gaps left by the removal of glitches with the artificial data estimated by \(M\).
The two methods are stated to be equivalent, leading to the same results, but in a few cases the \(\delta\)'s calculated by \(M\) and \(A\) differ considerably (by a factor \(\simeq 2\) in sessions #218, #438 and by a factor 7 in #442), implying that the same data stream leads to a different PSD according to one or the other method. Replicating the analysis by two (partially) independent methods is meant to enhance the confidence in the results, but the confidence is undermined if the methods give different results and the difference is not explained.
SUEP session #442 is an instructive extreme case. As reported in [5], Table 7, it has: \((\delta\pm\sigma)_{M}=(-10.7\pm 19.0)\times 10^{-15}\), \((\delta\pm\sigma)_{A}=(-1.5\pm 19.1)\times 10^{-15}\). That is, the noise is essentially the same but the values of \(\delta\) differ by a factor 7; \(\delta\) representing a "nominal" violation with SNR=1. How is it possible that, for the same measurement session, in the same experimental conditions, the same data with the same level of noise can lead to almost one order of magnitude difference in the evaluation of \(\delta\)?
The difference is particularly evident when the results are interpreted in terms of the physics of the dominant thermal noise from internal damping (widely different \(Q\)'s, up to an embarrassing level in session #442).
SUREF session #778-1 is the only one, among all 32 measurement sessions (19 SUEP + 13 SUREF), not affected by glitches, hence it has no artificial data. In this case [5], Table 6, reports \((\delta\pm\sigma)_{M}=(-8.1\pm 4.5)\times 10^{-15}\), \((\delta\pm\sigma)_{A}=(-8.1\pm 4.7)\times 10^{-15}\). That is, the results are exactly the same for \(\delta\) and only slightly different for \(\sigma\), as one expects if the same data set is analyzed with two equally valid methods.
Sessions #442 and #778-1 show that the \(M\) and \(A\) methods are in agreement when applied to a time series of real measured data (no missing points and no reconstructed data), but give different results (in particular different \(\delta\)'s) in the presence of artificial reconstructed data. Since the artificial data in \(A\) are those estimated by \(M\), it is the way the two methods manipulate them that makes the difference. In any case, no matter how artificial data are generated and/or manipulated, they should not introduce any physical information, i.e. their effect on the \(\delta\) estimated in each session ought to be null; otherwise, the more artificial data are used in a session, the more "artificial" will be the value of \(\delta\) obtained for that session.
It is not clear whether the artificial data are included in the calculation of the \(\sigma\)'s. If they are, all \(\sigma\)'s are underestimated (the more dummy data, the lower the noise), and this has an impact on the reported global result (weighted by \(\sigma^{-2}\)), whose noise would be underestimated too.
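A toy numerical example illustrates the concern: if reconstructed samples that carry no measurement noise enter the noise estimate, the estimated scatter drops below the true one. The sketch below assumes, for illustration only, white noise and noiseless artificial samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
real = rng.normal(0.0, 1.0, n)       # white measurement noise, sigma = 1

# Emulate gap filling: replace 40% of the samples with noiseless "reconstructed"
# values (here simply the model prediction, i.e. zero).
filled = real.copy()
gap = rng.random(n) < 0.40
filled[gap] = 0.0

print(real.std())    # ~1.0
print(filled.std())  # ~0.77: noise underestimated if dummy data enter the estimate
```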
In SUEP session #218, the values of \(\delta\) differ by a factor of about 2 depending on the type of analysis. This session (120 orbits) was the basis of the early results [1], and was further elaborated two years later [4], ending up with a different value of the spectral density at the signal frequency. Even discarding the earlier results because systematics had not yet been reduced and/or glitches had not yet been treated, it is apparent that even the latest analysis cannot be considered conclusive. \(Q\) values differing by a factor 3.5, for the same oscillator in the same conditions, are inexplicable.
Table 1: For each SUEP measurement session, the values of \(\delta\), \(S_{\Delta a_{th}}^{1/2}(\nu_{{}_{EP}})\), \(Q\) and \(k/Q\) derived with methods \(M\) and \(A\), together with the percentage \(G\) of reconstructed data.

Table 2: Same as Table 1 for the SUREF sensor. A \(\bullet\) has been added to indicate sessions with \(Q\) values much larger than the largest value of 118 measured in ground tests [17].
The reference sensor SUREF behaves quite differently from SUEP. Thermal noise is about a factor \(\sqrt{2}\) smaller than for SUEP (assuming the same \(Q\)), due to a factor 2 smaller (\(k_{1}/m_{1}^{2}+k_{2}/m_{2}^{2}\)) term, and residual systematic errors are comparable to the stochastic errors [5]. Moreover, 4 out of 9 sessions have been split because of sudden jumps in the mean value of the differential acceleration that required the two data segments to be treated as distinct experiments. Unlike glitches these jumps do not occur on all accelerometers, hence they cannot be attributed to the spacecraft but are likely to originate in the accelerometers themselves, and last much longer (tens of seconds). In SUEP only 1 session (#326) out of 18 has been split. In SUREF, in addition to the three cases mentioned above of anomalous jumps in \(S^{1/2}_{\Delta a_{th}}(\nu_{{}_{EP}})\), \(Q\) and \(k/Q\) in sequential sessions, we observe three sessions with large \(M\) vs \(A\) discrepancies (see Table 2). However, the values of \(Q\) are typically larger than in SUEP and in three cases much larger than the largest value of 118 reported in ground measurements [17].
Early results [1] were based on session #176 (62 orbits). The spectral density \(S^{1/2}_{\Delta a_{th}}(\nu_{{}_{EP}})\) reported in [1] was confirmed in [4] and attributed to \(1/\sqrt{\nu}\) thermal noise, yielding \(\delta=3.75\times 10^{-15}\) and \(Q=65\), consistent with ground measurements. Instead, the final values of \(\delta_{M}\) and \(\delta_{A}\) are much smaller (Table 2), and the corresponding \(Q\)'s of 317 and 283 are far too high to be realistic on the basis of ground tests. Note that session #176 has 40% of reconstructed data.
An explanation of the anomalies observed in sequential sessions may be conjectured as follows.
Accelerations are inferred (after calibration) from displacements measured with very high precision relative to a position assumed as a zero force point that the TM of each independent accelerometer is forced to maintain at all times. This capacitance zero is affected by inevitable uncertainties; it may change after calibration, and after initialization of a new session.
Within this design an elastic force much larger than expected was found to dominate the motion of each levitated TM. The zero of the elastic force and the capacitance zero that the TM is forced to, are unlikely to coincide, but the problem has not been investigated. With a large stiffness, and a dominant elastic force, the issue of an unstable zero is especially relevant because the elastic force is linear with the displacement from its physical zero to the capacitance zero (as in the case of the tidal force).
Aside from the gold wire, an unstable zero is an issue of its own in measuring the effects of extremely small forces. In WEP tests it is related to the problem of initial conditions (or release) errors that affect all experiments which are not intrinsically null experiments, including those with laser tracked satellites, celestial bodies and cold atoms [20; 21; 22; 23]. Instead, in torsion-balance tests and the proposed GG space experiment the physical observable is a true null [24].
## IV Glitches, gaps and reconstructed data
The experiment was plagued by anomalous short-duration (\(<5\)s) acceleration spikes originating in the spacecraft and occurring simultaneously in the four test masses.
Since 2001 five space missions carrying ONERA's ultra-sensitive accelerometers have been launched - CHAMP, GRACE, GOCE, GRACE Follow-On (GFO) and _MICROSCOPE_- with a total of seven satellites, and all but one have experienced such spikes (named "clanks" in CHAMP and GOCE, "twangs" in GRACE, "glitches" in MICROSCOPE). At the time of the design of GOCE, reports of such effects in CHAMP caused alarm, and countermeasures were adopted at design and test level, which were successful, and no spikes were seen. The succeeding missions discovered the problem anew.
Although as far as we know a comprehensive analysis across all the missions has not been carried out, there seems to be a consensus that the spikes are triggered by energy input from the Earth causing micro-vibration, e.g. release of stress energy in the spacecraft materials such as the multi-layer insulation (MLI) [25; 26]. Alternative hypotheses point to small electric discharges in the spacecraft surfaces, or changes in the solar array currents. The GOCE countermeasures program was based on the mechanical hypothesis, and in GFO a stiffer design of the nadir insulating foil has been adopted, thus, too, assuming a mechanical origin of the spikes.
Most important for the MICROSCOPE test of the WEP, the spacecraft is spinning, hence the occurrence of glitches is not random but it correlates with the synodic frequency of the spacecraft relative to the Earth (i.e. the frequency of WEP violation), and its harmonics.
In Ref. [27] it is concluded that glitches must be removed from the data because it is impossible to accurately model their effects. In the time series of every session glitches are identified as all outliers above \(4.5\,\sigma\) from the moving average, some time is allowed before and after each outlier to account for transient effects, and all data points identified in this way are removed, up to 35% in SUEP and up to 40% in SUREF. By comparison, in the _Eot-Wash_ experiment the amount of data removed due to sporadic spikes in the ion pump current, or due to abrupt changes of the turntable axis from the local vertical, was 7% of the total [9].
For MICROSCOPE, unlike _Eot-Wash_, the choice has been made to reconstruct the missing data.
An interesting comparison is reported in [27] between the FFT of the differential acceleration as obtained from the original measured data, and the FFT after glitches were removed and the missing data were reconstructed. This was done for SUREF session #380 lasting 120 orbits (of which two segments of 46 and 34 orbits each have been used in the final analysis). Fig. 1, copied from Fig. 10 in [27], shows the two plots, measured data in black, reconstructed data in red.
This session is in V3 mode, hence the synodic frequency relative to the Earth is \(\nu_{{}_{E_{V3}}}=3.1\times 10^{-3}\) Hz, at which no WEP violation is expected. At this frequency the black curve in Fig. 1 shows a differential acceleration line of 6 to \(7\times 10^{-14}\) m s\({}^{-2}\). Lines at \(2\nu_{{}_{E_{V3}}}\), \(3\nu_{{}_{E_{V3}}}\), \(4\nu_{{}_{E_{V3}}}\) and \(5\nu_{{}_{E_{V3}}}\) are visible too, indicating that glitches produce effects not only at the signal frequency but at its higher harmonics as well.
The largest effect occurs at \(2\nu_{{}_{E_{V3}}}\) and is dominated by the main component of the Earth tide; it must appear both in the original and in the reconstructed data, hence we guess that the black line is hidden behind the red line. The tidal effect at twice the signal frequency, being deterministic, is modelled to derive the offset between the TM's, which is then used to quantify and subtract the tidal component at the same frequency and phase as the signal. Therefore the glitch effect at twice the signal frequency must be sufficiently smaller than the tidal effect at the same frequency. If the physical origin of the glitches is related to thermal energy from the Earth, one expects a non-negligible glitch component with half the synodic period, as the spacecraft goes from facing deep space to facing the warm planet.
After glitches have been removed and gaps have been filled with reconstructed data (46% of the total according to [27]), the red curve in Fig. 1 shows a reduction of the lines previously ascribed to glitches. At twice the signal frequency the reduction cannot be quantified because of the dominant tidal effect.
More importantly, the pattern of lines related to glitches shows that the FFT based on the reconstructed data retains memory of the removed glitches. The reconstructed data, like the gaps, must necessarily follow the glitches, which have specific frequencies, hence the imprinting of glitches in the reconstructed data. Such a spurious effect at the signal frequency would mimic a violation signal or cancel a real one.
To demonstrate the correctness of the procedure, a fake violation signal was injected in the data before preprocessing, and two reconstructed data sets were built from the original set (one without and one with the fake signal) from which two values of \(\delta\) are recovered: if their difference yields the fake signal this is taken as evidence that the reconstructed data provide a reliable representation of the real measured data. However, the glitch spikes are so large that their removal is not affected by the fake signal; the resulting gaps follow the pattern of the frequencies at which glitches occur, and so do the data generated to fill them, as shown in Fig. 1. Thus, the two reconstructed data sets are the same except for the fake signal, which is obviously recovered.
The use of a fake signal also shows a specific anomaly. In the only case without glitches, SUREF session #778-1, when a fake signal was added with \(\delta_{fake}=3.4\times 10^{-14}\), it was recovered with the largest error of all other SUREF and SUEP sessions to which the same fake signal was added ([5], Table 8 and Sec. 5.6), somehow suggesting that real data may in fact be more noisy than those containing artificial data. To make things even more confusing, when the fake signal added to the same session is 10 times weaker, it is recovered correctly.
Filling the gaps is mandatory as long as the analysis is performed in the frequency domain. However, a different approach is possible because the frequency and phase of a differential acceleration due to WEP violation in the field of a source body (the Earth in this case) are known; only its amplitude and sign are unknown.
At any given time, in the reference frame rotating with the MICROSCOPE spacecraft, the position of the Earth and the phase of the sensitive axis are known, and the offset of the TM's can be derived from the tidal effect. Hence, in the time series of differential accelerations the violation signal is a sinusoid with the synodic period of the spacecraft relative to the Earth (at zero spin it would coincide with the orbital period) whose maximum size (unknown) occurs twice per synodic period when the sensitive axis of the sensor points towards or away from Earth (sign unknown). This demodulated phase lock-in signal can be fitted to the time series of the differential acceleration data -only the real measured data- in order to determine its amplitude and sign, which therefore would not be affected by whatever gaps.
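A minimal sketch of such a fit is given below: a template at the known frequency and phase is fitted by least squares to the measured samples only, so that gaps are simply skipped and no artificial data are generated. All numbers in the demonstration (sampling step, noise level, injected amplitude) are illustrative assumptions, not mission values.

```python
import numpy as np

def fit_known_phase_signal(t, a_meas, nu_ep, phase):
    """Least-squares estimate of the signed amplitude of a signal whose
    frequency and phase are known, using only the measured samples."""
    template = np.cos(2.0 * np.pi * nu_ep * t + phase)
    amp = np.dot(template, a_meas) / np.dot(template, template)
    sigma = (a_meas - amp * template).std() / np.sqrt(np.dot(template, template))
    return amp, sigma

# Toy demonstration: ~35% of the samples removed, as after glitch masking.
rng = np.random.default_rng(1)
t = np.arange(0.0, 700_000.0, 4.0)       # assumed 4 s sampling over ~120 orbits
t = t[rng.random(t.size) > 0.35]         # gaps are simply dropped, not filled
true_amp = 3e-15 * 7.9                   # delta * g_drive
a = true_amp * np.cos(2 * np.pi * 3.1e-3 * t) + rng.normal(0.0, 1e-12, t.size)
print(fit_known_phase_signal(t, a, nu_ep=3.1e-3, phase=0.0))
```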
In addition, lock-in detection makes it possible to separate spurious effects that have the same frequency of the signal, but not the same phase, as in the case of the glitch-induced effect (provided the spurious effect is not much bigger). Although both the glitch and the signal come from the Earth, it is very unlikely that the sensitive axes point exactly to the center of mass of the Earth when the glitch effect is maximum. Therefore, should a residual spurious effect remain after the elimination of glitches, it would not be confused with the signal.
A procedure of this kind has been successfully used in the _Eot-Wash_ ground tests of the WEP [9; 12]. The same procedure, applied to the MICROSCOPE data, would establish beyond question that the result of the experiment is not affected by the gaps and the artificial data generated to fill them.

Figure 1: Same as Fig. 10 in [27], where the caption reads: “Spectra of SUREF’s \(x\) axis differential acceleration for session 380, before (black) and after (red) glitches masking and data reconstruction.” (\(x\) is the sensitive axis of the sensor)
## V Conclusions
A space test of the weak equivalence principle in the field of the Earth has enormous potential for a leap forward in precision, by building on a strong driving signal and better instrument isolation in space than in any ground laboratory, hence easier control of systematic effects from local disturbances. A satellite test of the WEP has been over 40 years in the making, a number of designs were proposed, one -MICROSCOPE- has made it to flight, and has reported two orders of magnitude improvement over the best laboratory experiments to date [2].
This work points at anomalies in the experiment and in the data analysis that need to be addressed for the experiment result to be consolidated, and to help designing a better space test of the future.
The MICROSCOPE experiment appears to be limited by thermal noise from damping in the grounding wires. Using the published information [3] we calculate the PSD for all sessions, and express it in terms of the dominant thermal noise and the physical quantities it depends upon. Anomalous jumps are observed in sequential sessions (and even within the same session); the corresponding quality factors vary widely and are sometimes unrealistically high or low. We suggest an unstable zero may be the cause of such fluctuations, an issue that would require careful investigation even if the grounding wire were to be replaced by an active discharger.
The experiment was plagued by a large number of "glitches", anomalous releases of energy, originating in the spacecraft, producing large differential accelerations at the signal frequency and its harmonics. We find evidence that the way they were treated (removed and replaced with artificial reconstructed data) leaves extant effects at the critical frequencies that could mimic a violation signal or cancel a real one. We suggest that an analysis as successfully employed in the _Eot-Wash_ laboratory tests, would be tolerant of the missing data without affecting the sensitivity of the experiment.
Any future WEP experiment in space must prove by its very design that it will not suffer from glitches (GOCE showed the way), and must avoid introducing artificial data potentially compromising the analysis. Over and above that, future experiments aiming to exploit the full potential of space must readdress the experiment design: null experiment, stable zero, minimal thermal noise by fast spin [7; 24].
|
2301.10732 | An Efficient Semi-Automated Scheme for Infrastructure LiDAR Annotation | Most existing perception systems rely on sensory data acquired from cameras,
which perform poorly in low light and adverse weather conditions. To resolve
this limitation, we have witnessed advanced LiDAR sensors become popular in
perception tasks in autonomous driving applications. Nevertheless, their usage
in traffic monitoring systems is less ubiquitous. We identify two significant
obstacles in cost-effectively and efficiently developing such a LiDAR-based
traffic monitoring system: (i) public LiDAR datasets are insufficient for
supporting perception tasks in infrastructure systems, and (ii) 3D annotations
on LiDAR point clouds are time-consuming and expensive. To fill this gap, we
present an efficient semi-automated annotation tool that automatically
annotates LiDAR sequences with tracking algorithms while offering a fully
annotated infrastructure LiDAR dataset -- FLORIDA (Florida LiDAR-based Object
Recognition and Intelligent Data Annotation) -- which will be made publicly
available. Our advanced annotation tool seamlessly integrates multi-object
tracking (MOT), single-object tracking (SOT), and suitable trajectory
post-processing techniques. Specifically, we introduce a human-in-the-loop
schema in which annotators recursively fix and refine annotations imperfectly
predicted by our tool and incrementally add them to the training dataset to
obtain better SOT and MOT models. By repeating the process, we significantly
increase the overall annotation speed by three to four times and obtain better
qualitative annotations than a state-of-the-art annotation tool. The human
annotation experiments verify the effectiveness of our annotation tool. In
addition, we provide detailed statistics and object detection evaluation
results for our dataset in serving as a benchmark for perception tasks at
traffic intersections. | Aotian Wu, Pan He, Xiao Li, Ke Chen, Sanjay Ranka, Anand Rangarajan | 2023-01-25T17:42:15Z | http://arxiv.org/abs/2301.10732v1 | # An Efficient Semi-Automated Scheme for Infrastructure LiDAR Annotation
###### Abstract
Most existing perception systems rely on sensory data acquired from cameras, which perform poorly in low light and adverse weather conditions. To resolve this limitation, we have witnessed advanced LiDAR sensors become popular in perception tasks in autonomous driving applications. Nevertheless, their usage in traffic monitoring systems is less ubiquitous. We identify two significant obstacles in cost-effectively and efficiently developing such a LiDAR-based traffic monitoring system: (i) public LiDAR datasets are insufficient for supporting perception tasks in infrastructure systems, and (ii) 3D annotations on LiDAR point clouds are time-consuming and expensive. To fill this gap, we present an efficient semi-automated annotation tool that automatically annotates LiDAR sequences with tracking algorithms while offering a fully annotated infrastructure LiDAR dataset--FLORIDA (Florida LiDAR-based Object Recognition and Intelligent Data Annotation)--which will be made publicly available. Our advanced annotation tool seamlessly integrates multi-object tracking (MOT), single-object tracking (SOT), and suitable trajectory post-processing techniques. Specifically, we introduce a human-in-the-loop schema in which annotators recursively fix and refine annotations imperfectly predicted by our tool and incrementally add them to the training dataset to obtain better SOT and MOT models. By repeating the process, we significantly increase the overall annotation speed by \(3-4\) times and obtain better qualitative annotations than a state-of-the-art annotation tool. The human annotation experiments verify the effectiveness of our annotation tool. In addition, we provide detailed statistics and object detection evaluation results for our dataset in serving as a benchmark for perception tasks at traffic intersections.
Point cloud annotation tool, Intelligent transportation systems, LiDAR dataset, infrastructure, deep learning.
## I Introduction
Currently, 55 percent of the global population lives in urban areas or cities, which is estimated to increase to 68 percent by 2050. As the world continues to urbanize, we have seen increased investment in building smart traffic infrastructure to achieve the goals of Vision Zero--zero deaths and no serious injuries on roads and streets. For example, the Infrastructure Investment and Jobs Act passed in 2021 by the U.S. government established the new Safe Streets and Roads for All (SS4A) program with an annual budget of one billion dollars from 2022 to 2026.
Solutions aimed at Vision Zero goals can be broadly divided into two categories: (i) onboard solutions [e.g., advanced driver assistance systems (ADAS) and autonomous vehicles] that rely on onboard sensing units on vehicles, drones, etc., and (ii) infrastructure solutions (e.g., traffic monitoring systems, traffic lights, speed bumps, streetlamps) that deploy a variety of sensors in transportation infrastructure. Most existing perception systems begin with sensory data acquired from cameras as they provide excellent image/video data streams at an affordable price. However, these solutions suffer from performance drops in low illumination or adverse weather conditions. Moreover, the monocular camera lacks depth information, forcing object detection to be confined to 2D. Stereo cameras can obtain depth information via view interpolation but fail to give accurate depth at a distance. Considering the above limitations of cameras, LiDAR--a 3D sensing technology--has received increased attention, especially in creating next-generation infrastructure. By capturing millions of points with precise 3D distance measurements per second through emitting and receiving light pulses (in wavelengths roughly ranging from 900 to 1500nm), LiDAR can support long-range object detection and, in principle, can perform well under various lighting and weather conditions. Due to these characteristics, LiDAR has been widely used in onboard solutions in autonomous driving applications. However, LiDAR-based infrastructure solutions for traffic monitoring systems are still in their infancy.

Fig. 1: Overview of the semi-automated annotation pipeline.
We identify several significant obstacles while exploring LiDAR-based infrastructure solutions at traffic intersections. To begin with, publicly available LiDAR datasets are, in the main, insufficient for perception tasks in infrastructure systems. Most existing perception tasks in the LiDAR space have relied on public datasets collected from autonomous vehicles in their quest to develop deep learning models for onboard solutions. Despite significant progress [1], these approaches fail to analyze complex, crowded, and safety-critical scenarios, such as at a busy intersection, due to a limited field of view and heavy occlusion. For these and related reasons, existing onboard solutions are inadequate for supporting the detection of pedestrians, who are more likely to get injured in a traffic accident: (i) popular autonomous driving datasets such as Waymo [2], NuScenes [3], and KITTI [4] only provide a limited set of pedestrians for training and evaluation of pedestrian perception algorithms; (ii) pedestrians are small and non-rigid with various poses, making it difficult for sensors to capture; (iii) pedestrians tend to walk in groups, adjust their speed and direction more frequently and unexpectedly (for a safe interpersonal distance), which leads to complex pedestrian behavior and often causing heavy sensor occlusion. On the other hand, infrastructure solutions have an overhead view of traffic and pedestrians with less occlusion. Perception systems in this space offer the promise of a better understanding of challenging and crowded traffic scenarios, leading to more reliability in spotting safety threats.
A serious challenge for infrastructure LiDAR is that 3D annotations of LiDAR point clouds are time-consuming and expensive. In the course of our initial annotations of an intersection LiDAR dataset, we discovered that annotating and adjusting a single 3D Bounding Box (BBox) around an object is challenging due to its seven degrees of freedom (DoF), namely, the 3D location, 3D size, and heading orientation. Although some annotation tools [5, 6] are equipped with one-click auto-fitting functions, they fail to accurately annotate under many circumstances, such as when the object is partially occluded, or when the point cloud is sparse. As a result, existing tools require significant effort in data annotation. For example, as stated in a recent pedestrian dataset STcrowd [7], it took 960 person-hours effort of 20 professional annotators to annotate 219K bounding boxes in the point clouds.
To fill this gap, we present an efficient semi-automated annotation tool that automatically annotates LiDAR sequences with human-in-the-loop initialization and correction. In this work, we construct a fully annotated infrastructure LiDAR dataset that will be made publicly available. Our development is motivated by several key observations. After annotating an object, a common annotation strategy is to propagate the bounding box of the target object to subsequent frames, thereby eliminating the need to label each frame. The strategy is particularly advantageous for 3D data collected at traffic intersections because the size of an object remains constant, e.g., a parked car, or only varies slightly, e.g., a walking pedestrian. Current annotation tools either track objects using Kalman filter-based algorithms [5] or regress the target's movement between two consecutive frames using registration algorithms [6]. The Kalman filter-based approach fails to locate the object precisely and necessitates multiple manual adjustment operations, thereby increasing annotation time. Additionally, the registration algorithm is susceptible to temporary occlusions and tends to lose track of an object after a few frames. Therefore, we seek to use Single Object Tracking (SOT)--a deep learning-based object tracking algorithm--for annotation propagation. Given an object's first-frame annotation, our algorithm can track it robustly in the subsequent frames while maintaining the flexibility of being trained on autonomous driving LiDAR datasets or infrastructure LiDAR datasets. Through extensive experiments, we find that it works well in practice. Furthermore, inspired by the work [8], we incorporated a Multi-Object Tracking (MOT) algorithm into our annotation tool. Unlike the SOT, which focuses on independently annotating and refining each object instance via labeling the first frame of each object and propagating it to subsequent frames followed by refinements, the MOT algorithm can automatically detect and track all object instances of a scene in a single shot. Once it generates the predicted annotation, human annotators may visually inspect and adjust the results. In practice, initial annotations are provided by a trained MOT model. If MOT fails to detect objects, one can annotate its first appearance and utilize an SOT model to propagate. Both SOT and MOT models may not initially give desirable predictions for annotation. Our human-in-the-loop schema allows us to fix and refine imperfectly predicted annotations and improve upon them to recursively obtain better annotations. We show through experiments that the model prediction accuracy is consistently enhanced by adding more qualitative annotations to the training set. As a result, our tool significantly accelerates the overall annotation speed. To summarize, we make the following contributions:
* We develop a semi-automated annotation tool that applies SOT and MOT models while using a human-in-the-loop concept.
* We obtain a large-scale fully-annotated infrastructure LiDAR dataset containing a variety of traffic participants and interesting scenarios.
* We provide baselines for 3D object detection, where the 3D AP for vehicles and pedestrians are 90.66% and 87.44%, respectively.
* Human annotation experiments demonstrate that our proposed annotation scheme and tool increase the annotation speed of pedestrians and vehicles by approximately a factor of three.
* We demonstrate the practical value of this approach and suggest how downstream applications can take advantage of the infrastructure dataset.
## II Background
### _3D Single Object Tracking on Point Clouds_
3D single object tracking on point clouds is a relatively new research area. In 2019, SC3D [9] introduced the 3D SOT problem and implemented a Siamese tracker that encodes the target and candidates into embeddings, followed by the cosine similarity measure to determine the best-matching candidate. In addition, it regularized the target embedding by imposing a shape completion loss. P2B [10] argued that SC3D's candidate generation is either time-consuming or performance-degraded. It then proposed an end-to-end Siamese tracker. Target and search areas are fed to a Pointnet backbone to obtain seeds with features. Then, each seed is projected to a potential target center using Deep Hough voting [11]. Finally, P2B clusters the projected target centers and generates the final proposals by choosing those with the highest targetness scores. Multiple successive works [12, 13, 14, 15] are built on top of P2B with additional innovations w.r.t. feature extraction, template and search area feature fusion, and detector heads. BAT [12] proposed a BoxCloud representation that captures the point-to-box relation between object points and their BBoxes. In addition, BAT developed a box-aware feature fusion module to aggregate the features of target points into search area points. MLVSNet [13] finds that the Hough voting in P2B generates very few vote centers for sparse objects and then proposes multi-level Hough voting as a remedy and a target-guided attention module for feature fusion. In V2B [14], the authors proposed a new voxel-to-BEV detection head. It regresses the target's 3D location in BEV feature maps. PTTR [15] tracks objects in a coarse-to-fine manner with the help of transformers. It utilized self-attention for template and search area features, respectively, followed by cross-attention for feature fusion, and a generation of coarse prediction builds upon those features. Another lightweight Prediction Refinement Module generates the final predictions. The trackers mentioned above all follow the Siamese paradigm and are essentially doing appearance matching between the target and search area. Recently, \(M^{2}\)-track [16] proposed a new paradigm, namely the motion-centric paradigm. First, it predicts the relative target motion between two consecutive frames. Then it refines the prediction by aggregating the two point clouds with motion compensation to create a denser point cloud. \(M^{2}\)-track achieved state-of-the-art performance on multiple benchmarks. In this paper, we adopt \(M^{2}\)-track as the SOT model in our annotation tool.
### _3D Multi-Object Tracking on Point Clouds_
The research community initially analyzed the MOT problem in 2D representations, where we track objects in a sequence of images. For 2D MOT, the same objects across frames are associated by appearance and motion cues. For 3D MOT in point clouds, appearance cues become less discriminative because of the sparsity of point clouds and lack of texture information. In contrast, motion cues become more reliable because the scale of an object remains constant, and there are no abrupt movements. Given these characteristics, most of the 3D MOT work employs the tracking-by-detection paradigm and focus on motion modeling for data association. Due to the rapid development of autonomous driving, a variety of LiDAR-based object detectors have been developed and made available, including representative works such as SECOND [17], PointPillars [18], PointRCNN [19], Part\(A^{2}\) Net [20], CenterNet3D [21], and PVRCNN [22]. For tracking, AB3DMOT [23] proposed a baseline approach that adopts the 3D Kalman Filter as the motion model and uses the Hungarian algorithm as the matching strategy. Follow-up work [24, 25] mainly improves upon its data association method and life cycle management strategy. SimpleTrack [26] encapsulates multiple 3D MOT methods (following the tracking-by-detection paradigm) into a unified framework with four configurable modules, namely detection result pre-processing, data association, motion modeling, and life cycle management. Given its flexibility and simplicity, we employ SimpleTrack as part of our annotation pipeline.
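As an illustration of the tracking-by-detection paradigm described above, the following sketch shows a single association step with a constant-velocity prediction and Hungarian matching. For brevity it uses center distance as the matching cost, whereas AB3DMOT uses a full Kalman filter and 3D IoU; the inputs are toy values.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centers, track_velocities, det_centers, dist_thresh=2.0):
    """One association step of a tracking-by-detection MOT loop:
    constant-velocity prediction followed by Hungarian matching."""
    pred = track_centers + track_velocities                 # predict next centers
    cost = np.linalg.norm(pred[:, None, :] - det_centers[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < dist_thresh]
    unmatched_tracks = set(range(len(track_centers))) - {r for r, _ in matches}
    unmatched_dets = set(range(len(det_centers))) - {c for _, c in matches}
    return matches, unmatched_tracks, unmatched_dets

tracks = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 0.0]])
vels   = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dets   = np.array([[1.1, 0.1, 0.0], [10.2, 6.1, 0.0], [30.0, 30.0, 0.0]])
print(associate(tracks, vels, dets))
```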
### _Smart Annotation Tools for Point Clouds_
3D BAT [27] is one of the earliest open-sourced point cloud annotation tools--a web-based application with multi-model data. The annotations for point clouds are automatically
projected to different camera views. It also supports interpolation between keyframes to accelerate sequence annotation. LATTE [5] further implemented sensor fusion, smart one-click annotation, and integrated tracking into sequence annotation. LATTE used a clustering algorithm to achieve the one-click annotation to find all points for the target object, estimate the 2D bounding box (BBox), and convert to 3D BBox coordinates based on camera-LIDAR calibration. In addition, LATTE utilized the Kalman Filter algorithm for tracking objects. SAnE [28] improves one-click annotation by employing a denoising pointwise segmentation strategy that assigns a noise penalty for all boundary locations to better separate nearby objects. SAnE also proposed an improved tracking algorithm, namely a guided tracking algorithm. It consists of 3 stages: greedy search, backtracking, and refinement.

| Dataset | With crowded pedestrians | Include all traffic participants | Release full dataset with labels | Object detection evaluation | Annotation method |
| --- | --- | --- | --- | --- | --- |
| Ko-PER | ✗ | ✓ | ✓ | ✗ | Not mentioned |
| PedX | ✓ | ✗ | ✓ | ✗ | 3D model fitting for auto 3D labeling from 2D segmentation and joint location labels |
| IPS 300+ | ✓ | ✓ | ✗ | ✓ | By Datatang Co. Ltd. |
| LUMPI | ✓ | ✓ | ✗ | ✗ | 1) Foreground segmentation using DBSCAN algorithm; 2) Annotation propagation using Kalman Filter; 3) 3D pose correction using ICP; 4) Costly human refinement |
| FLORIDA (Ours) | ✓ | ✓ | ✓ | ✓ | 3) Pedestrian orientation auto-correction from moving direction; 4) Human refinement in batch mode |

TABLE I: Comparison of FLORIDA with other popular infrastructure LiDAR benchmarks.
SUSTechPOINTS [6] is one of the best open-source point cloud annotation tools to the best of our knowledge. It has a handy interface for adjusting BBox in single frame or batch mode. Moreover, it implements a collection of functions, such as one-click annotation and annotation propagation. It employs a heuristic registration algorithm that calculates the relative geometric transformation between the target in consecutive frames to propagate the current BBox to subsequent frames. Unfortunately, the registration performance is imperfect, requiring a certain amount of effort in label refinement and correction. We built our work upon SUSTech POINTS with an improved annotation propagation algorithm. In addition, we extend its functions to include auto-labeling using an MOT tracker, orientation adjustment, trajectory smoothing, etc.
### _Point Cloud Benchmark Datasets_
Many point cloud benchmark datasets focus on autonomous driving applications in which the LiDAR is mounted on moving vehicles. For example, KITTI--a point cloud dataset released in 2013 and now a pioneering vision benchmark [4]--is widely used for evaluating perception tasks. Later, more autonomous driving datasets appeared, comprising more diverse scenes, larger sizes, and more fine-grained annotations. Argoverse [29], Nuscenes [3], and the Waymo Open Dataset [2] are some of the most well-known datasets.
Infrastructure-side point cloud benchmarks, on the other hand, are scarce. To our knowledge, the first infrastructure LiDAR dataset was released in 2014 and is referred to as the Ko-PER Intersection dataset [30]. It deploys 14 SICK LDMRS 8-layer research laser scanners at a four-way intersection in Aschaffenburg, Germany. Later in 2021, IPS300+ [31] released a high-density intersection dataset. It installs two 80-beam Robosense Ruby-Lite LiDAR scanners at the diagonal of a 4-way intersection. The two LiDARs are calibrated and cover the entire intersection. However, annotations for only 600 frames are made available. Recently, LUMPI [32] proposed a multi-perspective intersection dataset in Hanover, Germany. It deployed three cameras and five LiDARs to cover the intersection with dense point clouds, and a total of 90K point clouds has been released. However, their labels are unavailable as of November 21, 2022.
Our proposed dataset is collected at a busy intersection near the University of Florida campus, comprising crowded vehicles, pedestrians, and a parking lot. We captured sequences covering diverse traffic behaviors such as pedestrian jaywalking, near-misses, vehicles lining up on the crosswalk, causing pedestrians to take detours, and people exiting vehicles while waiting at a red light. We demonstrated through our FLORIDA dataset that a single LiDAR can sufficiently capture most of the intersection traffic, except for a 5-meter blind spot beneath the LiDAR. And our semi-automated annotation algorithm performs well under the LiDAR-only setting.
## III Methodology
This section introduces the collected dataset, detailed statistics regarding performance and a qualitative comparison with other infrastructure LiDAR datasets. Then, we present an overview of the proposed semi-automated annotation scheme, followed by an explanation of the utilized deep-learning-based models. Finally, we discuss the pre- and post-processing algorithms designed to further improve annotation speed.
### _The FLORIDA Dataset_
#### III-A1 Data collection
We collected the dataset at a busy intersection--West University Avenue & Northwest 17th Street, Gainesville, FL--near the campus of the University of Florida. The LiDAR is mounted on a 5-meter post at the intersection. We used a Velodyne VLP-32C LiDAR with 32 channels, a 200-meter range, \(+15^{\circ}\) to \(-25^{\circ}\) vertical field of view (FOV), and 360\({}^{\circ}\) horizontal FOV. We manually selected 11 sequences, some of which included crowded pedestrians, abnormal behaviors, or near-misses. Henceforth, we refer to our dataset as **FLORIDA**--**F**lorida **L**iDAR-based **O**bject **R**ecognition and **I**ntelligent **D**ata **A**nnotation.
#### III-A2 Dataset statistics and characteristics
As shown in Table II and Figure 2 (c), we first summarize the statistics for all categories and display the orientation histogram of vehicles and pedestrians, respectively. The orientation histogram indicates that most vehicles move in a 45\({}^{\circ}\)/225\({}^{\circ}\) direction, corresponding to West University Avenue. The 165\({}^{\circ}\) direction comes from a parking lot where most vehicles park in parallel. For pedestrians, most of them cross the streets in the zebra-crossings, resulting in spikes in 45\({}^{\circ}\), 225\({}^{\circ}\), 135\({}^{\circ}\), and 325\({}^{\circ}\) directions. As it is difficult to determine the orientations of pedestrians from the point cloud when they are waiting to cross the intersection, we utilized a heuristic approach, which we detail in Section III-E1. As depicted in Figure 3, the FLORIDA dataset captures crowded pedestrians and vehicles and several abnormal behaviors, which is beneficial for training and evaluating object detectors and trackers under challenging conditions, such as scenes with crowds with numerous occlusions.
The comparisons with some of the popular infrastructure LiDAR datasets are summarized in Table I. In brief, previous datasets either did not release the full dataset labels or did not report the evaluation performance, such as object detection. To the best of our knowledge, FLORIDA is the first dataset to include crowded pedestrians and diverse traffic participants which will be fully released and can be openly evaluated for object detection. In addition, we compared the annotation
approaches of all datasets. LUMPI is most comparable because they only annotate on LiDAR point clouds. Compared to LUMPI, our annotation approach employs trained deep-learning models that improve the accuracy and robustness of auto-labeling and provide more assistance to human annotators for post-correction and refinement.
### _Overview of the Semi-automated Annotation Algorithm_
A common strategy for annotating a new dataset is to annotate object instances one by one, from their first appearance to their exit from the scene. Typically, an annotation tool can leverage a tracking algorithm to track and annotate an object across multiple frames. In a similar vein, we propose to exploit a state-of-the-art deep-learning SOT tracker [16] to propagate annotations: we begin by providing the initial annotation of an object with a proper one-click function and then propagating the annotation across subsequent frames (e.g., up to 100 frames) using the SOT tracker. Of course, the auto-generated BBoxes might be imperfect; therefore, manual annotator refinement is necessary. Following [6], we leverage the function of batch-mode editing in which adjusting keyframes' annotations could trigger the interpolation of intermediate frames, which is proven to reduce refinement effort.
We further automate the annotation using a trained MOT, which generates tracklets for all objects. In contrast to SOT, MOT does not require first-frame annotation for each object. The MOT is iteratively trained. In the beginning, it is trained on one fully annotated sequence. As more sequences are annotated, its training set is expanded such that the detection and tracking accuracy improves accordingly. Nevertheless, the MOT algorithm will still miss some objects or provide imprecise annotations. The annotator will then check each tracklet and make necessary adjustments. Additionally, the SOT can be utilized as a remedy for objects missed by MOT.
The aforementioned annotation scheme is not limited to static LiDAR settings; it can also be used to speed up the annotation for onboard LiDAR datasets. Specific to the static LiDAR setting, we developed a collection of pre- and post-processing algorithms: ground height estimation, trajectory smoothing, orientation post-processing, and static object BBox averaging. We now describe these methods in detail.
### _Annotation Propagation by Single Object Tracking_
Given a point cloud sequence and a BBox of an object in the first frame as the input of a 3D SOT tracker, we aim to locate the same object in a sequence of frames. Specifically, given a point cloud sequence \(\{P^{t}\}_{t=1}^{T}\) of frame length \(T\) and the 3D BBox \(B^{1}\in\mathbb{R}^{7}\) of one object, parameterized by its location in 3D coordinates, height, length, width, and heading direction, at the first frame, a SOT tracker aims to find all 3D BBoxes of the object in subsequent frames denoted as \(\{B^{t}\}_{t=2}^{T}\).
In our setting, the annotator provides the initial BBox annotation, followed by the trained SOT tracker locating the object frame-by-frame. Because objects move continuously in 3D, their locations in two consecutive frames are close; therefore, the search area can be \(K\) meters around the object's last location. \(K\) is a hyper-parameter determined by the object's velocity and the frame rate of the LiDAR data. Following [16] and taking our static LiDAR setting into account, we empirically set \(K\) to \(2\) for vehicles and \(0.5\) for pedestrians.
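In practice, the propagation step amounts to a short loop over frames in which the point cloud is cropped to the search region around the last box before the tracker regresses the new box. The following minimal sketch illustrates this; the `tracker.step` call is a hypothetical stand-in for one update of the SOT model, and the box format and radius values simply follow the description above.

```python
import numpy as np

# Class-dependent search radius in meters, following the values discussed above.
SEARCH_RADIUS = {"vehicle": 2.0, "pedestrian": 0.5}


def crop_search_region(points, center, radius):
    """Keep only points within `radius` meters (in x-y) of the last box center."""
    dist = np.linalg.norm(points[:, :2] - center[:2], axis=1)
    return points[dist < radius]


def propagate_annotation(frames, init_box, cls, tracker, n_frames=100):
    """Propagate a first-frame box (x, y, z, l, w, h, yaw) through up to `n_frames`.

    `frames` is a list of (N_i, 3) point clouds; `tracker.step` is a hypothetical
    single-step API that regresses the new box from the previous box and the
    cropped search-region points.
    """
    boxes = [np.asarray(init_box, float)]
    radius = SEARCH_RADIUS[cls]
    for t in range(1, min(n_frames, len(frames))):
        prev_box = boxes[-1]
        region = crop_search_region(frames[t], prev_box[:3], radius)
        boxes.append(np.asarray(tracker.step(prev_box, region), float))
    return boxes
```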
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline Class & Vehicle & Pedestrian & Cyclist & Motorcycle & Bus & Truck \\ \hline The total number of instances & 143,941 & 80,220 & 999 & 17,397 & 4,170 & 2,640 \\ The average number of instances per frame & 21.81 & 12.15 & 0.15 & 2.64 & 0.63 & 0.40 \\ The maximal number of instances per frame & 38 & 34 & 2 & 7 & 3 & 3 \\ \hline \hline \end{tabular}
\end{table} TABLE II: The statistics of FLORIDA
Fig. 2: We collect crowded pedestrian sequences from a LiDAR installed at a busy intersection — West University Avenue & Northwest 17th Street, Gainesville, FL, near the campus of the University of Florida. (c) is the orientation histogram of vehicles and pedestrians, and the count numbers in the images are measured in thousands.
To regress the position offset of the object between two consecutive frames, we resort to the state-of-the-art 3D SOT model--\(M^{2}\)-Track [16]. It proposes a two-stage motion-centric paradigm in which the motion transformation between the same object in two frames is first regressed, followed by a refinement based on the merged point cloud of the two frames. In detail, \(M^{2}\)-Track initially segments the target points in the two frames using a trained semantic segmentation network. Then a motion vector \(M=(\delta x,\delta y,\delta z,\delta\theta)\) is regressed by a motion estimation network, where \(\delta x,\delta y,\delta z\) represent the location offsets and \(\delta\theta\) represents the heading-angle offset. Adding the motion vector \(M\) to \(B^{t-1}\) gives a coarsely predicted BBox \(\hat{B}^{t}\). In the second stage, \(M^{2}\)-Track refines \(\hat{B}^{t}\) by regressing a small relative offset, producing the final prediction \(B^{t}\). Specifically, \(M^{2}\)-Track aggregates the previous-frame point cloud \(P^{t-1}\) into the current point cloud \(P^{t}\), compensating for motion using the predicted \(M\) and resulting in a denser point cloud \(\hat{P}^{t}\). Another regression network is applied to \(\hat{P}^{t}\) to produce the refined BBox \(B^{t}\).

We integrate the SOT model into SUSTechPOINTS [6]--a popular open-source annotation tool for point clouds--by replacing the original auto-labeling function of SUSTechPOINTS with \(M^{2}\)-Track, resulting in a function that is more robust to occlusion and sparsity and more accurate for deformable objects such as pedestrians. We use the SOT model to propagate a selected BBox to the subsequent \(N\) frames, which returns \(N\) BBox predictions and displays them in the annotation tool. The parameter \(N\) can be changed by the annotator depending on the scenario. For example, more adjustments are necessary under heavy occlusion or in a crowded area, so a smaller value of \(N\) is more practical in such situations. By default, we set it to a fixed number (i.e., \(N=100\)) for typical cases. The annotator can switch to batch-processing mode, where adjusting the keyframes triggers interpolation of the middle frames, which speeds up the annotation. If the object is still visible after \(N\) frames, one can adjust the last-frame annotation and continue propagating it to subsequent frames.

To harmonize the SOT algorithm and interpolation, we mark one annotation out of every ten as a keyframe, so that most of the time the annotator can quickly refine the annotation by adjusting keyframes alone. Adjusting keyframes may be insufficient for turning vehicles, as the orientation change is non-linear. In this case, we can refine annotations that are not keyframes; once refined, a non-keyframe becomes a keyframe and accordingly triggers the interpolation.
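For completeness, the batch-mode interpolation between two adjusted keyframes can be sketched as follows; the (x, y, z, l, w, h, yaw) box layout matches the 7-parameter BBox defined above, while the linear interpolation with shortest-arc handling of the yaw angle is an illustrative simplification of the tool's behavior.

```python
import numpy as np


def interpolate_between_keyframes(box_a, box_b, n_mid):
    """Linearly interpolate `n_mid` boxes between two keyframe boxes.

    Boxes are (x, y, z, l, w, h, yaw); the yaw is interpolated along the
    shorter arc so that, e.g., 350 deg -> 10 deg does not sweep 340 deg.
    """
    box_a, box_b = np.asarray(box_a, float), np.asarray(box_b, float)
    dyaw = (box_b[6] - box_a[6] + np.pi) % (2 * np.pi) - np.pi
    out = []
    for i in range(1, n_mid + 1):
        w = i / (n_mid + 1)
        box = (1 - w) * box_a + w * box_b
        box[6] = box_a[6] + w * dyaw  # overwrite the naive yaw interpolation
        out.append(box)
    return out
```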
### _Auto-annotation by Multi-Object Tracker_
Given a point cloud sequence, the goal of MOT is to localize and identify all objects in the sequence. Formally, given a point cloud sequence \(\{P^{t}\}_{t=1}^{T}\), the MOT finds the BBoxes of all objects \(\{\{B_{j}^{t}\}_{j=1}^{N^{t}}\}_{t=1}^{T}\), where \(N^{t}\) is the number of objects in frame \(P^{t}\). Note that \(N^{t}\) varies over frames, as objects may enter or exit the scene at different times.
In our annotation scheme, an MOT model automatically generates tracklets for all objects in the scene. To achieve this, we follow a tracking-by-detection paradigm, where we detect all objects via a detector frame-by-frame and then use the tracker to associate boxes for the same object across frames.
We apply CenterPoint [21] for multi-object detection. It detects objects as key points and then regresses their other attributes, namely 3D location, 3D size, and 1D heading
Fig. 3: (a) A sample of a crowded scene. (b) A vehicle stopped at a pedestrian crosswalk. (c) People exiting a vehicle stopped at a red light. (d) A cyclist and a pedestrian passing through a small gap between two cars.
orientation. CenterPoint consists of a standard 3D backbone, a center heatmap head, and regression heads. The 3D backbone extracts bird's-eye-view (BEV) feature maps that are fed to the heads to generate predictions. The heatmap head produces keypoint heatmaps, where each heatmap peak corresponds to a predicted object center, and the regression heads regress other properties of the predicted key points, such as BBox sizes and orientations. We follow OpenPCDet's CenterPoint implementation; readers can find more details about the model architecture, training strategy, and implementation in CenterPoint [21].
Given the predicted boxes in each frame, the next stage associates the BBoxes of the same object across frames, producing object tracklets. To this end, we adopt SimpleTrack [26], a top-performing multi-object tracking approach. Following the "tracking-by-detection" paradigm, SimpleTrack unifies 3D MOT methods into a general framework consisting of four main components: (i) detection pre-processing, (ii) BBox association across frames, (iii) object motion modeling, and (iv) tracklet life-cycle management. Given multiple options in each component, we adopt those matching our dataset's characteristics.

The pre-processing module turns the raw detection predictions into cleaner input for the tracker. Following SimpleTrack, we apply a stricter non-maximum suppression (NMS) to the raw detections to preserve recall while improving precision: it effectively removes low-confidence detections that overlap with others while preserving low-confidence detections likely coming from sparse or occluded regions. For motion modeling, we adopt the Kalman Filter, which predicts an object's location in the next frame with increasing precision as the track accumulates observations. The Kalman Filter performs exceptionally well on infrastructure datasets because the LiDAR is stationary, resulting in longer tracklets and no abrupt motions. The predicted location from the motion model is then used as a proposal to associate with detections in the next frame. For BBox association across frames, we view the problem as bipartite matching and employ the well-known Hungarian algorithm [33]. Finally, as objects enter and exit the LiDAR's field of view at different times, the life cycle of tracklets needs to be carefully maintained. Following SimpleTrack, we adopt the "two-stage association" strategy, where the detection score threshold is higher in the first stage than in the second. The first stage ensures tracking precision, while the second extends the life of tracklets in occluded or sparse regions, thereby reducing the number of ID switches.
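To make the association step concrete, a minimal sketch of one frame-to-frame matching round is given below. It uses the bird's-eye-view center distance as the cost and SciPy's Hungarian solver; the gating threshold and the choice of cost are illustrative assumptions rather than SimpleTrack's exact configuration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate(predicted_boxes, detections, max_center_dist=2.0):
    """Match Kalman-predicted track boxes to current-frame detections.

    Both inputs are (N, 7) arrays of (x, y, z, l, w, h, yaw). The cost is the
    bird's-eye-view center distance; pairs above `max_center_dist` are gated out.
    Returns matched (track, detection) index pairs and the unmatched index lists.
    """
    predicted_boxes = np.asarray(predicted_boxes, float).reshape(-1, 7)
    detections = np.asarray(detections, float).reshape(-1, 7)
    if len(predicted_boxes) == 0 or len(detections) == 0:
        return [], list(range(len(predicted_boxes))), list(range(len(detections)))

    cost = np.linalg.norm(
        predicted_boxes[:, None, :2] - detections[None, :, :2], axis=-1
    )
    rows, cols = linear_sum_assignment(cost)

    matches = []
    unmatched_tracks = set(range(len(predicted_boxes)))
    unmatched_dets = set(range(len(detections)))
    for r, c in zip(rows, cols):
        if cost[r, c] < max_center_dist:  # gate implausible assignments
            matches.append((r, c))
            unmatched_tracks.discard(r)
            unmatched_dets.discard(c)
    return matches, sorted(unmatched_tracks), sorted(unmatched_dets)
```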
We further post-process the generated tracks with heuristic rules. First, we remove the tracklets that are too short because they are likely to be false positives. Second, we filter the tracklets whose speed is outside a reasonable range. For instance, the typical walking speed of a pedestrian is less than 2 meters per second. Therefore, predicted pedestrian tracklets with a higher average speed are more likely to be cyclists or motorcycles. Finally, because our dataset contains many parked cars, the bounding boxes of such tracklets vary slightly from frame to frame. Therefore, we average them to generate more accurate annotations.
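These heuristics can be summarized in a short post-processing pass over each tracklet, sketched below; the thresholds and the assumed 10 Hz frame rate are illustrative values rather than the exact numbers used for the dataset.

```python
import numpy as np

FRAME_RATE = 10.0          # assumed LiDAR frame rate in Hz
MIN_TRACK_LEN = 10         # drop very short tracklets (likely false positives)
MAX_PED_SPEED = 2.0        # m/s, typical walking speed upper bound
STATIC_STD_THRESH = 0.1    # m, below this the object is treated as parked


def postprocess_tracklet(boxes, label):
    """Apply the heuristic filters described above to one tracklet.

    `boxes` is an (N, 7) array of (x, y, z, l, w, h, yaw) over N frames.
    Returns the cleaned boxes, or None if the tracklet is rejected.
    """
    boxes = np.asarray(boxes, float)
    if len(boxes) < MIN_TRACK_LEN:
        return None
    centers = boxes[:, :2]
    step = np.linalg.norm(np.diff(centers, axis=0), axis=1)
    speed = step.mean() * FRAME_RATE
    if label == "pedestrian" and speed > MAX_PED_SPEED:
        return None  # flag for manual re-labeling (likely a cyclist/motorcycle)
    # Parked vehicles: average the jittering boxes into one stable annotation.
    if label == "vehicle" and centers.std(axis=0).max() < STATIC_STD_THRESH:
        boxes[:] = boxes.mean(axis=0)
    return boxes
```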
There are cases where the MOT model makes mistakes; we observe missing detections, incorrect detections, track ID switches, reversed orientations, etc. We therefore develop functions to assist annotators in quickly correcting these errors. For missing detections, we leverage the SOT model to propagate annotations and complete the tracklet. For incorrect detections, annotators can delete all annotations for a given ID. To handle track ID switches, annotators can correct the ID where the switch happens and synchronize the change to the following frames. Lastly, they can correct a reversed orientation with a single click or via batch correction in batch mode.
### _Pre-processing and Post-processing Algorithms_
#### Iii-E1 Trajectory smoothing and orientation post-processing
When annotating pedestrians, we find it particularly challenging to determine their orientation from a single frame; for example, the point cloud on a pedestrian can be very sparse and incomplete. Often, the annotator has to examine the frames surrounding the current one to determine a pedestrian's orientation from their movement. Therefore, we developed an orientation post-processing algorithm that imitates the annotator's behavior and significantly reduces pedestrian annotation time. Specifically, after annotating a sequence, we first smooth the trajectory using a cubic smoothing spline [34] and then set the orientation at each timestamp to the pedestrian's moving direction. Setting the orientation of stationary pedestrians is trickier, so we adopt a heuristic strategy: if the pedestrian starts moving later in the sequence, we set the orientation to match the direction of that movement; otherwise, the orientation remains the pedestrian's initial orientation.
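A minimal sketch of this post-processing step is given below, using SciPy's cubic smoothing spline; the smoothing factor and the speed threshold that decides whether a pedestrian counts as stationary are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline


def smooth_and_orient(times, xy, moving_thresh=0.3):
    """Smooth a pedestrian trajectory and derive a per-frame orientation.

    `times` is (N,), strictly increasing; `xy` is (N, 2). A cubic smoothing
    spline is fit per coordinate, and the heading is the direction of motion.
    Nearly stationary frames inherit the next moving heading if the pedestrian
    starts moving later; otherwise they keep the first derived heading.
    """
    smooth = len(times) * 0.01  # illustrative smoothing factor
    sx = UnivariateSpline(times, xy[:, 0], k=3, s=smooth)
    sy = UnivariateSpline(times, xy[:, 1], k=3, s=smooth)
    vx, vy = sx.derivative()(times), sy.derivative()(times)
    speed = np.hypot(vx, vy)
    yaw = np.arctan2(vy, vx)
    moving = speed > moving_thresh
    for i in np.where(~moving)[0]:
        later = np.where(moving[i:])[0]
        yaw[i] = yaw[i + later[0]] if len(later) else yaw[0]
    return np.stack([sx(times), sy(times)], axis=1), yaw
```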
#### Iii-E2 Ground height estimation
For small objects (i.e., pedestrians and cyclists) that are too close to or too far from the LiDAR's center, there are only a few points on each object, and the object's \(z\) value, i.e., its height, is ambiguous. Ground height information is helpful in such cases, as it allows the SOT and MOT algorithms to better place objects on the ground. To obtain the ground height, we manually segment the ground points using Point Cloud Labeler [35]. Next, the ground points are interpolated onto a grid using the LinearNDInterpolator from the Python SciPy library. However, interpolation does not work well for distant regions with sparse data points. To cover these regions, we fit a ground plane to all segmented ground points using the RANSAC algorithm [36]. The interpolation captures subtle differences in ground height, such as the sidewalk being slightly higher than the road, while the ground plane captures the intersection's general elevation and the LiDAR's slight tilt. Note that the ground height of an intersection only needs to be estimated once.
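The combination of grid interpolation and a plane fit can be sketched as follows; the inlier threshold and iteration count are illustrative, and the RANSAC plane fit is written out explicitly rather than relying on a particular library implementation.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator


def fit_ground(ground_points, n_iter=200, inlier_thresh=0.05):
    """Combine grid interpolation with a RANSAC plane fit for ground height.

    `ground_points` is an (N, 3) array of manually segmented ground points.
    Returns a function (x, y) -> z that uses the interpolator where data exist
    and falls back to the fitted plane for distant, sparse regions.
    """
    interp = LinearNDInterpolator(ground_points[:, :2], ground_points[:, 2])

    # Initialize the plane z = a*x + b*y + c with a least-squares fit,
    # then refine it with a simple RANSAC loop.
    A_all = np.c_[ground_points[:, :2], np.ones(len(ground_points))]
    best_plane, *_ = np.linalg.lstsq(A_all, ground_points[:, 2], rcond=None)
    best_inliers = 0
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        sample = ground_points[rng.choice(len(ground_points), 3, replace=False)]
        A = np.c_[sample[:, :2], np.ones(3)]
        try:
            plane = np.linalg.solve(A, sample[:, 2])
        except np.linalg.LinAlgError:
            continue
        resid = np.abs(A_all @ plane - ground_points[:, 2])
        inliers = int((resid < inlier_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, plane

    def ground_height(x, y):
        z = interp(x, y)
        a, b, c = best_plane
        return np.where(np.isnan(z), a * x + b * y + c, z)

    return ground_height
```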
## IV Experiments
In this section, we conduct several experiments to demonstrate the FLORIDA dataset's quality and the annotation scheme's usefulness. We evaluate the speed and accuracy of our developed annotation tool in Section IV-A. Section IV-B presents baseline detection results with a study of the trade-off between annotation quantity and detection accuracy. Section IV-C illustrates the improvement in annotation speed as more data is annotated. Finally, Section IV-D gives an example of a downstream application based on this work.
### _Annotator experiments_
One straightforward way to evaluate the efficiency and accuracy of an annotation tool is to conduct a human annotation experiment. Therefore, we record the annotation time of four annotators and evaluate their annotation quality. As annotators' annotation speed may vary, we conduct the experiment with two trained and two untrained annotators and separately report their average annotation times.
We select a 200-frame LiDAR sequence with crowded vehicles and pedestrians and ask the annotators to label the same sequence using two different annotation tools--SUSTechPOINTS and ours. Ground-truth labels are annotated and double-checked by an experienced annotator; when annotating the ground truth, we verify the annotations by observing a longer sequence. Annotation efficiency is measured by the average annotation time, and annotation accuracy by the average \(F_{1}\) score. We consider an annotated BBox to be accurate if its Intersection over Union (IoU) with a ground-truth BBox exceeds a threshold; as the tightness of BBoxes differs between annotators, we set the IoU threshold to 0.3 in this experiment. Table IV summarizes our tool's annotation efficiency and accuracy against SUSTechPOINTS. It shows that our annotation tool nearly quadruples the annotation speed for both trained and untrained annotators. Meanwhile, our tool's annotation quality is also better, especially for pedestrians. The main reason is that the MOT algorithm provides a template for the annotator, which largely improves the recall for pedestrians. As shown in Figure 5, the annotator using SUSTechPOINTS does not recognize the pedestrians waiting to cross the street: human annotators recognize objects primarily based on motion, whereas the pedestrians in the blue box are stationary. In contrast, our MOT algorithm enables the annotator to recognize and accurately annotate these pedestrians.
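For reference, the \(F_{1}\) score at a fixed IoU threshold boils down to a one-to-one matching between annotated and ground-truth boxes; a minimal sketch is given below, where `iou_fn` is an assumed helper returning the IoU of two 3D boxes and the greedy matching is an illustrative simplification.

```python
import numpy as np


def f1_at_iou(pred_boxes, gt_boxes, iou_fn, thresh=0.3):
    """Greedy one-to-one matching of annotated vs. ground-truth boxes.

    `iou_fn(a, b)` is an assumed helper returning the 3D IoU of two boxes.
    A prediction counts as a true positive if it matches an unused ground-truth
    box with IoU above `thresh` (0.3 in this experiment).
    """
    used, tp = set(), 0
    for p in pred_boxes:
        ious = [iou_fn(p, g) if j not in used else 0.0
                for j, g in enumerate(gt_boxes)]
        if ious and max(ious) >= thresh:
            tp += 1
            used.add(int(np.argmax(ious)))
    fp = len(pred_boxes) - tp
    fn = len(gt_boxes) - tp
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)
```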
### _3D detection in the FLORIDA Dataset_
In Table III, we show the 3D detection results for four categories on two 600-frame validation sequences. The validation sequences were collected on different days, with no overlap with the training sequences. We use 3D Average Precision (3D AP) and Bird's-Eye-View Average Precision (BEV AP) as evaluation metrics, as defined by the KITTI benchmark [4]. We employ lower Intersection over Union (IoU) thresholds for smaller objects, such as Pedestrian and Motorcycle, and higher IoU thresholds for larger objects, such as Vehicle and Bus. Truck and Cyclist are annotated, but there are insufficient instances for evaluation, so they are omitted from the table. We also investigate how the detector's AP improves as more training data is gradually added. As shown in Figure 6, training on three 600-frame sequences already gives a reasonably good result, whereas the AP improvement from three to six and from six to nine sequences is less significant. Therefore, depending on the detection accuracy requirement of a downstream task, one can vary the amount of annotation.
### _Annotation time reduction with training on more data_
As shown in Figure 4, we record the annotation time for Vehicle and Pedestrian in the FLORIDA dataset to demonstrate the effectiveness of the human-in-the-loop concept. We train a new model for every three 600-frame sequences and use the average number of BBoxes annotated per minute to measure annotation speed. As object density and movement patterns vary across sequences (annotating crowded scenes is more challenging), the resulting data points are noisy; nevertheless, we observe a clear trend of increasing annotation speed. For Vehicle, annotation propagation with SOT and batch-mode interpolation already provide a high annotation speed. For Pedestrian, MOT significantly increases the speed: the MOT model trained on three sequences raises the annotation speed from 27.05 to 72.13 BBoxes/min.
Fig. 4: Annotation speed improvement as the MOT is trained on more sequences. # training sequences = 0 means that we only use SOT for annotation.
Fig. 5: Example of annotations from an untrained annotator using our tool versus SUSTechPOINTS. The top one is annotated using our tool, while the bottom is annotated using SUSTechPOINTS. The pedestrians within the blue box are not recognized by the annotator.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & Vehicle & Pedestrian & Motorcycle & Bus \\ & IoU = 0.7 & IoU = 0.5 & IoU = 0.5 & IoU = 0.7 \\ \hline
3D AP (\%) & 90.66 & 87.44 & 82.32 & 91.99 \\ BEV AP (\%) & 96.62 & 87.76 & 96.96 & 95.17 \\ \hline \hline \end{tabular}
\end{table} TABLE III: 3D object detection result on FLORIDA dataset.
### _Application to traffic monitoring systems_
We demonstrate the effectiveness of the FLORIDA dataset and our semi-automated annotation suite through a downstream use case. We integrate the predicted object trajectories into a web-based visual analytics system, where one can inspect all trajectories in a given time period, obtain count statistics for traffic participants, observe abnormal behaviors, etc. Compared with video sensors, LiDAR performs well regardless of lighting conditions, thereby enhancing the safety of intersections.
## V Conclusion and Future Direction
In this paper, we have developed a semi-automated annotation tool that applies SOT and MOT models integrated with a human-in-the-loop schema to speed up the annotation of challenging intersection LiDAR datasets. We verify its effectiveness by conducting human annotator experiments and reporting qualitative and quantitative object detection results. Our tool supports the creation of achievable and affordable LiDAR-based traffic monitoring systems. In addition, we have introduced a fully annotated infrastructure LiDAR perception dataset--FLORIDA--consisting of diverse and crowded traffic participants and interesting traffic scenarios, to facilitate research on infrastructure-based object perception. In future work, we aim to enrich the dataset with video cameras and further reduce the annotation time given the additional appearance information. We also want to study how transfer learning from our dataset can benefit training a model on new scenes, leading to an even faster setup time for a new intersection or road segment.
## Acknowledgments
This work is supported by NSF CNS 1922782, by the Florida Dept. of Transportation (FDOT) and FDOT District 5. The opinions, findings and conclusions expressed in this publication are those of the author(s) and not necessarily those of the Florida Department of Transportation or the National Science Foundation.
|
2310.06611 | On the minimal mass of thermal dark matter and the viability of
millicharged particles affecting 21cm cosmology | Thermal freeze-out offers an attractive explanation of the dark matter
density free from fine-tuning of initial conditions. For dark matter with a
mass below tens of MeV, photons, electrons, and neutrinos are the only
available direct Standard Model annihilation products. Using a full
three-sector abundance calculation, we determine the minimal mass of dark
matter, allowing for an arbitrary branching into electrons/photons and
neutrinos that is compatible with current cosmological observations. The
analysis takes into account the heat transfer between the various sectors from
annihilation and elastic scattering, representing the first fully
self-consistent analysis that tracks the respective sectors' temperatures. We
thereby provide accurate thermal annihilation cross sections, particularly for
velocity-dependent cases, and deduce the sensitivity of current and upcoming
CMB experiments to MeV thermal dark matter. In the latter context, we also
establish the fine-tuned parameter region where a tiny admixture of neutrinos
in the final states rules in MeV-scale $p$-wave annihilating DM into electrons.
Finally, we show that a sub-% millicharged dark matter with an interaction
strength that interferes with 21 cm cosmology is still allowed when freeze-out
is supplemented with annihilation into neutrinos. For all cases considered, we
provide concrete particle physics models and supplement our findings with a
discussion of other relevant experimental results. | Xiaoyong Chu, Josef Pradler | 2023-10-10T13:28:38Z | http://arxiv.org/abs/2310.06611v2 | # On the minimal mass of thermal dark matter
###### Abstract
Thermal freeze-out offers an attractive explanation of the dark matter density free from fine-tuning of initial conditions. For dark matter with a mass below tens of MeV, photons, electrons, and neutrinos are the only available direct Standard Model annihilation products. Using a full three-sector abundance calculation, we determine the minimal mass of dark matter, allowing for an arbitrary branching into electrons/photons and neutrinos that is compatible with current cosmological observations. The analysis takes into account the heat transfer between the various sectors from annihilation _and_ elastic scattering, representing the first fully self-consistent analysis that tracks the respective sectors' temperatures. We thereby provide accurate thermal annihilation cross sections, particularly for velocity-dependent cases, and deduce the sensitivity of current and upcoming CMB experiments to MeV thermal dark matter. In the latter context, we also establish the fine-tuned parameter region where a tiny admixture of neutrinos in the final states rules in MeV-scale \(p\)-wave annihilating DM into electrons. Finally, we show that a sub-% millicharged dark matter with an interaction strength that interferes with 21 cm cosmology is still allowed when freeze-out is supplemented with annihilation into neutrinos. For all cases considered, we provide concrete particle physics models and supplement our findings with a discussion of other relevant experimental results.
+
Footnote †: preprint: UWThPh 2023-24
## I Introduction
Weakly interacting massive particles (WIMPs) make for attractive dark matter (DM) candidates: the combination of electroweak-scale mass and interactions--with a strength reminiscent of the weak interactions in the Standard Model (SM)--allows for a broad experimental and observational program in their search. In this quest, the parameter space that predicts the correct relic abundance provides an important experimental target. In the early universe, WIMPs come into thermal equilibrium with the SM, and their non-relativistic chemical decoupling allows for an understanding of their density that is free from initial conditions.
The to-date absence of new physics at the electroweak scale, however, has motivated efforts to experimentally probe an increased range in DM mass. Particularly significant advances, both theoretical and experimental, have made it possible to push the sensitivity of direct laboratory detection of dark matter below the 100 MeV mass scale. This recently gained sensitivity is, on the other hand, not easily matched with cosmologically compatible models of thermal DM relics. The number of available annihilation channels for thermally regulating the DM abundance decreases drastically, while the demand on the size of the cross section increases. At the same time, freeze-out happens close to the highly non-trivial epochs of neutrino decoupling and electron-positron annihilation. A thorough calculation of the relic density must hence treat a three-sector system: the electromagnetic sector (photons, electrons), neutrinos, and DM. When this system is appropriately solved, it predicts not only the relic density but also the temperature ratio of neutrinos to photons, which is itself an important and sensitive cosmological observable and is often in tension with observations for MeV-mass DM.
Previous treatments of thermal MeV-scale DM have mostly assumed instantaneous neutrino decoupling [1; 2; 3; 4; 5]. More recently, systematic efforts towards a full treatment of MeV-scale thermal DM decoupling and a study of cosmological observables were made in [6] and in [7; 8]. These works account for the energy transfer from the dark to the SM sector from annihilation with the aim to precisely predict \(N_{\rm eff}\) and/or light element abundances. The main caveat in the above mentioned works is that it had to be assumed that DM stays in thermal equilibrium with either photons or neutrinos, while classical Maxwell-Boltzmann statistics and an annihilation cross section independent of DM mass had to be adopted.
An important step towards a fully self-consistent treatment of the problem that allows for arbitrary branching ratios into neutrinos and photons/electrons was provided by the authors of Ref. [9]. There, the three-sector problem is formulated in such a way that it becomes computationally feasible to solve the coupled set of Boltzmann equations over a great numerical range of reaction rates while ensuring fulfillment of the detailed balancing conditions. Moreover, for the first time, it became possible to include the energy transfer between the sectors originating from the number-conserving _elastic_ scattering processes.
The purpose of this paper is to follow up on the introduced methodology in [9] and provide concrete examples of MeV-scale DM decoupling. For generic WIMP DM, the canonical thermal cross section is \(\langle\sigma v\rangle\approx 3\times 10^{-26}\) cm\({}^{3}\)/s, independent of the WIMP mass [10; 11]. However, as is now well known, this value is subject to changes, particularly for light DM [12; 13]. In this work, we compute the exact value of the required thermally averaged cross section that provides the correct relic density for a set of relative branching ratios of the annihilation channels into neutrinos vs. electrons, for \(s\)- and \(p\)-wave
annihilation cross sections, in a DM mass regime where freeze-out overlaps with neutrino decoupling. By dialing through the branching ratios we further explore the minimal DM mass that is compatible with the cosmic microwave background (CMB) measurements and whether a careful partition of branching ratios allows this constraint to be evaded while simultaneously maintaining the successful big bang nucleosynthesis (BBN) predictions.
In a second part, we apply our methods to millicharged dark states. The existence of such particles can have far-reaching consequences for phenomenology, astrophysics, and cosmology. Because the non-relativistic elastic scattering on baryons is enhanced with relative velocity \(v\) as \(v^{-4}\), such a relic can also induce a cooling of the baryonic gas in the post-recombination Universe, when DM is at its coldest. This has been shown to affect the expected cosmological neutral hydrogen 21 cm absorption signal at the epoch of cosmic dawn [14; 15; 16]. Although the observational status is unclear, with a putative detection of a global absorption feature by EDGES [17] not confirmed by SARAS3 [18], the prospect of probing dark sector properties through 21 cm cosmology is exciting.
The proposal faces very stringent limits from direct detection, fixed-target experiments, and the abundance of satellite galaxies [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29], among other limits, and, jointly, they require millicharged DM to constitute only a sub-percent fraction of the total DM abundance and to be situated in a particular corner of parameter space: MeV-scale mass and a millicharge \(Q\sim 10^{-5}-10^{-4}\).1 Even when all those constraints are satisfied, the cosmological viability remains questionable because the thermalization and guaranteed annihilation of these particles into electrons lowers \(N_{\rm eff}\) to unacceptable values [24]. In this work, we revisit the possibility of millicharged DM by supplying its interactions with an additional annihilation channel into neutrinos. The joint annihilation into electrons and neutrinos harbors the potential of alleviating the \(N_{\rm eff}\) constraint, hence widening or ruling in this possibility. This extension requires a three-sector treatment and is therefore a perfect application of the methodology developed here.
Footnote 1: Additional constraints, in particular cosmological ones [30], arise when the millicharged DM is realized through an ultralight kinetically mixed dark photon. In this work, we do not consider this further possibility.
The paper is organized as follows. In Sec. II we produce the values of the thermal DM cross sections for DM masses below 20 MeV and find the minimum cosmologically compatible mass. In Sec. III we compute the \(N_{\rm eff}\) and nucleosynthesis predictions for millicharged DM when it is supplied with neutrino interactions in the coupling range where DM-baryon interactions affect the global 21 cm signal. We conclude in Sec. IV. Several appendices provide some semi-analytical solutions to the thermal DM cross sections, as well as the calculational results that go into the solution of the Boltzmann equations.
## II Thermal cross sections and \(N_{\rm eff}\)
The objective of this section is to provide a precise value of the thermally averaged cross section in the situation that annihilation occurs during or in the vicinity of the epoch of neutrino decoupling. This necessitates a simultaneous solution for three sectors: the electromagnetic ("EM") one comprised of electrons, positrons and photons, the SM neutrino ("\(\nu\)") one, and the dark matter sector ("\(\phi\)" or "\(\chi\)").2
Footnote 2: If right-handed neutrinos are also light, and DM annihilates into both left- and right-handed neutrinos, our method can be applied to the evolution of a different three-sector scenario: “DM”, “SM”, and “right-handed \(\nu\);” see [31] for a two-sector case.
In a previous work we have developed the methodology [9] for such treatment. It is based on a reformulation that makes detailed balancing numerically manifest for quantum statistics, together with a factorization of neutrino and DM chemical potentials in the respective collision terms. A pictorial representation of the three-sector problem is given in Fig. 1. The dynamics is governed by a number of rates, where the most familiar ones are the rate of weak interactions, \(\Gamma_{\rm weak}\equiv n_{e}G_{F}^{2}T_{\gamma}^{2}\), determining neutrino decoupling and the total DM annihilation rate \(\Gamma_{\rm ann.}\equiv n_{\phi}\langle\sigma_{\rm ann.}v\rangle\) controlling the DM number density and determining the point of chemical decoupling from the SM bath; \(G_{F}\) is the Fermi constant, \(T_{\gamma}\) the photon temperature and \(n_{e/\phi}\) is the electron/DM number
Figure 1: Schematic depiction of the three coupled sectors (EM, \(\nu\), DM “\(\phi\)” or “\(\chi\)”) with the respective variables we solve for: the temperatures \(T_{\gamma}\), \(T_{\nu}\), \(T_{\phi}\) and chemical potentials \(\mu_{\nu}\) and \(\mu_{\phi}\) in the neutrino and DM sector. The sectors are kept in contact by various rates: SM weak interactions \(\Gamma_{\rm weak}\), DM annihilation into SM states \(\Gamma_{\rm ann.}\), energy exchange from annihilation \(\Gamma_{\rm exch.}\), and elastic scattering \(\Gamma_{\rm scatt.}\).
density. A related important rate controlling the temperature evolution of the various sectors is the energy exchange rate \(\Gamma_{\text{exch.},i}\equiv n_{\phi}^{2}\langle\sigma_{\text{ann.},i}v\delta E \rangle/\rho_{i}\) between DM and sector \(i\in\{\text{EM},~{}\nu\}\). Here, the thermal average is weighted by the energy \(\delta E\) that is transferred between the sectors. For example, when \(\Gamma_{\text{weak}}<H\) but \(\Gamma_{\text{exch.},i}>H\), \(T_{\nu}=T_{\gamma}\) remains possible even after the point of neutrino decoupling in a standard cosmology. Tracking the energy exchange is hence of crucial importance in the determination of \(N_{\text{eff}}\). Finally, energy may also be transferred by number-conserving _elastic_ scatterings with a rate given by \(\Gamma_{\text{scatt.},i}\equiv n_{\phi}n_{i}\langle\sigma_{\text{scatt.}}^{\phi,i}v\,\delta E\rangle/\rho_{i}\), where the typical energy transfer is of the order of the temperature difference between the sectors. We are able to include this channel across the enormous dynamical range in particle densities and rate efficiencies. We refer the reader to [9] for a detailed discussion of the rates and the ensuing sequence of DM decoupling.
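For orientation, the interplay between these rates and the Hubble expansion can be illustrated with a simple order-of-magnitude estimate of where the weak rate drops below \(H\); the sketch below uses textbook simplifications (radiation domination, a relativistic electron density) and is not a substitute for the full Boltzmann treatment.

```python
import numpy as np

# Natural units (MeV).
G_F = 1.166e-11          # Fermi constant in MeV^-2
M_PL = 1.221e22          # Planck mass in MeV
ZETA3 = 1.2020569


def hubble(T, g_star=10.75):
    """Radiation-dominated Hubble rate H ~ 1.66 sqrt(g_*) T^2 / M_Pl."""
    return 1.66 * np.sqrt(g_star) * T**2 / M_PL


def gamma_weak(T):
    """Weak rate ~ n_e G_F^2 T^2 with a relativistic n_e = 3 zeta(3) T^3 / (2 pi^2)."""
    n_e = 3 * ZETA3 * T**3 / (2 * np.pi**2)
    return n_e * G_F**2 * T**2


# Scan for the temperature where the two rates cross.
T = np.linspace(0.5, 10.0, 2000)  # MeV
idx = np.argmin(np.abs(gamma_weak(T) - hubble(T)))
print(f"Gamma_weak = H at T ~ {T[idx]:.1f} MeV")  # roughly a few MeV
```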
We now parameterize the annihilation cross section times the Moller velocity \(v_{M}\) in the usual non-relativistic expansion of relative velocity \(v_{\text{rel}}\),3
Footnote 3: For velocity-dependent annihilation it can make a difference at order \(v_{\text{rel}}^{2}\) if a velocity expansion of the Lorentz-invariant product \(\sigma v_{M}\) or of \(\sigma v_{\text{rel}}\) is considered; see [32] for a concrete \(p\)-wave example.
\[\sigma_{\text{ann}}v_{M}=a+bv_{\text{rel}}^{2}+\mathcal{O}(v_{\text{rel}}^{4}) \tag{1}\]
A non-relativistic thermal average then yields \(\langle\sigma v_{M}\rangle=a+6b/x+\dots\) where we have used that \(\langle v_{\text{rel}}^{2}\rangle\simeq 6T_{\phi}/m_{\phi}=6/x\). For orientation, the canonical values for a Majorana fermion with mass above 10 GeV are \(a\approx 2\times 10^{-26}\,\text{cm}^{3}/\text{s}=1.7\times 10^{-15}\,\text{ MeV}^{-2}\) for pure \(s\)-wave annihilation (\(b=0\)) and \(b\approx 1.5\times 10^{-25}\,\text{cm}^{3}/\text{s}=1.3\times 10^{-14}\,\text{ MeV}^{-2}\) for pure \(p\)-wave annihilation (\(a=0\)).
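For illustration, the familiar single-sector freeze-out estimate based on this expansion takes only a few lines; the sketch below uses the standard iterative formula for \(x_{\rm f.o.}\) and is meant to show the parametrics only, not to reproduce the three-sector calculation performed in this work.

```python
import numpy as np

M_PL = 1.221e22  # Planck mass in MeV


def sigma_v(x, a, b):
    """Non-relativistic thermal average <sigma v> = a + 6 b / x (in MeV^-2)."""
    return a + 6.0 * b / x


def freeze_out_x(m_dm, a, b, g_dm=2, g_star=10.75):
    """Iteratively solve the textbook estimate for x_f.o. = m_DM / T_f.o."""
    x = 20.0
    for _ in range(50):
        lam = 0.038 * (g_dm / np.sqrt(g_star)) * M_PL * m_dm * sigma_v(x, a, b)
        x = np.log(lam) - 0.5 * np.log(max(np.log(lam), 1.0))
    return x


# Example: 10 MeV DM with the canonical s-wave value quoted above,
# a ~ 1.7e-15 MeV^-2 (about 2e-26 cm^3/s).
x_f = freeze_out_x(10.0, a=1.7e-15, b=0.0)
print(f"x_f.o. ~ {x_f:.1f}")  # roughly 14 for these parameters
```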
### Models and existing constraints
As benchmark cases, we consider a MeV complex scalar DM particle \(\phi\), and a Dirac fermion \(\chi\), interacting with charged leptons and neutrinos via a heavy mediator. For both, \(s\)- and \(p\)-wave annihilation cases are then easily constructed. For example, for a (pseudo-)scalar mediator, the DM annihilation of a complex scalar \(\phi\) is then \(s\)-wave, while for a vector mediator it is \(p\)-wave. For \(\chi\) a vector mediator can induce both, \(s\)- and \(p\)-wave annihilation.
Below, we discuss the \(s\)-wave annihilation of a complex scalar DM candidate in detail; a detailed discussion of the \(p\)-wave counterpart is given in our companion paper [9]. An explicit example of an \(s\)- and \(p\)-wave annihilating fermionic DM particle \(\chi\) is provided in App. E. Generally speaking, the results carry over to other cases in the following sense: the DM degrees of freedom, \(g_{\text{DM}}\), only enter logarithmically in the calculation of the freeze-out temperature, so the canonical thermal cross section is essentially the same for complex scalar and Dirac fermion DM. In contrast, the self-conjugate cases, i.e., real scalar and Majorana fermion DM, require approximately half the annihilation cross section of the non-self-conjugate cases to obtain the same final relic abundance; see App. A. Therefore, our Fig. 2 generalizes to these other DM candidates with only mild losses in accuracy; see below where we compare the complex scalar and Dirac fermion cases.
A simple realization for \(s\)-wave annihilation of complex scalar DM is the exchange of an intermediate real heavy pseudoscalar \(A\) via the renormalizable interactions terms
\[\mathcal{L}_{\text{int}}^{(A)}=-i\sum_{l}y_{l}A(\bar{l}\gamma_{5}l)-\mu_{A}A( \phi^{*}\phi)\,, \tag{2}\]
where the sum runs over leptons \(l=e,\nu_{e},\dots\).4 This leads to \(s\)-wave DM annihilation into neutrinos and electrons, and a velocity-dependent elastic scattering between DM and SM fermions. At low energies the interaction is described by the effective mass dimension-5 operator \(i\eta_{l}(\phi^{*}\phi)(\bar{l}\gamma^{5}l)/\Lambda\) with \(\Lambda\equiv m_{A}^{2}/\mu_{A}\); the Yukawa couplings \(y_{l}\) and the trilinear coupling \(\mu_{A}\) are taken as real.
Footnote 4: See [33; 34] for concrete examples of pseudoscalar portals to a dark sector. Note that throughout this work SM neutrinos are assumed to be of Majorana nature for pseudoscalar interactions. Additional pseudoscalar interactions of neutrinos can be induced via mixing with sterile neutrinos, see e.g. [35; 36].
The interactions in (2) give rise to annihilation processes such as \(\phi^{*}\phi\leftrightarrow e^{+}e^{-}\), \(\phi^{*}\phi\leftrightarrow\bar{\nu}\nu\) and elastic scattering processes such as \(\phi e\leftrightarrow\phi e\), \(\phi\nu\leftrightarrow\phi\nu\). Varying the ratio \(y_{\nu}/y_{e}\) then amounts to entertaining different combinations of branching ratios of annihilation into neutrinos, BR\({}_{\nu}\), and into electrons and/or photons, BR\({}_{\text{EM}}\) (here electrons). The tree-level \(\phi\phi^{*}\) annihilation cross section into electrons in the non-relativistic limit reads
\[\sigma_{e}v_{M}\simeq\frac{y_{e}^{2}}{4\pi\Lambda^{2}}\left(1-\frac{m_{e}^{2}}{m_{\phi}^{2}}\right)^{1/2},\]

with an analogous expression for the neutrino channel governed by \(y_{\nu}\).
For the DM mass range of interest, CMB limits on energy injection around recombination constrain the \(s\)-wave cross section into the EM sector to \(a\times{\rm BR}_{\rm EM}\lesssim\mathcal{O}(10^{-30})\) cm\({}^{3}\)/s [37], where BR\({}_{\rm EM}\) is the annihilation branching ratio into the EM sector; see also [38; 25; 39] for other indirect detection limits and prospects thereof. In practice, the CMB constraint demands that a thermal \(s\)-wave freeze-out in our considered mass range must obey BR\({}_{\rm EM}\leq 10^{-4}\), although we will consider arbitrary values of BR\({}_{\rm EM}\) for completeness below. In contrast, due to the velocity suppression factor, \(p\)-wave freeze-out with \(b\times{\rm BR}_{\rm EM}\sim 10^{-25}\) cm\({}^{3}\)/s is currently not constrained by the CMB and low-redshift indirect searches; see, e.g., [40; 41]. Constraints on DM annihilation into neutrinos are significantly weaker: for the considered DM mass range below 20 MeV, values of \(a\times{\rm BR}_{\nu}\lesssim(1\)-\(100)\times 10^{-24}\) cm\({}^{3}\)/s are allowed by the combination of Borexino, KamLAND and Super-Kamiokande [42]. This leaves enough parameter space for an \(s\)-wave thermal freeze-out, let alone a \(p\)-wave one.
The parameter regions relevant for DM thermal freeze-out are also easily compatible with other existing bounds if one takes the mediator mass to be a few to tens of GeV. Taking the electron-mediator interaction described by Eq. (2) as an example, intensity-frontier and neutrino experiments provide the strongest constraints on such a GeV-scale mediator, limiting \(y_{e}/m_{A}\) to be below \(10^{-3}\)-\(10^{-4}\) GeV\({}^{-1}\), e.g., by searching for di-lepton resonances and/or neutrino-electron scattering signatures [43; 44]. On the other hand, the DM-mediator interaction in (2) is best bounded by DM self-scattering. Requiring \(\sigma_{\phi\phi\leftrightarrow\phi\phi}\lesssim 0.2\) barn/GeV [45; 46] leads to \(\mu_{A}/m_{A}\lesssim 0.05(m_{\phi}/{\rm MeV})^{3/4}\). Therefore a choice of, say, \(\mu_{A}/m_{A}\lesssim 0.03\)--compatible with all the constraints discussed above--is always allowed for the DM mass range considered in this work. The situation is similar for the other interactions adopted below, as long as the mediator mass is taken to be several GeV.
Finally, general neutrino-DM interactions are much less constrained experimentally, as are the annihilation channels into neutrinos discussed above. In fact, because neutrinos only interact weakly, a neutrino-philic mediator with a mass above the GeV scale is hardly probed by current experiments. Note that further constraints from neutrino non-standard interactions induced by the mediator may enter as well. A detailed discussion of neutrino-DM/mediator interactions is deferred to Sec. III.3.
### Thermal cross section values
After solving the full set of Boltzmann equations, we show in Fig. 2 the required thermal annihilation cross sections for \(\phi\) from the joint solution for the coupled sectors. The left panel is for \(s\)-wave annihilation. For \(m_{\phi}\geq 25\) MeV, DM freeze-out happens well before neutrino-electron decoupling, so varying the branching ratios has little impact. In this region, reducing the DM mass requires a "closer-to-relativistic" freeze-out, or, equivalently, a smaller value of \(x_{\rm f.o.}\equiv m_{\phi}/T_{\rm f.o.}\), where \(T_{\rm f.o.}\) is the chemical decoupling temperature. Therefore, the correct relic abundance requires the canonical annihilation cross section to become smaller with decreasing \(\phi\)-mass, in accordance with the well-known relation \(Y_{\phi}\propto x_{\rm f.o.}/\langle\sigma_{\rm ann}v_{M}\rangle\)[47].
For \(m_{\phi}\lesssim 10\) MeV, the \(s\)-wave freeze-out goes through a period where the EM sector is being reheated by electron-positron annihilation. Because of the elevated photon temperature relative to the neutrino temperature, DM that is dominantly coupled to the EM sector enjoys a higher abundance relative to DM that is dominantly coupled to neutrinos. Consequently, in order to yield the correct relic abundance for DM coupled to electrons,
Figure 2: Parameters of annihilation cross sections for thermal freeze-out of a complex scalar \(\phi\) as a function of DM mass \(m_{\phi}\) that yields the correct relic DM density from numerically solving the coupled three-sector system. The left (right) panel is for \(s\)-wave (\(p\)-wave) annihilation. Curves for different branching ratios as labeled are shown with different colors and dashing.
annihilation should last longer than it would without reheating, requiring a larger annihilation cross section. In the opposite case, where DM dominantly annihilates into neutrinos, the EM sector affects freeze-out only indirectly through the evolution of the effective degrees of freedom. The latter is, however, of little importance because electron-positron annihilation happens in an entropy-conserving manner (\(S=sa^{3}=const\)), so that the final DM yield \(Y_{\phi}=n_{\phi}a^{3}/S\) is affected only mildly through the term \(\frac{1}{3}\frac{d\ln q_{s}}{d\ln x}\) in the Boltzmann equation; see Eq. (10) in the Appendix.5 This explains Fig. 2, where the annihilation cross section into electrons has a stronger trend than the annihilation cross section into neutrinos.
Footnote 5: A similar effect also arises from DM annihilation, which may preferentially reheat the EM or the neutrino sector, thereby changing the entropy degrees of freedom with respect to the temperature of photons or neutrinos, respectively. In our code, this is taken into account.
The middle ground between the two extreme branching ratios is more involved and is sensitive to the energy transfer among the three sectors. Concretely, the presence of a dark sector now causes two effects. First, it may maintain the kinetic equilibrium of the neutrino and EM sectors, _i.e._, \(T_{\nu}=T_{\gamma}\), via the DM-mediated energy exchange between the EM and neutrino sectors even after neutrinos decouple from electrons. This effect thus tends to increase \(T_{\nu}/T_{\gamma}\) in comparison to a standard cosmology after electron decoupling. Second, DM annihilation after EM-neutrino kinetic decoupling may increase or decrease \(T_{\nu}/T_{\gamma}\), depending on the annihilation branching ratio \(\mathrm{Br}_{\mathrm{EM}}:\mathrm{Br}_{\nu}\). For most of the parameter space, the first effect dominates over the second. More interestingly, we illustrate in the next subsection that one may tune the value of \(\mathrm{Br}_{\mathrm{EM}}:\mathrm{Br}_{\nu}\) to make the two effects cancel each other, bringing the final \(T_{\nu}/T_{\gamma}\) ratio close to its standard cosmology value of \(0.7164\). Note that the DM-induced energy transfer is dominated by DM pair creation/annihilation in the \(s\)-wave case, and the kinetic decoupling of neutrinos from the EM sector is mainly sensitive to the product \(\mathrm{Br}_{\nu}\,\mathrm{Br}_{\mathrm{EM}}\).
The right panel of Fig. 2 shows the result for \(p\)-wave annihilation. It generalizes our previous work [9] where we only entertained a fixed value \(\mathrm{Br}_{\mathrm{EM}}:\mathrm{Br}_{\nu}=2:3\). Here, we consider a broader variety in analogy to the \(s\)-wave case, with similar features observed. As established in [9], taking into account elastic scattering is crucial in the \(p\)-wave case. The reason is that elastic scattering is able to maintain the kinetic coupling between DM and neutrinos (or electrons, depending on the branching ratios) for a longer period after DM freeze-out. This leads to a mild heating of DM particles. In other words, elastic scattering affects the average DM velocity, and hence feeds into the calculation of the velocity-dependent cross section from the freeze-out point onward.
Moreover, in the \(p\)-wave case the effective DM annihilation cross section quickly decreases with time, so that the final DM abundance only marginally depends on the period \(x\gg x_{\mathrm{f.o.}}\) (little residual annihilation). In contrast, in the \(s\)-wave case, residual DM annihilation decreases the abundance by more than \(10\%\) from \(x=100\) until the CMB epoch. As a result, in the \(p\)-wave case the reheating of the EM sector by electron-positron annihilation only starts to play a role for somewhat lighter DM compared to the \(s\)-wave case (left panel of Fig. 2). Meanwhile, at \(m_{\phi}\sim 10\,\mathrm{MeV}\), the canonical freeze-out cross section for annihilation into neutrinos only can be larger than for annihilation into electrons only, as DM annihilation affects the neutrino temperature more strongly. Finally, in each figure the curves with different branching ratios should gradually converge for \(m_{\phi}\geq 30\,\mathrm{MeV}\), where freeze-out happens well before neutrino decoupling.
### \(N_{\mathrm{eff}}\) and minimal DM mass
We now study the predictions for \(N_{\mathrm{eff}}\) at the CMB epoch by adopting the established canonical values for the DM annihilation cross section. To this end, we take \(N_{\mathrm{eff}}^{\mathrm{SM}}=3.044\) as the standard cosmological history value (_e.g._, with conventional TeV-scale DM), consistent with our SM-only calculations. At 95% C.L. the combination of Planck and baryonic acoustic oscillation (BAO) measurements yields \(2.66\leq N_{\mathrm{eff}}\leq 3.33\)[48]. In terms of the deviation from the standard value,
\[\Delta N_{\mathrm{eff}}\equiv N_{\mathrm{eff}}-N_{\mathrm{eff}}^{\mathrm{SM} }\,, \tag{3}\]
the error-bar is expected to improve such that the expected sensitivity of the Simons Observatory reads \(|\Delta N_{\mathrm{eff}}|\lesssim 0.1\)[49], and that of CMB-S4 \(|\Delta N_{\mathrm{eff}}|\lesssim 0.06\)[50] when one assumes a standard cosmological history as the benchmark point.
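For reference, the standard mapping between the late-time temperature ratio \(T_{\nu}/T_{\gamma}\) and \(N_{\rm eff}\) that underlies such statements is summarized in the following minimal sketch.

```python
def n_eff(t_nu_over_t_gamma, n_species=3):
    """N_eff from the late-time neutrino-to-photon temperature ratio.

    Standard relation: each neutrino species contributes
    [(11/4)^(1/3) * T_nu/T_gamma]^4 to N_eff.
    """
    return n_species * ((11.0 / 4.0) ** (1.0 / 3.0) * t_nu_over_t_gamma) ** 4


# The instantaneous-decoupling ratio (4/11)^(1/3) ~ 0.7138 gives N_eff = 3 exactly,
# while the ratio 0.7164 quoted above reproduces N_eff ~ 3.044.
print(n_eff((4.0 / 11.0) ** (1.0 / 3.0)))  # 3.0
print(n_eff(0.7164))                       # ~3.044
```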
The results on \(N_{\mathrm{eff}}\) are shown in Fig. 3. We find that for most of the branching ratios, the dark sector is able to transfer energy from the EM sector to the neutrino sector, increasing the value of \(N_{\mathrm{eff}}\). Moreover, as we mentioned above, the effect is mostly sensitive to \(\mathrm{Br}_{\mathrm{EM}}\,\mathrm{Br}_{\nu}\), so the two cases of \(\mathrm{Br}_{\mathrm{EM}}:\mathrm{Br}_{\nu}=10^{4}\) and \(10^{-4}\) lead to similar results. Obviously, for \(\mathrm{Br}_{\mathrm{EM}}:\mathrm{Br}_{\nu}=1\) the EM- and \(\nu\)-sector are most strongly connected via the DM "agent" so that this branching ratio results in pronounced \(N_{\mathrm{eff}}\) values, which are comparable to the case that DM only annihilates into neutrinos. These features are in broad agreement with previous works [6; 8], while we also take into account the exact canonical cross section for each DM mass and the sub-leading contribution of DM-SM elastic scattering. The exact canonical cross section is two-to-three times the often assumed value of one pico-barn which modifies the final bounds on the DM mass for \(\mathrm{Br}_{\mathrm{EM}}\,\mathrm{Br}_{\nu}\neq 0\), depending on the exact values of branching ratio. In the cases of \(\mathrm{Br}_{\mathrm{EM}}\,\mathrm{Br}_{\nu}=0\), the final value of \(N_{\mathrm{eff}}\) is simply decided by entropy conservation after neutrino decoupling, regardless of the exact canonical cross section or whether it is \(s\)- or \(p\)-wave dominated.
This can be seen from the similarity of the respective solid red and blue curves of the two panels of Fig. 3.
We also investigate the canonical annihilation cross sections, as well as the associated CMB \(N_{\rm eff}\) values, for Dirac fermion DM \(\chi\) with both \(s\)-wave and \(p\)-wave annihilation. The results are shown in Fig. 4. While there are great similarities with the complex scalar DM case, some differences exist at DM mass below \(10\,\mathrm{MeV}\) due to the fact that a Dirac fermion has 4 degrees of freedom (or \(7/8\times 4=3.5\) effective bosonic degrees of freedom), which contributes to the total energy density of the Universe at \(T_{\gamma}\sim\mathrm{MeV}\). As a result, a slightly larger canonical annihilation cross section is needed to compensate for a larger Hubble rate. Its energy density also has a stronger impact on \(N_{\rm eff}\) in cases of \(\mathrm{Br}_{\rm EM}\,\mathrm{Br}_{\nu}=0\), leading to stronger bounds on the DM mass. For non-negligible values of \(\mathrm{Br}_{\rm EM}\,\mathrm{Br}_{\nu}\) where the final value of \(N_{\rm eff}\) is mainly determined by the prolonged EM-\(\nu\) kinetic equilibrium, the d.o.f. of the DM particle only mildly affects the bounds on the DM mass.
With those results at hand, Tab. 1 lists the minimal thermal DM masses that are compatible with an otherwise standard cosmological history in the absence of additional particles and/or other "model-building tricks." The limits vary from \(m_{\chi}=2\) MeV to \(11.2\) MeV depending on the model and branching ratio. Only the first column, with _no_ annihilation into neutrinos, leads to a decrease in \(N_{\rm eff}\), whereas all other cases increase \(N_{\rm eff}\) from the standard value. The weakest limit (in terms of the lowest allowed DM mass) is attained for \(\mathrm{Br}_{\rm EM}:\mathrm{Br}_{\nu}=10^{4}\). As can also be observed, the differences between complex scalar and Dirac fermion are rather mild, with \(p\)-wave annihilation leading to slightly stronger limits.
We close this discussion by commenting on self-conjugate DM candidates. A Majorana fermion has two spin degrees of freedom, and effectively \(7/8\times 2\) bosonic degrees of freedom, so that this case likely closely resembles the complex scalar one. This is also suggested in [8]. Finally, a real scalar only has one bosonic degree of freedom and consequently has relaxed bounds. For the remainder of this section, we address the question of fine-tuned scenarios (in branching ratios) that largely evade the cosmological limits.
### Fine-tuned branching ratio \(\mathrm{Br}_{\nu}\)
As mentioned, the dark sector modifies \(N_{\rm eff}\) with two effects: first, EM-\(\nu\) kinetic equilibrium mediated by DM always increases \(N_{\rm eff}\) with respect to \(N_{\rm eff}^{\rm SM}\) while, second, DM annihilation after EM-\(\nu\) kinetic decoupling either increases or decreases \(N_{\rm eff}\). The latter depends on the relative branching ratios \(\mathrm{Br}_{\rm EM}:\mathrm{Br}_{\nu}\). For \(\mathrm{Br}_{\rm EM}:\mathrm{Br}_{\nu}\gg 1\)
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline \(\mathrm{Br}_{\rm EM}:\mathrm{Br}_{\nu}\) & 1:0 & 10\({}^{4}\) & 1 & 10\({}^{-4}\) & 0:1 \\ \hline Complex scalar (\(s\)-wave) & 6.5 & 2.0 & 4.8 & 3.8 & 8.2 \\ Complex scalar (\(p\)-wave) & 7.0 & 2.9 & 5.2 & 3.9 & 8.8 \\ Dirac fermion (\(s\)-wave) & 8.2 & 2.5 & 5.0 & 4.1 & 11.2 \\ Dirac fermion (\(p\)-wave) & 9.8 & 3.0 & 5.7 & 4.3 & 11.2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Minimal DM mass (\(m_{\phi,\,\chi}/\mathrm{MeV}\)) for various DM candidates and branching ratios into the EM and neutrino sector compatible with the 95% C.L. limit \(2.66\leq N_{\rm eff}\leq 3.33\)[48].
Figure 3: _Left panel_: \(N_{\rm eff}\) values obtained for complex scalar DM with the thermal annihilation cross-section that matches the DM abundance. The left (right) panel is for \(s(p)\)-wave annihilation. The calculation accounts for the energy transfer from annihilation _and_ elastic scattering among the various sectors. Shaded regions are excluded from combining \(\mathrm{Planck}+\mathrm{BAO}\) data, while the CMB-S4 sensitivity (standard cosmology value \(N_{\rm eff}=3.044\)) illustrated by the dot-dashed (dotted) horizontal lines.
a fine-tuned parameter region exists where the two effects largely cancel. We illustrate this cancellation as the solid black line in Fig. 5 for complex scalar DM with \(p\)-wave annihilation. The case of \(s\)-wave annihilation with similar parameters is already excluded by DM indirect searches. Note that for even heavier DM, with mass above \(40\,\mathrm{MeV}\), which freezes out well before \(T_{\gamma}\sim m_{e}\) but after SM neutrino decoupling around \(T_{\gamma}\sim 3\,\mathrm{MeV}\), the optimal ratio of \(\mathrm{Br}_{\mathrm{EM}}:\mathrm{Br}_{\nu}\) should eventually converge to \((g_{\gamma}+7/8\,g_{e}):7/8\,g_{\nu}=22/21\). The situation is very similar in the Dirac DM model, which would require even stronger fine-tuning to accommodate the bounds on \(N_{\mathrm{eff}}\), as it has more effective d.o.f. than a complex scalar.
For branching ratios that result in exact cancellation, we also provide the neutrino temperature evolution for scalar DM masses \(m_{\phi}=1,\,3,\,5,\,7\,\mathrm{MeV}\) as a function of the photon temperature in Fig. 6. One observes from its lower panel that with the EM-neutrino kinetic coupling induced by DM (\(T_{\nu}/T_{\gamma}=1\)), the neutrino sector is hotter than in the standard cosmology. When the two sectors kinetically decouple (\(T_{\nu}/T_{\gamma}\neq 1\)), DM dominantly annihilates into the EM sector, reducing the ratio \(T_{\nu}/T_{\gamma}\). Figures 5 and 6 show that one can always tune the branching ratios to satisfy the \(N_{\mathrm{eff}}\) bounds from the CMB. However, MeV-scale thermal DM additionally contributes to the total energy density of the Universe before its density becomes Boltzmann suppressed. The latter modifies the predicted abundances of the primordial light elements and may thus be constrained by BBN. To check this, we feed the neutrino- and photon-temperature evolution as well as the DM energy density evolution into our BBN code [51].
Before including DM, we obtain a standard BBN (SBBN) deuterium abundance of \(\mathrm{D/H}=2.49\times 10^{-5}\) and a helium mass fraction abundance of \(Y_{p}=0.2475\) in good agreement with literature [52]; a neutron lifetime of 879.5 s [53] has been assumed. Recent measurements of the deuterium and helium abundances broadly agree with the SBBN predictions. Over the years, the helium values have ranged between \(0.24\lesssim Y_{p}\lesssim 0.26\)[54, 55, 56, 57]. The most aggressive 95% C.L. constraint \(Y_{p}\leq 0.251\) results when employing recent observations with claimed
Figure 4: Same as Figs. 2 and 3 above, but for Dirac fermion DM. The left (right) panel gives the \(s\)-wave (\(p\)-wave) results.
small error bar, \(Y_{p}=0.247\pm 0.0020\)[57]. In turn, observations of deuterium abundances are now at the percent-level, \(\mathrm{D/H}=(2.527\pm 0.030)\times 10^{-5}\)[58; 59], allowing for a departure by \(\pm 2.4\%\) from its central value at 95% C.L. Tab. 2 shows the results when DM is included with a branching ratio such that the \(N_{\mathrm{eff}}\) constraint is evaded. We observe that while the increase in (D/H) remains below or at 2%, the helium abundance increases with decreasing DM mass. This is the effect of the DM energy density itself, and it excludes finely-tuned masses \(m_{\chi}\leq 2\) MeV.
## III Millicharged DM and its connection to 21 cm cosmology
A well-suited application of our three-sector treatment is MeV-scale millicharged particles, which are being entertained in a variety of contexts. An exciting prospect that has emerged in recent years is the potential of 21 cm cosmology as a probe of the Universe at high redshift \(6\lesssim z\lesssim 30\)[60]. The emission of 21 cm radiation from neutral hydrogen during that epoch is sensitive to the baryon temperature, which in turn may be altered by dark matter-baryon or dark matter-electron interactions [14; 15; 16]. The current state of observations is controversial. Whereas the EDGES collaboration has claimed the observation of an absorption feature at redshifted 21 cm wavelength that is stronger than expected from standard cosmology [17], the signal is contested by more recent observations by the SARAS3 instrument [18]. Nevertheless, 21 cm cosmology is a new window for probing light dark sector physics.
The required strong effective DM-baryon interaction cross section can be mediated by the Coulomb-like \(v^{-4}\) velocity enhancement for which millicharged DM is a prime candidate. Currently, light dark states with a millicharge between \(5\times 10^{-6}\lesssim\epsilon\lesssim 10^{-4}(m_{\chi}/5\,\mathrm{MeV})^{0.6}\) are allowed by both the SLAC mQ experiment and SN cooling constraints [21].
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline \((\mathrm{Br}_{\mathrm{EM}}\simeq 1)\) & \(\mathrm{Br}_{\nu}\) & \(\Delta(\mathrm{D/H})\) & \(10\times Y_{p}\) & \(\Delta Y_{p}\) & viable? \\ \hline SBBN & – & – & 2.478 & – & ✓ \\ \(m_{\chi}=7\) MeV & \(5.9\times 10^{-5}\) & \(-0.1\%\) & 2.479 & +0.1\% & ✓ \\ \(m_{\chi}=5\) MeV & \(2.7\times 10^{-5}\) & +0.1\% & 2.485 & +0.3\% & ✓ \\ \(m_{\chi}=3\) MeV & \(6.7\times 10^{-6}\) & +0.5\% & 2.502 & +1.0\% & ✓ \\ \(m_{\chi}=2\) MeV & \(2.6\times 10^{-6}\) & +1.1\% & 2.525 & +1.9\% & ✗ \\ \(m_{\chi}=1\) MeV & \(4.8\times 10^{-7}\) & +2.0\% & 2.568 & +3.7\% & ✗ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Cosmological observables for the fine-tuned branching ratios that evade the \(N_{\mathrm{eff}}\) constraint. The last column indicates the cosmological compatibility and it is driven by the helium abundance.
Figure 5: Illustration of fine-tuned branching ratios that give a small change in \(N_{\mathrm{eff}}\) at recombination, as the increase of \(N_{\mathrm{eff}}\) caused by delayed kinetic decoupling of EM and neutrino sectors is canceled by the subsequent heating of the EM sector by DM annihilation after kinetic decoupling. The observed DM relic abundance fixes the total cross sections. The increase in (D/H) remains below 2% on the black line, saturating this value at \(m_{\chi}=1\) MeV. An elevated helium abundance excludes points along the black line for \(m_{\phi}<3\) MeV; see Tab. 2.
Figure 6: Evolution of \(T_{\nu}/T_{\gamma}\) for tuned parameters that result in negligible additional radiation at recombination with complex scalar DM and \(p\)-wave canonical freeze-out. The black dashed horizontal lines correspond to its CMB value predicted by standard cosmology, 0.7164, as well as the projected CMB-S4 sensitivity \(|\Delta N_{\mathrm{eff}}|\leq 0.06\). Blue-shaded regions indicate the current Planck+BAO bounds at recombination (\(T_{\gamma}\sim 0.3\) eV). The lower panel shows the deviation of the \(T_{\nu}/T_{\gamma}\) ratio of the respective cases in percent from the standard cosmological evolution.
Moreover, MeV-scale DM particles with \(\epsilon\lesssim 10^{-5}\) are able to reach underground XENON detectors and trigger recoil signals, and are thus excluded [26]. Similarly, to avoid the stringent constraint from the surface run of the SENSEI experiment, one needs even larger millicharges, \(\epsilon\gtrsim 8\times 10^{-5}\)[23; 61].6 The combination of mQ and direct detection experiments thus requires \(m_{\chi}\gtrsim 3\) MeV. On the flip side, to explain EDGES (or more generally, to have an influence in 21 cm cosmology), heavier \(\chi\) particles either need to make up a larger portion of the total DM abundance, or one needs to introduce larger values of \(\epsilon\). Since detailed investigations of CMB spectra demand a DM mass fraction below 0.4% for the \(\epsilon\) values discussed above [63; 21; 64; 28], there are also upper bounds on the \(m_{\chi}\) values of interest. Taken together, the parameter range of interest becomes \(m_{\chi}\) from \(3-30\) MeV and \(\epsilon\) of \((8-20)\times 10^{-5}\), making up 0.1%-0.4% of the total DM abundance.
Footnote 6: Note that the result of [23] cannot be trivially re-scaled for a sub-leading MeV DM component, as mentioned in their paper. Besides, the upscattering of the incident DM flux from the solar corona still offers an avenue to probe this parameter space [62].
Nevertheless, the scenario remains challenged by early Universe cosmology: the sizable value of \(\epsilon\) suggests that such an MeV dark state, \(\chi\), comes into thermal equilibrium with radiation during/before BBN. The annihilation into electron-positron pairs7 heats the EM sector relative to the \(\nu\)-sector. The parameter region of interest has then been shown to be largely excluded because the resulting \(N_{\rm eff}\) value is too low [24]. With our developed method of treating three sectors during freeze-out, we may now check to what degree the above conclusion holds when the millicharged DM candidate \(\chi\) instead dominantly annihilates into neutrinos via additional interactions.
Footnote 7: Annihilation into a photon-pair is higher order in \(Q\) and hence subleading for \(m_{\chi}>m_{e}\).
### Model and cross sections
In order to make contact with preceding literature [21], we also take \(\chi\) to be a Dirac fermion. We may then consider the millicharge interaction to be supplemented by a pseudoscalar mediator, \(A\), similar to what was done in the previous section,
\[\mathcal{L}_{\rm int}=-i\epsilon eA^{\mu}(\bar{\chi}\gamma_{\mu}\chi)-iy_{\nu }A\sum_{l}(\bar{\nu}_{l}\gamma_{5}\nu_{l})-iy_{A}A(\bar{\chi}\gamma_{5}\chi)\,, \tag{4}\]
where \(A^{\mu}\) is the SM photon, \(e\) is the electric charge with \(Q=\epsilon e\), and \(l=e,\mu,\tau\). The pseudoscalar only leads to velocity-suppressed DM self-scattering, and thus is safe from DM self-interaction bounds.
The annihilation cross section of \(\bar{\chi}\chi\) with squared center-of-mass energy \(s\) into a pair of electrons, mediated by the millicharge, is given by
\[\sigma_{e}v_{M} =\frac{8\pi\epsilon^{2}\alpha^{2}}{3s}\left(1+\frac{2m_{e}^{2}}{ s}\right)\left(1+\frac{2m_{\chi}^{2}}{s}\right)\sqrt{1-\frac{4m_{e}^{2}}{s}}\] \[\simeq\frac{\pi\epsilon^{2}\alpha^{2}}{m_{\chi}^{2}}\left(1+ \frac{m_{e}^{2}}{2m_{\chi}^{2}}\right)\sqrt{1-\frac{m_{e}^{2}}{m_{\chi}^{2}}}. \tag{5}\]
As a benchmark value we take \(\epsilon=10^{-4}\) and \(m_{\chi}=6\)-\(10\) MeV, which yields \(\sigma_{e}v_{M}\simeq(2\)-\(6)\times 10^{-25}\) cm\({}^{3}\)/s. This satisfies the CMB and indirect search bounds as \(\chi\) only makes up sub-percent fraction of the total DM abundance.
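A quick numerical check of these benchmark numbers, using the non-relativistic limit of Eq. (5) and the conversion \(1\,\mathrm{GeV}^{-2}\simeq 1.17\times 10^{-17}\,\mathrm{cm}^{3}/\mathrm{s}\) for \(\sigma v\):

```python
import numpy as np

# Non-relativistic limit of Eq. (5) for eps = 1e-4 and m_chi = 6 and 10 MeV.
ALPHA, M_E = 1/137.036, 0.511e-3      # fine-structure constant, electron mass [GeV]
GEV2_TO_CM3S = 1.17e-17               # sigma*v: 1 GeV^-2 expressed in cm^3/s
eps = 1e-4
for m_chi in (6e-3, 10e-3):           # DM mass in GeV
    sv = (np.pi * eps**2 * ALPHA**2 / m_chi**2
          * (1 + M_E**2 / (2*m_chi**2)) * np.sqrt(1 - M_E**2 / m_chi**2))
    print(f"m_chi = {1e3*m_chi:.0f} MeV: sigma_e v ~ {sv*GEV2_TO_CM3S:.1e} cm^3/s")
# prints roughly 5e-25 and 2e-25 cm^3/s, i.e. the quoted (2-6)e-25 range
```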
For the dominant annihilation channel, into neutrinos, we fix the corresponding coupling so that \(\chi\) and its antiparticles account for 0.2% of the observed DM abundance. As shown above, the canonical freeze-out value of the annihilation cross section into neutrinos for \(m_{\chi}=6\)-\(10\) MeV DM is about \(10^{-25}\) cm\({}^{3}\)/s; since the relic abundance scales roughly inversely with the annihilation cross section, depleting \(\chi\) to a 0.2% fraction requires a total non-relativistic annihilation cross section into neutrinos of around \(5\times 10^{-23}\) cm\({}^{3}\)/s, translating into relative branching ratios of \(\rm{Br}_{EM}:\rm{Br}_{\nu}\sim 10^{-2}\). The \(s\)-wave cross section for the new annihilation channel into a single neutrino flavor mediated by \(A\) is
\[\sigma_{\nu}v_{M}=\frac{y_{A}^{2}y_{\nu}^{2}}{8\pi}\frac{s}{(m_{A}^{2}-s)^{2}} \simeq\frac{y_{A}^{2}y_{\nu}^{2}}{2\pi}\frac{m_{\chi}^{2}}{m_{A}^{4}}\,, \tag{6}\]
thus the corresponding parameter set is given by
\[3\sigma_{\nu}v_{M}\simeq 5\times 10^{-23}\,\rm{cm}^{3}\,\rm{s}^{-1}\,\left( \frac{y_{A}y_{\nu}m_{\chi}}{3\,\rm{MeV}}\right)^{2}\left(\frac{\rm{GeV}}{m_{A} }\right)^{4}, \tag{7}\]
where the pre-factor 3 counts the three neutrino flavors in the final state. Applicable neutrino bounds for this ballpark of parameters are provided below in Sec. III.3.
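A quick check of the normalization of Eq. (7) and of the quoted branching ratio, taking \(y_{A}y_{\nu}=1\), \(m_{\chi}=3\) MeV and \(m_{A}=1\) GeV as reference values in Eq. (6):

```python
import numpy as np

GEV2_TO_CM3S = 1.17e-17
y_prod, m_chi, m_A = 1.0, 3e-3, 1.0                        # y_A*y_nu and masses in GeV
sv_per_flavor = y_prod**2 * m_chi**2 / (2*np.pi*m_A**4)    # Eq. (6) at s -> 4 m_chi^2
print(3 * sv_per_flavor * GEV2_TO_CM3S)                    # ~5e-23 cm^3/s, the prefactor of Eq. (7)
print(5e-25 / 5e-23)                                       # Br_EM : Br_nu ~ 1e-2 for the benchmark above
```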
Finally, we point out that the elastic scattering cross section between \(\chi\) and neutrinos via the pseudoscalar mediator reads
\[\sigma_{\chi\nu}= y_{A}^{2}y_{\nu}^{2}\frac{s}{48\pi m_{A}^{4}}\left(1-\frac{m_{\chi}^{ 2}}{s}\right)^{2}\,, \tag{8}\]
where we have assumed that \(s\ll m_{A}^{2}\). Given that the typical squared center-of-mass energy satisfies \((s-m_{\chi}^{2})\sim m_{\chi}T_{\nu}\), the cross section scales as \(T_{\nu}^{2}\); combined with \(n_{\nu}\propto T_{\nu}^{3}\), the elastic scattering rate per \(\chi\)-particle therefore steeply decreases as \(T_{\nu}^{5}\), in analogy with neutrino-electron interactions.
### Cosmological constraints
In order to test for the cosmological compatibility of the millicharged fractional DM scenarios annihilating into neutrinos, we now investigate the value of \(N_{\rm eff}\) at CMB and compute the light element abundances from
BBN.8 For the latter we directly feed the non-trivial evolution of the electromagnetic, neutrino, and dark matter densities \(\rho_{\rm EM}\), \(\rho_{\nu}\) and \(\rho_{\chi}\) into a modified version of a nucleosynthesis code [51]. We note in passing that the neutrino annihilation products injected after neutrino decoupling with initial energies of order \(m_{\chi}\) are too few in number to induce non-thermal reactions such as \(\bar{\nu}_{e}p\to e^{+}n\) at a relevant level; see [73] for a detailed discussion.
Footnote 8: Other works that consider the modifications of light element abundances from MeV-scale dark sectors include [7; 8; 65; 66; 67; 68; 69; 70; 71; 72].
Tab. 3 summarizes the results of this analysis for \(\epsilon=10^{-4}\) with the cross section of Eq. (7), and various DM masses at and below 10 MeV. For better comparison, the first line shows the results for a standard cosmological history. Our obtained value \(N_{\rm eff}^{\rm SM}=3.044\) is in agreement with other recent literature results, \(3.043-3.046\)[74; 75; 76; 77; 78; 79]. The generally observed trend when _neutrino-annihilating_ millicharged states are included is that for decreasing \(m_{\chi}\), neutrino heating becomes pronounced, leading to elevated levels of \(N_{\rm eff}\). In terms of \(\Delta N_{\rm eff}\) we observe shifts from 0.075 to 0.308, mostly compatible with the current 95% C.L. range inferred from Planck (see above). With similar branching ratios, \({\rm Br}_{\rm EM}:{\rm Br}_{\nu}\sim 10^{-2}\)-\(10^{-3}\), and a total annihilation cross section at \(3\times 10^{-26}\,{\rm cm}^{3}/{\rm s}\), Tab. 8 of [6] obtains a lower bound \(m_{\chi}\geq 4.3\,{\rm MeV}\) from \(N_{\rm eff}\lesssim 3.33\). As shown in Fig. 7, the much larger cross section of \(5\times 10^{-23}\,{\rm cm}^{3}/{\rm s}\) adopted here further delays the DM-induced decoupling between neutrino and EM sectors, increasing the bound to \(m_{\chi}\gtrsim 6.1\,{\rm MeV}\).
We now turn to the BBN results. Including \(\chi\) in our calculation, we find an increasing trend with decreasing DM mass for both D/H and \(Y_{p}\). This effect is well known for the helium mass fraction and, within the considered \(m_{\chi}\)-range, the helium abundance barely changes beyond the one percent level. In turn, for the lightest mass considered, \(m_{\chi}=6\) MeV, the deuterium abundance changes by 3.8%, in tension with observations. Given that the SBBN D/H prediction has a one percent uncertainty stemming from nuclear-rate uncertainties, we are not able to make a definite statement on the viability of \(m_{\chi}=7\,{\rm MeV}\). Still, the relative changes inform us that the scenario is on the verge of being best probed by D/H, and upcoming improvements in the observations of \(N_{\rm eff}\) will provide the definite test. Moreover, the 95% C.L. upper limit on \(Y_{p}\) is only touched for the lightest mass \(m_{\chi}=6\,{\rm MeV}\).
To sum up, for the scenario considered here to explain EDGES, early Universe cosmology currently suggests a lower bound at \(m_{\chi}\gtrsim 7\,{\rm MeV}\). Furthermore, based on our discussion above, given the sizeable value of \({\rm Br}_{\rm EM}:{\rm Br}_{\nu}\) adopted here, this bound is not expected to change much for a millicharged scalar case.
### Experimental bounds on neutrino interactions
In the set-up given above, the presence of a neutrino-philic pseudo-scalar mediator induces neutrino self-interactions and neutrino-DM scattering, and thus can be constrained by experimental observations. In contrast, a sub-leading DM candidate as above is only poorly constrained by conventional DM searches, and an MeV dark particle with a millicharge \(\epsilon=10^{-4}\) is below the current sensitivity of intensity-frontier experiments; see e.g. [21; 23; 80]. In this subsection, we consider the relevant observations, with a benchmark mass of the intermediate (pseudo-)scalar of \(m_{A}\sim\) GeV and \(y_{A}y_{\nu}\simeq 0.3\).
Regarding such neutrino-DM interaction induced by \(A\), the mean-free-path of a high-energy neutrino passing
Figure 7: The evolution of \(T_{\nu}/T_{\gamma}\) for millicharged DM as a function of photon temperature, with the same parameters as in Tab. 3. The black dashed horizontal lines correspond to its CMB value predicted by Standard cosmology and the projected CMB-S4 sensitivity, while blue-shaded regions indicate the current Planck+BAO bounds, same as Fig. 6.
\begin{table}
\begin{tabular}{l|l l l l l l} \hline & \(N_{\rm eff}\) & \(\Delta N_{\rm eff}\) & \(\Delta(\mathrm{D/H})\) & \(Y_{p}\times 10\) & \(\Delta Y_{p}\) & viable? \\ \hline SBBN & 3.044 & – & – & 2.478 & – & ✓ \\ \(m_{\chi}=10\) MeV & 3.119 & 0.075 & +1.0\% & 2.488 & +0.4\% & ✓ \\ \(m_{\chi}=9\) MeV & 3.171 & 0.127 & +1.5\% & 2.493 & +0.6\% & ✓ \\ \(m_{\chi}=8\) MeV & 3.193 & 0.149 & +1.8\% & 2.496 & +0.7\% & ✓ \\ \(m_{\chi}=7\) MeV & 3.268 & 0.224 & +2.8\% & 2.503 & +1.0\% & ✓ \\ \(m_{\chi}=6\) MeV & 3.352 & 0.308 & +3.8\% & 2.512 & +1.4\% & ✗ \\ \hline \end{tabular}
\end{table}
Table 3: Cosmological observables for a millicharged DM particle with fractional abundance of 0.2%, with \(\epsilon=10^{-4}\) and Eq. (7). The last column indicates the cosmological compatibility.
through the fractional DM medium can be estimated via
\[l_{\rm mfp}\sim 10^{3}\,{\rm Gpc}\,\left(\frac{10^{-3}}{f_{\rm DM}}\right)\left( \frac{m_{\chi}}{\rm MeV}\right)\left(\frac{10^{3}\,{\rm GeV}^{-2}}{\sigma_{\chi \nu}}\right)\,, \tag{9}\]
which means that a PeV neutrino freely travels in the Universe in our set-up [81]. Moreover, recent \(\mathcal{O}(100)\,{\rm TeV}\) neutrino observations in IceCube from Blazar TXS 0506+056 and active galaxy NGC 1068 could lead to 5-10 orders of magnitude stronger bounds, if such neutrinos were generated around supermassive black holes within dense DM spikes [82; 83; 84]. The conclusion of Ref. [84] can be re-scaled to give in our model
\[y_{A}y_{\nu}\lesssim 10^{2}\,\left(\frac{10^{-3}}{f_{\rm DM}}\right)^{1/2}\left( \frac{m_{A}}{\rm GeV}\right)\left(\frac{\rm MeV}{m_{\chi}}\right)^{1/4}\,, \tag{10}\]
which applies to \(m_{\chi}\gtrsim 1\,{\rm MeV}\). This bound is also very weak since the spike should be truncated by efficient \(\chi\)-pair annihilation. Besides, neutrino-\(\chi\) interaction inside a proto-neutron star may enhance dark particle capture, greatly alleviating the SN cooling bounds on \(\epsilon\), which is similar to self-trapping induced by \(\chi\) self-interaction [85; 86].
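A rough cross-check of the scaling in Eq. (9), assuming the cosmic mean DM density today, \(\rho_{\rm DM}\simeq 1.3\times 10^{-6}\,\mathrm{GeV/cm^{3}}\):

```python
# Order-of-magnitude check of the mean-free-path estimate Eq. (9).
GEV2_TO_CM2 = 3.89e-28                 # 1 GeV^-2 expressed in cm^2
CM_TO_GPC = 1 / 3.086e27
f_dm, m_chi = 1e-3, 1e-3               # DM fraction and mass in GeV (1 MeV)
sigma = 1e3 * GEV2_TO_CM2              # sigma_chi-nu = 1e3 GeV^-2
n_chi = f_dm * 1.26e-6 / m_chi         # number density [cm^-3] at the cosmic mean
print(1 / (n_chi * sigma) * CM_TO_GPC) # ~7e2 Gpc, i.e. of order 1e3 Gpc
```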
For a GeV mediator, stronger bounds may come from neutrino self-interactions. There is a bound \(y_{\nu}\lesssim 0.3\) from observations of leptonic decays of mesons and heavy leptons, such as \(K/D\to l\nu\) and \(\tau\to l\nu\nu\)[87; 88; 89], which is stronger than the potential bounds from double beta decay [90]. Strong neutrino self-interactions may also affect the neutrino-driven mechanism of SN explosions in a non-trivial way [91; 92; 93]. Therefore, future observations of SN neutrinos could further clarify the exact SN evolution, and thus probe this parameter region; see also e.g. [94; 95]. On the other hand, cosmological bounds on neutrino self-interactions are much weaker. For instance, CMB observations require neutrinos to free-stream at \(z\leq 10^{4}\) (assuming no recoupling), leading to \(y_{\nu}^{2}/m_{A}^{2}\leq 0.1\,{\rm MeV}^{-2}\)[96]. For a summary of neutrino self-interaction bounds, see a recent review [97]. As a result, the current limit on \(y_{\nu}\) in turn requires \(y_{A}\) values to be around, or slightly larger than, unity in our model.
Lastly, we briefly comment on the contribution of new particles to the electromagnetic properties of neutrinos. In our set-up, neutrinos can couple to the photon via loops of intermediate pseudoscalar and millicharged particles. As the photon does not mix with a pseudoscalar, dimensional analysis suggests that in the heavy-scalar limit this can at most happen via charge radius terms with coefficients of the order \((\epsilon e)m_{\chi}^{2}/m_{A}^{4}\lesssim 10^{-8}\,{\rm GeV}^{-2}(m_{\chi}/10\,{\rm MeV})^{2}({\rm GeV}/m_{A})^{4}\), or even smaller, as \((\epsilon e)m_{\nu}^{2}/m_{A}^{4}\). Such benchmark values of this model are well below the current bounds on the neutrino charge radius, \(\langle r^{2}\rangle\lesssim 10^{-6}\,{\rm GeV}^{-2}\) (about \(10^{-33}\,{\rm cm}^{2}\)) [98], for the parameter region of our concern.
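The quoted order of magnitude for the charge-radius coefficient follows from a two-line estimate, with \(e=\sqrt{4\pi\alpha}\):

```python
import numpy as np

# Charge-radius coefficient (eps*e) m_chi^2 / m_A^4 for the benchmark values.
eps, m_chi, m_A, alpha = 1e-4, 1e-2, 1.0, 1/137.036      # masses in GeV
print(eps * np.sqrt(4*np.pi*alpha) * m_chi**2 / m_A**4)  # ~3e-9 GeV^-2, well below 1e-6 GeV^-2
```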
## IV Conclusions
In this work, we explored the predictions for DM candidates with a mass below 30 MeV that come into thermal equilibrium with the SM. Such states undergo thermal freeze-out close to or during the epoch of neutrino decoupling, affecting the SM predictions of the ratio of neutrino to photon temperatures, or, equivalently, of \(N_{\rm eff}\). When DM annihilates into both neutrinos and electrons/photons, an accurate prediction of \(N_{\rm eff}\), as well as of the relic density, necessitates the simultaneous solution of the coupled three-sector system: DM, neutrinos, and the EM sector. The methodology for achieving this in a fully self-consistent way that takes into account energy transfer into the various sectors from both annihilation and elastic scattering was developed in our preceding work [9].
Here we utilize this formulation to derive the thermal values of the annihilation cross section that yield the correct relic abundance for an exemplary set of branching ratios into neutrinos and electrons for \(s\)- and \(p\)-wave annihilation. For example, an \(s\)-wave annihilating complex scalar \(\phi\) with \(m_{\phi}=1\) MeV requires a total annihilation cross section of \(a\simeq 7.5\times 10^{-26}\) cm\({}^{3}\)/s for dominant annihilation into neutrinos, \({\rm Br_{EM}/Br_{\nu}}<10^{-4}\), and an annihilation cross section equal to or larger than \(10^{-25}\) cm\({}^{3}\)/s for dominant annihilation into electrons, \({\rm Br_{EM}/Br_{\nu}}>10^{4}\). For \(m_{\phi}=15\) MeV all cases converge to approximately \(a\simeq 8\times 10^{-26}\) cm\({}^{3}\)/s, in agreement with previous investigations. For the \(p\)-wave annihilating case and \(m_{\phi}=1\) MeV we obtain \(b\simeq 3.4\times 10^{-26}\) cm\({}^{3}\)/s for \({\rm Br_{EM}/Br_{\nu}}<10^{-4}\), and \(b\geq 5\times 10^{-26}\) cm\({}^{3}\)/s for dominant annihilation into electrons, \({\rm Br_{EM}/Br_{\nu}}>10^{4}\). Small differences among the cases persist at higher masses, an effect that traces back to the non-trivial temperature evolution in the dark sector; at \(m_{\phi}=20\) MeV all cases require \(b\simeq 4.5\times 10^{-26}\) cm\({}^{3}\)/s. The equipartitioned cases with \({\rm Br_{EM}/Br_{\nu}}=1\) lie in between the above values. For a Dirac fermion the observed trends as a function of mass are similar, but the thermal cross sections mildly differ from the complex scalar case.
Using the thermal cross section values, we are then in the unique position to obtain a precision determination of \(N_{\rm eff}\) as a function of \(m_{\phi}\) and contrast it with current and future CMB-inferred bounds and projections. Exclusive annihilation into neutrinos (electrons) raises (lowers) \(N_{\rm eff}\), excluding \(m_{\phi}<8(6)\) MeV for complex scalars and \(m_{\chi}<11(7)\) MeV for Dirac fermions at 95% C.L. for both \(s\)- and \(p\)-wave annihilation from Planck measurements. Those limits are lowered when annihilation proceeds simultaneously into neutrinos and the EM sector. For \({\rm Br_{EM}/Br_{\nu}}=10^{4}\)_and_\(10^{-4}\) an elevated \(N_{\rm eff}\) excludes DM mass below 2 MeV _and_ below 4 MeV, respectively, for both \(s\)-wave and \(p\)-wave annihilation and both model cases. Precise values for all cases are shown in Figs. 3 and 4 and Tab. 1.
We also explore the possibility of a fine-tuned parameter region in \({\rm Br_{EM}/Br_{\nu}}\gg 1\) with dominant annihilation
into electrons, where the minimum DM mass value can be lowered further as the heating of the photon and neutrino baths proceeds such that \(N_{\rm eff}\) remains almost unchanged. For \(p\)-wave annihilation this presents a loophole to entertain lower-mass thermal DM; for \(s\)-wave DM any significant annihilation into electrons is already excluded by energy injection during the CMB epoch. For complex scalar DM and \(p\)-wave annihilation, we establish required branching ratios into neutrinos, ranging from \({\rm Br}_{\nu}\simeq 5\times 10^{-7}\) for \(m_{\phi}=1\) MeV to \(10^{-4}\) for \(m_{\phi}=10\) MeV, that remain unchallenged by current and future CMB \(N_{\rm eff}\) measurements; see Fig. 5. To complete the cosmic viability test, we also feed the non-standard evolution of photon and neutrino temperatures as well as the DM energy density into a BBN code and calculate the light element abundance yields. We find that \(m_{\rm DM}<3\) MeV is excluded by an elevated helium abundance, see Tab. 2. Finally, we complement those studies by presenting particle physics realizations, for which we summarize other relevant observational and experimental constraints.
In a second part, as a further case study, we focus on the predictions of MeV-scale millicharged Dirac fermions \(\chi\) in the mass-coupling regime where they affect the prediction of the global cosmological 21 cm signal through their thermal coupling to baryons in the cosmic dawn era. Because of other cosmological constraints, such states must have a sub-percent level fractional abundance. We show that it is possible to have a consistent thermal history of such particles when \(\chi\) is supplied with additional neutrino interactions that dominate the DM freeze-out. This allows the \(\chi\) number density to deplete sufficiently. We compute \(N_{\rm eff}\) and light element yields in the modified thermal history and find that a narrow mass window situated around 10 MeV survives all current observational and experimental tests. Nevertheless, this window may become firmly closed by direct detection experiments soon.
Calculations of the thermal DM relic density are standard repertoire when entertaining dark sector models. Yet, the lowest mass range for simple thermal relics, \(1-30\) MeV, considered on the brink of being compatible with cosmology, has not yet been studied at the appropriate level of rigor. In this work we have closed this gap. Further applications of the three-sector approach may be \(d\)-wave annihilating DM or the decay of MeV-scale particles during the non-trivial epoch of neutrino decoupling and electron annihilation.
Acknowledgments.We thank Jui-Lin Kuo for collaboration in the early phases of this work and for providing an initial version of the numerical code. This work was supported by the FWF Austrian Science Fund research teams grant STRONG-DM (FG1) and by the U.S. National Science Foundation (NSF) Theoretical Physics Program, Grant PHY-1915005. Funded/Co-funded by the European Union (ERC, NLO-DM, 101044443). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. The computational results presented were in part obtained using the Vienna "CLIP cluster".
## Appendix A Semi-analytical freeze-out solutions
In this appendix, we also provide a semi-analytical solution to freeze-out in the case that DM with number density \(n\) couples exclusively to the EM _or_ the \(\nu\) sector, i.e. \({\rm Br}_{\nu}{\rm Br}_{\rm EM}=0\). The derivation largely follows the literature, such as Refs. [11; 12], and the results are summarized in Fig. 8. The aim is to illustrate that the solution works for both \(T=T_{\gamma}\) and \(T=T_{\nu}\), by simply re-defining the entropy/energy degrees of freedom.
As we are only concerned with non-relativistic freeze-out, we assume DM follows a thermal evolution governed by the temperature of the sector it couples to, \(T_{\gamma}\) or \(T_{\nu}\). For the (semi-)analytical solutions in Fig. 8, we shall assume a sudden neutrino decoupling at \(T_{\gamma}=T_{\nu}=2\) MeV. One can start by defining an "effective" total entropy, \(\hat{S}=a^{3}(2\pi^{2}g_{\hat{s}}T^{3}/45)\equiv a^{3}\hat{s}\), which is conserved during the whole epoch through the appropriate choice of \(g_{\hat{s}}(T)\). We emphasize again that \(T\) can be the temperature shared by DM and its coupled sector, either \(T_{\gamma}\) or \(T_{\nu}\). In each case, one solves the Boltzmann equation
\[\frac{dY}{dt}=-\hat{s}\langle\sigma_{\rm ann}v\rangle(Y^{2}-Y_{\rm eq}^{2})\,. \tag{10}\]
At the initial stage of the freeze-out evolution, the DM abundance follows its thermal value \(Y_{\rm eq}\), up to a small correction, with
\[Y_{\rm eq}=\frac{n}{\hat{s}}\simeq 0.1447\,\left(\frac{g_{\rm DM}}{g_{\hat{s}}} \right)x^{1.5}e^{-x}\,. \tag{11}\]
We then follow a common convention and define the freeze-out point through \(Y(x_{\rm fo})=(1+c)Y_{\rm eq}(x_{\rm fo})\) with a constant \(c\). It implies that at \(x=x_{\rm fo}\)
\[\left.\frac{d\ln Y_{\rm eq}}{d\ln x}\right|_{x_{\rm fo}}=-Y_{eq}\frac{\hat{s} \langle\sigma_{\rm ann}v\rangle}{H}\left(1-\frac{1}{3}\frac{d\ln g_{\hat{s}}}{ d\ln x}\right)(2+c)c\]
holds. By re-writing the equation as
\[e^{x}=-0.1447\,\left(\frac{g_{\rm DM}}{g_{\hat{s}}}\right)x^{1.5}\frac{\hat{s} \langle\sigma_{\rm ann}v\rangle}{H}\frac{\left(1-\frac{1}{3}\frac{d\ln g_{ \hat{s}}}{d\ln x}\right)(2+c)c}{\frac{d\ln Y_{\rm eq}}{d\ln x}}\]
together with \(d\ln Y_{\rm eq}/d\ln x=-(x-1.5+d\ln g_{\hat{s}}/d\ln x)\), we refer to the R.H.S. of the equation above as \(\mathcal{F}(x)\), and re-write the whole equation as \(e^{x}=\mathcal{F}(x)\). A numerical solution to this equation can be obtained iteratively,
\[x_{\rm fo}=\ln\mathcal{F}|_{\{x\rightarrow\ln\mathcal{F}|_{|x\rightarrow\ldots |}\}}\,, \tag{12}\]
where as initial value of \(x\) for MeV DM freeze-out one may choose 10-20.
Once the freeze-out point is reached, \(Y\) becomes increasingly larger than \(Y_{\rm eq}(x\gg x_{\rm fo})\), but smaller than \(Y_{\rm eq}(x=x_{\rm fo})\), allowing for another approximation of the Boltzmann equation which may be cast in the form
\[\frac{dY}{Y^{2}}=-\frac{\hat{s}\langle\sigma_{\rm ann}v\rangle}{Hx}\left(1- \frac{1}{3}\frac{d\ln g_{\hat{s}}}{d\ln x}\right)dx\,. \tag{10}\]
Integrating both sides from \(x=x_{\rm fo}\) to \(x=\infty\) gives
\[Y_{\infty}\simeq\left[\int_{x_{\rm fo}}^{\infty}\frac{\hat{s}\langle\sigma_{ \rm ann}v\rangle}{Hx}\left(1-\frac{1}{3}\frac{d\ln g_{\hat{s}}}{d\ln x}\right) dx\right]^{-1}\,. \tag{11}\]
For the analytical solution of DM annihilating into neutrinos only, one simply replaces \(x\equiv m/T_{\gamma}\) by \(x_{\nu}\equiv m/T_{\nu}\), and re-writes the functions of \(g_{*}\) and \(g_{\hat{s}}\) with respect to \(T_{\nu}\).9 Since the neutrino sector is not heated up in electron annihilation, it generally makes the DM particles cooler than \(T_{\gamma}\), and thus requires less DM annihilation to yield the observed DM abundance, as illustrated in the figure. Besides, for DM masses well above MeV, DM particles do not contribute much to the total energy density of the Universe, thus the canonical cross sections become almost independent of the spin of DM particles.
Footnote 9: As usual, \(g_{*}\) and \(g_{\hat{s}}\) are defined as \(g_{*}=\sum_{b}g_{b}(T_{b}/T)^{4}+7/8\sum_{f}g_{f}(T_{f}/T)^{4}\) and \(g_{\hat{s}}=\sum_{b}g_{b}(T_{b}/T)^{3}+7/8\sum_{f}g_{f}(T_{f}/T)^{3}\) with \(g_{b}\) and \(g_{f}\) being the active bosonic (fermionic) relativistic degrees of freedom.
The solutions of this semi-analytical method are shown as colored lines in Fig. 8, which agree with our full numerical results (gray lines) studied in the main text. Note
Figure 8: Canonical cross section needed to obtain the observed abundance for scalar DM \(\phi\) (left panels) and fermion DM \(\chi\) (right panels) in the case of \(s\)-wave annihilation (top) and \(p\)-wave annihilation (bottom) using the semi-analytic approach detailed in App. A. Red lines are for exclusive pair annihilation into the EM sector, while blue lines are for exclusive annihilation into neutrinos. The y-axes give the parameter values of \(\langle\sigma_{\rm ann}v\rangle=a+b(6T/m_{\phi,\chi})\). Dot-dashed lines are canonical values for self-conjugate DM, re-scaled by a factor of 2, as self-conjugate cases need weaker annihilation. The gray dotted (dashed) lines give the exact numerical result for electron-only (\(\nu\)-only) annihilation obtained in the main text.
that for the figure we have taken \(c=0.4\) for both \(s\)-wave and \(p\)-wave annihilation, as it fits the exact numerical results for MeV-scale DM better. Following [10] and adopting \(c=0.75\) for \(p\)-wave cases instead would increase the canonical cross sections by 5%-10%. For fermionic DM above \(20\,\text{MeV}\), it is also in good agreement with previous literature values, see e.g. [12; 13]. Similar to [13], we observe that the ratio of \(p\)-wave canonical cross sections between non-self-conjugate and self-conjugate cases visibly deviates from 2 (lower panel of Fig. 8) even for TeV DM. Lastly, we emphasize that this semi-analytical solution always assumes kinetic equilibrium between DM and the SM sector it couples to. In contrast, the full numerical results adopt concrete particle models, and typically have DM kinetically decoupled from all radiation at \(x\sim 100\). The latter thus requires slightly larger canonical annihilation cross sections in cases of velocity-suppressed annihilation.
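A minimal sketch of this iterative recipe, assuming constant \(g_{*}\) and \(g_{\hat{s}}\) (so the \(d\ln g_{\hat{s}}/d\ln x\) terms vanish), a temperature-independent \(s\)-wave \(\langle\sigma_{\rm ann}v\rangle\), and non-self-conjugate DM for which only particle-antiparticle pairs annihilate; it is not the numerical implementation behind Fig. 8:

```python
import numpy as np

M_PL = 1.22e19                  # Planck mass [GeV]
GEV2_TO_CM3S = 1.17e-17         # sigma*v: 1 GeV^-2 expressed in cm^3/s
S0, RHOC_H2 = 2891.2, 1.054e-5  # entropy density today [cm^-3], rho_crit/h^2 [GeV/cm^3]

def freeze_out(m_dm, sigv_cm3s, g_dm=2.0, g_star=10.75, g_shat=10.75, c=0.4):
    """Iterate e^x = F(x) for x_fo, then evaluate the relic-yield integral."""
    sigv_eff = 0.5 * sigv_cm3s / GEV2_TO_CM3S   # /2: summed particle+antiparticle yield
    K = (2*np.pi**2/45) * g_shat * m_dm * M_PL / (1.66*np.sqrt(g_star))  # = x * shat/H
    def F(x):
        yeq_prefac = 0.1447 * (g_dm/g_shat) * x**1.5   # Y_eq without the e^{-x} factor
        return yeq_prefac * (K/x) * sigv_eff * (2.0 + c)*c / (x - 1.5)
    x_fo = 15.0
    for _ in range(40):                         # fixed-point iteration x -> ln F(x)
        x_fo = np.log(F(x_fo))
    Y_inf = x_fo / (K * sigv_eff)               # yield integral for constant <sigma v>
    return x_fo, Y_inf, m_dm * Y_inf * S0 / RHOC_H2

# Example: a 15 MeV complex scalar with <sigma v> = 8e-26 cm^3/s, the ballpark of Sec. IV;
# this simplified estimate lands near the observed Omega h^2 ~ 0.12.
print(freeze_out(m_dm=0.015, sigv_cm3s=8e-26))
```

The full treatment additionally tracks the temperature dependence of the degrees of freedom and the kinetic decoupling of DM, as emphasized above.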
## Appendix B Boltzmann equations
Here we summarize the definitions and notation explained in further detail in the preceding methodology paper [9]. The evolution of number and energy densities, \(n_{i}\) and \(\rho_{i}\) of species \(i\) is given by the Boltzmann equations,
\[\frac{\partial n_{i}}{\partial t}+3Hn_{i}\equiv\frac{\delta n_{i} }{\delta t}\,, \tag{13}\] \[\frac{\partial\rho_{i}}{\partial t}+3H(\rho_{i}+P_{i})\equiv \frac{\delta\rho_{i}}{\delta t}\,, \tag{14}\]
with \(P_{i}\) being the pressure density. The right-hand-side terms \(\delta n_{i}/\delta t\) and \(\delta\rho_{i}/\delta t\) are, respectively, given in terms of sums over all contributing annihilation channels and all two-body processes (annihilation and scattering),
\[\frac{\delta n_{i}}{\delta t} =g_{i}\int\frac{d^{3}p_{i}}{(2\pi)^{3}E_{i}}\,\sum_{\text{ann.}} C[f_{i}]\,\Delta n\,, \tag{15}\] \[\frac{\delta\rho_{i}}{\delta t} =g_{i}\int\frac{d^{3}p_{i}}{(2\pi)^{3}E_{i}}\,\sum_{\text{all}}C[f _{i}]\,\delta E\,, \tag{16}\]
where \(C[f_{i}]\) is the collision term as it appears in the unintegrated Boltzmann equation for the phase space distribution function \(f_{i}\) of species \(i\); \(\Delta n\) and \(\delta E\) are the number and energy exchanged in the process under question. In practice, we take \(\Delta n=\pm 2\) for pair creation/annihilation. For dark matter, which is assumed to develop a non-vanishing chemical potential after becoming non-relativistic,10 its momentum distribution function is approximated as
Footnote 10: This assumption may not be appropriate if the DM sector only couples to neutrinos. While this relies on the exact parameters of the model, such as the mediator mass, we further assume that when \(\text{DMDM}\leftrightarrow\nu\nu\) is efficient, \(4\nu\leftrightarrow 2\nu\) via dark sector mediators is efficient too, resulting in \(\mu_{\text{DM}}=\mu_{\nu}=0\) during this period.
\[f_{\text{DM}}(E_{\text{DM}})\simeq\frac{e^{\mu_{\text{DM}}/T_{\text{DM}}}}{e ^{E_{\text{DM}}/T_{\text{DM}}}\pm 1}\,, \tag{17}\]
where "DM" is set as \(\phi\) (\(\chi\)) for scalar (fermionic) dark matter candidate. For neutrinos, which are always relativistic and only obtain tiny chemical potentials, there exists, to the first order
\[f_{\nu}(E_{\nu})\simeq\frac{1}{e^{E_{\nu}/T_{\nu}}+1}+\frac{\mu_{\nu}/T_{\nu} }{e^{E_{\nu}/T_{\nu}}+e^{-E_{\nu}/T_{\nu}}+2}\,. \tag{18}\]
Lastly, the electromagnetic sector has no chemical potential at temperatures above the keV scale. For more details and a discussion of uncertainties, see [9].
Our methodology of solving the three-sector system then involves factorizing the collision integrals in products of functions of two (normalized) chemical potentials \(\tilde{\mu}_{i}=\mu_{i}/T_{i}\) and three temperatures \(T_{i}\),
\[\frac{\delta n_{i}}{\delta t} =\sum_{i\neq j}a_{ij}\,\beta_{ij}(\tilde{\mu}_{i},\tilde{\mu}_{j} )\,\gamma_{ij}(T_{i},T_{j})\,, \tag{19}\] \[\frac{\delta\rho_{i}}{\delta t} =\sum_{i\neq j}b_{ij}\,\beta_{ij}(\tilde{\mu}_{i},\tilde{\mu}_{j} )\,\zeta_{ij}(T_{i},T_{j})\,. \tag{20}\]
Here, \(a_{ij}=\pm 1\) and \(b_{ij}=\pm 1\), depending on the process under question, and \(\beta_{ij}\) are functions of initial state chemical potentials such as \(e^{\tilde{\mu}_{i}+\tilde{\mu}_{j}}\) or \(\tilde{\mu}_{i}e^{\tilde{\mu}_{j}}\). Elastic scattering processes only enter in \(\zeta_{ij}\) as particle number is conserved.
Analytical approximations for \(\gamma_{ij}\) and \(\zeta_{ij}\) for the charged scalar \(s\)-wave case are given in App. C, for the \(p\)-wave annihilation case they are found in the preceding work [9] and for the millicharged scenario they are given in App. F.
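A structural sketch of this system, with the collision terms left as zero-valued placeholders; in the actual computation they are assembled from the \(\beta_{ij}\gamma_{ij}\) and \(\beta_{ij}\zeta_{ij}\) factorizations with the cross sections of Apps. C-F, and the temperatures and chemical potentials are recovered by inverting \(n_{i}\) and \(\rho_{i}\):

```python
import numpy as np
from scipy.integrate import solve_ivp

M_PL = 1.22e19   # Planck mass [GeV]; all densities in natural units (powers of GeV)

def delta_n_dm(y):        return 0.0   # placeholder for sum_j a_ij beta_ij gamma_ij
def delta_rho(y, sector): return 0.0   # placeholder for sum_j b_ij beta_ij zeta_ij

def rhs(lna, y):
    n_dm, rho_dm, rho_em, rho_nu = y
    H = np.sqrt(8*np.pi*(rho_dm + rho_em + rho_nu)/3) / M_PL
    p_dm, p_em, p_nu = 0.0, rho_em/3, rho_nu/3     # non-rel. DM, relativistic radiation
    dt = 1.0 / H                                   # dt = d(ln a) / H
    return [(-3*H*n_dm + delta_n_dm(y)) * dt,
            (-3*H*(rho_dm + p_dm) + delta_rho(y, "dm")) * dt,
            (-3*H*(rho_em + p_em) + delta_rho(y, "em")) * dt,
            (-3*H*(rho_nu + p_nu) + delta_rho(y, "nu")) * dt]

T0 = 3e-3                                          # start at T_gamma = T_nu = 3 MeV
rho_em0 = np.pi**2/30 * (2 + 7/8*4) * T0**4
rho_nu0 = np.pi**2/30 * (7/8*6) * T0**4
y0 = [1e-10, 1e-11, rho_em0, rho_nu0]              # illustrative DM densities only
sol = solve_ivp(rhs, (0.0, 4.0), y0, rtol=1e-8)
print(sol.y[:, -1])                                # pure redshifting with zero collision terms
```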
## Appendix C collision terms for \(s\)-wave scalar DM
Here we provide the interaction rates for the pseudo-scalar mediated complex scalar DM model given in (2). In the concrete expressions below, the d.o.f. of all initial states have been summed over to give the physical number/energy exchange between two sectors with fixed temperatures.
For the annihilation process \(ee\leftrightarrow\phi\phi\), we obtain
\[\left(\frac{\delta n}{\delta t}\right)_{ee\leftrightarrow\phi\phi} =\beta_{ee\leftrightarrow\phi\phi}\gamma_{ee\leftrightarrow\phi \phi}\,, \tag{21}\] \[\left(\frac{\delta\rho}{\delta t}\right)_{ee\leftrightarrow\phi\phi} =\beta_{ee\leftrightarrow\phi\phi}\zeta_{ee\leftrightarrow\phi\phi}\,, \tag{22}\]
with \(\beta_{ee\leftrightarrow\phi\phi}=e^{2\tilde{\mu}_{e}}\) and
\[\gamma_{ee\leftrightarrow\phi\phi} =\frac{g_{e}g_{\bar{e}}}{(2\pi)^{4}}\int\frac{dsdE_{+}dE_{-}}{2} f_{e}^{\text{eq}}f_{e}^{\text{eq}}\sigma_{ee\leftrightarrow\phi\phi}\mathcal{F}_{12}\] \[\times\left[(1-\Delta_{\text{ann}})+\Delta_{\text{ann}}(1-\beta_{ \text{ann}})\right]\,,\]
\[\zeta_{ee\leftrightarrow\phi\phi} =\frac{g_{e}g_{\bar{e}}}{(2\pi)^{4}}\int\frac{dsdE_{+}dE_{-}}{2}\,f_{e}^{\rm eq}f_{e}^{\rm eq}\sigma_{ee\leftrightarrow\phi\phi}\mathcal{F}_{12}\] \[\times E_{+}\left[(1-\Delta_{\rm ann})+\Delta_{\rm ann}(1-\beta_{\rm ann})\right]\,,\]
where \(g_{e}\equiv g_{\bar{e}}=2\). Here, the electron equilibrium distribution functions are given at vanishing chemical potential,
\[f_{e}^{\rm eq}\simeq\frac{1}{e^{E_{e}/T_{\gamma}}+1}\quad(T_{\gamma}\gtrsim m _{e}/20)\,. \tag{109}\]
The flux factor for a reaction of the type \(1+2\leftrightarrow 3+4\) is given by, \(\mathcal{F}_{12}=\sqrt{\lambda(s,m_{1}^{2},m_{2}^{2})}/2\,,\) with \(\lambda(a,b,c)=a^{2}+b^{2}+c^{2}-2(ab+ac+bc)\) being the triangle function. As usual, \(s=(p_{1}+p_{2})^{2}\) is the squared center-of-mass (CM) energy and \(E_{\pm}=E_{1}\pm E_{2}\). Finally, the weights are defined as \(\Delta_{\rm ann.}\equiv e^{(T_{1}^{-1}-T_{3}^{-1})E_{+}}\) and \(\beta_{\rm ann.}\equiv e^{2(\tilde{\mu}_{3}-\tilde{\mu}_{1})}=e^{2(\tilde{\mu }_{4}-\tilde{\mu}_{2})}\); here we have for the subscripts \(1\widehat{=}e\) and \(3\widehat{=}\phi\). For the pseudoscalar induced annihilation, we obtain
\[\sigma_{ee\rightarrow\phi\phi}=\frac{y_{e}^{2}}{32\pi\Lambda^{2}}\,\frac{(1-4 m_{\phi}^{2}/s)^{1/2}}{(1-4m_{e}^{2}/s)^{1/2}}\,. \tag{110}\]
Note that we always set the heavier particles as final states in annihilation cross sections, as we often neglect quantum statistics of final states to simplify the numerical computation; see [9] for quantitative discussions and how the full statistics can be further included.
For the elastic scattering \(\phi e\leftrightarrow\phi e\) we obtain,
\[\left(\frac{\delta\rho}{\delta t}\right)_{\phi e\leftrightarrow\phi e}=\beta _{\phi e\leftrightarrow\phi e}\zeta_{\phi e\leftrightarrow\phi e}\,, \tag{111}\]
with \(\beta_{\phi e\leftrightarrow\phi e}=e^{\tilde{\mu}_{0}+\tilde{\mu}_{e}}\) and
\[\zeta_{\phi e\leftrightarrow\phi e} =\frac{(g_{\phi}+g_{\phi^{\star}})(g_{e}+g_{\bar{e}})}{(2\pi)^{4 }}\int dE_{1}dE_{2}dsdt\,f_{\phi}^{\rm eq}f_{e}^{\rm eq}\] \[\times\frac{d\sigma_{\phi e\rightarrow\phi e}}{dt}\,\mathcal{F}_ {12}\langle\Delta_{\rm sca}\delta E\rangle\,,\]
where \(g_{\phi}\equiv g_{\phi^{\star}}=1\) and \(f_{\phi}^{\rm eq}\) is the \(\phi\) equilibrium distribution function at vanishing chemical potential,
\[f_{\phi}^{\rm eq}=\frac{1}{e^{E_{\phi}/T_{\phi}}-1}. \tag{112}\]
In the collision integral, \(\langle\Delta_{\rm scat.}\delta E\rangle\) is the energy transfer per scattering, averaged over the azimuthal angle in the CM frame; the explicit expression is found in [9]. The integration region of \(t\) is given by \(\left[-\lambda(s,m_{1}^{2},m_{2}^{2})/s,0\right]\).
For the pseudoscalar mediator, the differential cross section of elastic scattering is
\[\frac{d\sigma_{\phi e\rightarrow\phi e}}{dt}=\frac{-y_{e}^{2}t}{16\pi\Lambda^{ 2}[m_{e}^{4}-2m_{e}^{2}(m_{\phi}^{2}+s)+(m_{\phi}^{2}-s)^{2}]}\,. \tag{113}\]
Throughout this work, we set \(d\sigma_{\rm scat.}/dt\) positive, with \(t\) being negative as defined above.
We now turn to neutrino interactions with \(\phi\). To avoid confusion, only interaction terms with left-handed neutrinos are included in this work, so we can safely neglect right-handed neutrinos. Consequently, we take \(g_{\nu}\equiv g_{\bar{\nu}}=1\) for each neutrino flavor, distinguishing a left-handed neutrino from a right-handed anti-neutrino; \(N_{g}=3\) gives the number of neutrino families.
Now, the collision term for neutrino pair annihilation, \(\nu\nu\leftrightarrow\phi\phi\), is given by
\[\left(\frac{\delta n}{\delta t}\right)_{\nu\nu\leftrightarrow\phi\phi} =\gamma^{0}_{\nu\nu\leftrightarrow\phi\phi}+\beta^{1}_{\nu\nu\leftrightarrow\phi\phi}\gamma^{1}_{\nu\nu\leftrightarrow\phi\phi}\,, \tag{114}\] \[\left(\frac{\delta\rho}{\delta t}\right)_{\nu\nu\leftrightarrow\phi\phi} =\zeta^{0}_{\nu\nu\leftrightarrow\phi\phi}+\beta^{1}_{\nu\nu\leftrightarrow\phi\phi}\zeta^{1}_{\nu\nu\leftrightarrow\phi\phi}\,. \tag{115}\]
Here, the superscripts \(0\) and \(1\) signify an expansion in the neutrino chemical potential. The collision integrals are given by
\[\gamma^{0(1)}_{\nu\nu\leftrightarrow\phi\phi}(T) =\frac{g_{\nu}g_{\bar{\nu}}N_{g}}{(2\pi)^{4}}\int\frac{dsdE_{+}dE_{-}}{2}\,f_{\nu}^{\rm eq}f_{\nu}^{\rm eq.(1)}\sigma_{\nu\nu\rightarrow\phi\phi}\mathcal{F}_{12}\] \[\times[(1-\Delta_{\rm ann})+\Delta_{\rm ann}(1-\beta_{\rm ann})]\,\] \[\zeta^{0(1)}_{\nu\nu\leftrightarrow\phi\phi}(T) =\frac{g_{\nu}g_{\bar{\nu}}N_{g}}{(2\pi)^{4}}\int\frac{dsdE_{+}dE_{-}}{2}\,f_{\nu}^{\rm eq}f_{\nu}^{\rm eq.(1)}\sigma_{\nu\nu\rightarrow\phi\phi}\mathcal{F}_{12}\] \[\times E_{+}\left[(1-\Delta_{\rm ann})+\Delta_{\rm ann}(1-\beta_{\rm ann})\right]\,.\]
where we set [9]
\[f_{\nu}^{\rm eq}(E_{\nu}/T_{\nu}) =\frac{1}{e^{E_{\nu}/T_{\nu}}+1}\,, \tag{116}\] \[f_{\nu}^{\rm eq.1}(E_{\nu}/T_{\nu}) =\frac{1}{e^{E_{\nu}/T_{\nu}}+e^{-E_{\nu}/T_{\nu}}+2}\,, \tag{117}\]
and \(\beta^{1}_{\nu\nu\leftrightarrow\phi\phi}=2\tilde{\mu}_{\nu}\). We have neglected the second-order corrections that are proportional to \(f^{\rm eq,(1)}f^{\rm eq,(1)}\). For the pseudoscalar-mediated annihilation it reads,
\[\sigma_{\nu\nu\rightarrow\phi\phi}=\frac{y_{\nu}^{2}}{16\pi\Lambda^{2}}\,(1-4m_ {\phi}^{2}/s)^{1/2}\,, \tag{118}\]
This cross section is a factor of \(2\) larger than the case of electron annihilation (in the limit \(m_{e}\to 0\)), the reason being that here we take each chiral SM neutrino as one degree of freedom, as stated above.
For the elastic scattering \(\phi\nu\leftrightarrow\phi\nu\), we obtain
\[\left(\frac{\delta\rho}{\delta t}\right)_{\phi\nu\leftrightarrow\phi\nu}=\beta^ {0}_{\phi\nu\leftrightarrow\phi\nu}\zeta^{0}_{\phi\nu\leftrightarrow\phi\nu}+\beta^{1}_ {\phi\nu\leftrightarrow\phi\nu}\zeta^{1}_{\phi\nu\leftrightarrow\phi\nu}\,, \tag{119}\]
with \(\beta^{0}_{\phi\nu\leftrightarrow\phi\nu}=e^{\tilde{\mu}_{\phi}}\,,\,\beta^{1}_{ \phi\nu\leftrightarrow\phi\nu}=e^{\tilde{\mu}_{\phi}}\tilde{\mu}_{\nu}\) and
\[\zeta^{0(1)}_{\phi\nu\leftrightarrow\phi\nu} =\frac{(g_{\phi}+g_{\phi^{\star}})(g_{\nu}+g_{\bar{\nu}})N_{g}}{(2\pi)^{4}}\int dE_{1}dE_{2}dsdt\,f_{\phi}^{\rm eq}f_{\nu}^{\rm eq.(1)}\] \[\times\frac{d\sigma_{\phi\nu\rightarrow\phi\nu}}{dt}\,\mathcal{F}_{12}\langle\Delta_{\rm sca}\delta E\rangle\,.\]
The differential cross section for the pseudoscalar-mediated \(\phi\) interaction reads
\[\frac{d\sigma_{\phi\nu\rightarrow\phi\nu}}{dt}=\frac{-y_{\nu}^{2}t}{16\pi\Lambda^{2}(m_{\phi}^{2}-s)^{2}}\,. \tag{100}\]
And the cross section is the same for anti-neutrinos (same below).
## Appendix D collision terms for \(p\)-wave scalar DM
The \(p\)-wave case follows the dark gauge boson mediator model in our preceding methodology paper [9], and thus we further assume that electrons and neutrinos may carry different dark charges, labeled as \(y_{e}\) and \(y_{\nu}\). The (differential) cross sections, averaging over initial states and summing up final states are given as follows.
For DM-neutrino interactions, there are pair creation/annihilation
\[\sigma_{\nu\nu\rightarrow\phi\phi}=y_{\nu}^{2}\frac{(s-4m_{\phi}^{2})}{24\pi \Lambda_{Z^{\prime}}^{4}}\left(1-\frac{4m_{\phi}^{2}}{s}\right)^{1/2}\,, \tag{101}\]
and elastic scattering
\[\frac{d\sigma_{\phi\nu\rightarrow\phi\nu}}{dt}=y_{\nu}^{2}\frac{(m_{\phi}^{2}-s)^{2}+st}{4\pi\Lambda_{Z^{\prime}}^{4}(m_{\phi}^{2}-s)^{2}}\,, \tag{102}\]
per neutrino species. For DM-electron interactions, there are also creation/annihilation
\[\sigma_{ee\rightarrow\phi\phi}=y_{e}^{2}\frac{(s-4m_{\phi}^{2})(s+2m_{e}^{2})}{48\pi\Lambda_{Z^{\prime}}^{4}\,s}\left(\frac{s-4m_{\phi}^{2}}{s-4m_{e}^{2}}\right)^{1/2}\,. \tag{103}\]
and elastic scattering
\[\frac{d\sigma_{\phi e\rightarrow\phi e}}{dt}=y_{e}^{2}\frac{(m_{e}^{2}+m_{ \phi}^{2}-s)^{2}+t(s-m_{e}^{2})}{4\pi\Lambda_{Z^{\prime}}^{4}[m_{e}^{4}-2m_{e} ^{2}(m_{\phi}^{2}+s)+(m_{\phi}^{2}-s)^{2}]}\,. \tag{104}\]
## Appendix E \(s\)-wave and \(p\)-wave Dirac DM
In the case of \(s\)-wave Dirac DM, we take the simplest case
\[\mathcal{L}^{\text{S}}=\sum_{l}\frac{y_{l}}{\Lambda_{Z^{\prime}}^{2}}\left( \bar{\chi}\gamma^{\mu}\chi\right)\left(\bar{l}\gamma_{\mu}l\right)\,, \tag{105}\]
where we assume that charged leptons and neutrinos can have different dark charges, being similar to the \(p\)-wave scalar case above. Then the associated four (differential) cross sections are
\[\sigma_{ee\rightarrow\chi\chi}=y_{e}^{2}\frac{(s+2m_{e}^{2})(s+2m_{\chi}^{2})}{12\pi\Lambda_{Z^{\prime}}^{4}\,s}\left(\frac{s-4m_{\chi}^{2}}{s-4m_{e}^{2}}\right)^{1/2}\,, \tag{106}\]
\[\frac{d\sigma_{\chi e\rightarrow\chi e}}{dt}=y_{e}^{2}\frac{2(s-m_{e}^{2}-m_ {\chi}^{2})^{2}+2st+t^{2}}{8\pi\Lambda_{Z^{\prime}}^{4}[(s-m_{\chi}^{2})^{2}+ m_{e}^{4}-2m_{e}^{2}(s+m_{\chi}^{2})]}\,, \tag{107}\]
for electrons, and for each neutrino species
\[\sigma_{\nu\nu\rightarrow\chi\chi} =y_{\nu}^{2}\frac{(s+2m_{\chi}^{2})}{6\pi\Lambda_{Z^{\prime}}^{4}}\left(1-\frac{4m_{\chi}^{2}}{s}\right)^{1/2}\,, \tag{108}\] \[\frac{d\sigma_{\chi\nu\rightarrow\chi\nu}}{dt} =y_{\nu}^{2}\frac{2(s-m_{\chi}^{2})^{2}+2t(s-m_{\chi}^{2})+t^{2}}{8\pi\Lambda_{Z^{\prime}}^{4}(s-m_{\chi}^{2})^{2}}\,. \tag{109}\]
In the case of the \(p\)-wave annihilation, for simplicity, we only adopt the interaction term
\[\mathcal{L}^{\text{P}}=\sum_{l}\frac{\tilde{y}_{l}}{\Lambda_{Z^{\prime}}^{2}} \left(\bar{\chi}\gamma^{\mu}\gamma^{5}\chi\right)\left(\bar{l}\gamma_{\mu}l \right)\,, \tag{110}\]
for which the associated four (differential) cross sections read
\[\sigma_{ee\rightarrow\chi\chi} =\tilde{y}_{e}^{2}\frac{(s+2m_{e}^{2})(s-4m_{\chi}^{2})}{12\pi \Lambda_{Z^{\prime}}^{4}s}\left(\frac{s-4m_{\chi}^{2}}{s-4m_{e}^{2}}\right)^{1 /2}\,, \tag{111}\] \[\frac{d\sigma_{\chi e\rightarrow\chi e}}{dt} =\tilde{y}_{e}^{2}\frac{2(s-m_{e}^{2}-m_{\chi}^{2})^{2}+(2s-4m_{e }^{2}+t)t-8m_{e}^{2}m_{\chi}^{2}}{8\pi\Lambda_{Z^{\prime}}^{4}[(s-m_{\chi}^{2} )^{2}+m_{e}^{4}-2m_{e}^{2}(s+m_{\chi}^{2})]}\,, \tag{112}\]
for electrons, and for each neutrino species
\[\sigma_{\nu\nu\rightarrow\chi\chi} =\tilde{y}_{\nu}^{2}\frac{(s-4m_{\chi}^{2})}{6\pi\Lambda_{Z^{\prime}}^{4}}\left(1-\frac{4m_{\chi}^{2}}{s}\right)^{1/2}\,, \tag{113}\] \[\frac{d\sigma_{\chi\nu\rightarrow\chi\nu}}{dt} =\tilde{y}_{\nu}^{2}\frac{2(s-m_{\chi}^{2})^{2}+2t(s-m_{\chi}^{2})+t^{2}}{8\pi\Lambda_{Z^{\prime}}^{4}(s-m_{\chi}^{2})^{2}}\,. \tag{114}\]
## Appendix F Millicharged Dirac DM coupled to neutrino-philic pseudoscalar
We now turn to fermionic DM models and provide the various collision terms predicted for the millicharged dark states supplemented with neutrino interactions, Eq. (4).
For the annihilation channel \(ee\leftrightarrow\chi\chi\) we write,
\[\sigma_{ee\rightarrow\chi\chi}=\frac{4\pi\alpha^{2}\epsilon^{2}(s+2m_{e}^{2})(s +2m_{\chi}^{2})}{3s^{3}}\left(\frac{s-4m_{\chi}^{2}}{s-4m_{e}^{2}}\right)^{1/2}\,. \tag{115}\]
For the elastic scattering \(\chi e\leftrightarrow\chi e\), the interaction rates can be expressed as
\[\frac{d\sigma_{\chi e\rightarrow\chi e}}{dt}=\frac{2\pi\alpha^{2}\epsilon^{2} \left[2(m_{e}^{2}+m_{\chi}^{2}-s)^{2}+2st+t^{2}\right]}{(t-m_{\gamma,\text{eff}} ^{2})^{2}\left[m_{e}^{4}-2m_{e}^{2}(m_{\chi}^{2}+s)+(m_{\chi}^{2}-s)^{2}\right]}\,. \tag{116}\]
The effective photon mass \(m_{\gamma,\text{eff}}\) induced by finite-temperature effects has been introduced to avoid potential collinear divergence, given by [99]
\[m_{\gamma,\text{eff}}^{2}(T_{\gamma})=\begin{cases}4\pi\alpha n_{e}/m_{e}\,,&(T_{ \gamma}\ll m_{e})\\ 2\alpha\pi T_{\gamma}^{2}/3\,,&(T_{\gamma}\gg m_{e})\end{cases} \tag{10}\]
where \(n_{e}\) is the number density of electrons plus positrons.
Turning to the neutrino interactions mediated by a pseudoscalar, we get the (differential) cross sections as
\[\sigma_{\nu\nu\to\chi\chi}=y_{A}^{2}y_{\nu}^{2}\frac{s}{8\pi\Lambda^{4}}\left( 1-\frac{4m_{\chi}^{2}}{s}\right)^{1/2}\,, \tag{11}\]
and
\[\frac{d\sigma_{\chi\nu\to\chi\nu}}{dt}=y_{A}^{2}y_{\nu}^{2}\frac{t(t-2m_{\chi} ^{2})}{16\pi\Lambda^{4}(s-m_{\chi}^{2})^{2}}\,, \tag{12}\]
which applies to each SM neutrino species.
|
2306.04938 | Knowledge Detection by Relevant Question and Image Attributes in Visual
Question Answering | Visual question answering (VQA) is a Multidisciplinary research problem that
pursued through practices of natural language processing and computer vision.
Visual question answering automatically answers natural language questions
according to the content of an image. Some testing questions require external
knowledge to derive a solution. Such knowledge-based VQA uses various methods
to retrieve features of image and text, and combine them to generate the
answer. To generate knowledgebased answers either question dependent or image
dependent knowledge retrieval methods are used. If knowledge about all the
objects in the image is derived, then not all knowledge is relevant to the
question. On other side only question related knowledge may lead to incorrect
answers and over trained model that answers question that is irrelevant to
image. Our proposed method takes image attributes and question features as
input for knowledge derivation module and retrieves only question relevant
knowledge about image objects which can provide accurate answers. | Param Ahir, Hiteishi Diwanji | 2023-06-08T05:08:32Z | http://arxiv.org/abs/2306.04938v1 | ###### Abstract
###### Abstract
Visual question answering (VQA) is a multidisciplinary research problem that is pursued through practices of natural language processing and computer vision. Visual question answering automatically answers natural language questions according to the content of an image. Some testing questions require external knowledge to derive a solution. Such knowledge-based VQA uses various methods to retrieve features of the image and text, and combines them to generate the answer. To generate knowledge-based answers, either question-dependent or image-dependent knowledge retrieval methods are used. If knowledge about all the objects in the image is derived, then not all of that knowledge is relevant to the question. On the other side, question-related knowledge alone may lead to incorrect answers and an overtrained model that answers questions irrelevant to the image. Our proposed method takes image attributes and question features as input to the knowledge derivation module and retrieves only question-relevant knowledge about image objects, which can provide accurate answers.
**Knowledge Detection by Relevant Question and Image Attributes**
**in Visual Question Answering**
Param Ahir\({}^{1}\)*, Dr. Hiteishi Diwanji\({}^{2}\)
_Department of Information Technology_
_L. D. College of Engineering_
_Ahmedabad, India_
\({}^{1}\)[email protected], \({}^{2}\)[email protected]_
_Keywords: vision, attention model, feature extraction, natural language, question-answering, knowledge retrieval_
## 1 Introduction
The task of describing visual objects is related to the visual Turing test. Visual question answering is a sub-task of that field. It is particularly complex, as it is an AI-complete problem. The ultimate goal of such vision tasks is to be able to describe images the same way humans do.
Visual question answering has a number of use cases. It helps in image retrieval, online chatbots that generate answers from web content, automated driving, object description, and food nutrition calculation. VQA also has a societal impact, as VQA systems are helpful to visually impaired people. They can capture their surroundings in an image and then ask questions about it to get a better understanding of it. Visually impaired farmers can capture images of their fields and ask questions like: How tall has the yield grown? Does any leaf have a disease? Does the soil look less humid? Blind children can ask questions about a storybook. VQA makes it possible for a visually impaired person to understand any kind of data visualization.
A visual question answering system can help humanize human-computer interaction in the artificial intelligence field in such a way that it becomes similar to human conversation. It is a multi-disciplinary research problem and requires concurrent processing of textual features from a question and visual features from the image. It uses NLP to understand the input question and answer it. It is significantly different from a typical NLP problem, as it requires analysis of and reasoning about the text over the content of the image. Object recognition techniques help in identifying the content of the image. To make the process simpler, one can derive which areas of an image are important for answering the given question by providing the relevant parts of the question to the image processing module, so that it gives attention only to the essential regions of the image and processes only those regions.
In a VQA system, text analysis and image analysis are mutually dependent on each other. As humans, we can easily identify objects, their positions, and their surroundings in an image, understand the question and its relation to the image, and use knowledge and common sense to answer it. When we want a computer system to perform the same tasks, a systematic approach and algorithms are required. Knowledge-aware VQA adds one more step to the entire process: deriving the essential knowledge that can help in generating the answer. The pipeline of such a VQA system contains four modules: (i) question feature extraction, (ii) image feature extraction, (iii) knowledge extraction, and (iv) the answering module. Various deep learning techniques are used to implement these modules. For processing and extraction of text features, a recurrent neural network (RNN) [1] is used. For processing and extraction of image features, a convolutional neural network (CNN) [2] is used. For knowledge extraction, established knowledge bases like ConceptNet [3] and DBpedia [4] are used. To predict the correct answer, various classification methods are used.
Previous models of knowledge-aware visual question answering have some drawbacks. These models retrieve knowledge based either on question features or on image features, which is not always adequate to answer the question. If only question-based knowledge is available, there is a possibility that the generated answer is not accurate for a given picture, because the image may contain attributes that do not align with usual world knowledge. For example, if the user asks a question like "What is the color of the apple?", then a question-dependent system will generate answers like red, yellow, green, or pink, which are common colors for an apple. There is a possibility that the image is abstract and the artist has drawn the apple in blue; then the generated answer will be false. So whenever abstract images are present, where everything depends on the imagination of the artist, a question-based system will generate a false answer. If only image-based knowledge is derived, then a large amount of unnecessary knowledge is extracted, which is of no use, as the system will derive knowledge about all the objects present in the image regardless of their occurrence in the question. Think of an image of a marketplace: the number of objects present will certainly be large. Such issues affect the performance of the model and prevent the system from attaining higher accuracy.
Our proposed model uses the concept of the attention model in deriving knowledge. This method gives more attention to those image features that also appear in the question and derives their knowledge first; after that, the remaining top-ranked image attributes are passed on for knowledge derivation. To derive knowledge, we use the ConceptNet knowledge base. In the answer generation model, image features, question features, and knowledge are passed together to generate answers. The cost is minimized using the cross-entropy loss function. By implementing and testing the model, we determined the appropriate number of features to pass to the knowledge derivation mechanism and how much knowledge should be extracted. Our proposed solution provides better results than the state-of-the-art approaches currently in use.
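A minimal sketch of this idea, not the implementation evaluated in this paper; the dimensions, the top-\(k\) cutoff, and the placeholder knowledge vector are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy inputs: 36 detected attribute/object embeddings and one question embedding.
attr_emb = rng.normal(size=(36, 512))     # image attribute features
q_emb = rng.normal(size=512)              # question feature (e.g. from an RNN)

# Question-guided attention: score each attribute against the question and keep
# the top-k attributes; their external knowledge (e.g. ConceptNet entries)
# would be retrieved first.
scores = softmax(attr_emb @ q_emb / np.sqrt(512))
top_k = np.argsort(scores)[::-1][:5]
print("attributes selected for knowledge retrieval:", top_k)

# Fuse question, attended image context and (placeholder) knowledge vector,
# then classify over a fixed answer vocabulary with the cross-entropy loss.
img_ctx = scores @ attr_emb                          # attention-weighted image context
knowledge = rng.normal(size=512)                     # stand-in for retrieved knowledge
fused = np.concatenate([q_emb, img_ctx, knowledge])
W = rng.normal(scale=0.01, size=(1000, fused.size))  # 1000 candidate answers
probs = softmax(W @ fused)
true_answer = 42
print("cross-entropy loss:", -np.log(probs[true_answer]))
```

In the actual model, the selected attributes would be used to query the knowledge base, and the retrieved knowledge embedding would replace the random placeholder.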
In summary, the contributions of this paper are: question- and image-relevant knowledge detection for VQA; adding a question feature matrix to retain image attributes by generating a fusion matrix; and adding question features after the attribute prediction layer in such a way that attributes derived at this level get higher priority than the previously derived attributes. The main objective is to ensure better classification of image attributes.
## 2 Background
Most of the previous work in this field deals with simple visual question answering systems with classifiers. There was no scope to gather peripheral knowledge about the objects in question. Previously, features of the question and image were derived using diverse algorithms. To derive the image features, algorithms like VGGNet [5] and GoogleNet [6] were used. To derive textual features, algorithms like Word2Vec [7] and one-hot encoding were used. To produce the feature fusion, a vector representation of the combination of textual
and image features were given as input to the multilayer perceptron [8] in neural network. Various methods were used to combine these features like simple concatenation, sum pooling, average pooling or product of features, etc. Answer generation module takes features with the question and feature fusion as input and passes it to classifier to predict the answer. Correct answer is produced by predicting score of candidate answers by relating them to some loss function to measure dissimilarity among two probability distributions. Earlier basic baseline models [9][10] were used to answer the question about the image. Those models answer the question by giving the most frequent answers. Some models even answer the question by randomly picking the answer and then checking its accuracy with various loss functions. Later on, some sophisticated models with a linear classifier or multilayer perceptron were used. Vector representation of the combination of textual and image features are given as input to the multilayer perceptron. Various methods were used to combine these features like simple concatenation, sum pooling, average pooling or product of features, etc. Most of the previous works deal with two models, Simple multilayer perceptron (MLP) and Long short-term memory (LSTM) [11]. MLP used a neural network classifier with two hidden layers. Image features combined with textual features were given as input. To derive the output tanh activation function is used. For textual features representation, a bag-of-words method [13] was used. For image features, the output of the last layer of ResNet [12] (visual geometry group) was used. LSTM model used one-hot encoding for question features and for, image features are derived just like MLP but features are transformed into a linear vector of 1024 dimension to match it with the question feature vector. In both models, for image representation other than ResNet GoogLeNet, VGGNet can be used. The basic problem with using global features is that it generates obscure input space for the model. It is important to attend the most relevant region of the input space. So that generated input space is relevant to the task and model gets clarity about its target area that should be looked upon to generate the answer. An issue with these models is that they include global image features in processing and generation of the answer which is not required. So, the attention model only focuses on local features of the image which are derived using the textual attention model.
## 3 Related Work
Currently, several methods are in use for external-knowledge-based visual question answering. As stated earlier, there are four modules, and different VQA systems implement them differently.
### Image Features Extraction
Image features are extracted using various object detection methods to obtain a list of objects and their attributes. There are many such algorithms. R-CNN [14] extracts region proposals, passes them into a CNN, and classifies them with a classifier such as an SVM [15]. Spatial Pyramid Pooling networks (SPP-Net) [16] remove R-CNN's fixed input-size requirement by introducing a spatial pyramid pooling layer. Faster R-CNN, unlike R-CNN, uses a region proposal network with regression and classification layers together with an ROI pooling layer [17]. Feature Pyramid Networks (FPN) [18] improve the detection of small objects in Faster R-CNN by using multi-scale feature pyramids. RetinaNet [19] is a single-stage detector that addresses the class-imbalance problem with a focal loss. YOLO [20] (You Only Look Once) is another single-stage detector built on the DarkNet architecture.
### Textual Features Extraction
In VQA, the text data consists of questions and answers, which are converted into feature representations that can be used by classifiers. There are various methods for this. The bag-of-words model, in its simplest binary form, records only the presence of words in the feature list without taking the number of occurrences into account. The TF-IDF [21] algorithm weights words by their term frequency and inverse document frequency. The word2vec algorithm uses a neural network to generate word embeddings in such a way that the semantics of words are preserved.
### External Knowledge Extraction
In current VQA systems, knowledge about question or image features can be extracted from knowledge bases. A structured query language such as SPARQL [3] is used to formulate the query, and the knowledge is retrieved.
### Answer Generation
This module is divided into two parts: (i) feature fusion and (ii) the answer predictor. For feature fusion, attention models are currently used. An attention model contains vectors that assign weights to regions of the image and to words of a sentence; a similar approach is used for sentences, where a specific word can increase the probability of the occurrence of another word. Many mechanisms are available to implement attention, such as dot-product, scaled dot-product, content-based, and location-based attention. Broadly, attention mechanisms can be divided into three categories: self-attention, which finds relations within the input sequence by relating words at different positions; global attention, which attends to the entire input sequence; and local attention, which attends to parts of the input sequence. Methods like Generalized Multimodal Factorized High-order (MFH) pooling use the Multimodal Factorized Bilinear (MFB) pooling model to fuse textual and image features; the generalized MFH approach cascades multiple MFB blocks to capture the complex correlations that distinguish different question-image pairs. In the Cross-Modal Multistep Fusion network (CMF), the outputs of the image and word attention are fed into the CMF network, and at each layer three outputs are generated: two attention features are passed to the next CMF unit, while the fusion feature provides multistep fusion via sum pooling to obtain the final feature for answer prediction. The process of answer generation is treated as a classification problem: the fused image and text features together with the derived external knowledge are passed to a multilayer perceptron, and a softmax activation function is used to generate the probability of each answer. Some methods also pass the question type to the classifier for better answer classification, some rely on the picture superiority effect and pass the image vector separately into the classifier, and some treat the whole problem as a graph and use graph convolution layers to predict the answer. For parameter learning and loss calculation, varied loss functions are used.
### Datasets
An external-knowledge-based VQA system requires a special kind of dataset whose questions require external knowledge to answer. A normal VQA dataset contains only images and question-answer pairs. Knowledge-based VQA datasets such as OK-VQA, KVQA, and FVQA are available.
## 4 Our Approach
Our proposed approach derives the question and image features first and then passes them to the knowledge extraction module, which extracts question-aware image-attribute knowledge that helps in generating highly accurate answers. Our knowledge extraction module is not biased toward textual or visual content, and the knowledge it extracts is more precise than in older methods.
Our proposed flow is as follows,
1. Regions are extracted from the input image using bounding boxes.
2. Image attributes Vatt(I) are extracted using Faster R-CNN.
3. Textual data embeddings are generated using GloVe.
4. The embeddings are passed to an LSTM to generate text attributes Tatt(Q).
5. Question-aware image attributes are combined for knowledge extraction.
6. A ConceptNet query is written to derive knowledge about the attributes.
7. Vatt(I), Kknow(I, Q) and Tatt(Q) are combined into a single scene representation as a triplet.
8. An MLP classifier takes this triplet as input.
9. Answers with probability distributions are generated.
### Image Features Extraction
Images are resized to 224 x 224. The preprocessed images are given to a Faster R-CNN pre-trained on the ImageNet dataset with VGG16 as the backbone. The Faster R-CNN algorithm detects objects in two stages. In the first stage, the Region Proposal Network (RPN) detects all image regions that possibly contain objects through a sliding window. In the second stage, the object is identified and a feature vector of dimension 2048 is generated. The feature vector contains the mean-pooled output of the convolutional layer over the region from which the object is detected. The feature files contain the shape of the image and the names of the detected objects.
\[v^{k}\in d^{m}\times m \tag{1}\]
\[V_{\text{att}}(I_{i})=\sum_{k=0}^{n}v^{k} \tag{2}\]
As shown in equation (1), the feature vector of the k-th object of a single image i contains the detected object name m and the dimension d^m of its proposed region. As in equation (2), the n objects of the same image are combined using the image id. The number of detected objects per image is thresholded to lie between 10 and 100.
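As a concrete illustration of equation (2), the per-object vectors can be aggregated with a few lines of NumPy. This is a minimal sketch that assumes the 2048-dimensional object vectors have already been exported by the Faster R-CNN detector; the function and variable names are placeholders rather than part of the original pipeline.

```python
import numpy as np

def aggregate_image_attributes(object_vectors, max_objects=100):
    """Combine per-object Faster R-CNN vectors into a single V_att(I) per image.

    object_vectors: array of shape (n_objects, 2048), one row per detected object.
    """
    # keep at most `max_objects` detections, following the [10, 100] threshold above
    kept = object_vectors[:max_objects]
    # equation (2): sum the object vectors belonging to one image
    return kept.sum(axis=0)

# hypothetical usage with random stand-in features for one image
features = np.random.rand(37, 2048)
v_att = aggregate_image_attributes(features)
print(v_att.shape)  # (2048,)
```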
### Textual Features Extraction
To create the textual data embeddings, pre-trained GloVe word vectors (42B tokens, 1.9M vocabulary, 300-dimensional vectors) are used. Questions and answers are tokenized into words. A vocabulary of the top questions and answers is created, and the word vectors are taken from the pre-trained GloVe embeddings. Each 300-dimensional vector is passed to an LSTM network with 1024 hidden units to generate the final output embedding.
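A minimal Keras sketch of this text branch is given below. It assumes a GloVe embedding matrix of shape (vocab_size, 300) has already been assembled for the question vocabulary; the vocabulary size and maximum question length are illustrative values, not fixed by the paper.

```python
import numpy as np
import tensorflow as tf

vocab_size, embed_dim, max_len = 10000, 300, 20        # illustrative sizes
glove_matrix = np.random.rand(vocab_size, embed_dim)    # stand-in for the real GloVe weights

question_input = tf.keras.Input(shape=(max_len,), dtype="int32")
embedded = tf.keras.layers.Embedding(
    vocab_size,
    embed_dim,
    embeddings_initializer=tf.keras.initializers.Constant(glove_matrix),
    trainable=False,                                    # keep the pre-trained vectors fixed
)(question_input)
# LSTM with 1024 hidden units produces the question attribute vector T_att(Q)
t_att = tf.keras.layers.LSTM(1024)(embedded)

text_encoder = tf.keras.Model(question_input, t_att)
text_encoder.summary()
```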
### External Knowledge Extraction
In this module, the image attributes Vatt(I) and question attributes Tatt(Q) are passed to the knowledge extraction module, where ConceptNet returns knowledge about the objects through its JSON REST API. The edges in the extracted JSON file contain knowledge about the objects in the form of relation-fact pairs.
Our approach first derives the knowledge about those image attributes that are also part of the question attributes, and then considers the remaining top 5 attributes. The derived knowledge is stored as triplets and converted into Kknow(I, Q) vectors using word2vec. Knowledge up to depth 11 is derived for each object; the top 11 edges are selected based on the observation that most objects have at least 11 edges of knowledge available, which is enough to generate the answer.
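The ConceptNet lookup can be sketched as a plain HTTP request. The snippet below assumes the public ConceptNet REST endpoint at api.conceptnet.io is reachable and keeps only the first few edges per attribute; the helper name know() is ours, and the 11-edge limit follows the observation above.

```python
import requests

def know(term, max_edges=11):
    """Query ConceptNet for a single attribute and return (start, relation, end) triplets."""
    url = f"http://api.conceptnet.io/c/en/{term.lower()}"
    response = requests.get(url, timeout=10)
    edges = response.json().get("edges", [])
    triples = []
    for edge in edges[:max_edges]:
        triples.append((edge["start"]["label"], edge["rel"]["label"], edge["end"]["label"]))
    return triples

# hypothetical usage
for triple in know("umbrella"):
    print(triple)
```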
**Algorithm: knowledgeExtraction (I, Q)**
```
count = 0
for v_k in V_att(I_i) while count < 5:        # iterate over detected object vectors
    for m in v_k:                             # m is the object name carried by the vector
        if m == T_att(Q):                     # attribute also occurs in the question: extract with priority
            know(m)
        else:                                 # attribute not in the question: extract and count it
            know(m)
            count = count + 1
    K_know(I_i, Q_j) = know(m) x I_i          # attach the extracted knowledge to image I_i
end for
```
In the algorithm, v^k is the object vector and m is the name of the object for which knowledge know(m) is extracted, for the i-th image and j-th question. The algorithm uses a concept similar to attention models, where textual attention is applied to the image feature vectors.
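For readers who prefer executable code over pseudocode, the same control flow can be written in Python as below. It reuses the know() helper sketched in the previous section and treats the object names in Vatt(I) and the question tokens in Tatt(Q) as plain strings; this is our rendering of the algorithm, not the authors' implementation.

```python
def knowledge_extraction(image_object_names, question_tokens, max_extra_objects=5):
    """Derive ConceptNet knowledge, giving priority to attributes that occur in the question."""
    # `know(term)` is the ConceptNet helper sketched in the previous section
    knowledge = {}
    extra_count = 0
    question_tokens = set(question_tokens)
    for name in image_object_names:
        if name in question_tokens:
            # question-matched attribute: always extract knowledge
            knowledge[name] = know(name)
        elif extra_count < max_extra_objects:
            # attribute not in the question: extract, but only for the top few objects
            knowledge[name] = know(name)
            extra_count += 1
    return knowledge
```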
### Answer Generation
The answer generation model is a classification model with a multilayer perceptron, where the top N most frequent answers are treated as N classes. The output is the probability of each class, obtained with a softmax activation function. The input to the neural network is the image feature vector of dimension 2048 concatenated with the question feature vector of dimension 300 and the knowledge vector Kknow(I, Q). During training, the model weights are updated iteratively using the AMSGrad variant of the Adam optimizer with a learning rate of 0.003. A categorical cross-entropy loss function is used to calculate the cost of the network.
\[Z^{\prime}=\operatorname{concat}\bigl(V_{\text{att}}(I)\cup K_{\text{know}}(I,Q)\cup T_{\text{att}}(Q)\bigr) \tag{3}\]
\[Y^{\prime}=\operatorname{SoftMax}\bigl(Z^{\prime}+W_{i,q}\bigr) \tag{4}\]
Loss calculation,
\[\operatorname{Loss}(Y,Y^{\prime})=-\sum_{i=0}^{n}Y_{i}\,\log\bigl(Y^{\prime}_{i}\bigr) \tag{5}\]
Here W_{i,q} denotes the model weights, Y' is the predicted output, and Y is the actual output.
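A compact Keras sketch of equations (3)-(5) is shown below, assuming the three feature vectors have already been computed. The input dimensions follow the text (2048 for Vatt(I) and 300 for Tatt(Q)); the knowledge-vector size, the hidden-layer configuration, and the number of answer classes are illustrative placeholders.

```python
import tensorflow as tf

img_in = tf.keras.Input(shape=(2048,), name="v_att")    # image attributes
que_in = tf.keras.Input(shape=(300,), name="t_att")     # question features
know_in = tf.keras.Input(shape=(300,), name="k_know")   # knowledge vector (size assumed)

# equation (3): concatenate the three representations into Z'
z = tf.keras.layers.Concatenate()([img_in, que_in, know_in])
h = tf.keras.layers.Dense(1024, activation="tanh")(z)
h = tf.keras.layers.Dropout(0.5)(h)
# equation (4): softmax over the top-N candidate answers
y = tf.keras.layers.Dense(1000, activation="softmax")(h)

model = tf.keras.Model([img_in, que_in, know_in], y)
# AMSGrad variant of Adam with learning rate 0.003; categorical cross-entropy (equation (5))
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.003, amsgrad=True),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```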
\begin{table}
\begin{tabular}{|c|c|} \hline
**Relationship** & **Example** \\ \hline RelatedTo & (Skiing,RelatedTo,Mountains) \\ \hline AtLocation & (Tajmahal,AtLocation,Agra) \\ \hline IsA & (Dog,IsA,Mammal) \\ \hline CapableOf & (Dog,CapableOf,barking) \\ \hline UsedFor & (Umbrella,UsedFor,shading) \\ \hline Desires & (Dog,Desires,Playing) \\ \hline HasProperties & (Donuts,HasProperties,sweet) \\ \hline HasA & (Dog,HasA,tail) \\ \hline PartOf & (Dog,PartOf,canines) \\ \hline ReceivesAction & (Dog,ReceivesAction,Fed by human) \\ \hline CreatedBy & (Chocolate,CreatedBy,Coco) \\ \hline \end{tabular}
\end{table}
Table 1: Relationship and Knowledge from ConceptNet [3]
## 5 Experiment
In this section, we evaluate the performance of our model against state-of-the-art models. We use TensorFlow and Keras to implement our proposed solution on an established dataset and compare it with previous results.
### Datasets
For training we used the OK-VQA [22] dataset and compared results with an image-only knowledge model. We also created a small dataset of 20 images and 30 question-answer pairs with a knowledge base file to tune the parameters and check the accuracy of our model. The OK-VQA dataset contains around 14,055 open-ended questions with, on average, 5 ground-truth answers per question. All questions in the dataset require external knowledge to answer.
### Evaluation
Our model is trained on 9009 samples, and 7779 samples were used for validation. To evaluate the model, the Wu-Palmer Similarity (WUPS) [23] accuracy metric is used, which calculates the similarity between two words and returns a relatedness score.

Figure 1: Proposed Model
The first step is to preprocess the textual and visual data. In the textual preprocessing, the data is converted into lowercase, and questions and answers are tokenized into words with a maximum length of 20 characters. For the image data, all images are converted into 'bgr' format to make them compatible with OpenCV, and all images are resized to shape [224, 224]. The second step is feature extraction from the textual and visual content. For text feature extraction, the questions and answers are encoded. In both datasets, the questions are stored in a JSON file.
Knowledge about each object is derived up to depth 11; at this depth the derived knowledge remains relevant and useful. The format of the derived JSON file is as follows,
e.g., { know_id: 81721,
uri: ConceptNet/e/655e2da1f472ca894742d4156a8d363b
Labels: umbrella, sunny
Surface: "[Umbrella] is used for shading in [sunny] place."
Relation: "usedfor"}
Finally, a simple multilayer perceptron and a long short-term memory network are used for answer classification. The MLP has 3 hidden layers with 1024 hidden units each and a dropout of 0.5; the activation functions of the first and second layers are tanh and ReLU. The LSTM has one hidden layer with 512 hidden units. The final layer of the MLP is a softmax over 1000 classes, which generates the answers with their probabilities. The model uses an Adam optimizer with a 0.001 learning rate and a categorical cross-entropy function for the loss. We trained our model for 10 epochs with batch size 100. Comparisons of our results with the standard methods are given below.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**Accuracy**} \\ \cline{2-3} & _Training_ & _Validation_ \\ \hline Img-attention & 0.6532 & 0.5162 \\ \hline Que-attention & 0.6023 & 0.5170 \\ \hline
**Img-Que Co-attention (Proposed)** & **0.6698** & **0.5963** \\ \hline Img-Que & 0.6703 & 0.5972 \\ Stacked Co-attention [24] & & \\ \hline \end{tabular}
\end{table}
Table 2: **Overall accuracies on training and validation samples of OK-VQA**
Figure 3: **Training and Validation Accuracy on OK-VQA**
Figure 5 shows that the accuracy on the training data does not improve much because we still consider the dataset ground-truth answers as the first choice of answer; only if the dataset does not contain an answer is the knowledge-based answer considered. The overall model accuracy is given in Table 2. Our model accuracy is around 66% for training data and 59% for validation data. We also tried to implement our model on top of the stacked co-attention VQA model, which improved accuracy only marginally. Figures 6 and 7 show some of the results derived from our model. The results in Figure 7 have a probability of less than 50%, which is not considered a proper answer. The reason for the low accuracy on question 3 is the grammatically complex structure of the question, and on question 4 the knowledge base is not able to derive the knowledge accurately.
A VQA system needs to understand both the image and the question, deducing their implications to deliver fitting answers. Some previous systems required additional knowledge, which is fulfilled by adding knowledge from knowledge bases to the system. Previous systems fed visual attributes to the knowledge detection module to derive information about the objects in the question. The problem with this approach is that it collects a lot of additional information about various objects in the image that is not required for answer generation, while not necessarily containing the information needed for the answer. Previous models therefore provide low accuracy for questions like 'why', which depend heavily on knowledge. To solve this issue, our model feeds to the knowledge detection module those visual attributes of the input that are present in the question, or at least gives them precedence over other visual features. Our model thus generates knowledge that is more relevant to answering the question, which eventually increases the accuracy of the model. The previous model provides an accuracy of around 52%, which we improved to 59%.
There are many challenges in the knowledge-based approach that give scope for future work: the question embedding of the model could be made more semantically precise for questions like question 3 of Figure 7; knowledge extraction is still not completely automated and could be trained separately so it can be reused in different areas; domain-specific datasets could enable domain-specific VQA systems; and the process of knowledge classification for answer generation can also be improved. These are some of the limitations of the current approach that give direction for future work in this field.
## Acknowledgments
We would like to thank the team of OK-VQA for generously making their dataset available to researchers like us.
|
2304.12923 | Quantum Gaussian Process Regression for Bayesian Optimization | Gaussian process regression is a well-established Bayesian machine learning
method. We propose a new approach to Gaussian process regression using quantum
kernels based on parameterized quantum circuits. By employing a
hardware-efficient feature map and careful regularization of the Gram matrix,
we demonstrate that the variance information of the resulting quantum Gaussian
process can be preserved. We also show that quantum Gaussian processes can be
used as a surrogate model for Bayesian optimization, a task that critically
relies on the variance of the surrogate model. To demonstrate the performance
of this quantum Bayesian optimization algorithm, we apply it to the
hyperparameter optimization of a machine learning model which performs
regression on a real-world dataset. We benchmark the quantum Bayesian
optimization against its classical counterpart and show that quantum version
can match its performance. | Frederic Rapp, Marco Roth | 2023-04-25T15:38:19Z | http://arxiv.org/abs/2304.12923v1 | # Quantum Gaussian Process Regression for Bayesian Optimization
###### Abstract
Gaussian process regression is a well-established Bayesian machine learning method. We propose a new approach to Gaussian process regression using quantum kernels based on parameterized quantum circuits. By employing a hardware-efficient feature map and careful regularization of the Gram matrix, we demonstrate that the variance information of the resulting quantum Gaussian process can be preserved. We also show that quantum Gaussian processes can be used as a surrogate model for Bayesian optimization, a task that critically relies on the variance of the surrogate model. To demonstrate the performance of this quantum Bayesian optimization algorithm, we apply it to the hyperparameter optimization of a machine learning model which performs regression on a real-world dataset. We benchmark the quantum Bayesian optimization against its classical counterpart and show that quantum version can match its performance.
**Keywords: quantum computing, quantum machine learning, quantum kernel methods, Gaussian processes, Bayesian optimization, hyperparameter optimization**
Contributing authors: [email protected]; [email protected];
Quantum computers are expected to have a profound impact on numerous areas in science and industry. The ongoing progress of quantum computing hardware [1, 2, 3] is accompanied by intense algorithmic research activities which explore avenues towards achieving a quantum advantage beyond proof-of-principles [4, 5]. Quantum machine learning combines quantum computing and machine learning and is often deemed as one of the fields that could benefit from quantum computing early [6]. While some quantum machine learning methods rely on running quantum versions of linear algebra sub-routines for a speed-up [7, 8, 9], these methods usually require deep quantum circuits that are beyond the capabilities of currently accessible noisy intermediate-scale quantum (NISQ) hardware [10].
Recently, quantum kernel methods have received much attention. These methods are appealing because they can be studied using the well established toolbox of classical kernel theory [11, 12]. Furthermore, using a suitable feature map, they can be implemented on available NISQ devices [13]. The general idea is to project the data into the Hilbert space of a quantum computer using a quantum feature map. By calculating pair-wise inner products of data points, a kernel matrix can be calculated which can then be used in classical methods such as support vector machines or kernel ridge regression [14, 15]. The expectation is that by encoding the data into a quantum Hilbert space, the feature map can be enriched with non-classical resources that provide an advantage compared
to classical feature maps. This has already been demonstrated for tailored datasets [6, 16].
While quantum versions of kernel machines like the support vector machine [8] have been the focus of recent studies, quantum variants of probabilistic kernel methods have not received as much attention. In this work, we use quantum kernels to create _quantum Gaussian processes_ (QGP). Gaussian process (GP) models are popular machine learning methods based on Bayesian inference. GPs are specified by a covariance matrix which can be obtained by calculating the Gram matrix of a kernel function for a given dataset. Given their probabilistic nature, GPs have the desirable property of providing a variance for their predictions which allows uncertainty quantification.
Earlier investigations of QGPs have focused on using quantum approximations of classical kernels and have raised the question of whether the variance information can be retained on noisy near-term devices [17]. Here, we investigate QGP regression using a hardware-efficient, parameterized feature map. We demonstrate that careful regularization of the Gram matrix can help preserve the variance, and we show how the overall performance can be improved with end-to-end training based on log-likelihood optimization. We showcase the capabilities of the QGP model by using it as a surrogate model for Bayesian optimization (BO) [18], a task that critically relies on the variance information of the surrogate model. We benchmark the resulting quantum Bayesian optimization (QBO) against optimizations using surrogate models based on conventional GPs and show that QBO can match their performance on the task of optimizing the multidimensional hyperparameters of a classical machine learning model. The hyperparameter optimization is performed on a regression task for a real-world dataset which evaluates the remaining value of used industrial machinery. Figure 1 gives an overview of the various components used in this work.
The manuscript is structured as follows. In Sec. 1, we provide an introduction to the fundamentals of QGPs by briefly discussing GP theory and exploring quantum kernels. Subsequently, we illustrate the concept of quantum BO using a QGP surrogate model. In Sec. 2, we demonstrate the versatility and effectiveness of QGP models through our analysis of a one-dimensional dataset, followed by their successful application in QBO for the purpose of minimizing a multidimensional function and identifying the optimal hyperparameters of a machine learning model. We present the results of our simulations, including those obtained from noiseless and sample-based experiments, as well as the outcomes from a real quantum computing backend.
## 1 Quantum Gaussian Process Regression
Gaussian process regression is a non-parametric Bayesian machine learning method [19]. It can be used to solve a regression problem of the form
\[y=f(\mathbf{x})+\epsilon\,, \tag{1}\]
where \(f(\mathbf{x})\) is a data generating function, with labels \(y\in\mathbb{R}\), observed data \(\mathbf{x}\in\mathcal{X}\subset\mathbb{R}^{d}\) and independent zero-mean Gaussian noise \(\epsilon\sim\mathcal{N}(0,\sigma^{2})\). If \(f\) is a random function with a Gaussian prior distribution then the function values can be taken as random variables that form a Gaussian process (GP). We denote the GP as \(\mathcal{GP}(m,k)\) with a mean function \(m\) and a covariance function \(k\). Note that \(k\) is mathematically equivalent to a kernel function, we will therefore refer to it as _kernel_ in the following. A GP is a collection of random variables such that any finite subset is Gaussian distributed [19]. Concretely, for collection of data points \(X:=(\mathbf{x}_{1},\dots,\mathbf{x}_{n})\,,\mathbf{x}_{i}\in\mathcal{X}\) the variables \(f(\mathbf{x}_{i})_{i=1}^{n}\) are jointly distributed by a multivariate Gaussian distribution such that
\[f(\mathbf{x})\sim\mathcal{N}(m(\mathbf{x}),k(\mathbf{x},\mathbf{x^{\prime}}))\,,\]
GPs are thus distributions over functions specified by the covariance \(k\)[20].
To predict the values \(f_{*}\) of new data points \(X_{*}\) (test points), we can calculate the posterior distribution given \(X\) and \(X_{*}\)
\[p(f_{*}|X_{*},X,f)=\mathcal{N}(f_{*};\mu_{*},\Sigma_{*})\,. \tag{2}\]
GP regression thus not only yields a prediction for the mean \(\mu_{*}\) but also for the covariance \(\Sigma_{*}\). They
are given by
\[\mu_{*} = k_{XX_{*}}^{T}(k_{XX}+\sigma^{2}\mathbf{I})^{-1}f, \tag{3}\] \[\Sigma_{*} = k_{X_{*}X_{*}}-k_{XX_{*}}^{T}(k_{XX}+\sigma^{2}\mathbf{I})^{-1}k_{XX_{* }}\,. \tag{4}\]
The elements of the Gram matrices \(k_{XX}\), \(k_{XX_{*}}\) and \(k_{X_{*}X_{*}}\) are the pair-wise inner products of the training points, of the training and test points, and of the test points, respectively. Note that here we have assumed that we only have access to noisy labels as in Eq. (1). The variance of this noise can be explicitly taken into account in the calculation of the mean and the covariance. This serves as an implicit regularization which often results in a better conditioned posterior covariance matrix.
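For reference, the posterior mean and covariance of Eqs. (3)-(4) can be written directly in NumPy once the three Gram matrices are available. This is a generic sketch of standard GP algebra based on a Cholesky solve and is independent of whether the kernel is classical or quantum.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_posterior(K_XX, K_XXs, K_XsXs, y, sigma2):
    """Posterior mean and covariance of a GP with noisy labels y, cf. Eqs. (3)-(4)."""
    n = K_XX.shape[0]
    L = cho_factor(K_XX + sigma2 * np.eye(n), lower=True)
    alpha = cho_solve(L, y)          # (K_XX + sigma^2 I)^{-1} y
    mean = K_XXs.T @ alpha
    v = cho_solve(L, K_XXs)          # (K_XX + sigma^2 I)^{-1} K_XXs
    cov = K_XsXs - K_XXs.T @ v
    return mean, cov
```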
Equations (2)-(4) show that the outcome of the GP is fully governed by the choice of the kernel. In general, a kernel is a positive definite function \(k:\chi\times\chi\rightarrow\mathbb{R}\), which serves as a similarity measure between pairs of inputs \(\mathbf{x}\) and \(\mathbf{x^{\prime}}\). Specifically, the kernel computes the inner product of the corresponding feature vectors \(\phi(\mathbf{x})\) and \(\phi(\mathbf{x^{\prime}})\)
\[k(\mathbf{x},\mathbf{x^{\prime}})=\langle\phi(\mathbf{x}),\phi(\mathbf{x^{\prime}})\rangle_{ \mathcal{F}}, \tag{5}\]
in a potentially high-dimensional feature space \(\mathcal{F}\), where the feature map \(\phi(\mathbf{x})\) is a non-linear map from the input space \(\chi\) to the feature space \(\mathcal{F}\).
### Quantum kernels
Kernels can be constructed by embedding data into the Hilbert space of a quantum system [13, 21] [see Fig. 1(a)]. The resulting quantum state is
\[\left|\phi(\mathbf{x};\mathbf{\theta})\right\rangle=U(\mathbf{x};\mathbf{\theta})\left|0 \right\rangle\,. \tag{6}\]
The unitary operator \(U(\mathbf{x};\mathbf{\theta})\) implements the quantum feature map \(\phi\). It encodes the classical data point \(\mathbf{x}\) into a quantum state. In principle, it can depend on additional parameters \(\mathbf{\theta}\) that can be trained variationally [22]. Using the feature map in Eq. (6), a quantum kernel can be defined in terms of the Hilbert-Schmidt inner product
\[k(\mathbf{x},\mathbf{x^{\prime}})=\operatorname{Tr}\left[\rho(\mathbf{x})\rho(\mathbf{x^{ \prime}})\right], \tag{7}\]
with the density matrix \(\rho(\mathbf{x})=U(\mathbf{x})\left|0\right\rangle\!\!\langle 0|\,U^{\dagger}(\mathbf{x})\). It can be shown that this definition results in a positive definite kernel [12]. For pure states, Eq. (7) reduces to the overlap between the states encoding the data points such that in practice the kernel elements can be calculated by applying the feature map and its inverse to \(\mathbf{x}\) and \(\mathbf{x^{\prime}}\) and measuring the occupation of the ground state
\[k(\mathbf{x},\mathbf{x^{\prime}})=\left|\langle\phi(\mathbf{x^{\prime}})|\phi(\mathbf{x}) \rangle\right|^{2}=\left|\langle 0|\,U(\mathbf{x^{\prime}})^{\dagger}U(\mathbf{x}) \left|0\right\rangle\right|^{2}. \tag{8}\]
From this it becomes clear that the defining quantity for a quantum kernel \(k\) is the quantum feature map \(\phi\). The choice of an optimal embedding strategy is an open research question, so feature maps are often chosen heuristically. Finally, to obtain a quantum GP, we substitute a quantum kernel Eq. (7) into the definition of the variance of a GP model Eq. (4). This is illustrated in Fig. 1(a).

Figure 1: Conceptual layout of the workflow used in this work. (a) The QGP model is constructed by calculating a quantum kernel and substituting the corresponding Gram matrix as covariance matrix into a classical GP. If the feature map used for the quantum kernel contains variational parameters, they can be optimized using maximum likelihood estimation [Eq. (9)]. (b) By using a QGP model as a surrogate model for Bayesian optimization, a QBO can be obtained. (c) In Sec. 2.2, the QBO algorithm is used to optimize the hyperparameters \(\xi\) of a gradient boosting model \(h(\mathbf{x},\xi)\) which performs regression on a dataset for remaining value estimation of industrial machines.
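A small statevector-based sketch of Eq. (8) is given below. It uses a toy parameterized embedding built from single-qubit rotations and CNOTs rather than the hardware-efficient map of Fig. 2, so the circuit structure is purely illustrative; only the overlap computation reflects the definition of the kernel.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def feature_map(x, theta, n_qubits=2):
    """Toy parameterized embedding |phi(x; theta)> (illustrative, not the map of Fig. 2)."""
    qc = QuantumCircuit(n_qubits)
    for q in range(n_qubits):
        qc.ry(theta[q] * np.arccos(x), q)   # assumes data scaled to [-1, 1]
    for q in range(n_qubits - 1):
        qc.cx(q, q + 1)
    return qc

def quantum_kernel(x1, x2, theta):
    """Fidelity kernel k(x, x') = |<phi(x')|phi(x)>|^2 via exact statevectors."""
    psi1 = Statevector.from_instruction(feature_map(x1, theta)).data
    psi2 = Statevector.from_instruction(feature_map(x2, theta)).data
    return np.abs(np.vdot(psi2, psi1)) ** 2

theta = np.array([0.3, 1.2])
X = np.linspace(-1.0, 1.0, 5)
gram = np.array([[quantum_kernel(a, b, theta) for b in X] for a in X])
print(np.round(gram, 3))
```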
The variational parameters in Eq. (6) can be trained using various methods. Popular approaches for quantum kernel machines such as quantum support vector machines or quantum kernel ridge regression often optimize the kernel directly using, e.g., kernel alignment techniques [22, 23, 24]. In this work we make use of the Bayesian framework of GPs and train the QGP model end-to-end by maximizing the marginal log-likelihood. Due to the Gaussian form of the posterior [cf. Eq. (2)], the marginal log-likelihood can be given in closed form [19]
\[\log p(y|X)= -\frac{1}{2}y^{T}(k_{XX}(\mathbf{\theta})+\sigma^{2}I)^{-1}y \tag{9}\] \[-\frac{1}{2}\log\det\bigl{(}k_{XX}(\mathbf{\theta})+\sigma^{2}I\bigr{)}\,.\]
Here, \(k_{XX}(\mathbf{\theta})\) indicates the dependence of the kernel on the parameters \(\mathbf{\theta}\) through the parameterized feature map. The optimization workflow is sketched in Fig. 1(a). Optimizing parameterized quantum circuits is an active area of research with open questions, such as how to avoid barren plateaus during training [25].
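The negative of the marginal log-likelihood in Eq. (9) can be evaluated with dense linear algebra once the \(\mathbf{\theta}\)-dependent Gram matrix is available. The sketch below assumes a callable gram(theta) that returns the regularized quantum Gram matrix of the training data and hands the objective to a gradient-free SciPy optimizer; this is one simple way to realize the end-to-end training, not the only one.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize

def negative_log_marginal_likelihood(theta, gram, y, sigma2):
    """-log p(y|X) for a GP with kernel matrix gram(theta), cf. Eq. (9)."""
    K = gram(theta) + sigma2 * np.eye(len(y))
    L, lower = cho_factor(K, lower=True)
    alpha = cho_solve((L, lower), y)
    log_det = 2.0 * np.sum(np.log(np.diag(L)))   # log det via the Cholesky factor
    return 0.5 * (y @ alpha) + 0.5 * log_det

# hypothetical usage: `gram` wraps the (regularized) quantum kernel evaluation
# result = minimize(negative_log_marginal_likelihood, theta0,
#                   args=(gram, y_train, 0.1**2), method="COBYLA")
```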
In practice, the kernel elements in Eq. (8) can only be computed approximately because any observable has to be determined using a finite number of measurements. The resulting statistical error scales as \(\mathcal{O}(1/\sqrt{N})\) where \(N\) is the number of measurements. In addition, available NISQ devices suffer from a multitude of noise sources such as short coherence times, gate errors and cross-talk. As a result, the estimated kernel \(\tilde{k}\) deviates from the true kernel \(k\). To ensure that \(\tilde{k}\) is positive definite, we need to apply regularization techniques. Taking the variance of noisy objective functions into account as done in Eqs. (3)-(4) already serves as an inherent regularization. Nevertheless, for noiseless objective functions or for noisy estimates \(\tilde{k}\), this might not be sufficient to ensure positive definiteness. Therefore, we employ an eigenvalue-cutoff strategy, where the spectrum of the full Gram matrix is truncated at zero [26]. This requires a full eigenvalue decomposition of the Gram matrix followed by a reconstruction using the truncated spectrum and the original eigenvectors [22]. This technique has already been shown to provide good results [27]. Additionally, compared to other methods such as shifting the spectrum by the lowest eigenvalue, the truncation does not introduce a constant offset to the variance of the GP model, which is desirable for applications where the quantification of uncertainty is required. In general, the regularization of Gram matrices used for GP regression is problem-specific and non-trivial, even for classical kernels [28].
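The eigenvalue-cutoff regularization described above amounts to a few lines of NumPy: diagonalize the noisy Gram matrix, clip the negative eigenvalues to zero, and rebuild the matrix from the original eigenvectors. The symmetrization step in the sketch is a small numerical safeguard we add for finite-shot estimates.

```python
import numpy as np

def truncate_negative_eigenvalues(K_noisy):
    """Project a noisy (possibly indefinite) Gram matrix onto the positive semidefinite cone."""
    K_sym = 0.5 * (K_noisy + K_noisy.T)        # enforce symmetry of the shot-based estimate
    eigvals, eigvecs = np.linalg.eigh(K_sym)
    eigvals = np.clip(eigvals, 0.0, None)      # set all negative eigenvalues to zero
    return (eigvecs * eigvals) @ eigvecs.T     # reconstruct with the original eigenvectors
```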
In this work, we are interested in using QGP models as surrogate models in Bayesian optimization. This is explained in the next section and illustrated in Fig. 1(b).
### Quantum Bayesian Optimization
Bayesian optimization [30] is a global optimization method that solves problems of the form
\[\mathbf{x}^{*}=\arg\min_{\mathbf{x}}g(\mathbf{x})\,. \tag{10}\]
The optimization is performed iteratively where the next sample is chosen using information obtained from previous iterations. Through this informed guidance, BO usually requires a modest amount of samples which makes it attractive for problems where the evaluation of \(g\) is expensive. BO treats \(g\) as a black-box such that there are no further restrictions regarding its functional form.
The algorithm is initialized by drawing a random sample and fitting a _surrogate model_ as a proxy for \(g\). The next sample is then chosen by considering an exploitation-exploration trade-off which is quantified by an _acquisition function_. This procedure is then repeated such that the surrogate model approximates the true function increasingly well. Due to their posterior variance output GP models are popular choices for surrogates. A common choice for an acquisition function is the expected improvement (EI) [18] which measures the expectation of the improvement on the objective \(g(\mathbf{x})\) with respect to the predictive distribution of the surrogate model. The EI function is given by
\[\text{EI}(\mathbf{x})=[g(\mathbf{x}^{+})-\mu(\mathbf{x})-\lambda]\mathbf{\Phi}(Z)+\Sigma(\mathbf{x} )\varphi(Z)\,, \tag{11}\]
and \(\text{EI}=0\) for \(\Sigma(\mathbf{x})=0\). Here \(\mu(\mathbf{x})\) and \(\Sigma(\mathbf{x})\) are the posterior mean prediction and the prediction uncertainty of the surrogate model at position \(\mathbf{x}\), and \(\varphi(Z)\) and \(\mathbf{\Phi}(Z)\) are the probability density function and the cumulative distribution function of the standard normal distribution. The location of the best sample, i.e., the current observed minimum of the surrogate model, is indicated by \(\mathbf{x}^{+}\). The standardized prediction error \(Z\) is given by \(Z=[g(\mathbf{x}^{+})-\mu(\mathbf{x})-\lambda]/\Sigma(\mathbf{x})\) if \(\Sigma(\mathbf{x})>0\) and \(Z=0\) if \(\Sigma(\mathbf{x})=0\). The parameter \(\lambda\) in Eq. (11) is a hyperparameter that controls the exploitation-exploration trade-off, where a high value of \(\lambda\) favours exploration.
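Equation (11) translates directly into NumPy with the standard normal distribution from SciPy. The sketch below follows the convention of the text for a minimization problem and treats the zero-variance case separately.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_observed, lam=0.1):
    """Expected improvement, Eq. (11), for a minimization problem.

    mu, sigma: posterior mean and standard deviation of the surrogate at the query points.
    best_observed: objective value at the current best sample x+.
    lam: exploitation-exploration trade-off parameter.
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    improvement = best_observed - mu - lam
    with np.errstate(divide="ignore", invalid="ignore"):
        z = np.where(sigma > 0, improvement / sigma, 0.0)
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)
```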
We obtain a quantum Bayesian optimization (QBO) algorithm by using a QGP model as a surrogate model. This has the potential to enhance BO for scenarios where quantum kernels have an advantage over classical kernels. A possible drawback is that the exploitation-exploration trade-off, which depends on the model variance, is now influenced by quantum computing noise sources. To demonstrate the QBO's capabilities, we apply it to several test cases, as shown in the next section.
## 2 Results
We illustrate the capabilities of QGP models on a one dimensional regression problem. We then demonstrate the feasibility of using QBO with a QGP surrogate model on two multidimensional optimization tasks. The quantum circuits for the QGP models are implemented using Qiskit [31]. The linear systems for the GPs are solved using a Cholesky decomposition of the Gram matrices. We validate the algorithm using numerical simulators provided by Qiskit. Results from real quantum computers are obtained from _ibmq_montreal_[32].
### Quantum Gaussian Process Regression
We apply QGP regression on a one dimensional dataset where the data generating function [cf. Eq.(1)] is
\[f(x)=x\sin(x)\,. \tag{12}\]
We assume that only noisy labels \(y\) can be observed with zero-mean Gaussian noise with a variance \(\sigma^{2}=(0.1)^{2}\) [cf. Eq. (1)]. We sample \(n_{\text{training}}=23\) non-equidistant training-points in the interval \([0,2\pi]\), and \(n_{\text{test}}=50\) equidistantly-spaced test points.
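The toy dataset can be reproduced with a few lines of NumPy. The exact sampling of the non-equidistant training locations is not specified in the text, so the uniform random draw below is an assumption; the noise level, interval, and sample sizes follow the description.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return x * np.sin(x)                                            # data generating function, Eq. (12)

n_training, n_test, sigma = 23, 50, 0.1
x_train = np.sort(rng.uniform(0.0, 2.0 * np.pi, n_training))        # non-equidistant inputs (assumed uniform)
y_train = f(x_train) + rng.normal(0.0, sigma, n_training)           # noisy labels, cf. Eq. (1)
x_test = np.linspace(0.0, 2.0 * np.pi, n_test)                      # equidistant test points

# the text notes that the labels are rescaled to [-1, 1] before the quantum encoding
y_scaled = 2.0 * (y_train - y_train.min()) / (y_train.max() - y_train.min()) - 1.0
```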
The quantum kernel is calculated using a hardware-efficient feature map with variational parameters \(\mathbf{\theta}\) as depicted in Fig. 2 (see Footnote 1). We encode the data using \(q=4\) qubits and \(l=2\) layers. To account for the limited domain of the non-linearity in the feature map, the labels \(y\) are scaled to the interval \([-1,1]\).
Footnote 1: to be published.
To gauge the performance of the model under ideal conditions, we perform statevector simulations from which we obtain completely noiseless quantum kernels. The regression result can be seen in Fig. 3(a), where the mean prediction of the model is shown as a solid line and the standard deviation is depicted as a shaded area. Overall, the method is able to achieve a good fit, as is visible in the figure. The standard deviation obtained from the QGP variance behaves reasonably: it is low in areas with a high density of training points and high in sparsely sampled areas.
Figure 2: Example of the hardware efficient feature map with \(q=4\) qubits and \(l=1\) layers, inspired by a Chebychev quantum feature map design [29]. The trainable parameters are denoted by \(\theta_{i}\) and the data points by \(x\). For the results in this work, various values of \(q\) and \(l\) are used.
Although good results can already be achieved using a general feature map, e.g., by choosing the parameters \(\mathbf{\theta}\) randomly [33, 34], we adapt the kernel to the dataset using maximum-likelihood optimization (cf. Eq. (9) and surrounding discussion). The marginal log-likelihood as a function of optimization iterations can be seen in Fig. 4. In this example, the optimization leads to a reduction of the mean squared error (MSE) by about an order of magnitude [from \(0.3\) (\(R^{2}=0.939\)) to \(0.02\) (\(R^{2}=0.996\))]. We observe a convergence of the marginal log-likelihood after \(\sim 80\) iterations. The specific optimization behavior is dependent on the chosen feature map design, such as the number of qubits, layers and variational parameters. We use the optimal parameters obtained from this ideal simulation for subsequent noisy simulations and calculations on real quantum computers.
Any real quantum computation is ultimately affected by statistical errors. Figure 3(b) shows results of the same simulation as in Fig. 3(a) with sample-based estimation of the wavefunctions using a modest number of \(N=10,000\) measurements per evaluation point. This kind of simulation is a good indicator of the future performance of the model in a regime with low hardware noise. Due to the statistical error in this simulation, the kernel is now only a noisy estimate \(\tilde{k}\) of the true kernel \(k\). As can be seen in the figure, the performance of the model is only slightly worse compared to the ideal simulation (\(\text{MSE}=0.024\)). In particular, due to careful regularization of the Gram matrix (cf. Sec. 1.1), the variance information can be retained reasonably well.
We conclude this example by running the QGP regression on real quantum hardware using the _ibmq_montreal_ device. The results are shown in Fig. 3(c). We use readout error mitigation [35] and dynamical decoupling [36] to mitigate the hardware errors. Compared to the simulations, the performance of the model decreases slightly, with the method obtaining an error of \(\text{MSE}=0.114\) on the test data. Nevertheless, the mean prediction only marginally deviates from the true function. As expected, the regularization of the quantum kernel matrices has to be increased, such that the overall standard deviation increases. Nevertheless, even on the real quantum computer the variation of the standard deviation of the prediction can still be retained, such that one can clearly distinguish between areas of high and low uncertainty. This is a substantial improvement compared to previous results [17].

Figure 3: QGP regression on a dataset created using Eq. (12) (black line). The results are obtained using the feature map in Fig. 2 with \(q=4\) qubits and \(l=2\) layers for the encoding, \(n_{\text{training}}=23\) training points, shown as the blue crosses. The test points are marked by the red dots. The posterior mean of the QGP is shown as the red line and the standard deviation as the shaded area. (a) shows the result of the statevector simulation with optimized parameters, obtaining an \(R^{2}\) score of \(0.996\) and an \(\text{MSE}=0.022\). (b) shows the result of the sample-based simulation. We use the optimal parameters obtained in the previous ideal run, resulting in an \(R^{2}\) score of \(0.996\) and an \(\text{MSE}=0.024\). (c) shows the result of the real hardware run, using the _ibmq_montreal_ backend, leading to an \(R^{2}\) score of \(0.978\) and an \(\text{MSE}=0.114\). All runs use the same parameters.
The quality of the solution and the posterior variance are dependent on the chosen quantum feature map. Appendix B shows results for the same dataset using a different feature map and a different quantum computer.
### Quantum Bayesian Optimization
We assess the QBO routine introduced in Sec. 1.2 by minimizing the two-dimensional Branin-Hoo function
\[f_{\mathrm{bh}}(x)=a(x_{2}-bx_{1}^{2}+cx_{1}-r)^{2}+s(1-t)\cos(x_{1})+s\,, \tag{13}\]
where \(a,b,c,s,t\) are real parameters and \(x_{1}\in[-5,10]\), \(x_{2}\in[0,15]\). We fix the parameters such that the function has three global minima (cf. caption of Fig. 5). We substitute Eq. (13) into Eq. (1) to generate data with zero-mean Gaussian noise with a variance of \(\sigma^{2}=(0.5)^{2}\).
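For completeness, the noisy Branin-Hoo objective of Eq. (13), with the parameter values fixed in the caption of Fig. 5, reads as follows in NumPy.

```python
import numpy as np

rng = np.random.default_rng(1)

# parameters as fixed in the caption of Fig. 5
a, b, c = 1.0, 5.1 / (4.0 * np.pi ** 2), 5.0 / np.pi
r, s, t = 6.0, 10.0, 1.0 / (8.0 * np.pi)

def branin_hoo(x1, x2):
    """Branin-Hoo function, Eq. (13); three global minima on [-5, 10] x [0, 15]."""
    return a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1.0 - t) * np.cos(x1) + s

def noisy_objective(x1, x2, sigma=0.5):
    """Noisy observations as in Eq. (1), with zero-mean Gaussian noise of variance sigma^2."""
    return branin_hoo(x1, x2) + rng.normal(0.0, sigma)
```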
The hardware efficient feature map illustrated in Fig. 2 is utilized for the QGP model which is used as a surrogate model for the QBO. We encode the two-dimensional input vector with \(q=4\) qubits which increases the model's expressibility compared to a single encoding [37]. Every parameter \(\theta\) in the feature map is sampled uniformly from the interval \([0,2\pi]\) and kept fixed for the duration of the optimization.
Figure 5(a) shows the results for statevector (red line) and sample-based simulations (blue line) where the optimization has been averaged over 25 runs. The resulting standard deviation of the respective simulations is depicted as shaded areas. It can be seen that both, the BO using kernels obtained from the noiseless and the noisy simulations converge to the true minimum of the function. Especially for the sample-based simulation, this requires thoughtful regularization of the quantum Gram matrices. We compare the performance of the QBO routines to a classical BO with a GP using an RBF kernel. The RBF kernel is optimized in each iteration using maximum likelihood estimation. Despite this optimization which is not used by the QBO it can be seen that the classical and the quantum models perform comparably well.
To demonstrate the applicability of QBO to a real-world scenario, we use the algorithm to optimize the hyperparameters \(\xi\) of a gradient boosting model \(h(\mathbf{x},\xi)\)[38] that is applied to a regression task as illustrated in Fig. 1(c). The gradient boosting model is used to predict the price of industrial machinery with respect to different machine types, specifications, and amount of working hours. In total, the dataset contains 2910 data points, and the one-hot encoding of the categorical features leads to 65 features in total. Further details are shown in Appendix A. For the optimization, we fix the categorical hyperparameters of the gradient boosting model and only optimize the five continuous hyperparameters (cf. Table 1). The objective function for the QBO is the cross validated MAE of the gradient boosting model on the training dataset for a given set of hyperparameters.
We encode the five dimensional hyperparameter vector with the feature map in Fig. 2 using \(q=10\) qubits and \(l=2\) layers. Figure 5(b) depicts the result for the different BO runs. Additionally, a random search is shown for comparison. As in the previous example, the QBO results are compared to a BO with a classical GP with an optimized RBF kernel. It can be seen that the results of the QBO are on par with the results of the classical BO. This is true for both, the statevector and the sample-based simulations. As expected, all BO approaches outperform the random search on average.
## 3 Discussion
In this study, we apply QGP models to one- and multi-dimensional regression problems and show that they can be used as a surrogate model for BO to create a QBO. We demonstrate that QBO can be used to solve real-world hyperparameter optimization problems. Our encoding strategy allows for effectively using the variational parameters of the data embedding circuit as hyperparameters of the quantum kernels. In our simulations, we observe that the posterior variance of the QGP remains intact under the influence of sampling noise and even for the calculation on NISQ devices, although the various error sources of the latter affect the result. Nevertheless, since the results from the sampling-based simulations can be seen as an upper bound for future hardware capabilities, the outlook is optimistic.

Figure 4: Convergence plot of the log-likelihood loss function [cf. Eq. (9)]; the loss is evaluated entirely on the training data. The variable parameters of the optimization are the angles \(\mathbf{\theta}\) in the feature map.
Although we demonstrate the feasibility of using QBO to optimize hyperparameters of a machine learning model, the potential benefits of employing quantum kernels over classical machine learning methods in tasks using classical data remain uncertain [39]. However, it is reasonable to expect that QBO may provide advantages in problems where quantum data can be leveraged to achieve a quantum advantage [16]. Notably, QBO is potentially well-suited for active learning tasks in expensive molecular simulations, where the evaluation of the potential energy surface is based on quantum mechanics and is computationally expensive [40, 41].
Several avenues for improving the QGP model remain unexplored in this work. For example, the choice of feature map is a crucial aspect, and it has been shown that choosing a problem-specific feature map with an inductive bias tailored to the dataset has various advantages such as improved performance and trainability [23, 42]. It is also known that parametrized feature maps require special care when scaling the number of qubits, which can otherwise lead to exponential concentration [44].
Moreover, in this work, we use fidelity-based kernels for the QGP. These have an unfavorable quadratic scaling with the size of the dataset as
\begin{table}
\begin{tabular}{|c|c|c|} \hline hyperparameter & minimum & maximum \\ \hline \(\alpha\) & 0 & 1.0 \\ \hline \(\gamma\) & 0 & 5.0 \\ \hline \(n_{\text{max-depth}}\) & 1 & 50 \\ \hline \(n_{\text{estimators}}\) & 1 & 300 \\ \hline \(n_{\text{min-child-weight}}\) & 1 & 10 \\ \hline \end{tabular}
\end{table}
Table 1: Hyperparameter space of the gradient boosting model.
Figure 5: BO results averaged over independent runs with the mean shown as solid lines and the variance as shades. The expected improvement [Eq. (11)] is used as acquisition function with an exploration-exploitation parameter of \(\lambda=0.1\). The classical BO uses a GP surrogate model with an optimized RBF kernel (black line). The QBO results are obtained with the feature map in Fig. 2 using statevector (red line) and sample-based simulations (blue line). The initial samples for each individual run are the same for the quantum and classical BO for better comparison. At each iteration, only the best current result is shown. (a) shows the result for the minimization of Eq. (13) where the parameters are fixed at \(a=1\), \(b=5.1/(4\pi^{2})\), \(c=5/\pi\), \(r=6\), \(s=10\) and \(t=1/8\pi\). The feature map for the QBO uses \(q=4\) qubits and \(l=2\) layers. The results are averaged over 25 runs. (b) shows the result of the hyperparameter optimization of a gradient boosting model on an industrial dataset. The average result of ten iterations of random search runs is shown (green, solid). The kernel is calculated using \(q=10\) qubits and \(l=2\) layers.
the pair-wise inner products of the data points have to be calculated. An alternative approach would be to use projected quantum kernels as proposed in [43], which not only have linear scaling but are also thought to have beneficial properties when the dimension of the feature space increases significantly. These alternative kernels could easily be integrated into the QGP and analyzed in future studies.
While the QGP models presented in this work feature a quantum calculation of the kernel, the majority of their operations are performed classically. However, there is potential for further improvements by creating a _fully quantum_ QGP with a quantum kernel and employing HHL-based inversion of the covariance matrix [7, 9]. Such an approach could leverage the benefits of both quantum kernels and quantum linear algebra subroutines, which would help overcome today's limitations of GP models, which are currently affected by an unfavorable scaling with the size of the dataset.
Acknowledgments. This work was supported by the German Federal Ministry of Economic Affairs and Climate Action through the project AutoQML. The authors would like to thank Horst Stuhler for kindly providing the dataset. We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team.
|
2308.05152 | Quantum Lego Expansion Pack: Enumerators from Tensor Networks | We provide the first tensor network method for computing quantum weight
enumerator polynomials in the most general form. If a quantum code has a known
tensor network construction of its encoding map, our method is far more
efficient, and in some cases exponentially faster than the existing approach.
As a corollary, it produces decoders and an algorithm that computes the code
distance. For non-(Pauli)-stabilizer codes, this constitutes the current best
algorithm for computing the code distance. For degenerate stabilizer codes, it
can be substantially faster compared to the current methods. We also introduce
novel weight enumerators and their applications. In particular, we show that
these enumerators can be used to compute logical error rates exactly and thus
construct (optimal) decoders for any i.i.d. single qubit or qudit error
channels. The enumerators also provide a more efficient method for computing
non-stabilizerness in quantum many-body states. As the power for these speedups
rely on a Quantum Lego decomposition of quantum codes, we further provide
systematic methods for decomposing quantum codes and graph states into a
modular construction for which our technique applies. As a proof of principle,
we perform exact analyses of the deformed surface codes, the holographic
pentagon code, and the 2d Bacon-Shor code under (biased) Pauli noise and
limited instances of coherent error at sizes that are inaccessible by brute
force. | ChunJun Cao, Michael J. Gullans, Brad Lackey, Zitao Wang | 2023-08-09T18:00:02Z | http://arxiv.org/abs/2308.05152v2 | # Quantum Lego Expansion Pack: Enumerators from Tensor Networks
###### Abstract
We provide the first tensor network method for computing quantum weight enumerator polynomials in the most general form. As a corollary, if a quantum code has a known tensor network construction of its encoding map, our method produces an algorithm that computes its distance. For non-(Pauli)-stabilizer codes, this constitutes the current best algorithm for computing the code distance. For degenerate stabilizer codes, it can provide up to an exponential speed up compared to the current methods. We also introduce a few novel applications of different weight enumerators. In particular, for any code built from the quantum lego method, we use enumerators to construct its (optimal) decoders under any i.i.d. single qubit or qudit error channels and discuss their applications for computing logical error rates. As a proof of principle, we perform exact analyses of the deformed surface codes, the holographic pentagon code, and the 2d Bacon-Shor code under (biased) Pauli noise and limited instances of coherent error at sizes that are inaccessible by brute force.
###### Contents
* 1 Introduction
* 2 General Formalism
* 2.1 Abstract scalar weight enumerator
* 2.2 Generalized Abstract Weight Enumerators
* 2.3 Tensor Weight Enumerators
* 2.4 Tracing tensor enumerators
* 3 Applications of weight enumerators
* 3.1 Code Distance from Enumerators
* 3.2 Error Detection
* 3.2.1 General error channels in the Pauli basis
* 3.2.2 Biased Pauli Errors
* 3.2.3 Coherent error
* 3.2.4 Amplitude damping and dephasing channels
* 3.3 Effective Distance
* 3.4 Subsystem codes and Mixed Enumerator
* 3.5 Higher Genus Enumerator
* 3.6 Coset Enumerator and errors with non-trivial syndrome
* 3.7 Decoders from weight enumerators
* 3.7.1 Maximum likelihood and Bayesian decoders
* 3.7.2 Marginals
* 3.8 Logical Error Rates
* 3.8.1 Exact Computations
* 3.8.2 Error rate estimation
* 4 Computational Complexity
* 4.1 General Comments
* 4.1.1 Brute Force Method
* 4.1.2 Tensor network method
* 4.2 Cost for common codes
* 4.3 Entanglement and Cost
* 5 Examples
* 5.1 Surface code
* 5.2 2D color code
* 5.3 Holographic code
* 5.4 2d Bacon-Shor code
* 5.4.1 2d Compass code
* 6 Discussion
* 6.1 Connection with stat mech mapping
* 6.2 Tensor networks from circuits
* 6.3 Future directions
## 1 Introduction
Topological and geometrical insights have led to a number of recent breakthroughs in quantum error correction, e.g. [1, 2, 3]. On the other hand, quantum weight enumerator polynomials [4] provide a complementary, algebraic perspective on quantum error correcting codes (QECCs). Anecdotally, quantum weight enumerators contain crucial information about code properties. A number of variants and generalizations have also been applied to derive linear programming bounds [5, 6, 7], to understand error detection under symmetric [8] and asymmetric [9] Pauli errors, and to generate magic state distillation protocols [10]. However, wider applications of quantum weight enumerators have been relatively limited beyond codes of small sizes compared to other approaches, partly due to their prohibitive computational costs.
Building upon the previous framework of quantum lego (QL) [11] and the recently developed tensor weight enumerator formalism [12], we revisit the weight enumerator perspective of quantum error correction and provide a more efficient method to compute these polynomials. We present new results, both in formalism and in algorithms, that enable a number of novel applications for quantum error correction. On the formalism level, we review abstract weight enumerators and their corresponding MacWilliams identities [12]. We then introduce mixed enumerators, higher genus enumerators, coset enumerators and generalized enumerators, which are useful for the study of subsystem codes, decoders, and logical error probability under general independent and identically distributed (i.i.d.) single qubit error channels.
On the algorithmic level, we provide a tensor network method for computing these quantum weight enumerators in their most abstract forms. Because one can read off the code distance from weight enumerators, the problem of finding them is at least as hard as the minimal distance problem for classical linear codes, which is NP-hard [13, 14, 15, 16]. We show that quantum weight enumerators also produce optimal decoders, hence the general problem is at least #P-complete, which is the hardness of evaluating weight enumerators for classical linear codes [17]. However, more efficient algorithms are possible if additional structures are known. To the best of our knowledge, our work constitutes the best current algorithm for generating quantum weight enumerator polynomials as long as a good quantum lego construction for the quantum code is known. Compared to the brute force method, our algorithm provides up to an exponential speed up.
The enumerators immediately induce a protocol to compute quantum code distances. To the best of our knowledge, it provides the first such protocol for general quantum codes beyond (Pauli) stabilizer codes, offering up to an exponential speed-up compared to brute-force search. For non-degenerate Pauli stabilizer codes, the complexity scaling is roughly comparable with existing algorithms for classical linear codes under reasonable assumptions, which implies that it scales exponentially with the code distance. For degenerate codes, our method again provides additional speed-ups, up to exponential, compared to known methods based on classical linear codes.
Finally, we generalize [8] and connect enumerators to logical error probabilities when the code is subjected to any i.i.d. single qudit error channel. We provide the optimal decoder for any code that admits a known quantum lego construction and propose a more accurate method to compute effective distances and error thresholds. Our arguments hint at a general connection between the hardness of distance calculation, optimal decoding, and the amount of entanglement present in the system. As a proof of principle, we derive weight enumerators, compute (biased) distances, and obtain exact analytical expressions for logical error probabilities under depolarizing and coherent noise for a few well-known stabilizer and subsystem codes that are of order a hundred qubits or so. The novel contributions in this paper are summarized in Fig. 1.
In Sec. 2, we review the basics of weight enumerator polynomials in the most abstract form and introduce their generalizations. In Sec. 3, we discuss their existing applications for computing code distance and extend their applications for error detection under general error channels. We introduce new constructions such as mixed enumerators, higher genus enumerators and coset enumerators and construct optimal decoders. We also suggest improvements for threshold computations based on existing sampling-based methods when used in conjunction with enumerators. Then we discuss the computational cost of this method and provide some entanglement-based intuition in Sec. 4. As a proof of principle, and to provide novel analysis of existing codes, we study some common examples and explain their significance in Sec. 5. In Sec. 5.1 we construct various weight enumerators of the (rotated) surface code and its deformations. We compare their performances under biased noise and coherent error channels. In Sec. 5.2 we provide a new tensor network construction of the 2d color code using Steane codes as basic building blocks and compute its enumerators. In Sec. 5.3 we study different bulk qubits with mixed enumerators in the holographic HaPPY code. We obtain their (biased) distances and performance under (biased) Pauli noise. In Sec. 5.4 we apply the mixed enumerator technology to the Bacon-Shor code and showcase its computation for subsystem codes. Finally, we make some summarizing comments in Sec. 6 and provide insights on the connection with the stat mech model and graph states.
We prove the relevant theorems, discuss technical implementations and clarify practical simplifications in the Appendices. Although not stated explicitly, the distance finding protocol introduced in [18] effectively computes the Shor-Laflamme enumerators for a subset of stabilizer codes known as local tensor network codes. Their approach also shares a number of similarities with our own, which we explain in App. C.3. For such stabilizer codes, our protocol generally offers a quadratic speed-up in the form of reduced bond dimensions. In the regime where the stabilizer code has high rate and code words are highly entangled, our method can lead to an exponential advantage using the quantum MacWilliams identities.
## 2 General Formalism
Throughout the article, we represent multi-indexed objects like vectors and tensors in bold face letters \(\mathbf{A},\mathbf{B}\) to avoid clutter of indices. Scalar objects are written in regular fonts like \(A,B\).
Figure 1: Summary of contributions. Topic dependencies are red-green-blue color coded. If all three colored topics in the formalism section are used, then the color is white. Cyan indicates green and blue topics. Yellow indicates red and green topics. Black indicates that it does not use any of the new formalism, but is a new tensor network construction. Half shaded grey/blue indicates it uses grey and blue topics. Half shaded grey/white indicates that it uses all 4 formalism topics.
### Abstract scalar weight enumerator
Abstract scalar weight enumerators introduced in [12] include common enumerators discussed in literature [4, 6, 9]. Let \(\mathcal{E}\) be an error basis on Hilbert space \(\mathfrak{H}\) with local dimension \(q\). A _weight function_ is any function \(\mathrm{wt}:\mathcal{E}\to\mathbb{Z}_{\geq 0}^{k}\). We extend this (without introducing new notation) to \(\mathrm{wt}:\mathcal{E}^{n}\to\mathbb{Z}_{\geq 0}^{k}\) by
\[\mathrm{wt}(E_{1}\otimes\cdots\otimes E_{n})=\mathrm{wt}(E_{1})+\cdots+ \mathrm{wt}(E_{n}).\]
For a \(k\)-tuple of indeterminates \(\mathbf{u}=(u_{1},\ldots,u_{k})\) we write
\[\mathbf{u}^{\mathrm{wt}(E)}=u_{1}^{\mathrm{wt}(E)_{1}}\cdots u_{k}^{\mathrm{ wt}(E)_{k}}.\]
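For orientation, here is a minimal sketch of this monomial notation in the simplest case \(k=1\), \(\mathbf{u}=(z)\), with \(\mathrm{wt}\) the standard quantum Hamming weight; writing Pauli strings as plain label strings is an assumption of the sketch, not the paper's notation:

```python
def hamming_wt(pauli_label):
    """Quantum Hamming weight: number of non-identity tensor factors."""
    return sum(c != 'I' for c in pauli_label)

z = 0.1                               # any numeric (or symbolic) value of the indeterminate
monomial = z ** hamming_wt('XIZZY')   # u^{wt(E)} = z^4 for E = X⊗I⊗Z⊗Z⊗Y
```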
We can then define abstract enumerators of Hermitian operators \(M_{1},M_{2}\) for a weight function \(\mathrm{wt}\) as
\[A(\mathbf{u};M_{1},M_{2}) =\sum_{E\in\mathcal{E}^{n}}\mathrm{Tr}(EM_{1})\,\mathrm{Tr}(E^{ \dagger}M_{2})\mathbf{u}^{\mathrm{wt}(E)}\] \[B(\mathbf{u};M_{1},M_{2}) =\sum_{E\in\mathcal{E}^{n}}\mathrm{Tr}(EM_{1}E^{\dagger}M_{2}) \mathbf{u}^{\mathrm{wt}(E)}.\]
These polynomials satisfy a quantum MacWilliams identity. Let us restrict to the case where our error basis satisfies \(EFE^{\dagger}F^{\dagger}=\omega(E,F)I\) for a phase \(\omega(E,F)\). This includes the Pauli basis (of local dimension \(q\)) as well as general Heisenberg representations. Consider the (polynomial-valued) function \(f(E)=\mathbf{u}^{\mathrm{wt}(E)}\) for a weight function \(\mathrm{wt}:\mathcal{E}\to\mathbb{Z}_{\geq 0}^{k}\). Then the discrete Wigner transform of this function is
\[\hat{f}(D)=\frac{1}{q}\sum_{E}\omega(E,D)f(E)=\frac{1}{q}\sum_{E}\omega(E,D) \mathbf{u}^{\mathrm{wt}(E)}.\]
**Theorem 2.1**.: Suppose there exists an algebraic mapping \(\Phi(\mathbf{u})=(\Phi_{1}(\mathbf{u}),\ldots,\Phi_{k}(\mathbf{u}))\) such that
\[\Phi(\mathbf{u})^{\mathrm{wt}(D)}=\hat{f}(D)=\frac{1}{q}\sum_{E}\omega(E,D) \mathbf{u}^{\mathrm{wt}(E)}.\]
Then for any \(M_{1},M_{2}\) we have
\[B(\mathbf{u};M_{1},M_{2})=A(\Phi(\mathbf{u});M_{1},M_{2}). \tag{2.1}\]
Proof.: See [12].
The map \(\Phi\) is a generalization of the discrete Wigner transform. For the remainder of the work, we take \(\mathcal{E}\) to be the Pauli group. By considering different forms of the variable \(\mathbf{u}\), abstract weight function \(\mathrm{wt}\), and transformation \(\Phi\), one can recover existing scalar enumerator polynomials and their MacWilliams identities. For completeness, we review a few common enumerators in Appendix A that are used in this work.
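To keep the definitions concrete, the following is a minimal brute-force sketch (deliberately not the tensor network method developed in this paper) that evaluates the coefficients \(A_{d},B_{d}\) directly from the definitions above for a small stabilizer code, using the standard Hamming weight. The \([[4,2,2]]\) code serves purely as an illustration, and the printed normalization follows the convention \(A^{norm}=A/K^{2}\), \(B^{norm}=B/K\) used later in Sec. 3.1.

```python
import numpy as np
from itertools import product
from functools import reduce

P = {'I': np.eye(2),
     'X': np.array([[0, 1], [1, 0]]),
     'Y': np.array([[0, -1j], [1j, 0]]),
     'Z': np.array([[1, 0], [0, -1]])}

def pauli(label):
    """Tensor product of single-qubit Paulis given by a label string like 'XIZZ'."""
    return reduce(np.kron, [P[c] for c in label])

def shor_laflamme(M1, M2, n):
    """Brute-force A_d = sum_E Tr(E M1) Tr(E^dag M2) and B_d = sum_E Tr(E M1 E^dag M2),
    accumulated over Pauli strings E of Hamming weight d."""
    A = np.zeros(n + 1, dtype=complex)
    B = np.zeros(n + 1, dtype=complex)
    for label in product('IXYZ', repeat=n):
        E = pauli(label)
        d = sum(c != 'I' for c in label)      # quantum Hamming weight wt(E)
        A[d] += np.trace(E @ M1) * np.trace(E.conj().T @ M2)
        B[d] += np.trace(E @ M1 @ E.conj().T @ M2)
    return A.real, B.real

# Illustration on the [[4,2,2]] code: Pi = (1/4)(I + XXXX)(I + ZZZZ), K = 4.
n, K = 4, 4
Pi = 0.25 * (np.eye(16) + pauli('XXXX')) @ (np.eye(16) + pauli('ZZZZ'))
A, B = shor_laflamme(Pi, Pi, n)
print(A / K**2)   # weight distribution of the stabilizer group (A_0 = 1)
print(B / K)      # weight distribution of the normalizer (B_0 = 1)
```

The exponential sum over all \(4^{n}\) Pauli strings is exactly the cost that the tensor network contractions of the following sections are designed to avoid.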
### Generalized Abstract Weight Enumerators
Slightly extending the form in the previous section, we define a novel generalized weight enumerator.
\[\bar{A}(\mathbf{u};M_{1},M_{2}) =\sum_{E,F\in\mathcal{E}^{n}}\mathrm{Tr}[EM_{1}]\,\mathrm{Tr}[F^ {\dagger}M_{2}]\mathbf{u}^{wt(E,F)}\] \[\bar{B}(\mathbf{u};M_{1},M_{2}) =\sum_{E,F\in\mathcal{E}^{n}}\mathrm{Tr}[EM_{1}F^{\dagger}M_{2}] \mathbf{u}^{wt(E,F)},\]
where \(\mathrm{wt}(E,F)\) is an abstract function of the operators \(E,F\) and \(\mathbf{u}\) is a set of variables. It has no obvious classical analogue as far as we know. This type of enumerator is useful in analyzing qudit-wise general error channels. We further elaborate on this connection in Sec. 3.2 for coherent noise and other single qubit errors such as amplitude damping channels. We are not able to identify MacWilliams identities for these types of enumerator polynomials in general.
### Tensor Weight Enumerators
One can generalize the above scalar enumerator formalism to vectors and tensors. The reasons for this extension are two-fold: 1) the novel vector or tensor enumerators can probe code properties unavailable to their scalar counterparts and 2) computing scalar enumerators directly is generally expensive and scales exponentially with \(n-k\). However, by contracting suitable tensor weight enumerators, one can break down the computation of scalar enumerators into manageable pieces and render the process far more efficient. In this section, we briefly review the basic definitions of these vectorial and tensorial enumerators and introduce their graphical representations.
From [12], we define tensor enumerators
\[\mathbf{A}^{(J)}(\mathbf{u};M_{1},M_{2}) =\sum_{E,\bar{E}\in\mathcal{E}^{m}}\sum_{F\in\mathcal{E}^{n-m}} \operatorname{Tr}((E\otimes_{J}F)M_{1})\operatorname{Tr}((\bar{E}^{\dagger} \otimes_{J}F^{\dagger})M_{2})\mathbf{u}^{\operatorname{wt}(F)}e_{E,\bar{E}}, \tag{2.2}\] \[\mathbf{B}^{(J)}(\mathbf{u};M_{1},M_{2}) =\sum_{E,\bar{E}\in\mathcal{E}^{m}}\sum_{F\in\mathcal{E}^{n-m}} \operatorname{Tr}((E\otimes_{J}F)M_{1}(\bar{E}^{\dagger}\otimes_{J}F^{ \dagger})M_{2})\mathbf{u}^{\operatorname{wt}(F)}e_{E,\bar{E}}\]
where \(\{e_{E,\bar{E}}\}\) are orthonormal basis vectors of a \(q^{4}\)-dimensional vector space. Here, \(\operatorname{wt}(F)\) is an abstract weight function we discussed in the previous section, \(\mathbf{u}\) can be an \(n\)-tuple of variables, and \(J\subseteq\{1,\ldots,n\}\) is a set of \(m\) qudits/locations. Here \(\otimes_{J}\) denotes the tensor product of the length-\(m\) Pauli string \(E\) interlaced with the Pauli string \(F\) of length \(n-m\) at the positions marked in the set \(J\). Later we will also use \(\mathcal{E}^{n-m}[d]\), the set of Pauli operators \(F\) on \(n-m\) sites that have weight \(d\).
To give a more concrete illustration of these objects, consider the case of rank-1 tensors (\(m=1\)), which we refer to as _vector_ enumerators. For simplicity consider the usual (quantum) Hamming weight where \(\mathbf{u}=z\) and \(\operatorname{wt}(E)\) returns the number of nonidentity tensor factors in the Pauli operator \(E\). For \(J=\{j\}\) the vector enumerators along leg \(j\) read
\[\mathbf{A}^{(j)}(z;M_{1},M_{2})\] \[=\sum_{E,\bar{E}\in\mathcal{E}}\sum_{d=0}^{n}A_{d}^{(j)}(E,\bar{ E};M_{1},M_{2})z^{d}e_{E,\bar{E}},\] \[\mathbf{B}^{(j)}(z;M_{1},M_{2})\] \[=\sum_{E,\bar{E}\in\mathcal{E}}\sum_{d=0}^{n}B_{d}^{(j)}(E,\bar{ E};M_{1},M_{2})z^{d}e_{E,\bar{E}}.\]
where the coefficients (weights) are defined as
\[A_{d}^{(j)}(E,\bar{E};M_{1},M_{2})\] \[=\sum_{F\in\mathcal{E}^{n-1}[d]}\operatorname{Tr}((E\otimes_{j}F )M_{1})\operatorname{Tr}((\bar{E}^{\dagger}\otimes_{j}F^{\dagger})M_{2}),\] \[B_{d}^{(j)}(E,\bar{E};M_{1},M_{2})\] \[=\sum_{F\in\mathcal{E}^{n-1}[d]}\operatorname{Tr}((E\otimes_{j} F)M_{1}(\bar{E}^{\dagger}\otimes_{j}F^{\dagger})M_{2}).\]
The \(\mathcal{E}^{n-1}[d]\) here is the set of operators that have weight \(d\) on the \(n-1\) qubits except the \(j\)th one, and \(E\otimes_{j}F\) is a Pauli string that has \(E\) inserted on the \(j\)-th position of the Pauli string:
\[E\otimes_{j}F=F_{1}\otimes F_{2}\otimes\ldots F_{j-1}\otimes E_{j}\otimes F_{ j+1}\otimes\cdots\otimes F_{n}.\]
Formally, it is also convenient to express these coefficients in coordinates, once we have chosen a standard basis \(\{\mathbf{\hat{e}}_{j}\}\). For example, one can denote
\[A_{d}^{(j)}(E,\bar{E};M_{1},M_{2}) \to A_{d}^{j},\] \[B_{d}^{(j)}(E,\bar{E};M_{1},M_{2}) \to B_{d}^{j}\]
by identifying \(j=1,\ldots,q^{4}\) where \(E,\bar{E}\) each has \(q^{2}\) distinct values. For simplicity, we abuse notation and use \(j\) as an open index that labels the dangling leg that comes from the \(j\)-th qudit. The corresponding vector enumerator polynomials are \(A^{j}(z;M_{1},M_{2}),B^{j}(z;M_{1},M_{2})\), which we represent graphically as rank-1 tensors in Fig. 2.
In the same vein, the coefficients for a tensor enumerator of rank \(m\) may be written as
\[\sum_{d}A_{d}^{(J)}(E,E;M_{1},M_{2})z^{d}\] \[\quad\to\sum_{d}A_{d}^{j_{1}\ldots j_{m}}z^{d}=A^{j_{1},j_{2}, \ldots,j_{m}}(z),\] \[\sum_{d}B_{d}^{(J)}(E,E;M_{1},M_{2})z^{d}\] \[\quad\to\sum_{d}B_{d}^{j_{1}\ldots j_{m}}z^{d}=B^{j_{1},j_{2}, \ldots,j_{m}}(z),\]
where each tensor coefficient \(A^{j_{1},j_{2},\ldots,j_{m}}(z)\), \(B^{j_{1},j_{2},\ldots,j_{m}}(z)\) is a scalar enumerator. A graphical representation of \(A^{j_{1},j_{2},\ldots,j_{m}}(z)\) is given below in Figure 4 (top left).
In practice, it is often sufficient to consider reduced versions of these enumerators that only keep the diagonal terms with \(E=\bar{E}\), which we represent using the same graphical form, but now with reduced bond dimension \(j_{\ell}=1,\ldots,q^{2}\). Such enumerators are known as the _reduced enumerators_ and they are sufficient for studying Pauli errors in stabilizer codes. See [12] and App. B.3. In this work, we use the color blue to denote \(A\)-type enumerators and orange to denote \(B\)-type enumerators. We often drop the variable \(z\) or \(\mathbf{u}\) to avoid clutter, but it should be understood that the tensor components of these objects are polynomials.
Figure 2: Vector Enumerators
One can also easily define other tensor enumerators such as the double and complete enumerators by choosing different expressions for the abstract forms \(\mathbf{u}\) and weight functions \(\mathrm{wt}(E)\). An extension to the generalized abstract tensor enumerator is also possible. Details are found in App. B.
### Tracing tensor enumerators
Let us define a trace operation \(\wedge_{j,k}\) over the tensor enumerators which connects any two legs \(j,k\) in the tensor network. Graphically, it is represented by a connected edge in the dual enumerator tensor network. Acting on the basis element \(e_{E,\bar{E}}\) we define
\[\wedge_{j,k}e_{E,\bar{E}}=e_{E\setminus\{E_{j},E_{k}\},\bar{E}\setminus\{\bar{ E}_{j},\bar{E}_{k}\}} \tag{2.3}\]
when \(E_{j}=E_{k}^{*}\) and \(\bar{E}_{j}=\bar{E}_{k}^{*}\) and zero otherwise.
Each contraction can be understood as tracing together two tensors. However, we can also view the two tensors as a single tensor enumerator (using the tensor product) and then perform a self-trace, which is necessary and sufficient to build up any tensor network. Informally, the trace of the tensor enumerator is the tensor enumerator of the traced network, which is formally stated as follows.
**Theorem 2.2**.: Suppose \(j,k\in J\subseteq\{1,\ldots,m\}\). Then
\[\wedge_{jk}\mathbf{A}^{(J)}(\mathbf{u};M_{1},M_{2})\] \[\quad=\mathbf{A}^{(J\setminus\{j,k\})}(\mathbf{u};\wedge_{j,k}M_ {1},\wedge_{j,k}M_{2}),\]
and similarly for \(\mathbf{B}^{(J)}\).
Proof.: See Theorem 7.1 of [12].
Theorem 2.2 allows us to compute the weight enumerator of a contracted tensor network by contracting the tensor enumerators of each quantum lego block. For example, to construct a scalar enumerator given the QL representation of an encoding map in Fig. 3, we first lay down its "shadow", that is, the tensor enumerator for each \([[4,2,2]]\) lego block. Then we trace together these blocks following the same network connectivity.
The component form of contracting tensor enumerators can be expressed as the conventional sum over indices for a tensor trace. For reduced enumerators at \(q=2\) this reads,
\[A^{j_{l+1},\ldots,j_{m},r_{l+1},\ldots,r_{k}}(\mathbf{u}) \tag{2.4}\] \[= \sum_{j_{1},j_{2},\ldots,j_{l}}A^{j_{1},j_{2},\ldots,j_{l},\ldots j _{m}}(\mathbf{u})A^{j_{1},j_{2},\ldots,j_{l},r_{l+1}\ldots r_{k}}(\mathbf{u})\]
and similarly for \(\mathbf{B}(\mathbf{u};M_{1},M_{2})\), where the only difference from a traditional tensor network is the variables \(\mathbf{u}\) associated with the polynomial. One can connect these tensors sequentially; at each step a lego is glued to the (generically) bigger connected component, Fig. 4. For the full tensor enumerator, or when \(q>2\), we need to take more care in raising and lowering the indices to recast them into the proper covariant and contravariant forms before summing over repeated indices.
While it is natural to use symbolic packages to implement this formalism, we will also elaborate in Appendix C how to implement these objects as the usual multi-linear function without symbolic packages using conventional tensor network methods.
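As a hedged sketch of the non-symbolic implementation idea (the details are deferred to Appendix C): one can store a reduced vector enumerator as a plain numerical array with one axis for the Pauli index of the open leg and one axis for polynomial degree; tracing two such legs is then an index sum combined with a convolution along the degree axes. The sketch below covers only the simplest case of single open legs and the Hamming weight.

```python
import numpy as np

def trace_open_legs(T1, T2, q=2):
    """Glue the single open legs of two reduced vector enumerators.

    T1 and T2 are arrays of shape (q**2, deg): axis 0 labels the Pauli carried by
    the open leg, axis 1 stores coefficients of the polynomial in z.  Tracing the
    two legs sums over the matching Pauli index and multiplies the polynomials,
    i.e. convolves along the degree axis.  The result is a scalar enumerator,
    returned as a 1d array of polynomial coefficients."""
    out = np.zeros(T1.shape[1] + T2.shape[1] - 1)
    for e in range(q**2):                    # matching Pauli index on the glued legs
        out += np.convolve(T1[e], T2[e])     # polynomial multiplication
    return out
```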
## 3 Applications of weight enumerators
### Code Distance from Enumerators
The genesis of quantum weight enumerators came from the case \(M_{1}=M_{2}=\Pi\), the projection onto a stabilizer code, and \(\mathbf{u}^{wt(E)}=z^{wt(E)}\). After an appropriate normalization, the enumerators \(A^{norm}(z)=A(z)/K^{2},\ B^{norm}(z)=B(z)/K\) encode the weight distributions of stabilizers (logical identities) and normalizers (all logical operators) of the code respectively [4]. The normalized polynomials \(A^{norm}(z),\ B^{norm}(z)\) have \(B_{0}=\)
Figure 3: WEP from tracing \([[4,2,2]]\) codes
\(A_{0}=1\). It follows that \(B^{norm}(z)-A^{norm}(z)\) yields the weight distributions of non-trivial logical Pauli operators. Therefore, the smallest \(d\) for which \(B_{d}\neq A_{d}\) is thus the (adversarial) code distance. This observation also generalizes to any quantum code [8]. Formally we capture this in the following result for later reference.
**Theorem 3.1**.: Let \(\mathcal{C}\) be a quantum code, \(\Pi_{\mathcal{C}}\) be the projection onto its code subspace and
\[A(z;\Pi_{\mathcal{C}},\Pi_{\mathcal{C}}) =\sum_{d}A_{d}z^{d}\] \[B(z;\Pi_{\mathcal{C}},\Pi_{\mathcal{C}}) =\sum_{d}B_{d}z^{d}\]
be its weight enumerator polynomials properly normalized. Then
* \(A_{0}=B_{0}=1\),
* \(B_{d}\geq A_{d}\geq 0\) for all \(d\), and
* the code distance is \(t+1\) where \(t\) is the largest integer for which \(B_{i}=A_{i}\) for all \(0\leq i\leq t\).
A similar version holds for the refined enumerator, as shown by [9], from which one can determine the biased distances for the code (Thm. A.1).
As one can read off the distances from the enumerators, our tensor network method provides a straightforward way to compute and verify adversarial distances for all quantum codes whose QL description is known. This provides the first viable method to compute distances for a quantum code that need not be a stabilizer code.
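Reading off the distance from the coefficient lists per Theorem 3.1 is then immediate; a small helper, assuming the normalized coefficients are already available (e.g. from a tensor network contraction):

```python
def distance_from_enumerators(A, B, tol=1e-8):
    """Smallest d with B_d != A_d, given normalized coefficient lists (A_0 = B_0 = 1)."""
    for d, (a, b) in enumerate(zip(A, B)):
        if abs(b - a) > tol:
            return d
    return None   # distributions agree on all available coefficients
```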
### Error Detection
With weight enumerators in hand, we can easily obtain the probability for uncorrectable errors [8]. For any quantum code \(\mathcal{C}\), let \(\Pi_{\mathcal{C}}\) be the projector onto the code subspace, and write the orthogonal projector onto \(\mathcal{C}^{\perp}\) as \(\Pi_{\mathcal{C}}^{\perp}\). We say an error \(E\) is _uncorrectable_ if it cannot be detected, that is \(\Pi_{\mathcal{C}}E\Pi_{\mathcal{C}}\propto\Pi_{\mathcal{C}}\), and is not proportional to the logical identity. Operationally, one performs a measurement with respect to \((\Pi_{\mathcal{C}},\Pi_{\mathcal{C}}^{\perp})\). An error is detected if the result is contained in \(\mathcal{C}^{\perp}\). For stabilizer codes, this corresponds to errors with trivial error syndrome that perform a non-identity logical operation.
Consider the depolarizing channel with unbiased noise, which acts identically on every single qubit as
\[\rho_{j}\rightarrow(1-3p)\rho_{j}+pX\rho_{j}X+pY\rho_{j}Y+pZ\rho_{j}Z,\]
where \(\rho_{j}\) is the reduced density matrix on site \(j\). For stabilizer codes, it is easy to check that the probability of the random Pauli errors coinciding with a non-trivial logical operator is nothing but \(p_{ne}=(B^{norm}-A^{norm})(z=p,w=1-3p)\) because a Pauli error with weight \(d\) occurs with probability \(p^{d}(1-3p)^{n-d}\). As above, we have taken the enumerators to be normalized such that \(A_{0}=B_{0}=1\). In general, [8] shows that the error probability for any code with \(\dim\mathcal{C}=K\) is
\[p_{ne}=\frac{K}{(K+1)}\big{(}B^{norm}(p,1-3p)-A^{norm}(p,1-3p)\big{)}.\]
Note the overall multiplicative factor compared to our initial estimate for stabilizer codes: some logical errors take the initial codeword to a non-orthogonal state, but only the orthogonal component is counted as a non-trivial logical error in this construction.
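A minimal sketch of this evaluation, assuming the unnormalized coefficients \(A_{d},B_{d}\) and the code dimension \(K\) are given (the function name is illustrative):

```python
def p_nondetectable(A, B, K, n, p):
    """K/(K+1) * (B^norm - A^norm) evaluated at z = p, w = 1 - 3p for a code of
    dimension K on n qubits; A, B are the unnormalized coefficient lists."""
    ev = lambda C: sum(c * p**d * (1 - 3*p)**(n - d) for d, c in enumerate(C))
    return K / (K + 1) * (ev(B) / K - ev(A) / K**2)
```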
We can extend the argument of [8] to more general error models. Suppose the error channel is given by
\[\rho_{j}\rightarrow\sum_{i=1}^{q^{2}}K_{i}\rho_{j}K_{i}^{\dagger},\]
which acts identically across all physical qudits, then on the whole system, the errors act as
\[\mathcal{E}(\rho)=\sum_{\mathbf{i}}\mathcal{K}_{\mathbf{i}}\rho\mathcal{K}_{ \mathbf{i}}^{\dagger} \tag{3.1}\]
where
\[\mathcal{K}_{\mathbf{i}}=K_{i_{1}}\otimes K_{i_{2}}\otimes\cdots\otimes K_{i_ {n}},\]
and \(\mathbf{i}\) is summed over all \(q^{2}\)-nary strings of length \(n\). It is important to note that for each \(\mathbf{i}\), the Kraus operator and its conjugate are the same; there are no cross terms.
Figure 4: Graphical representation of a type-\(A\) tensor enumerator (box). Tracing the type \(A\) tensors as in eqn (2.4). Green region can be seen as \(A^{j_{1},j_{2},\ldots,j_{l},\ldots,j_{m}}(\mathbf{u})\). Traced legs are red.
**Theorem 3.2**.: The non-detectable error probabilities of any error channel with the above form is given by
\[p_{nd} =\frac{K}{K+1}\Big{(}\frac{1}{K}\sum_{\mathbf{i}}\operatorname{Tr }[\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi\mathcal{K}_{\mathbf{i}}\Pi] \tag{3.2}\] \[-\frac{1}{K^{2}}\sum_{\mathbf{i}}\operatorname{Tr}[\mathcal{K}_ {\mathbf{i}}^{\dagger}\Pi]\operatorname{Tr}[\mathcal{K}_{\mathbf{i}}\Pi] \Big{)}\]
for a quantum code with dimension \(K\) with projector \(\Pi\).
Proof.: See Appendix D.
For instance, in the depolarizing channel (3.1) each \(\mathcal{K}_{\mathbf{i}}\) is simply a Pauli string \(E\in\mathcal{E}^{n}\) weighted by \(p^{wt(E)}(1-3p)^{n-wt(E)}\). Substituting we find that the two terms in (3.2) are simply the enumerator polynomials \(A\) and \(B\) evaluated at \(z=p\) and \(w=1-3p\) as expected.
#### 3.2.1 General error channels in the Pauli basis
For each \(K_{a}\), its Pauli decomposition \(K_{a}=\sum_{E}c_{E}^{a}E\) allows us to re-express the error probability in terms of the generalized weight enumerators in Sec 2.2. In such cases, we can re-organize the sum over \(i\) by Pauli types. Again, let the noise model be single qubit errors that are identical across all physical qubits such that
\[\rho\to\sum_{i}^{q^{2}}K_{i}\rho K_{i}^{\dagger}=\sum_{P,\bar{P}}k_{P\bar{P}} P\rho\bar{P}^{\dagger}. \tag{3.3}\]
Let us label each \(P\bar{P}\) pair as \(G\) so that \(|\{G\}|=q^{4}\) and so write \(k_{G}=k_{P\bar{P}}\). For example, \(\{G\}=\{II,IX,XI,IZ,ZI,XX,ZZ\dots\}\) (all 16 arrangements) for \(q=2\).
Then let \(wt_{G}^{n}\) be a weight function
\[wt_{G}^{n}(E\otimes F)=\sum_{i=1}^{n}wt_{G}(E_{i}\otimes F_{i}^{\dagger}) \tag{3.4}\]
where
\[wt_{G}(E_{i}\otimes F_{i})=\begin{cases}1\text{ if }E_{i}\otimes F_{i}=G\\ 0\text{ otherwise,}\end{cases} \tag{3.5}\]
and \(E\otimes F=\bigotimes_{i}E_{i}\otimes F_{i}\). Thus \(wt_{G}^{n}\) counts the number of times \(G=P\otimes\bar{P}\) appears in a string \(E\otimes F\) where \(E,F\) each has length \(n\). The relevant terms can then be expanded in this basis as
\[B(\{k_{G}\};\Pi,\Pi)=\sum_{\mathbf{i}}\operatorname{Tr}[\mathcal{ K}_{i}\Pi\mathcal{K}_{i}^{\dagger}\Pi] =\sum_{E,F\in\mathcal{E}^{n}}\operatorname{Tr}[E\Pi F^{\dagger} \Pi]\prod_{G}k_{G}^{wt_{G}^{n}(E\otimes F)} \tag{3.6}\] \[A(\{k_{G}\};\Pi,\Pi)=\sum_{\mathbf{i}}\operatorname{Tr}[ \mathcal{K}_{i}\Pi]\operatorname{Tr}[\mathcal{K}_{i}^{\dagger}\Pi] =\sum_{E,F\in\mathcal{E}^{n}}\operatorname{Tr}[E\Pi]\operatorname{Tr}[F^{ \dagger}\Pi]\prod_{G}k_{G}^{wt_{G}^{n}(E\otimes F)} \tag{3.7}\]
We can then distill a set of enumerators sufficient in describing the effect of all error channels
\[\bar{A}(\mathbf{u}_{G};M_{1},M_{2}) \tag{3.8}\] \[=\sum_{E,F\in\mathcal{E}^{n}}\operatorname{Tr}[EM_{1}] \operatorname{Tr}[F^{\dagger}M_{2}]\mathbf{u}^{wt_{G}^{n}(E\otimes F)}\] \[\bar{B}(\mathbf{u}_{G};M_{1},M_{2})\] (3.9) \[=\sum_{E,F\in\mathcal{E}^{n}}\operatorname{Tr}[EM_{1}F^{\dagger} M_{2}]\mathbf{u}_{G}^{wt_{G}^{n}(E\otimes F)},\]
where
\[\mathbf{u}_{G}^{wt_{G}^{n}(E\otimes F)}\] \[\quad=\underbrace{u_{II}^{wt_{II}^{n}(E\otimes F)}u_{IP_{1}}^{wt_{IP_{1}}^{n}(E\otimes F)}\dots u_{P_{q}P_{q}}^{wt_{P_{q}P_{q}}^{n}(E\otimes F)}}_{\text{all }q^{4}\text{ terms}}.\]
We see this is nothing but a specific form of the generalized enumerator we introduced in Sec. 2.2. Note that we only need to compute the relevant enumerators once. The effects of different error models are now completely captured by the polynomials and can be evaluated by inserting the relevant values of \(k_{G}\).
By substituting the proper expressions for Kraus operators, we are now in a position to rephrase all identical single qubit error channels in the form of weight enumerators. In practice, computing the generalized enumerator that accommodates arbitrary error channels can be rather expensive. Even for qubits, we would in general require 16 different variables in a polynomial. Fortunately for common channels, the computation simplifies and it is possible to express them with a much smaller set. As the Kraus representations are not unique it may be possible that some representations yield more succinct expressions than others. For pedagogical reasons, let us apply this to a few common error channels on qubits.
#### 3.2.2 Biased Pauli Errors
Consider a noise model where bit flip (\(X\)) and phase (\(Z\)) errors occur independently on each physical qubit with probabilities \(p_{x}\) and \(p_{z}\) respectively. The error channel is
\[\rho\rightarrow (1-p_{x}-p_{z}+p_{x}p_{z})\rho+(p_{x}-p_{x}p_{z})X\rho X\] \[+p_{x}p_{z}Y\rho Y+(p_{z}-p_{x}p_{z})Z\rho Z.\]
For stabilizer codes, the probability that the Pauli error coincides with a non-trivial logical operation is given by the normalized double weight enumerator of [9]:
\[(D-D^{\perp})(x,y,z,w),\]
evaluated at \(x=1-p_{x}\), \(y=p_{x}\), \(z=1-p_{z}\) and \(w=p_{z}\). Applying Theorem 3.2, we see that the actual non-correctable logical error probability is the above but again modified by multiplicative factor \(K/(K+1)\) when taken into account the effect of non-orthogonal states.
Similarly, a channel where all Pauli errors have different independent error probabilities
\[\rho\rightarrow(1-p_{x}-p_{y}-p_{z})\rho+p_{x}X\rho X+p_{y}Y\rho Y+p_{z}Z\rho Z\]
has non-correctable error probability given by the complete enumerators,
\[p_{ne}= \frac{K}{K+1}\Big{(}F(p_{x},p_{y},p_{z},1-p_{x}-p_{y}-p_{z})\] \[- E(p_{x},p_{y},p_{z},1-p_{x}-p_{y}-p_{z})\Big{)}.\]
For definitions of \(D,D^{\perp},E,F\), see [9, 12] or App. A.
#### 3.2.3 Coherent error
Pauli errors are in some sense classical; for a coherent quantum device, unitary errors are also relevant. Compared to Pauli errors, studies of the impact of coherent errors are less common [19, 20, 21], partly because they are hampered by the computational costs. Nevertheless various methods exist. Here we examine a special case of single qubit coherent error and express it in terms of weight enumerator polynomials. Suppose we have a single qubit/qudit coherent error applied identically to all physical qubits
\[\rho_{i}\to U_{i}\rho_{i}U_{i}^{\dagger} \tag{3.10}\]
acting on each qubit \(i\), where each unitary can be decomposed as
\[U_{i}=aI_{i}+bX_{i}+cY_{i}+dZ_{i}. \tag{3.11}\]
The logical error probability is
\[\bar{p}_{nd}= \frac{K}{K+1}\Big{(}\frac{1}{K}\operatorname{Tr}[U^{\dagger}\Pi U \Pi]\] \[-\frac{1}{K^{2}}\operatorname{Tr}[U^{\dagger}\Pi]\operatorname{ Tr}[U\Pi]\Big{)}.\]
Expanding \(U=\bigotimes_{i}U_{i}\) in the Pauli basis, we have \(U^{\dagger}=\sum_{E}k_{E}^{*}E\) and \(U=\sum_{F}k_{F}F\) where we sum \(E,F\) over all length \(n\) Pauli strings. As the coefficients \(k_{E},k_{F}\) only depend on the numbers of each Pauli type appearing in \(E\) and \(F\),
\[k_{F}=a^{n-w(F)}b^{w_{x}(F)}c^{w_{y}(F)}d^{w_{z}(F)}, \tag{3.12}\]
each term in the overall probability is
\[\mathrm{Tr}[U^{\dagger}\Pi U\Pi] =\sum_{E,F\in\mathcal{E}^{n}}\mathrm{Tr}[E\Pi F\Pi]k_{E}^{*}k_{F}\] \[=\sum_{E,F\in\mathcal{E}^{n}}\mathrm{Tr}[E\Pi F\Pi]a^{n-w(F)}b^{w_{ x}(F)}c^{w_{y}(F)}d^{w_{z}(F)}\bar{a}^{n-w(E)}\bar{b}^{w_{x}(E)}\bar{c}^{w_{y}(E)} \bar{d}^{w_{z}(E)}\] \[\mathrm{Tr}[U^{\dagger}\Pi]\,\mathrm{Tr}[U\Pi] =\sum_{E,F\in\mathcal{E}^{n}}\mathrm{Tr}[E\Pi]\,\mathrm{Tr}[F\Pi ]k_{E}^{*}k_{F}\] \[=\sum_{E,F\in\mathcal{E}^{n}}\mathrm{Tr}[E\Pi]\,\mathrm{Tr}[F\Pi ]a^{n-w(F)}b^{w_{x}(F)}c^{w_{y}(F)}d^{w_{z}(F)}\bar{a}^{n-w(E)}\bar{b}^{w_{x}( E)}\bar{c}^{w_{y}(E)}\bar{d}^{w_{z}(E)}.\]
These are nothing but the generalized versions of the complete weight enumerators
\[A(\mathbf{u}_{1},\mathbf{u}_{2};M_{1},M_{2}) \tag{3.13}\] \[=\sum_{E,F\in\mathcal{E}^{n}}\mathrm{Tr}[E^{\dagger}M_{1}]\, \mathrm{Tr}[FM_{2}]\mathbf{u}_{1}^{wt(F)}\mathbf{u}_{2}^{wt(E)}\] (3.14) \[B(\mathbf{u}_{1},\mathbf{u}_{2};M_{1},M_{2})\] \[=\sum_{E,F\in\mathcal{E}^{n}}\mathrm{Tr}[E^{\dagger}M_{1}FM_{2}] \mathbf{u}_{1}^{wt(F)}\mathbf{u}_{2}^{wt(E)}\]
evaluated at \(M_{1}=M_{2}=\Pi\), \(w_{1}=a,x_{1}=b,y_{1}=c,z_{1}=d\) and \(w_{2},x_{2},y_{2},z_{2}\) at their complex conjugates. To simplify the notation, we absorbed each 4-tuple of variables into abstract variables \(\mathbf{u}_{i}\) and weight functions \(wt(\cdot)\) such that
\[\mathbf{u}_{i}^{wt(E)}=x_{i}^{wt_{x}(E)}y_{i}^{wt_{y}(E)}z_{i}^{wt_{z}(E)}w_{i}^{n-wt(E)}.\]
#### 3.2.4 Amplitude damping and dephasing channels
The amplitude damping channel is relevant for superconducting qubits. It has a Kraus representation with operators
\[K_{0}= \begin{pmatrix}1&0\\ 0&\sqrt{1-\gamma}\end{pmatrix}\] \[= \frac{1}{2}(1+\sqrt{1-\gamma})I+\frac{1}{2}(1-\sqrt{1-\gamma})Z\] \[K_{1}= \begin{pmatrix}0&\sqrt{\gamma}\\ 0&0\end{pmatrix}=\frac{\sqrt{\gamma}}{2}X+\frac{i\sqrt{\gamma}}{2}Y\]
In this case, we only need to keep 8 distinct variables \(\{u_{II},u_{IZ},u_{ZI},u_{ZZ},u_{XX},u_{XY},u_{YX},u_{YY}\}\) as the remaining coefficients \(k_{G}\) are 0. In fact, the nonzero coefficients further satisfy \(k_{IZ}=k_{ZI}=\lambda_{I}\lambda_{Z},k_{PP}=|\lambda_{P}|^{2},k_{XY}=\bar{k}_{ YX}=\lambda_{X}\bar{\lambda}_{Y}\) where \(\lambda_{P}\) are the coefficients in the Pauli expansion of the Kraus operators. Therefore, the end polynomial would only require 4 independent variables \(\{\lambda_{I},\lambda_{X},\lambda_{Y},\lambda_{Z}\}\). In other words, when summing the polynomial in practice, we only sum over the qudit strings of local dimension 4 where the coefficients for \(G\) are non-vanishing. Furthermore, one can rewrite the nonzero coefficients as
\[\prod_{G}k_{G}^{wt_{G}(E\otimes F)}=\prod_{P}\lambda_{P}^{wt_{P}(E)}\prod_{ \tilde{P}}\bar{\lambda}_{\tilde{P}}^{wt_{P}(F)} \tag{3.15}\]
which depends on 4 parameters and is no more complicated than the complete enumerator.
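A short sketch tabulating these nonzero coefficients from the Pauli expansions of \(K_{0}\) and \(K_{1}\) given above (the function name is an illustrative assumption):

```python
import numpy as np

def amplitude_damping_k(gamma):
    """Nonzero channel coefficients k_G for single-qubit amplitude damping,
    from the Pauli expansion K_0 = lam_I I + lam_Z Z, K_1 = lam_X X + lam_Y Y."""
    lam = {'I': 0.5 * (1 + np.sqrt(1 - gamma)),
           'Z': 0.5 * (1 - np.sqrt(1 - gamma)),
           'X': 0.5 * np.sqrt(gamma),
           'Y': 0.5j * np.sqrt(gamma)}
    k_XY = lam['X'] * np.conj(lam['Y'])
    return {('I', 'I'): abs(lam['I'])**2, ('Z', 'Z'): abs(lam['Z'])**2,
            ('X', 'X'): abs(lam['X'])**2, ('Y', 'Y'): abs(lam['Y'])**2,
            ('I', 'Z'): lam['I'] * lam['Z'], ('Z', 'I'): lam['I'] * lam['Z'],
            ('X', 'Y'): k_XY, ('Y', 'X'): np.conj(k_XY)}
```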
For a dephasing channel, \(K_{0}\) remains the same while
\[K_{1}=\sqrt{\gamma}\begin{pmatrix}0&0\\ 0&1\end{pmatrix}=\frac{\sqrt{\gamma}}{2}(I-Z). \tag{3.16}\]
Expanding and simplifying, we find that it only depends on two nonzero coefficients \(c_{II}=(1+\sqrt{1-\gamma})/2\) and \(c_{ZZ}=(1-\sqrt{1-\gamma})/2\). Thus this is even easier than computing the original weight enumerator! Furthermore, instead of summing over the full Pauli group, we only need to sum over \(E\in\mathcal{Z}^{n}\) where \(\mathcal{Z}\) is the set of Pauli strings that only contains \(I\) or \(Z\).
### Effective Distance
While adversarial distance is a useful measure of the goodness of a code, it is also informative to devise more refined measures like effective distances [22, 23] that serve as useful benchmarks of code performance with respect to different error profiles. For example, recall that [23] defines an effective distance
\[d^{\prime}=\mathcal{N}^{-1}\log(p_{0}(1-p)^{-n}) \tag{3.17}\]
for codes under depolarizing channel, where \(p=p_{X}+p_{Y}+p_{Z}\) and \(\mathcal{N}\) is some normalization factor
that depends on the physical error probabilities. In the original definition, \(p_{0}\) is the probability where the Pauli noise implements the most likely non-trivial logical operator. Using enumerators, we can also produce more precise effective distances under depolarizing noise, where \(p_{0}\) is replaced by the probability \(p_{ne}\) where Pauli noise implements _any_ non-trivial logical operator. Similar measures have been used to quantify effective code performance [22, 24]. For example, one can define another effective distance for some \(c_{1},c_{2}\)
\[d_{\text{eff}}=c_{1}\log(p_{L})+c_{2} \tag{3.18}\]
such that \(d_{\text{eff}}\) is higher for lower error rate \(p_{L}\).
Similar to [24], we also use the normalized logical error probability
\[p_{L}^{\text{norm}}=p_{L}/p_{s=0}\]
as a measure of code performance throughout this work. Here \(p_{s=0}\) is the probability of error non-detection and better protection corresponds to a smaller normalized error rate. This is not a distance measure and it corresponds to the probability of uncorrectable error where the "error correction" protocol simply discards the quantum state upon detecting an error.
### Subsystem codes and Mixed Enumerator
The above applications are general and can be used for any quantum code. Let us now focus on a few more applications that are most closely tied to stabilizer codes and subsystem codes.
Mixed enumerators are made by tracing together tensor enumerators of both \(\mathbf{A}(\mathbf{u})\) and \(\mathbf{B}(\mathbf{u})\) types.
**Proposition 3.1**.: Let \(M(\mathbf{u})\) be a mixed enumerator polynomial obtained from tracing tensor enumerators of \(A\) and \(B\) types. MacWilliams transform on \(M(\mathbf{u})\) produces a dual polynomial \(M^{\perp}(\mathbf{u})\) which, up to normalization, can be built from the same tensor network where we exchange the \(A\) and \(B\) type tensors.
Proof.: The MacWilliams transform commutes with trace as long as the generalized Wigner transform is its own self-inverse up to a constant multiple. This is clearly the case when the tensor enumerators are diagonal, when the generalized Wigner transform reduces to regular Wigner transform. The same must also hold true for the generalized transform, as the MacWilliams transform commutes with trace when the tensor enumerators are not mixed.
A key application is finding the distance of subsystem codes where we need to enumerate all gauge-equivalent representations of the logical operators. It is convenient to think of the subsystem code as a stabilizer code encoding multiple logical qubits where some of them are demoted to gauge qubits. To obtain its distance, we first enumerate all logical operators, which is given by \(B(\mathbf{u})\) of the stabilizer code. This can again be obtained from \(A(\mathbf{u})\) by applying the MacWilliams identity. We then need to enumerate all gauge equivalent logical identities of this subsystem code \(I(\mathbf{u})\). Technical details in obtaining \(I(\mathbf{u})\) can depend on the specific tensor network in question. However, it is rather straightforward if all logical legs in the network are independent, i.e., the encoding map defined by the QL tensor network has trivial kernel. For example, this is the case for the holographic code, but not for the Bacon-Shor code tensor network.
For encoding tensor networks that have trivial kernel, we can divide the input legs, which we call logical legs in [11], into two categories: (i) the ones where operator pushing produces logical operators, which we now call logical legs, and (ii) the ones that alter the state of gauge qubits, which we now call gauge legs. Let us first assume that each tensor has only one such input leg that is either logical or gauge, which is the case for the holographic code. To enumerate the logical identity, we construct a mixed enumerator -- for each tensor in the QL network whose input leg is logical, we contract the tensor enumerator \(\mathbf{A}(\mathbf{u})\) of the local atomic lego (e.g. the \([[5,1,3]]\) code in HaPPY) on the corresponding vertex in the enumerator tensor network. If the tensor in the QL network has a gauge leg, then we contract the \(\mathbf{B}(\mathbf{u})\) tensor enumerator of the local lego (Figure 5). The resulting tensor network enumerates the weights of all \(g\bar{I}\) for \(g\in\mathcal{G}\). Then the difference \(\tilde{C}(\mathbf{u})=B(\mathbf{u})-I(\mathbf{u})\) between these enumerators only contains the weights of non-identity logical operators, which informs us about the distance. This is also known as the _word distance_ [25, 26].
Similarly, if we want to compute the distance of a logical qubit in the stabilizer code (i.e. all logical, no gauge qubits), then we only enumerate
the stabilizer equivalent logical operations that act non-trivially on that qubit. For this we insert \(\mathbf{B}(\mathbf{u})\) on the vertex containing the logical qubit of interest and \(\mathbf{A}(\mathbf{u})\) everywhere else. This enumerator now only counts the stabilizer-equivalent logical operators of that particular logical qubit, instead of those of all logical qubits as \(B(\mathbf{u})\) does. This can be quite relevant in the holographic code, where the central bulk qubit can have a distance that scales with system size, whereas the ones on the periphery have constant distance [25].
For instance, in the holographic HaPPY code, Figure 5, one can treat the system as a stabilizer code. Then the _stabilizer distance_ can be determined by counting all non-identity logical operators associated with a particular bulk qubit. In the figure we choose the logical qubit living on the central tile. The stabilizer distance is then the minimum power of \(z\) in \(C_{0}(z)\) for which the coefficient is non-zero. If we treat it as a subsystem code, then the distance should instead be counted by including the logical operators of other bulk qubits as gauge qubits using the enumerator \(\tilde{C}_{0}(z)\).
If each tile has multiple input legs, some of which are gauge and others logical, we then need to make slight modifications to the tensor enumerators used in the above prescription. For a word distance computation, we send \(\mathbf{A}(\mathbf{u})\rightarrow\mathbf{A}^{\prime}(\mathbf{u})\) such that \(\mathbf{A}^{\prime}(\mathbf{u})\) enumerates all logical identity operators of the local lego code, e.g. the \([[4,2,2]]\) lego on even columns of a 2d Bacon-Shor code tensor network (Fig. 18). In other words, we enumerate all elements of the non-abelian gauge group \(\mathcal{G}\). For Pauli operators, this modification is rather straightforward as we simply count the number of operators that act as identity on the logical legs. More precisely, let
\[\Pi^{\prime}=\frac{1}{|\mathcal{G}|}\sum_{g\in\mathcal{G}}g \tag{3.19}\]
and prepare \(\mathbf{A}^{\prime}(\mathbf{u})=\mathbf{A}^{(J)}(\mathbf{u};M_{1}=\Pi^{\prime },M_{2}=\Pi^{\prime})\) as a reduced tensor enumerator.
For stabilizer distance computations, we send \(\mathbf{B}(\mathbf{u})\rightarrow\mathbf{B}^{\prime}(\mathbf{u})\), the latter of which enumerates the number of logical operators that act as the identity on gauge qubits. This can be prepared by a similar reduced tensor enumerator such that \(\mathbf{B}^{\prime}(\mathbf{u})=\mathbf{A}^{(J)}(\mathbf{u};M_{1}=\Pi^{\prime \prime},M_{2}=\Pi^{\prime\prime})\) where
\[\Pi^{\prime\prime}\propto\sum_{g\in\mathcal{G}^{\prime}}g\]
and \(\mathcal{G}^{\prime}\) is generated by the center of \(\mathcal{G}\) and logical operators \(\{\mathcal{L}_{\text{logical}}\otimes I_{\text{gauge}}\}\) that act as identity on the gauge qubits. In other words, we construct a new gauge group \(\mathcal{G}^{\prime}\) where we swapped the roles of the gauge and logical qubits in the original code defined by \(\mathcal{G}\).
Finally, for a tensor network whose encoding map has a non-trivial kernel, i.e., the logical legs are inter-dependent, one should take extra care in applying the above recipe for building a useful mixed enumerator. For instance, in the Bacon-Shor code tensor network (Fig. 18), multiple input legs are inter-dependent and several of them correspond to the same logical or gauge degree of freedom. One then needs to make sure that the type of the legs (gauge vs logical) is being tracked consistently across different tensors when contracting the tensor network.
Instead of the mixed enumerators introduced above, we can also directly use tensor enumerators to study subsystem codes. This is similar to the approach by [18, 12]. The recipe for building a relevant tensor enumerator of the code is quite similar. For each tensor that contains the logical leg/qubit, we put down a tensor enumerator \(\mathbf{B}(z)\) of the _encoding tensor_ at that node (e.g. the tensor of the \([[6,0,4]]\) state in the HaPPY code) in the tensor network, except now that we keep the logical index open in addition to the contracted legs of the tensor. The components of the resulting tensor enumerator \(\mathbf{B}(z)\) now contains the weight distribution for each logical Pauli operator. This allows us to read off the distances for each logical operator, after subtracting off the part that enumerates the logical identity by fixing
certain tensor indices to \(0\). This can be performed efficiently if the number of open logical legs is not too many, although the number of gauge qubits can still be high.\({}^{1}\)
Figure 5: Left: tensor network for a mixed enumerator of the holographic HaPPY pentagon code, where blue indicates insertion of \(\mathbf{A}(z)\) and orange \(\mathbf{B}(z)\). Right: different tensor networks compute the different distribution of logical operators. The same exercise can be repeated for logical qubits at different distances from the center (labelled 0,1,2,3,4).
Footnote 1: In practice, it may be more efficient to compute the tensor enumerator \(\mathbf{A}(z)\) of the entire code, then perform a MacWilliams transform on the tensor enumerator.
### Higher Genus Enumerator
We can also study subsystem codes using higher genus enumerators. Just as in the classical case, we can extend this to higher genus weight enumerators by introducing weight functions that count the number of factors where tuples of error operators realize specific error patterns.
For concreteness, consider genus \(g=2\). We introduce \(q^{4}\) variables \(\mathbf{u}=(u_{G_{1},G_{2}}:G_{1},G_{2}\in\mathcal{E})\), and weight function \(\mathrm{wt}:\mathcal{E}^{n}\rightarrow\mathbb{Z}^{q^{4}}\) that counts factors
\[\mathrm{wt}(E,F)=(\#\{i:E_{i}=G_{1},F_{i}=G_{2}\}:G_{1},G_{2}\in\mathcal{E}).\]
The genus-2 weight enumerators of Hermitian operators \(M_{1},M_{2}\) on \(\mathfrak{H}\otimes\mathfrak{H}\) are
\[A^{(2)}(\mathbf{u};M_{1},M_{2}) =\sum_{E,F\in\mathcal{E}^{n}}\mathrm{Tr}((E\otimes F)M_{1}) \operatorname{Tr}((E\otimes F)^{\dagger}M_{2})\mathbf{u}^{\mathrm{wt}(E,F)}\] \[B^{(2)}(\mathbf{u};M_{1},M_{2}) =\sum_{E,F\in\mathcal{E}^{n}}\mathrm{Tr}((E\otimes F)M_{1}(E \otimes F)^{\dagger}M_{2})\mathbf{u}^{\mathrm{wt}(E,F)}.\]
Notice the coefficients of these enumerators are just what we would use for a system with \(2n\) factors. The interesting addition is the extra variables that track correlations in the weights of \(E\) and \(F\). Indeed if we were to ignore these correlations and evaluate \(u_{G_{1},G_{2}}=u_{G_{1}}u_{G_{2}}\) then we recover the ordinary enumerators:
\[A^{(2)}(\{u_{G_{1}}u_{G_{2}}\};M_{1}\otimes M_{1}^{\prime},M_{2} \otimes M_{2}^{\prime})\] \[\quad=A(\{u_{G}\};M_{1},M_{2})\cdot A(\{u_{G}\};M_{1}^{\prime},M _{2}^{\prime})\] \[\quad=A(\{u_{G}\};M_{1}\otimes M_{1}^{\prime},M_{2}\otimes M_{2} ^{\prime}).\]
and similarly for \(B^{(2)}\).
To capture new information in the higher genus enumerators, we evaluate their variables in interesting ways. For example, consider the case where \(M_{1}=M_{2}=\Pi_{1}\otimes\Pi_{2}\) where \(\Pi_{1}\) and \(\Pi_{2}\) are projections that need not commute. Evaluating
\[u_{G_{1},G_{2}}=\left\{\begin{array}{cl}u_{G_{1}}&\text{if }G_{1}=G_{2}\\ 0&\text{if }G_{1}\neq G_{2},\end{array}\right.\]
we have
\[\mathbf{u}^{\mathrm{wt}(E,F)} =\prod_{G_{1},G_{2}}u_{G_{1},G_{2}}^{\mathrm{wt}_{G_{1},G_{2}}(E,F)}\] \[=\left\{\begin{array}{cl}\prod_{G}\,u_{G}^{\mathrm{wt}_{G}(E)}&\text{if }E=F\\ 0&\text{if }E\neq F.\end{array}\right.\]
Thus
\[A^{(2)}(\mathbf{u};\Pi_{1}\otimes\Pi_{2})\] \[=\sum_{E\in\mathcal{E}^{n}}\mathrm{Tr}(E\Pi_{1})^{2}\,\mathrm{ Tr}(E\Pi_{2})^{2}\mathbf{u}^{\mathrm{wt}(E)}\] \[B^{(2)}(\mathbf{u};\Pi_{1}\otimes\Pi_{2})\] \[=\sum_{E\in\mathcal{E}^{n}}\mathrm{Tr}(E\Pi_{1}E\Pi_{1})\, \mathrm{Tr}(E\Pi_{2}E\Pi_{2})\mathbf{u}^{\mathrm{wt}(E)}.\]
In particular, consider a subsystem code whose gauge group decomposes as \(\mathcal{G}=\mathcal{G}_{1}\cup\mathcal{G}_{2}\) where each of \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) is a maximal Abelian subgroup and \(\mathcal{C}(\mathcal{G})=\mathcal{G}_{1}\cap\mathcal{G}_{2}\). This is the case for generalized Bacon-Shor codes [27] where \(\mathcal{G}_{1}\) consists of the \(X\)-type generators of \(\mathcal{G}\) and the row operators, while \(\mathcal{G}_{2}\) is the \(Z\)-type generators and the column operators.\({}^{2}\) Each of \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) could be considered a stabilizer in its own right, however the weight enumerators of these have little to do with the subsystem code of \(\mathcal{G}\).
Footnote 2: In fact this is true of every subsystem code: using the usual symplectic formalism of stabilizer groups, the gauge group becomes a subspace, and a Darboux basis for this subspace provides the two isotropic subspaces that characterize \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\).
Nonetheless, consider them as stabilizers of codes and write the projections onto their code subspaces as \(\Pi_{1}=\frac{1}{2^{n-k_{1}}}\sum_{S\in\mathcal{G}_{1}}S\) and \(\Pi_{2}=\frac{1}{2^{n-k_{2}}}\sum_{S\in\mathcal{G}_{2}}S\) where \(k_{1},k_{2}\) are the dimensions of
these codes. Then
\[\operatorname{Tr}(E\Pi_{1})^{2}=\left\{\begin{array}{cc}4^{k_{1}}&\text{if }E \in\mathcal{G}_{1}\\ 0&\text{otherwise},\end{array}\right.\]
and similarly for \(\operatorname{Tr}(E\Pi_{2})^{2}\), and therefore
\[A^{(2)}(\mathbf{u};\Pi_{1}\otimes\Pi_{2})=4^{k_{1}+k_{2}}\sum_{ E\in\mathcal{G}_{1}\cap\mathcal{G}_{2}}\mathbf{u}^{\text{wt}(E)}\] \[\quad=4^{k_{1}+k_{2}}\sum_{d=0}^{n}\#(\mathcal{E}^{n}[d]\cap \mathcal{C}(\mathcal{G}))w^{n-d}z^{d}\]
is the enumerator of the stabilizer of the subsystem code of \(\mathcal{G}\). Also
\[\operatorname{Tr}(E\Pi_{1}E\Pi_{1})=\left\{\begin{array}{cc}2^{k_{1}}& \text{if }E\in\mathcal{N}(\mathcal{G}_{1})\\ 0&\text{otherwise},\end{array}\right.\]
and similarly for \(\operatorname{Tr}(E\Pi_{2}E\Pi_{2})\). Hence
\[B^{(2)}(\mathbf{u};\Pi_{1}\otimes\Pi_{2})=2^{k_{1}+k_{2}}\sum_ {E\in\mathcal{N}(\mathcal{G}_{1})\cap\mathcal{N}(\mathcal{G}_{2})}\mathbf{u}^ {\text{wt}(E)}\] \[\quad=2^{k_{1}+k_{2}}\sum_{d=0}^{n}\#(\mathcal{E}^{n}[d]\cap \mathcal{N}(\mathcal{G}))w^{n-d}z^{d}\]
is the enumerator for the logical operators of the subsystem code.
### Coset Enumerator and errors with non-trivial syndrome
Until now, we have been working with particular instances of weight enumerator polynomials, that is, \(M_{1}=M_{2}=\Pi\) or related operators. For stabilizer codes, they recover the weight distributions of error operators that have trivial syndrome. However, for the purpose of decoding, it is also useful to learn the probability \(P(E|s)\) for any error syndrome \(s\).
Let \(E\) be a Pauli error that gives syndrome \(\sigma(E)=s\). We consider the probability \(P(E\bar{L})\) of errors that are stabilizer equivalent to \(E\bar{L}\), where \(\bar{L}\) is any logical operator. If we have this distribution, then we can construct a maximum likelihood decoder by undoing the \(E\bar{L}\) with the maximal probability of \(P(E\bar{L})\) given syndrome \(s\). Similarly, one could apply a Bayesian decoder where \(E\bar{L}\) is applied with the probability \(p(E\bar{L}|\sigma(E))\) for error correction.
**Definition 3.1**.: A coset weight enumerator for a stabilizer code is given by \(A^{s}(\mathbf{u};E_{s},\Pi)=A(\mathbf{u};M_{1},M_{2})\) where \(M_{1}=M_{2}^{\dagger}=E_{s}\Pi\) for some Pauli operator \(E_{s}\) with syndrome \(s\). Its "dual" enumerator is \(B^{s}(\mathbf{u};E_{s},\Pi)=B(\mathbf{u};M_{1},M_{2})\) where \(M_{1}=\Pi\), \(M_{2}=E_{s}\Pi E_{s}^{\dagger}\). Their tensorial versions are similarly defined with \(M_{1},M_{2}\) taking on these specific values. The same definition applies for the generalized enumerators \(\bar{A}^{s},\bar{B}^{s}\).
Note that \(M_{1},M_{2}\) here are no longer hermitian for \(A\) and the operators used for \(A,B\) are different. As a result, the "dual" enumerator is very different from its usual form. We do not use or prove a MacWilliams identity in this work, though it may be interesting to see if an analogous relation exists.
**Proposition 3.2**.: Up to an overall normalization, the coefficients of the coset enumerator \(A^{s}(\mathbf{u};E_{s},\Pi)\) counts the number of coset elements in \(E_{s}\mathcal{S}\) while \(B^{s}(\mathbf{u};E_{s},\Pi)\) enumerates the number of elements \(E_{s}\mathcal{N}(\mathcal{S})\).
Proof.: Let
\[\Pi_{s}=\frac{1}{|\mathcal{S}|}\sum_{D\in E_{s}\mathcal{S}}D\]
then for any \(E\in\mathcal{E}^{n}\)
\[\operatorname{Tr}[E\Pi_{s}]\operatorname{Tr}[E^{\dagger}\Pi_{s}^{\dagger}]=| \operatorname{Tr}[E\Pi_{s}]|^{2}=q^{2n}/|\mathcal{S}|^{2}\]
if \(E\in E_{s}\mathcal{S}\) and zero otherwise. Hence, up to a constant normalization factor, the coefficient of the coset enumerator counts the number of coset elements of a particular weight. As we do not track signs in the distribution, no generality is lost by choosing the left vs right coset.
The \(B\) type enumerators have coefficients
\[\operatorname{Tr}[E\Pi E^{\dagger}\Pi_{s}]=\operatorname{Tr}[\Pi_{E}\Pi_{s}]\]
where \(\Pi_{s}=E_{s}\Pi E_{s}^{\dagger},\Pi_{E}=E\Pi E^{\dagger}\). As these projectors are orthogonal for different syndromes in a stabilizer codes, this coefficient is only non-trivial when \(E\in E_{s}\mathcal{N}(\mathcal{S})\), i.e., when \(E\) is any logical error with syndrome \(s\). Therefore, up to normalization, we again obtain an enumerator that captures the weight distribution of \(E_{s}\mathcal{N}(\mathcal{S})\).
Practically, the process of preparing this enumerator using tensor network is the same as before except we modify the values of \(M_{1}\) and \(M_{2}\). First we identify the physical qubits on which \(E_{s}\) has support. Suppose \(E_{s}\) acts on a particular lego non-trivially with \(E_{s}^{T}\), then we prepare the \(A\)-type tensor coset enumerator of this lego with
\(M_{1}=M_{2}^{\dagger}=E_{s}^{T}\Pi^{T}\) where \(\Pi^{T}\) is the projection onto the code subspace of the local quantum lego. Such a tensor enumerator counts elements in the coset \(E_{s}^{T}\mathcal{S}^{T}\). We then repeat this for all such tensors. For the ones on which \(E_{s}\) has no support, we compute their tensor enumerators with \(M_{1}=M_{2}=\Pi^{T}\) as usual. Then we contract these tensor enumerators in the same way as we did for building \(A(\mathbf{u};M_{1},M_{2})\), e.g. Figure 8. The resulting enumerator polynomial is the desired \(A_{s}(\mathbf{u};E_{s}\Pi)\). Also note that \(M_{1},M_{2}\) take on a special form that satisfies Proposition B.1, hence we can compute it more efficiently using a tensor network with reduced bond dimension, much akin to its weight enumerator counterparts.
With these weight distributions, it is obvious that we can then compute \(P(E|s)\). For example, suppose we are given the coset enumerator \(A^{s}(z,w;E_{s},\Pi)\) for a code space defined by \(\Pi\), then under symmetric depolarizing channel with physical error rate \(p\),
\[p_{s}=B^{s}(z=p,w=1-3p)/K\]
is the probability of returning an error syndrome \(s\) with noiseless checks and \(A^{s}(z=p,w=1-3p)/K^{2}\) is the probability of errors that are stabilizer equivalent to \(E_{s}\). Indeed, this also extends trivially to double and complete enumerators by evaluating the polynomial at the respective parameters we used for the trivial syndrome examples in Sec. 3.2.
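A hedged sketch of these two evaluations, taking precomputed coset coefficients (e.g. from a tensor network contraction with \(M_{1},M_{2}\) set as in Definition 3.1) as plain lists; the function names are illustrative:

```python
def syndrome_probability(Bs, K, n, p):
    """p_s = B^s(z=p, w=1-3p)/K: probability of observing syndrome s with noiseless checks."""
    return sum(c * p**d * (1 - 3*p)**(n - d) for d, c in enumerate(Bs)) / K

def coset_probability(As, K, n, p):
    """A^s(z=p, w=1-3p)/K^2: probability of an error stabilizer-equivalent to E_s."""
    return sum(c * p**d * (1 - 3*p)**(n - d) for d, c in enumerate(As)) / K**2
```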
In fact, such kind of error probabilities generalize to any error channel. Similar to the non-detectable errors we have analyzed in the previous section, it is possible to compute \(p(\bar{L}|s)\) using generalized enumerators as long as we replace \(M_{1},M_{2}\) by the appropriate values used in the coset enumerators.
**Theorem 3.3**.: Consider a stabilizer code where \(\Pi\) is the projection onto its code subspace of dimension \(K\) and let \(E_{s}\) be an error operator with syndrome \(s\). Let the error channel be given by the Kraus form \(\mathcal{E}(\cdot)=\sum_{\mathbf{i}}\mathcal{K}_{\mathbf{i}}\cdot\mathcal{K}_ {\mathbf{i}}^{\dagger}\). Then
\[p(E_{s}\mathcal{S}\cap s) =\frac{1}{K(K+1)}\Big{(}\sum_{\mathbf{i}}\operatorname{Tr}[ \mathcal{K}_{\mathbf{i}}\Pi\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi_{s}]\] \[\quad+\sum_{\mathbf{i}}\operatorname{Tr}[\mathcal{K}_{\mathbf{ i}}\Pi E_{s}^{\dagger}]\operatorname{Tr}[\mathcal{K}_{\mathbf{i}}^{\dagger}E_{s} \Pi]\Big{)}\] \[p_{s} =\frac{1}{K}\sum_{\mathbf{i}}\operatorname{Tr}[\Pi\mathcal{K}_{ \mathbf{i}}^{\dagger}\Pi_{s}\mathcal{K}_{\mathbf{i}}],\]
where \(\Pi_{s}=E_{s}\Pi E_{s}^{\dagger}\).
Note that \(E_{s}\) need not be a Pauli operator but can take on a general form \(P_{s}\bar{L}\) where \(P_{s}\) is a Pauli error with syndrome \(s\) and \(\bar{L}\) is any unitary logical operation. For proof and generalization where the logical error is a general quantum channel, see Appendix D.3. We see that these terms in the expression share a great deal of similarities with Theorem 3.2 which computes the logical error probability for trivial syndromes. Indeed, by expanding the Kraus operators in the Pauli basis, we see that these expressions can again be written as generalized weight enumerators (3.8) that we used to compute the uncorrectable error rates. To obtain these error probabilities, we follow the identical recipe by decomposing the Kraus operators in the Pauli basis \(\sum_{i}K_{i}\cdot K_{i}^{\dagger}=\sum_{P\bar{P}}k_{P\bar{P}}P\cdot\bar{P}\) and evaluate \(\mathbf{u}(k_{P\bar{P}})\) at the appropriate values based on that decomposition (c.f. eqn 3.3). Finally, we set \(M_{1}=\Pi,M_{2}=E_{s}\Pi E_{s}^{\dagger}\) for the \(B\) type enumerator and \(M_{1}=\Pi E_{s}^{\dagger},M_{2}=M_{1}^{\dagger}\) for the \(A\) type.
Formally,
\[p(E_{s}\mathcal{S}\cap s) =\frac{1}{K(K+1)}\Big{(}B(\mathbf{k};\Pi,\Pi_{s})\] \[\quad+A(\mathbf{k};\Pi E_{s}^{\dagger},E_{s}\Pi)\Big{)}\] \[p_{s} =\frac{1}{K}B(\mathbf{k};\Pi,\Pi_{s})\]
where \(\mathbf{k}=\{k_{P\bar{P}}\}\) are the coefficients from the Pauli expansion. With these coset enumerators in hand, we are now ready to discuss optimal decoders for general noise channels.
### Decoders from weight enumerators
We see that one can express the probability
\[p(E_{s}\mathcal{S}|s)=p(E_{s}\mathcal{S}\cap s)/p_{s} \tag{3.20}\]
entirely in terms of weight enumerators. Suppose \(E_{s}=P_{s}\bar{L}\) where \(P_{s}\) is any Pauli error with syndrome \(s\), which can be obtained by solving a set of \(n-k\) linear equations, and \(\bar{L}\) is a logical operator, then the set of probabilities \(\mathcal{P}_{s}=\{p(P_{s}\bar{L}\mathcal{S}|s)\}\), as \(\bar{L}\) runs over logical operators, is sufficient for us to perform error correction. It is customary to generate the set \(\mathcal{P}_{s}\) for the set of \(\bar{L}\) that are logical Pauli operations as they form an operator basis for the code subalgebra and are thus sufficient to generate the conditional probabilities for any unitary logical operators.
#### 3.7.1 Maximum likelihood and Bayesian decoders
It is straightforward to implement a maximum likelihood decoder where we identify the logical operator \(\bar{L}_{m}\) for which \(p(P_{s}\bar{L}_{m}\mathcal{S}|s)=\max\mathcal{P}_{s}\). Then error correction is performed by acting \(\bar{L}_{m}\) and \(P_{s}\) following the syndrome measurements. In this case, it is sufficient to compute just the \(A\) enumerator because the \(B\) enumerators are independent of \(\bar{L}\) and only add to an overall normalization that does not impact our choice of the maximum element. When multiple global maxima exist, we choose one at random.
One can also correct errors based on the probability distribution of \(p(P_{s}\bar{L}\mathcal{S}|s)\) where we act on the state using operator \(P_{s}\bar{L}\) with the selfsame probability. We call this the Bayesian decoder. As we require that \(\sum_{\bar{L}}p(P_{s}\bar{L}\mathcal{S}|s)=1\) when summing over all Paulis, it is again sufficient to only compute the type \(A\) enumerator as the constant from \(B\) can be obtained by solving the above normalization condition.
For each syndrome \(s\), the complexity in implementing these decoders is therefore the complexity \(\mathcal{C}(A^{s})\) of computing \(A^{s}\) from tensor contractions. For some codes this can be performed efficiently, which we further elaborate in Sec. 4. Nevertheless, even if each such contraction is efficient, we would still have to compute \(q^{2k}\) enumerators as there are \(q^{2k}\) distinct logical Pauli operators. Therefore the overall complexity estimate for such a decoder is \(O(\mathcal{C}(A^{s})q^{2k})\).
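For concreteness, a minimal sketch of the maximum likelihood rule above under symmetric depolarizing noise: given the coset coefficients of \(A^{s}\) for each candidate logical class (each obtainable from a separate tensor contraction), pick the class with the largest evaluated coset probability. The dictionary format and function name are illustrative assumptions, not part of the paper's implementation.

```python
def ml_decode(coset_coeffs, n, p):
    """coset_coeffs: dict mapping a logical Pauli label L to the coefficient list of
    A^s(z, w; P_s L, Pi).  Returns the logical class maximizing the coset probability
    under symmetric depolarizing noise with physical error rate p (the common
    normalization by K^2 and p_s does not affect the argmax)."""
    def coset_prob(A):
        return sum(c * p**d * (1 - 3*p)**(n - d) for d, c in enumerate(A))
    return max(coset_coeffs, key=lambda L: coset_prob(coset_coeffs[L]))
```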
#### 3.7.2 Marginals
For a code where \(k\) is large, building the above maximum likelihood decoder remains challenging. However, it is possible to compute the "marginals" efficiently. Let us write any logical Pauli operator \(\bar{L}\) as
\[\bar{L}(\mathbf{a},\mathbf{b})\propto\bigotimes_{i=1}^{k}X_{i}^{a_{i}}Z_{i}^{ b_{i}},\;a_{i},b_{i}=0,\ldots q-1, \tag{3.21}\]
where \(\mathbf{a},\mathbf{b}\) are \(k\)-tuples with \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{k})\) and the same for \(\mathbf{b}\). Let \(\bar{L}_{i}(a_{i},b_{i})\) be the logical Pauli acting on the \(i\)th logical qubit. Consider the marginal error probability
\[p(E\bar{L}_{i})=\sum_{a_{j},b_{j};\forall j\neq i}p(E\bar{L}(\mathbf{a}, \mathbf{b})). \tag{3.22}\]
This can be computed using the mixed enumerator. Recall that one can compute \(p(\bar{L}_{i}|s=0)\) by inserting \(B\)-type tensor enumerators for all lego blocks whose logical legs correspond to qubits \(j\neq i\), and \(A\) type for blocks whose logical legs correspond to qubit \(i\). This enumerator, which we called \(I_{i}(z)\), records the weight distribution of logical operators \(\mathcal{N}_{i}\subset\mathcal{N}(S)\), which consists of all Pauli logical operators in the code that act as the identity on the \(i\)-th logical qubit. If we treat the other qubits \(\neq i\) as gauge qubits, we can think of it as recording all gauge equivalent representations of the identity operator. For example, this is what we have done for the Bacon-Shor code and the holographic code. For general cosets of an operator \(E_{s}\), we now compute the mixed coset enumerator for an operator \(E_{s}\) such that \(\sigma(E_{s})=s\). Let us construct the mixed enumerators
\[M(\mathbf{u};E_{s},\Pi)=\wedge_{J,J_{j}}\Big[A^{J}_{(i)}(\mathbf{u};E^{A}_{s},\Pi)\bigotimes_{j}B^{J_{j}}_{\neq i}(\mathbf{u};E^{B}_{s},\Pi)\Big]\]
where \(E_{s}=E^{A}_{s}\otimes E^{B}_{s}\) and \(E^{A,B}_{s}\) are the Pauli substrings that only have support on physical legs of legos that are mapped to type \(A\) or \(B\) tensor enumerators respectively. We take \(\wedge_{J,J_{j}}\) to be tracing over the appropriate legs required by the tensor network. This now enumerates the weights of \(E_{s}\mathcal{N}_{i}\). We can repeat this a number of times for different \(E_{s}=\bar{L}_{i}P_{s}\), and the resulting enumerators would provide the requisite error probabilities \(p(P_{s}\bar{L}_{i}|\sigma(P_{s}))\).
For example, under symmetric depolarizing noise with probability \(p\),
\[p(P_{s}\bar{L}_{i}|\sigma(P_{s}))=M(z=p,w=1-3p;P_{s}\bar{L}_{i},\Pi).\]
For other error models, we again select the appropriate parameters for the abstract n-tuple \(\mathbf{k}\) and weight function. See Sec. 3.2 and App. A.
A decoder can then choose an operator with the highest probability and correct the error by acting with \(E\bar{L}_{i}\) on the system. In the case where no other logical qubits are present, this reduces to the maximum likelihood decoder.
It is easy to generalize this so that \(\bar{L}\) can include multiple logical qubits in some set \(\kappa\), such that
\[M(\mathbf{u};E_{s},\Pi)=\wedge_{J_{i},J_{j}}\bigotimes_{i\in\kappa}A_{i}^{J_{i}}(\mathbf{u};E_{s}^{A},\Pi)\bigotimes_{j}B_{\not\in\kappa}^{J_{j}}(\mathbf{u};E_{s}^{B},\Pi).\]
However, in general, if we include \(|\kappa|\) qudits in the mixed enumerator such that we only integrate out \(j\not\in\kappa\), then we need to check \(q^{2|\kappa|}\) terms to find the error operator with the highest marginal probability.
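As a sketch of how these marginals would be consumed, the snippet below (Python; the `marginal_prob` callable is a stand-in for the mixed-enumerator contraction described above and is not part of any library) picks, for each logical qubit in a chosen set, the single-qubit logical class with the largest marginal probability.

```python
def marginal_decoder(syndrome, logical_qubits, marginal_prob,
                     paulis=("I", "X", "Y", "Z")):
    """For each logical qubit i, choose the single-qubit logical class L_i
    maximizing the marginal probability p(P_s * L_i | s).

    marginal_prob(syndrome, i, L) -> float is assumed to be supplied by the
    mixed coset enumerator contraction; here it is treated as a black box.
    """
    return {i: max(paulis, key=lambda L: marginal_prob(syndrome, i, L))
            for i in logical_qubits}
```

The recovery applied to logical qubit \(i\) is then \(E\bar{L}_{i}\), with \(\bar{L}_{i}\) the returned class.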
### Logical Error Rates
#### 3.8.1 Exact Computations
We have seen previously how one can compute the trivial syndrome enumerators, which yield the uncorrectable error probability. We can interpret the value \(p_{D}=1-A/B\) as a logical error rate for a decoder with perfect syndrome measurements such that one discards the state whenever a non-trivial syndrome is measured. For such processes, one can define an error detection threshold \(p_{th}\) such that \(p_{D}\) is suppressed as a function of \(d\) for error rates below the threshold. One such example is shown in Fig. 15 for the surface code and 2d color code.
**Remark 3.1**.: If a class of quantum codes has an error detection threshold under i.i.d. depolarizing error, then the threshold is \(p_{\mathrm{th}}=1/6\).
Proof.: Let \(A^{*}(z),B^{*}(z)\) be the enumerators with normalization such that \(A_{0}^{*},B_{0}^{*}=1\). Then for a quantum code with dimension \(K\), \(A(z)=K^{2}A^{*}(z)\) and \(B(z)=KB^{*}(z)\). Thus
\[\frac{B^{*}(z)-A^{*}(z)}{B^{*}(z)}=1-\frac{1}{K}\frac{A(z)}{B(z)}.\]
Now, homogenizing, the MacWilliams identity gives \(B(w,z)=A((w+3z)/2,(w-z)/2)\), and using \(z=p=1/6\) and \(w=(1-3p)=1/2\), we see \(A(1/2,1/6)=B(1/2,1/6)\) for every quantum code. Therefore all curves \(p_{D}(p)\) cross at \((1/6,1-1/K)\).
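As a quick numerical sanity check, the snippet below evaluates the detection error rate for the \([[5,1,3]]\) code, whose normalized enumerators \(A^*(w,z)=w^5+15wz^4\) and \(B^*(w,z)=w^5+30w^2z^3+15wz^4+18z^5\) are taken as given (a standard fact about the perfect code, assumed here rather than derived).

```python
def A_star(w, z):
    """Normalized enumerator A*(w, z) of the [[5,1,3]] code (stabilizer side)."""
    return w**5 + 15 * w * z**4

def B_star(w, z):
    """Normalized enumerator B*(w, z) of the [[5,1,3]] code (normalizer side)."""
    return w**5 + 30 * w**2 * z**3 + 15 * w * z**4 + 18 * z**5

def p_detect(p):
    """Post-selected ('detect and discard') logical error rate of the [[5,1,3]]
    code under i.i.d. depolarizing noise with total single-qubit error rate p."""
    w, z = 1 - 3 * p, p
    return 1 - A_star(w, z) / B_star(w, z)

print(p_detect(1 / 6))   # -> 0.5 (= 1 - 1/K with K = 2), the universal crossing point
print(p_detect(0.01))    # -> ~3e-5: strongly suppressed below the threshold
```

At \(p=1/6\) the output is \(1-1/K\) for \(K=2\), exactly as the remark predicts.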
We may similarly ask whether the current tensor network method can efficiently compute the exact logical error rate under other decoders. We do not provide such a method in this work, though it may be an interesting direction. A simple application of the current method fails to be efficient in the following examples.
The exact logical error rate with maximum likelihood decoder can be expressed as
\[p_{L}=\sum_{s}\Big[B^{s}(\mathbf{u};P_{s},\Pi)-\max_{\bar{L}\in\mathcal{L}}\{A^{s}(\mathbf{u};P_{s}\bar{L},\Pi)\}\Big]\]
and the error rate for a Bayesian decoder is
\[p_{L}=1-\sum_{s}\frac{1}{B^{s}(\mathbf{u};P_{s},\Pi)}\sum_{\bar{L}}A^{s}( \mathbf{u};P_{s}\bar{L},\Pi)^{2}.\]
We see that both of them involve non-linear functions of the weight enumerators, which makes it difficult to compute efficiently through a tensor network method. It would appear that one has to sum over exponentially many syndromes even if each enumerator can be produced efficiently.
This does not mean that enumerators cannot improve the computation of logical error rates. In practical decoding, it is far more relevant to consider a sample with only polynomially many distinct syndromes after running the decoder for a reasonable amount of time. The same is true of all sampling-based simulations that are currently used for error-rate and threshold computations.
#### 3.8.2 Error rate estimation
In addition to computing exact error probabilities given syndrome \(s\), one can also use enumerators to provide more accurate estimates for logical error rates in conjunction with sampling-based methods.
Conventional sampling methods generate errors \(E\) based on particular noise models. Once the error is generated, its associated syndrome \(\sigma(E)\) is determined. Note that for noiseless syndrome measurements, \(\sigma(E)\) always outputs a syndrome \(s\) deterministically. However, for more realistic models with faulty measurements, the outcome \(s\) can depend also on the noisy measurement process. A decoder \(\mathcal{D}(\sigma(E))\) then takes the syndrome and suggests a recovery operator \(R\) with probability \(p_{\mathcal{D}}(R|s),\sum_{R}p(R|s)=1\). If \(RE\sim\bar{L}\) is equivalent to a non-identity logical operator, then a logical error has occurred and this adds to the error probability \(p_{L}\). This process is repeated until a large enough sample size has been established such that the overall \(p_{L}\) estimate is believed to have sufficiently converged.
We can improve upon this method, especially for estimates derived from rare events/syndromes, using enumerators. Given an error model (e.g. depolarizing noise with fixed error probability \(p\)), a set of errors is generated using existing sampling methods. Subsequent syndrome measurements (either noiseless or noisy) lead to a sampled syndrome distribution \(\mathcal{P}(s)\) such that \(\sum_{s}\mathcal{P}(s)=1\), with support over only polynomially many distinct syndromes. In our case, we assume that we are given the distribution \(\mathcal{P}(s)\), the error correcting code (along with its tensor network construction), the error model in question, and a decoder \(\mathcal{D}\) of the user's choice.
The logical error rate estimates are thus given by
\[\bar{p}_{L}(\mathbf{k})=\sum_{s}\mathcal{P}(s)\sum_{R}p_{\mathcal{D}}(R|s) \Big{(}1-\frac{A_{s}(\mathbf{k};R,\Pi)}{B_{s}(\mathbf{k};\Pi,\Pi_{s})}\Big{)}\]
where \(A_{s}(R)/B_{s}\) is precisely the probability that the decoder's choice of \(R\) successfully corrects the error given the syndrome. For a maximum likelihood decoder, \(p_{\mathcal{D}}(R|s)\) vanishes for all but one \(R\). For a pure sampling-based method, estimating the probability \(A_{s}(R)/B_{s}\) would usually require a large number of events before the estimate converges to its true value; its estimate for rare syndromes can therefore be wildly inaccurate. Here, with the enumerator method, we can compute these quantities exactly, thereby improving the accuracy of \(\bar{p}_{L}\). It is also useful sometimes to further sort the logical error rate by operator types. This can be done by excluding certain terms from the above summation over \(R\). We do not provide the explicit forms here as the extension is somewhat trivial and situation-dependent.
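A minimal sketch of this hybrid estimate is given below (Python). The sampled syndrome distribution, the decoder's conditional distribution, and the enumerator evaluations are all passed in as opaque user-supplied objects; none of the names refer to an existing library.

```python
def estimate_logical_error_rate(sampled_syndromes, decoder, A_s, B_s):
    """Hybrid logical error rate estimate combining a sampled syndrome
    distribution with exact enumerator evaluations.

    sampled_syndromes : dict mapping syndrome s -> empirical probability P(s)
    decoder           : callable s -> dict mapping recovery R -> p_D(R|s)
    A_s               : callable (s, R) -> coset enumerator value A_s(k; R, Pi)
    B_s               : callable s -> trivial-coset enumerator value B_s(k; Pi, Pi_s)
    A_s and B_s stand in for the tensor network contractions described in the text.
    """
    p_L = 0.0
    for s, P_s in sampled_syndromes.items():
        for R, p_R in decoder(s).items():
            p_success = A_s(s, R) / B_s(s)   # exact probability that R succeeds given s
            p_L += P_s * p_R * (1.0 - p_success)
    return p_L
```

For a maximum likelihood decoder the dictionary returned by `decoder(s)` contains a single recovery operator with probability one, so the sum over \(R\) collapses to one exact term per sampled syndrome.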
In scenarios where the computational cost of the enumerators is relatively high, one can use them to complement, for instance, a Monte Carlo method, so that only the error rates associated with rare syndromes are computed using weight enumerators.
_Faulty measurements:_ With logical error probabilities in hand, we can compute error thresholds in the usual way by repeating such calculations or estimations for a class of codes with different distances. Note that the use of enumerators above is compatible with any error model composed of identical single qubit error channels. The computation also fully accommodates different models of noisy syndrome measurements, as they only affect the distribution \(\mathcal{P}(s)\). Furthermore, the impact of each decoder can be independently evaluated to produce the conditional probability \(p(R|s)\). We hasten to point out that the choice of decoder here is completely arbitrary and not limited to the decoders we constructed in Sec 3.7 based on weight enumerators.
Since the contributions from the error channel, noisy measurements, decoders, and enumerators can be separated into independent modules, one can prepare them separately. For example, one can prepare a syndrome distribution \(\mathcal{P}_{0}(s)\) with noiseless measurements. If the measurements are noisy, they are given by some set of transition probabilities \(p(s_{f}|s_{i})\) which depend solely on the noise model associated with the measurement. Composing these probabilities we get
\[\mathcal{P}(s)=\sum_{s_{i}}\mathcal{P}_{0}(s_{i})p(s|s_{i}).\]
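In code, this composition is a simple fold of the noiseless syndrome distribution through the measurement-noise transition probabilities; the sketch below (Python, illustrative names) assumes both are given as dictionaries.

```python
def compose_syndrome_distribution(P0, transition):
    """Fold the noiseless syndrome distribution P0[s_i] through the measurement
    noise channel transition[s_i][s_f] = p(s_f | s_i) to get the observed P[s_f].
    Syndromes missing from `transition` are assumed to be read out faithfully."""
    P = {}
    for s_i, p_i in P0.items():
        for s_f, p_t in transition.get(s_i, {s_i: 1.0}).items():
            P[s_f] = P.get(s_f, 0.0) + p_i * p_t
    return P
```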
Once the set of relevant syndromes has been established, which we take to be \(poly(n-k)\) in size, we create the decoding table from which \(p_{\mathcal{D}}(R|s)\) can be obtained. At the same time, the enumerators that depend on \(s\) and \(R\) may be prepared in parallel, if needed. In many cases, exact contractions may not be needed as we may not require the same level of accuracy as for distance verification. In such cases, approximate but efficient contraction algorithms may be sufficient.
## 4 Computational Complexity
### General Comments
#### 4.1.1 Brute Force Method
For a generic stabilizer code, the construction of its weight enumerator polynomial is at least NP-hard. We thus expect the same for a generic quantum code. Indeed, since constructing enumerators solves the optimal decoding problem [28], such tasks must be at least #P-complete. A simple brute force algorithm is exponential in the system size. For stabilizer codes, one can enumerate all of the stabilizer or normalizer elements, which is of \(O(q^{n-k})\) and \(O(q^{n+k})\) respectively. This extracts the relevant coefficients \(A_{d},B_{d}\). For a general quantum code, each coefficient \(A_{d},B_{d}\) is already hard to compute, as it involves \(q^{n}\times q^{n}\) matrix multiplications. One then has to repeat this \(O(q^{2n})\) times for each error basis element. Therefore the complexity for the brute force method is \(O(q^{O(n)})\) for general quantum codes of local dimension \(q\). A slightly better strategy computes only the coefficients \(A_{d}\) and then
performs a MacWilliams transform, which is polynomial in \(n\). Therefore, for complexity estimates, it suffices to provide the estimate for computing \(A(\mathbf{u})\).
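For a small stabilizer code this brute force recipe is easy to spell out. The sketch below (Python with sympy) enumerates the stabilizer group of the \([[5,1,3]]\) code in its symplectic representation to obtain \(A^*(w,z)\), then applies the qubit MacWilliams transform with the normalization \(B^*=K\,A^*((w+3z)/2,(w-z)/2)\) so that \(B^*_0=1\); the generator list is the standard one for this code.

```python
from itertools import product
import sympy as sp

# Symplectic (x|z) representation of the [[5,1,3]] code's stabilizer generators
# (XZZXI and its cyclic shifts); overall signs are irrelevant for weight counting.
gens = [((1, 0, 0, 1, 0), (0, 1, 1, 0, 0)),   # X Z Z X I
        ((0, 1, 0, 0, 1), (0, 0, 1, 1, 0)),   # I X Z Z X
        ((1, 0, 1, 0, 0), (0, 0, 0, 1, 1)),   # X I X Z Z
        ((0, 1, 0, 1, 0), (1, 0, 0, 0, 1))]   # Z X I X Z

n, K = 5, 2
A = [0] * (n + 1)                 # A_d = number of stabilizer elements of weight d
for bits in product((0, 1), repeat=len(gens)):
    xs, zs = [0] * n, [0] * n
    for b, (gx, gz) in zip(bits, gens):
        if b:
            xs = [a ^ g for a, g in zip(xs, gx)]
            zs = [a ^ g for a, g in zip(zs, gz)]
    A[sum(1 for x, z in zip(xs, zs) if x or z)] += 1

w, z, u, v = sp.symbols('w z u v')
A_poly = sum(a * w**(n - d) * z**d for d, a in enumerate(A))   # A*(w,z), A*_0 = 1
# Qubit MacWilliams transform, normalized so that B*_0 = 1:
B_poly = sp.expand(K * A_poly.subs({w: (u + 3 * v) / 2, z: (u - v) / 2})).subs({u: w, v: z})
print(A)                  # [1, 0, 0, 0, 15, 0]
print(sp.expand(B_poly))  # w**5 + 30*w**2*z**3 + 15*w*z**4 + 18*z**5 (up to term order)
```

The two printed weight distributions first deviate at weight 3, recovering the distance \(d=3\) of this code.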
#### 4.1.2 Tensor network method
Now we analyze how our method improves this picture assuming the QL constructions are known.
Tensor preparation overhead. Let us first revisit the encoding tensor network of an \([[n,k]]\) stabilizer code with local dimension \(q\) where each tensor is obtained from a small stabilizer code. We assume that the degree of each tensor (including dangling legs) is bounded by some constant \(c\). This is to ensure that the complexity in constructing the tensor enumerator of each node is upper bounded by a constant overhead3. Then consider the graph \(G=(V,E)\) produced from the tensor network by removing all dangling legs such that the tensors are vertices and contracted legs are edges. Suppose the tensor network representation is one such that \(|V|\leq C(n+k)\) for some constant \(C\), then preparation of the lego blocks has worst case complexity \(O((n+k)q^{5c})\). In fact, many tensor networks consist of only a few types of tensors, e.g. recall that any quantum lego structure is constructible from a constant number of distinct blocks, making even \(O(q^{5c})\) sufficient. Therefore the overhead for tensor preparation is usually constant while a generous upper bound is at most linear in the system size. Here we assume that the tensor network does not contain an overwhelming number of tensors that have no dangling legs, e.g. a deep quantum circuit. This assumption can always be satisfied (e.g. MPS).
Footnote 3: For stabilizer codes, if there are \(k_{v}\) logical legs on a tensor on a node \(v\), then building \(\mathbf{A}_{v}(z)\) is upper bounded by complexity \(O(q^{c-2k_{v}})\) and is less expensive compared to that of \(\mathbf{B}_{v}(z)\). For general quantum codes where one uses the full tensor enumerator, preparing the coefficients of \(\mathbf{A}_{v}(z)\) requires a worst case of \(O(q^{(5c-4k_{v})})\) operations.
Tensor Contraction. We now contract these tensors to build up the tensor network. Recall that each tensor contraction may be construed as a matrix multiplication. Suppose we have two tensors with \(p\) and \(m\) legs respectively, with \(p\leq m\), and we contract \(n\leq p\) legs. For the most general quantum code, we need to use the full tensor enumerators as building blocks, which have bond dimension \(\chi=q^{4}\) and can be reshaped as a multiplication of two matrices of size \(\chi^{(p-n)}\times\chi^{n}\) and \(\chi^{n}\times\chi^{(m-n)}\). Hence each contraction step with the same parameters above has worst case \(O(\chi^{(p+m-n)})\). For codes that only need reduced enumerators, this can be done with \(\chi=q^{2}\). For stabilizer codes, these matrices are especially sparse and have at most \(q^{p},q^{m}\) nonzero elements, thus each such contraction is loosely upper bounded by \(O(q^{p+m+\min(p,m)})\). Therefore, the computational complexity scales exponentially with the number of uncontracted legs during tensor contraction.
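The reshaping argument can be made concrete with a small numpy illustration (this is only the generic index bookkeeping, not the actual enumerator contraction; the number of contracted legs is called `c` here to avoid clashing with the qubit count \(n\)):

```python
import numpy as np

chi = 4                    # bond dimension per leg, e.g. q**2 for reduced enumerators
p, m, c = 3, 4, 2          # legs on the two tensors and number of contracted legs

T1 = np.random.rand(*([chi] * p))
T2 = np.random.rand(*([chi] * m))

# Contract the last c legs of T1 with the first c legs of T2 as one matrix product:
M1 = T1.reshape(chi**(p - c), chi**c)
M2 = T2.reshape(chi**c, chi**(m - c))
T12 = (M1 @ M2).reshape([chi] * (p - c + m - c))   # cost ~ chi**(p + m - c)

# The reshaped product agrees with a direct pairwise contraction:
assert np.allclose(T12, np.tensordot(T1, T2, axes=c))
```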
To incorporate the symbolic functions, additional degrees of freedom are often needed. The specifics can depend on the implementation. One method is to introduce a separate index with bond dimension \((n+1)^{\ell}\) to track the power of the polynomial (App. C). This adds another factor of \(n^{\ell}\) to the complexity counting above. The power \(\ell\) depends on the number of independent variables one needs to track. For Shor-Laflamme enumerators \(\ell=1\), but \(\ell>1\) for the refined enumerators. As this cost can vary depending on the treatment of symbolic objects, we do not include their contributions in the following estimates. One can easily restore them when needed.
Fully contracted tensor network. Aside from minor corrections related to symbolic manipulations and those associated with storing and manipulating large numbers, the computational complexity would be determined by the contractibility of the tensor network, which is ultimately dominated by the cost of multiplying large matrices. Heuristically, the cost of a tensor contraction scales linearly with the total dimension of the uncontracted indices, i.e., exponentially with the number of edges in a minimal cut through the tensor network.
In the ensuing discussion we will use base-\(e\) exponentials for complexity. For a tensor network with bond dimension \(\chi\), we can generally set \(e\to\chi\) to obtain the worst case complexity estimate. As we discussed earlier, the general rule of thumb for bond dimension is \(\chi=q^{4}\) for the full tensor enumerator, and \(\chi=q^{2}\) for codes that only require reduced enumerators. However, using a sparsity argument in stabilizer codes, the effective bond dimension needed in an efficient representation can even be as low as \(q\).
Let us represent a sequence \(\mathcal{S}_{G}\) of tensor contractions by a sequence of induced subgraphs \(H_{i}=(V_{i}^{H},E_{i}^{H})\) where \(V_{i}^{H}\subset V\), \(V_{i+1}^{H}=V_{i}^{H}\cup\{v_{i+1}\in V\setminus V_{i}^{H}\}\), and \(V_{0}^{H}=\{v_{0}:v_{0}\in V\}\). In other words, we construct a sequence of subgraphs by adding one additional vertex at a time. The sequence terminates at \(i=|V|-1\), when the subgraph contains \(G\). Let \(E_{c}(W,W^{\prime})=\{e=\{v_{a},v_{b}\}\in E:v_{a}\in W,v_{b}\in W^{\prime}\}\) denote the set of edges connecting any two sets of vertices \(W,W^{\prime}\) and let \(M_{i+1}\) be the connected component of \(H_{i+1}\) containing \(v_{i+1}\).
Then the complexity for the \(i\)th step of contraction is
\[\mathcal{C}_{i}\lesssim\exp(|E_{c}(V_{H_{i}}\cap V_{M_{i+1}},V\setminus V_{H_ {i}})|+deg(v_{i+1})-|E_{c}(V_{H_{i}}\cap V_{M_{i+1}},\{v_{i+1}\})|)\lesssim O( \exp(|C_{\max}|))\]
where \(|C_{\max}|=\max_{i}|E_{c}(V_{H_{i}},V\setminus V_{H_{i}})|\) is the largest possible cut through the tensor network during contraction. Then we see that the number of computations needed for calculating the final tensor enumerator of the tensor network is given by
\[\mathcal{C}=\sum_{i=0}^{|V|-1}\mathcal{C}_{i}\lesssim O(|V|\exp(|C_{\max}|)).\]
The upper bound is a pretty drastic overcounting especially if \(H_{i}\) contains many disconnected components, as many do not enter the contraction. In other words, as long as each connected component of the induced subgraph has only \(\log|V|\) connectivity with its complement throughout the sequence \(\mathcal{S}_{G}\), then the complexity is polynomial in \(|V|\).
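The bound is straightforward to evaluate for a candidate contraction order. The helper below (Python, illustrative) computes \(|C_{\max}|\), the largest cut between the already-contracted vertices and the rest, from an edge list and an ordering; the heuristic cost estimate is then \(O(|V|\exp(|C_{\max}|))\) as stated above.

```python
def max_cut_during_contraction(edges, order):
    """Return |C_max|: the largest number of edges crossing between the set of
    already-contracted vertices and the remaining vertices, over the given
    contraction order. `edges` is a list of 2-tuples of vertex labels."""
    contracted, worst = set(), 0
    for v in order:
        contracted.add(v)
        cut = sum(1 for a, b in edges if (a in contracted) != (b in contracted))
        worst = max(worst, cut)
    return worst

# A 6-site chain contracted end to end never exposes more than one cut edge,
chain = [(i, i + 1) for i in range(5)]
print(max_cut_during_contraction(chain, range(6)))           # -> 1
# whereas contracting the centre of a 6-vertex star first exposes all 5 edges.
star = [(0, i) for i in range(1, 6)]
print(max_cut_during_contraction(star, [0, 1, 2, 3, 4, 5]))  # -> 5
```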
### Cost for common codes
Tree tensor network. Tree tensor networks can be used to describe concatenated codes over \(n\) qubits (leaves). It is also known that these tensor networks can be contracted with polynomial complexity. A contraction algorithm would start from the leaves of the tree and contract into \(O(n)\) disconnected components of the graph. Each piece in this first layer of contraction has at most \(\mathcal{E}\sim O(c)\) open legs where \(c\) is the maximum degree or branching factor in the tree. Then at each iteration, we join the \(\leq c-1\) branches with another tensor. The maximum number of open legs on each connected component is always bounded by \(c\); therefore the complexity for each contraction is at most \(O(e^{2c})\). For a tree with \(n\) leaves, the overall complexity is \(O(ne^{2c})\) for tensors of bounded degree, Fig. 6. If the codes on each node are identical, then we only have to perform a separate contraction at each layer, yielding a complexity \(O(\log n)\), Table 1 (general and symmetric). The latter would be doubly exponentially faster than brute force enumeration.
Holographic code. For tensor networks of holographic codes [29, 30, 31, 32], the network is taken from a tessellation of the hyperbolic disk. This is slightly more connected than the tree tensor network (TTN) as it contains loops. The contraction strategy is similar to that of the TTN, except now the minimum cuts depend on the system size such that each connected component has at most \(O(\alpha\log n)\) open legs during the contraction. The parameter \(\alpha\) depends on the tessellation. Then
\[\mathcal{C}\sim\sum_{m=1}^{n}\exp(\alpha\log m+2c)\leq\exp(2c)n^{\alpha+1}\sim O(n^{\alpha+1}).\]
A similar counting argument holds for the hyperbolic surface code, where minimal cuts remain logarithmic in the system size.
Figure 6: Tree tensor network for concatenated code. It is efficiently contractible from the bottom up and can be parallelized.
Codes with shallow local circuits. If the encoding circuit of a code is known (e.g. for a stabilizer code once the check matrices are given), then we can easily convert the circuit into a tensor network. If these circuits are shallow, say, of constant or \(\log n\) depth, then one can contract the circuit-induced tensor network in the space-like direction, where the minimal number of edge cuts would be given by the circuit depth. Thus the enumerators of such codes can be prepared in \(poly(n)\) time.
Codes on a flat geometry. These are codes on a Euclidean geometry of dimension \(D\), such as ones where the code words may be described by a PEPS. Some examples include the 2d color code, the surface code, the Haah code [33], etc. Constructions like the Bacon-Shor code also fall under this category. Note that the worst case complexity holds for any such tensor network regardless of the specific tensor construction or its symmetries.
For codes whose discrete geometry is embeddable in \(D\)-dimensional Euclidean space, we simply "foliate" the lattice with co-dimension 1 objects. Each such object can be built up from \(O(n^{1-1/D})\) contractions where each contraction retains at most \(O(n^{1-1/D})\) open legs. Then \(\mathcal{C}\sim O(n\exp(n^{1-1/D}))\). Compared to the brute force method, this permits a sub-exponential speed up.
If the geometry of the network allows for fewer open edges during tensor contraction, then it is possible to get further speedups. Note the above counting assumes \(n\sim L^{D}\) for a system that has similar lengths in different directions. If all but one direction have bounded length \(L\) then we obtain an exponential speed up. For example, consider a rectangular surface code of size \(L\times n/L\) on a long strip where \(L\) is bounded, then each contraction along its shorter side is only \(O(\exp(L))\).
Note that the hardness of evaluating the weight enumerator polynomial here is directly tied to the hardness of the tensor network contraction. It was shown in [34] that contraction of PEPS is average case \(\#P\)-complete. Therefore there is strong reason to believe that an exponential speedup of this process is unlikely for both classical and quantum algorithmic approaches using tensor networks if one disallows post-selection and chooses the tensors in a Gaussian random fashion. However, we also note that often the tensors are strictly derived from stabilizer codes. Therefore it is not impossible that the added structure from these discrete symmetries and contractible 2D tensor networks may permit further speedups.
Codes with volume law entanglement. For states that have volume law entanglement for any subsystem, let us assume that the number of edges connected to vertices in a subregion is proportional to the number of vertices in that region, i.e. \(\eta|V|\). For simplicity, let us also assume that the numbers of tensors and qubits are roughly equal. In general, \(\eta\) need not be less than one. This is because each node may be connected to multiple nodes in the complementary region, while the entanglement captured in each bond is not maximal. However, if a carefully crafted tensor network is efficiently capturing the entanglement of the state, such that each bond is roughly maximally entangled, then we could expect the number of bonds cut to be less than or equal to the total number of qubits in the region for large enough subregions.
| Network architecture | TN cost | code examples |
|---|---|---|
| Tree | \(O(\log n)\) | concatenated (symmetric) |
| Tree, 1d area law | \(O(n)\) | concatenated (general), convolutional |
| 2d hyperbolic | \(O(n^{\alpha+1}),\alpha>0\) | holographic, surface code (hyperbolic) |
| (hyper)cubic | \(O(n\exp(n^{1-1/D}))\) | topological (Euclidean), Bacon-Shor |
| (hyper)cubic (bounded \(L\)) | \(O(n\exp(L^{D-1}))\) | rectangular surface code |
| \(\delta\)-volume law | \(O(n\exp(\delta n)),\delta<1\) | non-degenerate code, random code |
| generic encoding circuit | \(O(n^{2}\exp(n)/\log n)\) | generic stabilizer code |
Table 1: tabulates the computational cost for enumerator preparation from tensor network contractions. There is additional complexity associated with the symbolic manipulation of the polynomial, the storage of large numbers, and the MacWilliams transforms, which can contribute an additional cost that can be superlinear.
Then the cost for each contraction is \(O(\exp(\eta|V|))\). For \(\eta<1\), this provides a polynomial speed up. If the number of bonds cut is \(\leq d\) for any subsystem and the code distance is \(d=\delta n\), \(\delta<1\), which is the case for random codes, then the overall complexity would be
\[\mathcal{C}\sim O(n\exp(\delta n)), \tag{4.1}\]
which again admits a polynomial speed up.
However, if the number of bonds cut for a subsystem is \(\geq n\), then we do not get any speed up. This would be the case for all-to-all connected graphs where the edge cuts can be of size \((n/2)^{2}\); our algorithm at \(O(\exp(n^{2}/4))\) will actually be slower than the brute force algorithm. For another example, consider the encoding circuit of a generic stabilizer code, which has \(O(n^{2}/\log n)\) gate complexity and can be thought of as a tensor network. Suppose we simply contract the circuit tensor network time slice by time slice; then we expect \(|C_{\text{max}}|\sim n\) because each time slice would correspond to a tensor network with \(O(n)\) legs, and the worst case complexity scales as \(\sim O(n^{2}\exp(n)/\log n)\). This is fully expected, as we should not be able to solve a #P-complete problem in polynomial time. Therefore, in this regime, even if its tensor network description is optimal and minimizes the number of edge cuts for any subregion, the tensor network method would still only provide a polynomial speed up at best.
### Entanglement and Cost
In this work, we say that a tensor network representation is _good_ if its graph connectivity reflects the entanglement structure of the underlying state. In other words, the entanglement entropy of any subsystem can be reasonably well approximated by the number of edge cuts when bipartitioning the graph into the subsystem and its complement. This definition does not require the network to be efficiently contractible [34, 35]. If we use the tensor network connectivity interchangeably with subsystem entanglement, then we see that the complexity for computing the weight enumerator can be connected with the amount of entanglement present in the codewords. For more highly entangled codewords/states, the tensor network will be more connected, and hence the number of edge cuts for each subsystem will be higher. This provides us with a heuristic: the cost of the weight enumerator computation should generally scale as \(\sim\exp(S)\), where \(S\) is roughly the maximum amount of entanglement for subsystems we generate during tensor tracing. We see that this is indeed the case for our examples -- the complexity is polynomial for codes whose code words are weakly entangled, i.e., \(S\lesssim\log n\), and generally subexponential for states that satisfy an area law \(S\sim n^{1-1/D}\) for systems with \(D\)-dimensional Euclidean geometry.
For _non-degenerate_ quantum codes, the \((d-1)\)-site subsystems are maximally mixed, hence \(d\sim S\). Therefore, up to polynomial factor corrections, we expect the complexity lower bound for computing the enumerator polynomial to be comparable to that of finding the minimal distance in classical linear codes [36, 15], i.e.,
\[\mathcal{C}\sim\exp(O(S))\sim\exp(O(d)). \tag{4.2}\]
For this high level analysis, we will neglect other subleading terms and the dependence on the rate \(R=k/n\). Because stabilizer codes can be identified with classical linear codes over \(GF(4)\) [36], the tensor network method should have complexity scaling comparable to existing algorithms for non-degenerate stabilizer codes.
In _degenerate codes_, however, there exist subsystems where \(S\ll d\). For example, a gauge fixed Bacon-Shor code can be constructed from a TTN (Sec. 5.4). Although certain subsystems are highly entangled, the much weaker entanglement of some other subsystems allows one to engineer the network so that it is written in an efficiently contractible form, where each step of the contraction is bounded by a constant. Depending on the gauge, we can get away with an enumerator with as few as \(2\sqrt{n}\) such contractions. Although the code has overall distance \(d\sim\sqrt{n}\), the cost of preparing its enumerator is only \(O(\sqrt{n})\) time, compared to a naive distance scaling of \(O(\exp(\sqrt{n}))\) (Figure 20). Therefore, we expect some degenerate codes to have \(\mathcal{C}\ll\exp(O(d))\), which is a substantial speedup compared to known methods.
## 5 Examples
Now we examine a few examples by computing the enumerators for codes that have order a hundred qubits or so. These analyses are to showcase the tensor enumerator method; they are not
meant to be exhaustive nor do they represent the largest possible codes one can study with this method.
### Surface code
Kitaev's Surface code. Recall from [1] that the tensor network for the surface code encoding map, Fig. 7 (left), is one where each tensor is a \([[5,1,2]]\) code and the boundaries are contracted with \(|0\rangle,|+\rangle\) states (red and blue triangles). The upward pointing dangling legs denote the logical inputs and downward pointing legs denote physical qubits; the encoding map therefore has a non-trivial kernel, and a physical qubit sits on each node. For each lego block, we construct its tensor enumerator and contract them column by column to generate the entire network, Fig. 7 (right). For example, the quantum weight enumerators of a \([[181,1,10]]\) surface code are
\[A(z) =1+36z^{3}+180z^{4}+136z^{5}+1344z^{6}\] \[\quad+7084z^{7}+24001z^{8}+60432z^{9}\] \[\quad+286748z^{10}+\ldots\] \[B(z) =1+36z^{3}+180z^{4}+136z^{5}+1344z^{6}\] \[\quad+7084z^{7}+24001z^{8}+60432z^{9}\] \[\quad+286768z^{10}+\ldots,\]
where we count only 20 representations of non-trivial logical operators at weight 10.
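Given truncated coefficient lists like the ones above, reading off the adversarial distance is a one-liner: it is the first weight at which the \(B\) coefficients exceed the \(A\) coefficients. A small sketch (Python) using the quoted \([[181,1,10]]\) coefficients:

```python
def adversarial_distance(A_coeffs, B_coeffs):
    """Return the smallest weight d at which the normalized enumerator
    coefficients deviate, i.e. B_d > A_d; this is the adversarial distance."""
    for d, (a, b) in enumerate(zip(A_coeffs, B_coeffs)):
        if b > a:
            return d
    return None  # no deviation seen up to the truncation weight

# Truncated coefficients of the [[181,1,10]] surface code quoted above
A = [1, 0, 0, 36, 180, 136, 1344, 7084, 24001, 60432, 286748]
B = [1, 0, 0, 36, 180, 136, 1344, 7084, 24001, 60432, 286768]
print(adversarial_distance(A, B))   # -> 10
```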
Using a similar network, we can also find the coset weight distribution. Suppose a Pauli error acts on physical qubits in the form of Fig. 8 (left). Note that we do not contract the Pauli errors into the encoding tensor network when defining the encoding map; if we actually contract the Pauli errors onto the physical legs in the tensor network construction then obtain enumerators from those networks, it would correspond to finding the stabilizer weight distribution for surface codes that have extra minus signs on certain generators. To build the coset enumerator, we swap out the original tensors in Fig. 7 (right) for the proper coset tensor weight enumerator of each error node (red). The modified tensor network then computes the weight distribution of coset elements. For example, the coset distribution for a single \(X\) error at the bottom left corner for a \([[113,1,8]]\) surface code, is
\[A^{s_{\text{Xbl}}}(z)=z+z^{2}+2z^{3}+31z^{4}+146z^{5}+284z^{6}\] \[\quad+1258z^{7}+5180z^{8}+17627z^{9}+\ldots\]
These exercises can be easily repeated for the double and complete weight enumerators where the weights are counted differently. For example, see Fig. 3 of [12] and Fig. 10.
Rotated Surface code. In practice, it is easier to deal with the rotated surface code as the distance scaling is better by a constant factor for a similar value of \(n\), Fig. 9. Note that one only has to modify the boundary conditions compared to the original surface code. The rotated surface code tensor network is also easier to contract exactly. For reference, the enumerator for the \([[256,1,16]]\) rotated surface code at \(d=16\) can be computed on a laptop with a run time of \(\approx 20\) minutes. The weight enumerators for this code are
Figure 8: The coset enumerator of a particular error string that acts trivially on some qubits.
Figure 7: A surface code and the tensor network of its weight enumerator.
\[A(z) =1+30z^{2}+776z^{4}+15538z^{6}+276801z^{8}+4431408z^{10}+65676619z^{12}\] \[+912021486z^{14}+12003931907z^{16}+150911390280z^{18}+\ldots\] \[B(z) =1+30z^{2}+776z^{4}+15538z^{6}+276801z^{8}+4431408z^{10}+65676619z^ {12}\] \[+912021486z^{14}+12004980483z^{16}+150970896992z^{18}+\ldots\]
Indeed, we see that the two coefficients start deviating at \(d=16\).
One can also obtain an error detection threshold by assuming a decoder that performs no active error correction, but discards all instances that return a non-trivial syndrome assuming perfect measurements. Recall (Remark 3.1) that this threshold is at \(p=1/6\approx 16.67\%\), which is quite similar to the code-capacity thresholds [38] across various decoders under depolarizing noise.
Local Clifford deformations. We can perform local modifications [11] on each tensor to perturb the (rotated) surface code. These are represented by the circle tensors that act on each qubit. For the vanilla surface code, these tensors are trivial (identity). However, we may choose them at will. For instance, if they are random single qubit Clifford operators, then the tensor network reproduces the Clifford-deformed surface codes [23]. Similarly, choosing every other tensor to be a Hadamard, one arrives at the XZZX code [39].
Because the Shor-Laflamme enumerator is invariant under local unitary deformations, it is clear that the logical error probabilities of such locally deformed codes would be identical under unbiased noise. However, this local unitary invariance is broken when we consider more general enumerators with other weight functions, which indicates that their performances under biased noise differ. In Fig. 11, we see that the de-randomized Clifford deformed code (right) has fewer logical operators that have low \(Z\) weight, which is to be contrasted with the rotated surface code (left) and the XZZX code (middle). We use a de-randomized Clifford deformed code like the one shown in Fig. 9 (right) where yellow and white dots indicate local \(HSH\) and \(H\) rotations [37]. More general dimensions of the code follow from repeating the local patterns on the \(3\times 3\) blocks (enclosed by dashed lines) periodically.
For example, using the double enumerators, we contrast the performance of the XZZX code and the derandomized Clifford deformed code, Fig. 12, under biased noise with \(p=p_{X}+p_{Y}+p_{Z}\) and \(p_{X}=p_{Y}=p_{Z}/(2\eta)\). It is clear from the normalized uncorrectable error rate (and hence effective distances) that the Clifford deformed construction vastly outperforms the XZZX at high bias and small \(p\). Note that the weight function for these double enumerators is slightly different from the one used in App. A or [9] because it enumerates the \(X,Y\) weight separately from the \(Z\) weights.
Figure 9: Tensor network of a rotated surface code where the legos are identical to those of the surface code. Only the boundary conditions are modified. One can also modify each tensor by contracting some other single qubit gate/tensor. The checks are given on the right where qubits (vertices) adjacent to red regions indicate \(Z\) checks and blue indicate \(X\) checks. For the derandomized local Clifford deformed code [37], white and yellow dots indicate local \(HSH\) and \(H\) deformations respectively.
Coherent error. General quantum errors are not limited to random Pauli noise, which is somewhat "classical". Here we compute the coherent error probability of the rotated surface code using techniques introduced earlier.
Efficient methods for computing unitary rotations along \(X\) or \(Z\) have been introduced by [21] using a Majorana fermion mapping. Here we instead consider i.i.d. coherent error of the form \(U=\exp(itY)=\cos(t)I+i\sin(t)Y\). Note that the normalized logical error rate differs for codes with even or odd \(X\) and \(Z\) distances because the abundance of \(Y\)-only operators differs for these codes, Fig. 13 (left).
When \(d_{x},d_{z}\) are odd, the normalized logical error rate under coherent noise with rotation angle \(t\) coincides with that under the \(Y\)-only Pauli noise with probability \(p_{Y}=\sin^{2}(t)\). This is because at odd distances, the only \(Y\) type logical operator acts globally on the system. When at least one of \(d_{X}\) or \(d_{Z}\) is even, then the coherent noise yields slightly higher logical error probability, Fig. 13 (right). However this only incurs a small correction with a similar order of magnitude, consistent with earlier results but in different settings [21]. A similar result holds for the XZZX code with coherent noise of \(Y\)-only rotations because up to a phase, \(Y\) is invariant under Hadamard conjugation.
Although coherent noise with \(Z\)- or \(X\)-only rotations produces very different logical error profiles than those produced by \(Z\)- or \(X\)-only Pauli noise in the rotated surface code, there exist XZZX codes where their impacts are identical. For instance, for the system sizes tested, the effects of such coherent errors and Pauli errors coincide when we have a square lattice where the width is equal to the height. It also holds for some rectangular lattices, though not all. The reason is similar to before: there is a sole logical operator consisting of only \(I\) and \(X\) (or \(Z\)), but the operator need not act globally. This may be due to special symmetries of the XZZX code, which indicates that local deformations can be tuned to reduce the impact of coherent noise. Though it is also likely that such symmetries are restricted to the \(s=0\) sector. We leave a more systematic characterization of such behaviours to future work.
### 2D color code
We first provide a novel tensor network construction for the hexagonal 2d color code, which is a self-dual CSS code constructed entirely from Steane codes, Fig. 14. The class of such tensor networks constructs a family of \([[3\ell(\ell+1)+1,1,2\ell+1]]\) codes. Similar color codes with hexagonal plaquettes can also be constructed by following the same contraction pattern in the bulk and imposing different boundary conditions. Just like the surface code construction, this tensor network represents an encoding map with a non-trivial kernel4. One can similarly construct a codeword of this code, e.g. \(|\bar{0}\rangle\) by contracting all the dangling logical legs with \(|0\rangle\). Recall that each Steane code can be built from contracting two \([[4,2,2]]\) legos, which was used to construct the surface code. As such, this tensor network can indeed be construed as a double copy [40] of the surface code in some sense.
Footnote 4: A previous tensor network construction of the \([[19,1,5]]\) color code can be found in [18], which requires both the \([[7,1,3]]\) codes and \([[9,0,3]]\) stabilizer states as building blocks. However, the protocol does not generalize to \(d>5\) due to concavity of the polygonal region.
Each tensor in the left figure is a Steane code where the logical leg is suppressed. For the remaining 7 physical legs, 6 are drawn in-plane while the remaining one is represented as a dot that corresponds to a physical qubit in the color code.
Figure 10: Double enumerator of a 4 by 8 surface code at \(n=53\). Plotting log of operator weight distribution for non-trivial logical operators. A relatively small code is chosen for clarity in the figure.
Each stabilizer generator that acts on the plaquette of the \([[7,1,3]]\) code is mapped to a stabilizer that acts on the four physical legs adjacent to a colored quadrilateral in the tensor description. Given this QL construction, its enumerator can be computed using the same method. For example, the enumerators for a \([[91,1,11]]\) code are
\[A(z) =1+54z^{4}+297z^{6}+2889z^{8}+24258z^{10}+197493z^{12}+1629738z^{14} +13287999z^{16}\] \[+108647952z^{18}+\ldots\] \[B(z) =1+54z^{4}+297z^{6}+2889z^{8}+24258z^{10}+4176z^{11}+197493z^{12} +67242z^{13}+1629738z^{14}\] \[+1066740z^{15}+13287999z^{16}+14401674z^{17}+108647952z^{18}+\ldots\]
We see that the two coefficients start deviating at \(d=11\), thus verifying its adversarial distance. The computation time is only tens of seconds, but a better encoding is needed to avoid unnecessary allocation of memory space for \(0\)s in the sparse matrix. Also note the cancellation between \(A\) and \(B\) at even weights.
As we discussed in Remark 3.1, these codes admit a common error detection threshold at \(p=1/6\) (Fig. 15) thanks to the MacWilliams identity, which is close to the known code capacity threshold.
### Holographic code
To demonstrate the usefulness of mixed enumerators, we now look at a class of finite rate holographic codes [29], also known as the HaPPY (pentagon) code, originally conceived as a toy model of the AdS/CFT correspondence.
Figure 11: Truncated \(X,Z\) weight distribution of non-trivial logical operators for the \(9\times 9\) surface code (left), XZZX code (middle), and the Clifford deformed code (right). Horizontal axis: \(X\)-weight, vertical axis: \(Z\)-weight. Note that non-zero weights are invisible in this scale.
Figure 12: The ratio between the XZZX code normalized uncorrectable error rate \(p_{XZZX}\) and that of the Clifford deformed code \(p_{CD}\) as a function of physical error parameter \(p\) at different biases \(\eta\) for \(d=7\).
Figure 14: A \([[37,1,7]]\) 2d color code (left) tensor network construction where (right) its stabilizer generators are all \(X\) or all \(Z\) operators acting on the vertices of each colored plaquette.
Figure 13: Left: normalized logical error rate as a function of the rotation angle \(t\) for codes with size \(n=d_{x}\times d_{z}\). Right: differences in normalized logical error rates \(\Delta p_{L}=p_{L}^{\rm coherent}-p_{L}^{Y\ \rm only}\).
Different versions of this code have been proposed in various contexts [25; 31] where preliminary studies have examined some of its behaviours under erasure errors and symmetric depolarizing noise. However, the application of such codes in quantum error correction is far less understood compared to the surface code. Here we analyze the HaPPY code as a useful benchmark using our mixed weight enumerator technology and present some novel results.
This code can be constructed purely from \([[5,1,3]]\) legos. It is known that, as a stabilizer code, it has an adversarial distance of 3 regardless of \(n\) because of the bulk qubits that are close to the boundary. However, from AdS/CFT, we expect the logical qubits deeper in the bulk to be better protected and hence to have different "distances". We can analyze the distances of these bulk qubits in different ways.
First, as a stabilizer code, we define the _stabilizer distance_ \(d_{S}\) of each bulk qubit as the minimal weight over all stabilizer-equivalent non-identity logical operators that act on a bulk leg/qubit [25]. To enumerate such operators, we can build a mixed enumerator by contracting a \(B\)-type tensor enumerator associated with the bulk tile that contains the logical qubit for which we compute the distance, with \(A\)-type tensor enumerators on the other tiles. Subtracting the enumerator polynomial \(A(\mathbf{u})\) of the stabilizers, we then obtain a distribution for all the non-identity logical operators acting on that bulk qubit, Fig. 5 (top right).
One can also define the word distance of this code, as in [25], where it is simply the distance of the resulting subsystem code if we isolate one bulk qubit as the logical qubit and the rest as gauge qubits. To compute the word distance, we construct an \(\tilde{A}(\mathbf{u})\) enumerator by contracting \(A\)-type tensor enumerator on the central tile with \(B\)-type tensor enumerator on the rest of the network. This enumerates the logical identities in the gauge code. Then subtracting it from the scalar \(B(\mathbf{u})\) enumerator of the whole code yields the distribution of all gauge equivalent non-trivial logical operators, Fig. 5 (bottom right).
For each code of a fixed size \(n\), we then repeat this for bulk qubits at different radii from the center of the graph measured in graph distance. An explicit labelling of the qubits we study is shown in Figure 5 left5. We give a summary for \(n=25\) and \(n=85\) in Table 2, where \(\mathcal{N}_{S},\mathcal{N}_{W}\) denote the number of minimal weight stabilizer or gauge equivalent representations of the non-identity logical operators.
Footnote 5: Note that this radius is different from that in [25], where only the central bulk qubit is singled out and its distances are computed with respect to codes of different \(n\)s.
Although the stabilizer distance decreases as a function of radius, the word distance is more or less constant with respect to the radius. This is a particular consequence of the tiling and the legos, such that erasure of 4 certain boundary qubits can lead to the erasure of the innermost bulk qubit [29].
Figure 16: \(\Delta p_{L}=p_{L}^{r=0}-p_{L}^{r=3}\) as a function of \(p_{x},p_{z}\) the bit flip and phase error probabilities. The blue translucent plane marks \(\Delta p_{L}=0\), below which the bulk qubit at \(r=0\) provides better protection.
Figure 15: Error-detection thresholds coincide for the 2d color code (CC) and the surface code (RS). Zoomed out plot on the corner shows the error probability in a greater range. Only two distinct distances are shown in the plot, since other distances cross at the same value.
Under depolarizing noise with probability \(p\), the normalized uncorrectable error probability \(p_{L}\) is shown in Fig. 17 (left). We see that the central bulk qubit in fact suffers from more errors because it has a greater number of minimal weight equivalent representations despite having the same word distance as most other bulk qubits. We see a crossing because the outermost bulk qubit has a slightly lower distance compared to the rest.
Despite the constant word distance as a function of system size for logical qubits that are deep in the bulk, and presumably the lack of an erasure threshold for the central bulk qubit6, a larger \(n\) does hint at a greater degree of error suppression. Letting \(\Delta p_{L}=p_{L}(n=85)-p_{L}(n=25)\), we see that the error rate for the innermost bulk qubit is slightly suppressed at small \(p\), while the opposite holds for the outermost bulk qubit. Intuitively, this is expected for general holographic codes as their construction is a slight generalization of code concatenation. As such, a crossing is expected, where adding more layers of code would generally lead to noisier bulk qubits in the deep IR when the physical error rates are sufficiently large, while the opposite happens for the logical qubits in the UV. A more in-depth analysis of other holographic codes with varying word distances would be interesting future work.
Footnote 6: This result assumes a particular decoder applied to small sized systems using Monte Carlo methods. It is possible that a different asymptotic behaviour can emerge with larger codes and greater accuracy.
Let us also briefly examine its properties under biased noise using the double enumerator. The asymmetric distances \(d^{X}/d^{Z}\) are recorded in Table 2. The XZ weight distribution is not symmetric, but the normalized logical error probability is fairly symmetric with respect to \(p_{X},p_{Z}\). Here we compare the logical error probability \(\Delta p_{L}=p_{L}^{r=0}-p_{L}^{r=3}\) for the \(n=85\) code, Fig. 16. Like the symmetric depolarizing noise, the bulk qubit deeper in the bulk provides slightly better protection for the encoded information, but becomes noisier at higher physical error rates. However, the bulk qubit at \(r=0\) does not provide better protection compared to the bulk qubits close to the boundary for any noise parameter in the heavily biased regime.
### 2d Bacon-Shor code
For another example of a subsystem code, we study the 2d Bacon-Shor code. The tensor network for this code is identical to that of the surface code except that we designate the physical legs on every other row as gauge legs; see Appendix G.4 of [11]. It is conceptually convenient to think of these blocks as \([[4,2,2]]\) stabilizer codes or \([[4,1,2]]\) Bacon-Shor codes. As a subsystem code, it is most relevant to obtain its word distance. To that end we construct its mixed enumerator \(I(z)\) for the logical identity. The enumerator for the non-trivial logical operators (non-identity logical operators multiplying any element of the gauge group) is \(C(z)=B(z)-I(z)\).
Figure 17: Left: logical error probability of bulk qubits at different radii for a \([[85,41,3]]\) HaPPY code. Right: The difference between logical error rates for two HaPPY codes of radii 3 and 2. At higher \(n\), the innermost bulk qubit has lower logical error rate while that for the outermost is higher for sufficiently low physical error rate \(p\). The opposite is true at higher \(p\).
It is most convenient to express these enumerators graphically, Fig. 18.
Computing \(B(z)\) is relatively straightforward, as we build it by contracting all \(\mathbf{B}(z)\) of the \([[5,1,2]]\) and \([[4,2,2]]\) codes in the tensor network and then renormalize \(B_{0}\) to \(1\). Practically, we compute \(A(z)\) by contracting all the \(\mathbf{A}(z)\) of these tensors and then perform a MacWilliams transform. However \(I(z)\) requires extra care, as we need to place \(\mathbf{B}(z)\) on the odd-numbered rows for the regular \([[5,1,2]]\) codes and \(\mathbf{A}^{\prime}(z)\) for the \([[4,1,2]]\) Bacon-Shor codes on the even rows. Although these tensors in the encoding map are identical, the downward pointing legs in the \([[5,1,2]]\) code now map to gauge legs in the \([[4,1,2]]\) code. Therefore we must account for their weight distributions appropriately. It can be checked that the logical legs on the odd rows and columns are correlated with the logical legs on the even rows and columns. Therefore, they only contribute to an overall normalization.
The above computations can also be easily generalized to double and complete enumerators for the Bacon-Shor code. For example, the \(X,Z\) weight distributions of all non-trivial logical Pauli operator representations in this subsystem code are shown in Fig. 19 for 2d Bacon-Shor codes of different sizes.
Note that it has a very different structure from the surface code operator weight distribution, a likely consequence of the even weight gauge generators.
#### 5.4.1 2d Compass code
Now we examine different instances of gauge fixed Bacon-Shor codes. For an \(\ell\times\ell^{\prime}\) Bacon-Shor code, let us fix the XX gauge by promoting \((\ell-1)(\ell^{\prime}-1)\) weight-\(2\) \(X\)-type gauge operators to stabilizer generators. This yields a stabilizer group with \((\ell-1)+(\ell^{\prime}-1)+(\ell-1)(\ell^{\prime}-1)=\ell+\ell^{\prime}-2+\ell\ell^{\prime}-\ell-\ell^{\prime}+1=\ell\ell^{\prime}-1\) generators, which is a \([[\ell\ell^{\prime},1,\min(\ell,\ell^{\prime})]]\) stabilizer code.
The tensor network for this gauge can be built from the tensors of two different repetition codes
\[W_{R} =|00\rangle\langle 0|+|11\rangle\langle 1|\] \[W_{B} =(|00\rangle+|11\rangle)\langle 0|/\sqrt{2}+(|10\rangle+|01 \rangle)\langle 1|/\sqrt{2}.\]
The code defined by \(W_{R}\) has stabilizer \(ZZ\), and \(\bar{X}=XX,\bar{Z}=IZ\) and the one with \(W_{B}\) has \(X\leftrightarrow Z\) with stabilizer \(XX\), \(\bar{X}=IX,\bar{Z}=ZZ\). Their tensors are colored red and blue respectively. The output legs of the tensors are connected into a ring while leaving the inputs dangling. This constructs tensors in the Tree Tensor Network, Fig. 20. Each of the bigger red nodes corresponds to a stabilizer state with stabilizer group \(\langle\text{all }X,\text{even weight }ZZ\rangle\). The same holds for the big blue nodes but with \(X\leftrightarrow Z\).
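For concreteness, the two repetition-code tensors can be written down directly as small numpy arrays and checked against the stabilizers quoted above (a standalone sanity check, not the contraction code itself):

```python
import numpy as np

s2 = 1 / np.sqrt(2)

# W_R = |00><0| + |11><1| : encoder of the ZZ-stabilized repetition code.
W_R = np.zeros((2, 2, 2))          # index order: (out1, out2, in)
W_R[0, 0, 0] = W_R[1, 1, 1] = 1.0

# W_B = (|00>+|11>)<0|/sqrt(2) + (|10>+|01>)<1|/sqrt(2) : XX-stabilized version.
W_B = np.zeros((2, 2, 2))
W_B[0, 0, 0] = W_B[1, 1, 0] = s2
W_B[1, 0, 1] = W_B[0, 1, 1] = s2

# Sanity check: the image of W_R is stabilized by ZZ and that of W_B by XX.
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
ZZ_WR = np.einsum('ai,bj,ijk->abk', Z, Z, W_R)
XX_WB = np.einsum('ai,bj,ijk->abk', X, X, W_B)
assert np.allclose(ZZ_WR, W_R) and np.allclose(XX_WB, W_B)
```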
Although the code has \(d\sim\sqrt{n}\), the entanglement for some subsystems of size \(\sim d\) can be much weaker. This allows us to write down a more efficiently contractible tensor network by taking advantage of these low entanglement cuts7. The total time complexity for obtaining the enumerator is thus \(O(\ell+\ell^{\prime})\sim O(d)\) if \(\ell\approx\ell^{\prime}\).
Footnote 7: Note that this speed up would not be possible for non-degenerate codes, as all subsystems of size \(d\) have the same entanglement \(\sim d\).
By fixing the gauges in other ways, one produces a class of codes known as the 2d compass codes [41], which includes a gauge that reproduces the surface code as well as the \(XX\) (or \(ZZ\)) gauge we examined.
\([[25,11,3]]\):

| r | \(d_{S}\) | \(\mathcal{N}_{S}\) | \(d_{W}\) | \(\mathcal{N}_{W}\) | \(d_{S}^{X}/d_{S}^{Z}\) | \(d_{W}^{X}/d_{W}^{Z}\) |
|---|---|---|---|---|---|---|
| 0 | 9 | 30 | 4 | 60 | 5/5 | 2/2 |
| 1 | 5 | 6 | 4 | 54 | 3/3 | 2/2 |
| 2 | 3 | 3 | 3 | 3 | 1/2 | 1/2 |
| 3 | n/a | n/a | n/a | n/a | n/a | n/a |

\([[85,41,3]]\):

| r | \(d_{S}\) | \(\mathcal{N}_{S}\) | \(d_{W}\) | \(\mathcal{N}_{W}\) | \(d_{S}^{X}/d_{S}^{Z}\) | \(d_{W}^{X}/d_{W}^{Z}\) |
|---|---|---|---|---|---|---|
| 0 | 23 | 240 | 4 | 60 | 13/13 | 2/2 |
| 1 | 13 | 48 | 4 | 36 | 7/7 | 2/2 |
| 2 | 9 | 12 | 4 | 24 | 5/5 | 2/2 |
| 3 | 3 | 12 | 3 | 12 | 1/2 | 1/2 |
Table 2: Tabulated stabilizer distances \(d_{S}\) and word distances \(d_{W}\) for two HaPPY pentagon codes at different sizes. \(\mathcal{N}_{S},\mathcal{N}_{W}\) denote the number of minimal weight stabilizer equivalent and gauge equivalent representations of non-trivial logical operators, respectively. We also provide the corresponding asymmetric stabilizer and word distances sorted by \(X\) and \(Z\) weights. Radial distance \(r\) is the graph distance of the bulk qubit from the central tile for a code of fixed \(n\). The qubits we studied are labelled according to Figure 5.
Coincidentally, these are also the two gauges of the Bacon-Shor code with the highest \((O(n\exp(\sqrt{n})))\) and lowest \((O(\sqrt{n}))\) computational cost respectively. The entanglement structure of the underlying quantum state generally depends on the gauge choice. While the above speed up is not surprising, as the example can be built from code concatenation, we can estimate how the cost would scale for other patterns of gauge fixing that are everywhere in between, provided we have tensor networks whose connectivity captures the entanglement structure. Intuitively, we can roughly understand the speed up as a statement about entangled clusters. When the code is fixed in the pure XX gauge, for instance, there is little entanglement across the columns or rows of the code. If we now introduce gauge fixing such that \(ZZ\) stabilizers can occur with some non-vanishing fraction, this introduces more entanglement across these clusters and the resulting tensor network minimal cut now has to cut through these additional bridges of entanglement. Generally, we then expect the complexity to scale exponentially with the width of these bridges, or the minimal cuts that separate these clusters. In the extreme case of the surface code, the bridges are of size \(\sqrt{n}\), and in the pure XX or ZZ gauge, the bridge is of size \(O(1)\). By slowly deforming from the \(XX\) or \(ZZ\) gauge, one may also explore the intermediate regime of complexities \(O(\sqrt{n})\to poly(n)\to O(\exp(\sqrt{n}))\)8. A more comprehensive study of this complexity transition and gauge fixing can be an interesting subject for future exploration.
Footnote 8: In this way, the \(\sim\exp(d)\) cost of computing the Bacon-Shor weight enumerator is not surprising as the unfixed tensor network encompasses all 2d compass code configurations.
Figure 19: Plotting \(\log(C_{w_{x},w_{z}})\) in log scale, where \(X\) and \(Z\) weights are labelled by the vertical and horizontal axes respectively.
Figure 18: Distribution of non-identity logical operators in the 2d Bacon-Shor code, where blue tensors indicate \(\mathbf{A}(z)\) of the \([[5,1,2]]\) code (odd columns) and \([[4,2,2]]\) codes (even columns). Green tensors are \(\mathbf{A}^{\prime}(z)\) of the \([[4,1,2]]\) subsystem code while orange tensors are \(\mathbf{B}(z)\) of the \([[5,1,2]]\) codes.
## 6 Discussion
In this work, we generalize the existing weight enumerator formalism to study cosets, subsystem codes, and all single qubit error channels. In conjunction with tensor networks, we extend their applications in quantum error correction. We show that weight enumerators can be computed more efficiently using tensor network methods once a QL construction of the code is known. The complexity can vary depending on the tensor network connectivity, and is dominated by the cost of tensor contractions. Assuming a QL construction can be found that faithfully reflects the entanglement structure of the code words, the cost for finding their enumerator is \(\sim O(\exp(d))\) for non-degenerate codes and up to exponentially faster for degenerate codes. As a novel distance-finding protocol, our proposal constitutes the only currently available algorithm, and hence the best one, for finding the distance beyond stabilizer codes. In the case of Pauli stabilizer codes, it provides comparable performance for non-degenerate codes, and up to exponential speed up for degenerate codes.
Using the generalized coset enumerators, we also construct (optimal) decoders based on weight enumerators for all codes and all i.i.d. single qubit error channels. As a corollary, this improves the simulation accuracy when estimating fault-tolerant thresholds if used in conjunction with existing methods. Since QL includes all quantum codes, and thus stabilizer codes, the enumerator method can also be understood as a generalization of tensor network decoders. Finally, we applied our method numerically to codes with sizes of order \(\sim 100\) qubits, showing that it is practical to study codes of relevant sizes in near-to-intermediate term devices. We also provide novel analyses of the surface code, color code, holographic code and the Bacon-Shor code using exact analytical expressions. These include their full operator weight distributions and certain code performance under coherent or biased noise. For the holographic code, we also present new results on asymmetric distances and the varied behaviour of different bulk qubits under (biased) Pauli errors.
### Connection with stat mech mapping
We comment on the connection between optimal decoding and distance from the point of view of the statistical mechanical mapping and weight enumerators. Recall that the coset weight enumerator polynomial \(A(\bar{E},\mathbf{u})\) of \(E\) captures the weight distribution of all operators that are stabilizer equivalent to \(E\). By plugging in the corresponding coefficients \(\mathbf{k}\) from decomposing the error channels, one obtains the probability of incurring any errors that are equivalent to \(E\). This is nothing but the partition function \(Z_{E}\) obtained by solving the stat mech mapping [42] associated with a noise model that satisfies the Nishimori condition for all parameters \(\beta J_{i}\), where \(\beta\) is the inverse temperature and \(\{J_{i}\}\) are the coupling strengths of the model.
Conversely, if the error probability from the stat mech model can be obtained exactly, then it must agree with \(A(\bar{E},\mathbf{u})\) in some domain that is a connected region near the origin. Indeed, if two polynomials \(f,g\) agree at an infinite number of points, then \(f-g\) must have an infinite number of roots. This cannot happen for any non-trivial \(f-g\) because the degree \(deg(f-g)\leq\max(deg(f),deg(g))\) is bounded. This implies that the solution \(Z_{E}=A(\bar{E},\mathbf{u})\) must be unique. Therefore, by solving the stat mech model and obtaining its partition function for different values of \(\beta J_{i}\), we must also have sufficient information to uniquely fix the enumerator polynomial. For example, for symmetric Pauli noise, one can in principle fix the coefficients of the polynomial by
Figure 20: Tree tensor network for a \(m\times n\) Bacon-Shor code in the XX gauge. Some stabilizers are shown via operator pushing. The tensors are obtained from repetition code encoding maps.
computing the values of \(Z_{E}(\beta)\) at different temperatures. As there are only finitely many coefficients for \(A^{s}\), one can solve an overconstrained system of equations with integer solutions.
In practice, however, the expression for \(Z_{E}=\Pr(\bar{E})\) in the stat mech model is often obtained numerically. Therefore, unless \(P=NP\) (or \(NP=RP\)), the reverse process going from the stat mech output to the enumerator can generally only be trusted to produce the correct results when the values of \(\Pr(\bar{E})\) are known to exponential accuracy. This is expected, because otherwise one could solve the minimal distance problem approximately [43] using the stat mech model in polynomial time with approximate tensor contraction. In instances where the partition functions can be (or have been) obtained with relatively high accuracy such that the cost is less expensive compared to the current enumerator method, one can also acquire polynomially many values of the partition functions at different coupling strengths. One can then fit the coefficients of the enumerator polynomial to these data points. This allows us to derive (an approximation of) the enumerator and thus also extrapolate the error probabilities to other regimes instead of evaluating those points individually using the stat mech model.
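To make this last fitting step concrete, here is a minimal numpy sketch; the polynomial (the \(A\)-type enumerator of the \([[5,1,3]]\) code), the sample points, and the noise level are our own illustrative stand-ins for the stat-mech outputs, not quantities taken from the text.

```python
import numpy as np

# Toy stand-in for an unknown enumerator: A(z) = 1 + 15 z^4 (the A-type
# polynomial of the [[5,1,3]] code), coefficients ordered A_0..A_5.
true_coeffs = np.array([1.0, 0.0, 0.0, 0.0, 15.0, 0.0])
n = 5

# Hypothetical "partition function" outputs: values of the polynomial at a
# handful of coupling strengths, with small noise standing in for numerical error.
rng = np.random.default_rng(0)
zs = np.linspace(0.1, 1.0, 12)
samples = np.polyval(true_coeffs[::-1], zs) + rng.normal(scale=1e-6, size=zs.size)

# Fit the n+1 coefficients by least squares on a Vandermonde system; since the
# true coefficients are integers, rounding cleans up the residual noise.
V = np.vander(zs, n + 1, increasing=True)
fit, *_ = np.linalg.lstsq(V, samples, rcond=None)
print(np.rint(fit))   # -> [ 1.  0.  0.  0. 15.  0.]
```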
### Tensor networks from circuits
Our primary tool for speed up comes from the QL description of the code; however, constructing such a description may not always be easy. If we are given the encoding circuit of a code, then one can convert the gates in the circuit into a tensor network. In the context of stabilizer codes, it is often convenient to be given a check matrix. From there, one can easily derive the Clifford encoding circuit with [44].
We then turn the circuit elements into tensors of the tensor network; this is simply the contraction of tensors of Clifford gates and product \(|0\rangle\) states. For Clifford gates, the states dual to these tensors are stabilizer states. For instance, concatenated codes naturally yield a log-depth tree tensor network. In general, the connectivity of a subregion of the network can scale linearly in the number of gates/tensors inside the region. For a 1d (spatially) local circuit, it is in principle possible to cut through the network in the time direction. The edge cuts are upper bounded by the circuit depth \(T\), and hence each contraction is no costlier than \(O(\exp(T))\). For log-depth circuits, this clearly yields an exponential speed up. For \(d\) spatial dimensions, the number of edge cuts is upper bounded by the surface area of the space-time region \(\ell^{d-1}T\) where \(\ell^{d}\lesssim n\). Therefore the upper bound for the cost of each contraction is \(O(\exp(n^{1-1/d}T))\). This can still lead to a sub-exponential speedup as long as \(T\lesssim n^{1/d-\epsilon}\) asymptotically. Very generally, we do not expect the tensor networks constructed from circuits to be optimal, i.e., to minimize the edge cuts for a given subregion. Therefore it is still of interest to identify an efficient recipe for building such networks for stabilizer codes, potentially in conjunction with compilation tools such as ZX calculus. A simple procedure of such a circuit-to-tensor-network conversion is constructed in Fig. 21 for all graph states as an example.
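As a small illustration of the graph-state recipe in Fig. 21, the sketch below (our own toy example, not code from this work) contracts Z-spider and Hadamard tensors for a three-vertex path graph and checks the result against the circuit definition \(CZ_{12}CZ_{23}|+++\rangle\), up to an overall normalization.

```python
import numpy as np

# Z-spider (GHZ) tensor with k legs: equals 1 iff all indices agree.
def z_spider(k):
    T = np.zeros((2,) * k)
    T[(0,) * k] = 1.0
    T[(1,) * k] = 1.0
    return T

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# Path graph 1-2-3: one spider per vertex (first leg = physical qubit),
# one Hadamard per edge, as in Fig. 21.
v1, v2, v3 = z_spider(2), z_spider(3), z_spider(2)
psi_tn = np.einsum('ab,bc,dce,ef,gf->adg', v1, H, v2, H, v3)

# Direct definition of the same graph state: CZ_12 CZ_23 |+++>.
psi = np.ones(8) / np.sqrt(8)
for i, j in [(0, 1), (1, 2)]:
    for idx in range(8):
        bits = [(idx >> (2 - b)) & 1 for b in range(3)]
        if bits[i] and bits[j]:
            psi[idx] *= -1.0
psi = psi.reshape(2, 2, 2)

ratio = psi_tn / psi
print(np.allclose(ratio, ratio.flat[0]))   # True: the two states agree up to normalization
```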
### Future directions
Recently, it was shown that asymptotically good quantum Low-Density-Parity-Check (LDPC) codes like [45] have a circuit depth lower bound that is \(\log n\). Since these codes are highly degenerate and may sustain linear distances even
Figure 21: Any graph state (yellow) can be converted to a tensor network using its encoding circuit constructed from \(CZ\)s acting on \(|+\rangle\)s. The tensor network consists of the GHZ tensors (red), Hadamard tensors (H) and contraction with \(|+\rangle\) (blue triangles). Note that multiple GHZ tensors, which are also Z-spiders, can be merged to create a larger Z-spider.
with a much lower entanglement along some cuts, it is possible that a good tensor network description, if found, may lead to a more efficient distance verification protocol for codes whose code words saturate the entanglement lower bound. However, we also note that small sized examples, their tensor network descriptions, and a tight entanglement lower bound are still open problems as of the time of writing, so the advantage our method provides remains only a theoretical possibility9. Therefore, a general QL recipe for building qLDPC codes would be useful.
Footnote 9: The expansion property of these codes may naively indicate that the edge cut scales with the volume. However, as these are not the corresponding tensor networks, and edge cuts are only upper bounds on entanglement, it may be possible that a sparser tensor network can be found that permits fewer cuts.
As weight enumerators are applicable to non-(Pauli)-stabilizer codes, they can be used to study or search for such codes while providing crucial information on their distances. This would extend the examples in this work beyond stabilizer codes and would also have relevant applications in optimization-based methods that need not produce stabilizer codes [22]. For example, XS or XP codes [46, 47] do not have abelian stabilizer groups and currently lack a protocol for computing their code distances. However, for general codes, reduced enumerators are likely insufficient, and a higher bond dimension will be needed.
Note that beyond the QECC literature, Shor-Laflamme enumerators, also known as sector lengths in graph states [48, 49], have been used to study the structure as well as the robustness of entanglement in entangled resource states. Therefore, we expect an even wider utility of our method in the context of fault-tolerant resource state preparation for measurement/fusion-based quantum computations and quantum networks.
Tensor enumerator methods are also useful when used in conjunction with machine learning (especially reinforcement learning)-based methods for QECC search [24, 50]. As one would typically need to evaluate certain code properties, such as distance, that are resource intensive, the tensor enumerator method can be used to drastically decrease the time needed to evaluate the cost function. It is also of interest to study the effect of approximate tensor contractions and how they impact the accuracy of the weight distribution and related distance information.
While we have treated all i.i.d. single qubit errors, the current formalism does not tackle location-based or correlated errors efficiently. For the former, a straightforward extension exists where one can either introduce an additional variable for each location that has an independent error pattern, or precontract the tensor with a fixed error parameter \(\{p_{i}\}\) instead of describing them as variables. The latter reduces to a more general tensor network decoder [51, 19, 52, 53]. In the same vein, further extension is needed to describe fault-tolerant processes, which are fundamentally dynamical. Therefore, an enumerator framework compatible with space-time quantum error correction, incorporating gadgets that include measurement errors, mid-circuit noise and POVMs, will be needed.
Finally, while enumerators were first defined in classical coding theory, one yet needs an efficient method to compute them for classical codes. Therefore, it is natural to extend the current QL-based approach to classical codes and compute their weight enumerator polynomials. Such tasks may be accomplished by directly applying the current formalism for classical codes and rephrasing them as quantum stabilizer codes with trivial generators, or devising a more efficient method that performs the analog of the trace or conjoining operation for classical codes.
## Acknowledgement
We thank Y.D. Li, D. Miller, G. Sommers, and Y.J. Zou for helpful discussions and comments on the manuscript. C.C. acknowledges the support by the U.S. Department of Defense and NIST through the Hartree Postdoctoral Fellowship at QuICS, the Air Force Office of Scientific Research (FA9550-19-1-0360), and the National Science Foundation (PHY-1733907). M.J.G. acknowledges support from the National Science Foundation (QLCI grant OMA-2120757). The Institute for Quantum Information and Matter is an NSF Physics Frontiers Center.
## Appendix A Common Scalar Enumerators
For completeness, we review a few examples below that we have used in this work.
### Shor-Laflamme weight enumerators
The original weight enumerators [54, 55] are important objects in classical coding theory. Their quantum counterparts were introduced by Shor and Laflamme [4], which capture some key properties of an error correcting code. They feature a duo of polynomials that take the forms of
\[A(z,w) =\sum_{d=0}^{n}A_{d}(M_{1},M_{2})z^{d}w^{n-d}\] \[B(z,w) =\sum_{d=0}^{n}B_{d}(M_{1},M_{2})z^{d}w^{n-d},\]
where
\[A_{d}(M_{1},M_{2}) =\sum_{E\in\mathcal{E}[d]}\operatorname{Tr}(EM_{1})\operatorname {Tr}(EM_{2}),\text{ and}\] \[B_{d}(M_{1},M_{2}) =\sum_{E\in\mathcal{E}[d]}\operatorname{Tr}(EM_{1}EM_{2})\]
for some Hermitian \(M_{1},M_{2}\) and \(\mathcal{E}[d]\) which denotes unitary errors of weight \(d\). Here without loss of generality we can simply choose the Pauli basis. Note that they are a special case of the abstract enumerator [12], and we may recover them by setting \(\mathbf{u}=(w,z)\) and
\[\operatorname{wt}(E)=\left\{\begin{array}{ll}(1,0)&\text{if }E=I\\ (0,1)&\text{otherwise}.\end{array}\right.\]
So that \(\mathbf{u}^{wt(E)}=w^{n-wt(E)}z^{wt(E)}\), where \(wt(E)\) is simply the operator weight of the Pauli string \(E\).
These polynomials are related by the MacWilliams identity
\[B(w,z)=A\left(\tfrac{w+(q^{2}-1)z}{q},\tfrac{w-z}{q}\right).\] (A.1)
Therefore, it is sufficient to obtain one of them, and perform MacWilliams transform to get the other. In practice, for a brute force algorithm, it is often easier to recover \(A(z,w)\).
Note that these polynomials are sometimes expressed in the inhomogeneous form where \(A(z)=A(w=1,z),B(z)=B(w=1,z)\). As it is simple to recover the homogenized form by setting \(A(z)\to w^{n}A(z/w)\) and similarly for \(B\), we refer to both of them as weight enumerators, since they contain the same information, encoded in the coefficients.
### Refined Enumerators
We can also consider a generalization of the above polynomial where we separate the weights by type [9]. One such example is the double weight enumerator. Using variables \(\mathbf{u}=(w,x,y,z)\)
\[\operatorname{wt}(E)=\left\{\begin{array}{ll}(0,1,0,1)&\text{if }E=I\\ (0,0,1,1)&\text{if }E=X\\ (1,0,1,0)&\text{if }E=Y\\ (1,1,0,0)&\text{if }E=Z\end{array}\right.\] (A.2)
This is useful when, for instance, we consider a biased error model where bit flip \((X)\) and phase \((Z)\) errors occur independently with different probabilities. Depending on the form of the biased Pauli noise, other choices of weight function may be used. Such double enumerators may be used as long as the biased Pauli noise only admits two independent physical error parameters. The polynomials are
\[D(x,y,z,w;M_{1},M_{2})\] \[=\sum_{w_{x},w_{z}}^{n}D_{w_{x},w_{z}}y^{w_{x}}w^{w_{z}}x^{n-w_{x}}z^{n-w_{z}}\] \[D^{\perp}(x,y,z,w;M_{1},M_{2})\] \[=\sum_{w_{x},w_{z}}^{n}D^{\perp}_{w_{x},w_{z}}y^{w_{x}}w^{w_{z}}x^{n-w_{x}}z^{n-w_{z}},\]
where
\[D_{w_{x},w_{z}} =\sum_{E\in\mathcal{E}[w_{x},w_{z}]}\operatorname{Tr}[EM_{1}] \operatorname{Tr}[E^{\dagger}M_{2}]\] \[D^{\perp}_{w_{x},w_{z}} =\sum_{E\in\mathcal{E}[w_{x},w_{z}]}\operatorname{Tr}[EM_{1}E^{ \dagger}M_{2}],\]
and \(\mathcal{E}[w_{x},w_{z}]\) is the set of Paulis that have \(X\) and \(Z\) weights \(w_{x},w_{z}\) respectively.
The MacWilliams identity was derived in [9] for local dimension 2, where \(M_{1}=M_{2}\) are projection operators onto the code subspace. In [12], it was extended to arbitrary local dimension \(q\) and general \(M_{1},M_{2}\). We reproduce the relation here for convenience
\[D^{\perp}(x,y,z,w)\] \[=D\left(\tfrac{x+(q-1)y}{\sqrt{q}},\tfrac{z-w}{\sqrt{q}},\tfrac{z+ (q-1)w}{\sqrt{q}},\tfrac{x-y}{\sqrt{q}}\right).\]
The inhomogeneous forms are
\[D(y,w)=\sum_{w_{x},w_{z}}^{n}D_{w_{x},w_{z}}y^{w_{x}}w^{w_{z}}\] (A.3) \[D^{\perp}(y,w)=\sum_{w_{x},w_{z}}^{n}D^{\perp}_{w_{x},w_{z}}y^{w _{x}}w^{w_{z}}.\] (A.4)
One can easily restore the \(x,z\) dependence as their powers are fixed by \(n,w_{x},w_{z}\).
**Theorem A.1**.: If \(t_{x},t_{z}\) are the two largest integers such that \(D_{w_{x},w_{z}}=D_{w_{x},w_{z}}^{\perp}\) for \(w_{x}<t_{x},w_{z}<t_{z}\), then \(d_{x}=t_{x},d_{z}=t_{z}\).
Proof.: See Theorem 8 of [9].
An even more refined weight function distinguishes all the Pauli operators by their types
\[\operatorname{wt}(E)=\left\{\begin{array}{ll}(1,0,0,0)&\text{if }E=I\\ (0,1,0,0)&\text{if }E=X\\ (0,0,1,0)&\text{if }E=Y\\ (0,0,0,1)&\text{if }E=Z\end{array}\right.\]
This is known as the complete weight enumerator [9]. Again, let \(\mathbf{u}=(w,x,y,z)\)
\[E(x,y,z,w;M_{1},M_{2})\] \[=\sum_{w_{x},w_{y},w_{z}}E_{w_{x},w_{y},w_{z}}x^{w_{x}}y^{w_{y}}z^{w_{z}}w^{n-w_{x}-w_{y}-w_{z}}\] \[\quad F(x,y,z,w;M_{1},M_{2})\] \[=\sum_{w_{x},w_{y},w_{z}}F_{w_{x},w_{y},w_{z}}x^{w_{x}}y^{w_{y}}z^{w_{z}}w^{n-w_{x}-w_{y}-w_{z}},\]
where
\[E_{w_{x},w_{y},w_{z}}=\sum_{Q\in\mathcal{E}[w_{x},w_{y},w_{z}]}\operatorname{Tr}[QM_{1}]\operatorname{Tr}[Q^{\dagger}M_{2}]\] \[F_{w_{x},w_{y},w_{z}}=\sum_{Q\in\mathcal{E}[w_{x},w_{y},w_{z}]}\operatorname{Tr}[QM_{1}Q^{\dagger}M_{2}],\]
and \(\mathcal{E}[w_{x},w_{y},w_{z}]\) are the Pauli operators with those \(X,Y\) and \(Z\) weights respectively. See [12] for general MacWilliams identities at any \(q\).
### Applications to stabilizer codes
Before we move on to tensor enumerators, let us build up some intuition as to what these polynomials are enumerating. Let us examine a special case where we set \(M_{1}=M_{2}=\Pi\) to be the projection onto the code subspace of a quantum code. Furthermore, let us suppose that this is a \([[n,k]]\) stabilizer code, meaning that
\[\Pi=\frac{1}{2^{n-k}}\sum_{S\in\mathcal{S}}S\] (A.5)
It is clear that \(\operatorname{Tr}[E\Pi]\neq 0\) if and only if \(E\in\mathcal{S}\) is a stabilizer element and \(\operatorname{Tr}[E\Pi E^{\dagger}\Pi]\neq 0\) if and only if \(E\in\mathcal{N}(\mathcal{S})\) is a normalizer element. Therefore, we see that, up to a constant normalization factor, the coefficient \(A_{d}\) of \(A(z;\Pi,\Pi)\) simply enumerates the number of stabilizer elements with weight \(d\), and \(B_{d}\) enumerates the number of logical operators with weight \(d\). Consequently, \(\sum_{d}A_{d}=2^{n-k}\) and \(\sum_{d}B_{d}=2^{n+k}\) for a \([[n,k]]\) code.
For the refined enumerators, the coefficients of the double enumerator \(D,D^{\perp}\) are simply recording the number of stabilizer and normalizer elements that have \(X,Z\)-weights \((w_{x},w_{z})\). Similarly, the complete enumerator coefficients \(E_{w_{x},w_{y},w_{z}},F_{w_{x},w_{y},w_{z}}\) count the elements with those corresponding \(X,Y\) and \(Z\) weights.
One can also set \(M_{1},M_{2}\) to different operators to extract different information about the code. For example, in coset enumerators, \(A_{d}^{s}\) counts the number of coset elements with a particular weight.
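For a concrete check of these counting statements, the following brute-force numpy sketch (our own illustration; it deliberately ignores the tensor-network machinery of the main text) evaluates the normalized \(A_{d}\) and \(B_{d}\) of the \([[4,2,2]]\) code directly from the definitions.

```python
import numpy as np
from itertools import product
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

def kron_all(ops):
    return reduce(np.kron, ops)

# [[4,2,2]] code: stabilizer group {IIII, XXXX, ZZZZ, YYYY}, generated by XXXX, ZZZZ.
n, k = 4, 2
group = [kron_all([I2] * n)]
for g in [kron_all([X] * n), kron_all([Z] * n)]:
    group += [g @ h for h in group]
Pi = sum(group) / len(group)      # projector onto the code space, Tr(Pi) = 2^k

K = 2 ** k
A = np.zeros(n + 1)
B = np.zeros(n + 1)
for idx in product(range(4), repeat=n):
    E = kron_all([paulis[i] for i in idx])
    w = sum(i != 0 for i in idx)
    A[w] += abs(np.trace(E @ Pi)) ** 2 / K ** 2          # counts stabilizer elements
    B[w] += np.trace(E @ Pi @ E.conj().T @ Pi).real / K  # counts normalizer elements

print(A, A.sum())   # [1. 0. 0. 0. 3.]  and  4 = 2^(n-k)
print(B, B.sum())   # B sums to 64 = 2^(n+k); B_2 > A_2 = 0 signals distance 2
```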
## Appendix B Instances of Tensor Enumerators
We have seen previously particular instances of tensor enumerators with \(\mathbf{u}=z\). One can extend examples in the main text to other enumerators, which we have used to study other error models.
### Refined Tensor Enumerators
Similar to the scalar forms, we apply \(\mathbf{u}=(w,x,y,z)\) and the weight function A.2 to 2.2. The tensor coefficients are
\[D_{w_{x},w_{z}}^{(J)}(E,\bar{E};M_{1},M_{2})\] \[=\sum_{F\in\mathcal{E}^{n-m}[w_{x},w_{z}]}\operatorname{Tr}((E \otimes_{J}F)M_{1})\operatorname{Tr}((\bar{E}\otimes_{J}F)^{\dagger}M_{2}),\] \[D_{w_{x},w_{z}}^{\perp(J)}(E,\bar{E};M_{1},M_{2})\] \[=\sum_{F\in\mathcal{E}^{n-m}[w_{x},w_{z}]}\operatorname{Tr}((E \otimes_{J}F)M_{1}(\bar{E}^{\dagger}\otimes_{J}F^{\dagger})M_{2}),\]
where \(J\subseteq\{1,2,\ldots n\}\) are the locations of open legs in the tensor enumerator. As in the main text, \(\otimes_{J}\) denotes the operation where we insert \(E\)s at corresponding positions of \(J\) indices to form a \(n\)-qubit Pauli string with \(F\) which has length \(n-m\) for \(m\) open indices.
For complete tensor enumerators, we replace \(\mathcal{E}[w_{x},w_{z}]\to\mathcal{E}[d_{x},d_{y},d_{z}]\) and
\[D_{w_{x},w_{z}}^{(J)}(E,\bar{E},M_{1},M_{2})\to E_{d_{x},d_{y},d_{z}}^{(J)}(E, \bar{E},M_{1},M_{2}),\] \[D_{w_{x},w_{z}}^{\perp(J)}(E,\bar{E},M_{1},M_{2})\to F_{d_{x},d_{y},d_{z} }^{(J)}(E,\bar{E},M_{1},M_{2})\]
in the above equations. \(\mathcal{E}[d_{x},d_{y},d_{z}]\) is the set of Pauli operators with \(X,Y,Z\) weights given by \(d_{x},d_{y},d_{z}\) respectively.
### Generalized Tensor Enumerators
For the most general noise model, it is also useful to define generalized abstract enumerators.
**Theorem B.1**.: Suppose \(j,k\in J\subset\{1,\dots,n\}\). Then
\[\wedge_{j,k}\tilde{\mathbf{A}}^{(J)}(\mathbf{u};M_{1},M_{2})\] \[\quad=\tilde{\mathbf{A}}^{(J\setminus\{j,k\})}(\mathbf{u}; \wedge_{j,k}M_{1},\wedge_{j,k}M_{2})\]
and similarly for \(\tilde{\mathbf{B}}\).
Proof.: \[\wedge_{jk}\tilde{\mathbf{A}}^{(J)}(\mathbf{u};M_{1},M_{2})=\sum_ {E,\bar{E},F,\bar{F}}\left[\operatorname{Tr}((E\otimes_{J}F)M_{1}) \operatorname{Tr}((\bar{E}\otimes_{J}\bar{F})^{\dagger}M_{2}\right]\mathbf{u}^ {wt(F,\bar{F})}\left[\wedge_{j,k}e_{E,\bar{E}}\right]\] \[=\sum_{F,\bar{F}}\sum_{\begin{subarray}{c}E\setminus\{E_{j},E_{k }\},\\ E\setminus\{E_{j},E_{k}\}\end{subarray}}\sum_{G}\Big{\{}\operatorname{Tr}([((G \otimes G^{*})\otimes_{j,k}E\setminus\{E_{j},E_{k}\})\otimes_{J}F]M_{1})\] \[\qquad\qquad\cdot\sum_{\bar{G}}\operatorname{Tr}([((\bar{G} \otimes\bar{G}^{*})\otimes_{j,k}\bar{E}\setminus\{\bar{E}_{j},\bar{E}_{k}\}) \otimes_{J}\bar{F}]^{\dagger}M_{2})\Big{\}}\mathbf{u}^{wt(F,\bar{F})}e_{E \setminus\{E_{j},E_{k}\},\bar{E}\setminus\{\bar{E}_{j},\bar{E}_{k}\}}\] \[=\sum_{E^{\prime},E^{\prime},F}\operatorname{Tr}((E^{\prime} \otimes_{J\setminus\{j,k\}}F)(|\beta\rangle\langle\beta|_{j,k}M_{1})) \operatorname{Tr}((\bar{E}^{\prime}\otimes_{J\setminus\{j,k\}}\bar{F})^{ \dagger}(|\beta\rangle\langle\beta|_{j,k}M_{2}))\mathbf{u}^{wt(F,\bar{F})}e_{ E^{\prime},\bar{E}^{\prime}}\] \[=\sum_{E^{\prime},\bar{E}^{\prime},F}\operatorname{Tr}((E^{ \prime}\otimes_{J\setminus\{j,k\}}F)(\wedge_{j,k}M_{1}))\operatorname{Tr}(( \bar{E}^{\prime}\otimes_{J\setminus\{j,k\}}\bar{F})^{\dagger}(\wedge_{j,k}M_{ 2}))\mathbf{u}^{wt(F,\bar{F})}e_{E^{\prime},\bar{E}^{\prime}}\] \[=\tilde{\mathbf{A}}^{(J\setminus\{j,k\})}(\mathbf{u};\wedge_{jk} M_{1},\wedge_{jk}M_{2})\]
where the wedge acts on the vector basis in the usual way.
We used the fact that
\[|\beta\rangle\langle\beta|=\frac{1}{q}\sum_{P\in\mathcal{P}}P\otimes P^{*}.\]
Similarly, we can repeat this argument for \(B\)-type generalized enumerators. We do not use MacWilliams identity for this proof as we have not been able to identify any.
Note that it is often possible to cut down the computational cost when the weight function satisfies the form
\[\mathbf{u}^{wt(E,F)}=\mathbf{u}_{1}^{wt(E)}\mathbf{u}_{2}^{wt(F)}.\]
Then we can write the generalized enumerator as
\[\bar{A}(\mathbf{u};M_{1},M_{2})\] \[=\sum_{E\in\mathcal{E}^{n}}\operatorname{Tr}[EM_{1}]\mathbf{u}_{ 1}^{wt(E)}\sum_{F\in\mathcal{E}^{n}}\operatorname{Tr}[F^{\dagger}M_{2}] \mathbf{u}_{2}^{wt(F)}\]
that factorizes into two separate sums such that each piece can be computed separately. For each \(M_{i}\), we can rewrite the corresponding sum as a tensor network. This allows us to compute either sum using a tensor network of bond dimension \(\chi=q^{2}\) by tracing reduced tensor enumerators. We see that for stabilizer codes, the coefficients for each term are identical to the usual \(A\)-type scalar weight enumerator up to a constant normalization factor.
### Stabilizer codes and reduced enumerators
Again, let us come back to stabilizer codes for intuition behind these constructions. Consider the reduced tensor enumerator polynomial with open indices \(J=\{j_{1},\ldots j_{m}\}\), where we set \(M_{1}=M_{2}=\Pi\) to be the projection onto the stabilizer code subspace. We see that each coefficient \(A_{d}^{j_{1},\ldots,j_{m}}\) simply enumerates, up to a constant normalization, the number of stabilizer elements that have Pauli string \(\sigma^{(j_{1})}\otimes\cdots\otimes\sigma^{(j_{m})}\) on the qubits/qudits indexed by \(J\) and weight \(d\) on the remaining qubits/qudits. Similarly for the reduced double, complete, and the generalized enumerators, the same intuition applies, except the weights are separated and recorded according to the types of the Pauli operators.
The tracing of reduced enumerators for stabilizer codes can be understood as a simple consequence of operator matching. Recall that stabilizers and logical operators in the QL construction come from matching such operators on the smaller tensors. Since the tensor enumerator is counting the number of stabilizers with weight \(d\) and a particular Pauli type on the open legs, tracing it with another tensor enumerator retains precisely the weight distribution of Pauli elements that are matching on the legs being glued. This in turn produces the desired weight distribution of the larger tensor network. Although Theorem 2.2 provides a construction that is sufficient for building weight enumerator of any quantum code, the above intuition suggests that the reduced enumerators are sufficient for stabilizer codes, which allows us to reduce the bond dimension from \(q^{4}\) to \(q^{2}\).
**Definition B.1**.: A diagonal trace is defined by
\[\wedge_{j,k}^{DT}e_{E,\bar{E}}\] \[=\left\{\begin{array}{cc}e_{E\setminus\{E_{j},E_{k}\},\bar{E}\setminus\{\bar{E}_{j},\bar{E}_{k}\}}&\text{if }E_{j}=E_{k}^{\star}=\bar{E}_{j}=\bar{E}_{k}^{\star}\\ 0&\text{otherwise.}\end{array}\right.\]
**Proposition B.1**.: Suppose
\[M_{1} =\frac{1}{|S|}\sum_{S\in\mathcal{PS}}S\] \[M_{2} =\frac{1}{|S|}\sum_{S\in\mathcal{PS}}\omega_{S}S\]
for any coset \(P\mathcal{S}\) of Pauli operator \(P\) and \(\omega_{S}\in\mathbb{C}\). Let \(\wedge_{\text{all}}\) be the set of self-contractions that reduce an even rank tensor enumerator to a scalar, then
\[\wedge_{\text{all}}^{DT}\mathbf{A}^{J}(\mathbf{u};M_{1},M_{2})\propto A( \mathbf{u};\wedge_{\text{all}}M_{1},\wedge_{\text{all}}M_{2})\]
and similarly for \(\mathbf{B}\). The same holds if the forms of \(M_{1},M_{2}\) are switched.
Proof.: The proof is similar to Theorem 7.1 of [12]. Let us begin with the case where there are only two open legs in the tensor enumerator. It is clear that
\[\operatorname{Tr}[(G\otimes G^{\ast}\otimes F)M_{i}]\neq 0\] (B.3)
if and only if \(G\otimes G^{\ast}\otimes F\) is a coset element.
Suppose \(\mathcal{G}\) is the set of all \(G\)s for which the above trace is nonzero, then
\[\sum_{G,\bar{G}\in\mathcal{E}}\operatorname{Tr}[(G\otimes G^{ \ast}\otimes F)M_{1}]\operatorname{Tr}[(\bar{G}\otimes\bar{G}^{\ast}\otimes F )^{\dagger}M_{2}]\] \[= |\mathcal{G}|\sum_{G}\operatorname{Tr}[(G\otimes G^{\ast}\otimes F )M_{1}]\operatorname{Tr}[(G\otimes G^{\ast}\otimes F)^{\dagger}M_{2}]\]
which is proportional to the diagonal trace \(\wedge^{DT}\). We can see this by the following. For each \(G\in\mathcal{G}\), we sum over \(\bar{G}\), which leads to some constant \(\propto\sum_{S}\omega_{S}\). Repeating for each \(G\), we simply get back the same constant \(|\mathcal{G}|\) times. If we only sum over the diagonal terms with \(G=\bar{G}\), then we obtain the constant \(\propto\sum_{S}\omega_{S}\) once. This only works because one of \(M_{1},M_{2}\) is an equal superposition of Pauli operators.
Furthermore, note that for the \(F\) above and each \(\bar{G}\in\mathcal{G}\) with \(\bar{G}=PG\), \(P\neq I\), it is clear that \(P\otimes P^{*}\) is a stabilizer of the code. Therefore, for any other \(F\) such that \(G\otimes G^{*}\otimes F\in\mathcal{PS}\), it must follow that \((P\otimes P^{*}\otimes I)(G\otimes G^{*}\otimes F)=\bar{G}\otimes\bar{G}^{*}\otimes F\in\mathcal{PS}\) for each \(\bar{G}\in\mathcal{G}\). Hence the overcounting is identical for all \(F\)s, by a factor of \(|\mathcal{G}|\), and the diagonal elements contain sufficient information to reproduce the scalar enumerator.
Now consider a tensor enumerator with four open legs that needs two self-traces on two pairs \(a_{0}\) and \(a_{1}\). From the above arguments we know that a full trace on \(a_{1}\) followed by a diagonal trace on \(a_{0}\) produces the correct scalar enumerator. Therefore it is sufficient to show that a diagonal trace on \(a_{1}\) produces the correct diagonal elements for the pair \(a_{0}\). Let \(E\) denote the Pauli for the open legs associated with pair \(a_{0}\). Under a full trace on \(a_{1}\), the diagonal elements of the remaining tensor then come from coefficients of the form
\[\sum_{G,\bar{G}\in\mathcal{E}} \operatorname{Tr}[(G\otimes G^{*}\otimes E\otimes F)M_{1}]\] \[\times\operatorname{Tr}[(\bar{G}\otimes\bar{G}^{*}\otimes E \otimes F)^{\dagger}M_{2}]\]
where the sum comes from tracing over the legs of \(a_{1}\). We notice that the same argument above applies by setting \(E\otimes F\to F\), since \(F\) is arbitrary. Hence we conclude that the full trace on \(a_{1}\) produces the same diagonal elements on \(a_{0}\) as a diagonal trace, up to a constant multiple. Proceeding inductively with \(2k\) open legs, it is clear that the diagonal components are sufficient for generating the scalar weight enumerators.
To show that the B type enumerator is also correctly produced via the diagonal trace, recall that the diagonal trace is linear and commutes with the generalized Wigner transform, as shown in the proof of Prop. VI.1; hence the B type enumerator can also be generated with only diagonal trace operations.
Therefore, for practical analysis of Pauli stabilizer codes, we only need to consider the reduced tensor enumerators, that is, restricting to the diagonal elements \(E=\bar{E}\) of each tensor enumerator in Definition 4.2 of [12].
The same proof does not apply for tracing generalized enumerators \(\bar{\mathbf{A}},\bar{\mathbf{B}}\) because separate sums are required for \(F\) and \(\bar{F}\), whereas the argument above is valid only when \(F=\bar{F}\). Therefore we have to perform a full tensor trace even for stabilizer codes. Such a trace is needed to analyze more general error channels like coherent noise.
## Appendix C Tensor-only Implementation
### Multi-linear formulation
Although the enumerator polynomials can be implemented symbolically, it is also possible to rephrase them purely as tensors with complex coefficients.
For each tensor enumerator, one can take the coefficients in the polynomial as a tensorial object in its own right. For example, for Shor-Laflamme enumerators, the coefficient \(A_{d}\) in the scalar enumerator can be treated as a rank-1 tensor with bond dimension \(\leq n+1\). Similarly, \(A_{d}^{j}\) in a vector enumerator has two indices, where \(j\) marks a bond dimension \(q^{4}\) index (or \(q^{2}\) in reduced enumerators) and \(d\) has bond dimension \(\leq n+1\). Generally, (abstract) tensor enumerators can be represented by the tensor components
\[A_{d_{\mathbf{u}}}^{j_{1},\ldots,j_{i}}\text{ and }B_{d_{\mathbf{u}}}^{j_{1}, \ldots,j_{i}},\]
where \(d_{\mathbf{u}}\) can be an \(l\)-tuple of indices that tracks the powers of the monomials. For instance, for the complete enumerator, \(d_{\mathbf{u}}\rightarrow(d_{x},d_{y},d_{z})\). For now, let us focus on reduced enumerators over \(q=2\), where the upper and lower indices carry no additional physical meaning. To avoid clutter, let us also drop the subscript of \(d_{\mathbf{u}}\). For concreteness one can take \(d\) to be the usual operator weight, but it is straightforward to restore it to the most general form.
In a tensor network, instead of tracing the tensor enumerator polynomials, we now trace together these tensors. However, we need an additional operation on the two legs \(d_{1},d_{2}\) that adds the powers of the monomials during polynomial multiplication.
\[A_{d}^{j_{l+1},\ldots,j_{i},r_{l+1},r_{k}}\] \[=\sum_{d_{1},d_{2}}^{n_{1},n_{2}}M_{d}^{d_{1}d_{2}}\sum_{j_{1}, j_{2},\ldots,j_{l}}A_{d_{1}}^{j_{1},j_{2},\ldots,j_{l},\ldots,j_{i}}A_{d_{2}}^{j_{1}, j_{2},\ldots,j_{l},r_{l+1}\ldots r_{k}}\]
where \(M_{d}^{d_{1}d_{2}}\) is a tensor such that
\[M_{d}^{d_{1}d_{2}}=\begin{cases}1\text{ if }d=d_{1}+d_{2}\\ 0\text{ else.}\end{cases}\] (C.1)
On the formalism level, this trace with the \(M\) tensor can be completed at any time. In practice, however, we perform such an operation every time a tensor trace like the above is completed.
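A minimal numpy illustration of the \(M\) tensor follows (a toy example of our own choosing): for two disjoint blocks with no Pauli legs to match, contracting their weight legs with \(M\) is just polynomial multiplication of the two enumerators, i.e. a convolution of their coefficient lists.

```python
import numpy as np

# M tensor of Eq. (C.1): M[d, d1, d2] = 1 iff d = d1 + d2.
def M_tensor(n1, n2):
    M = np.zeros((n1 + n2 + 1, n1 + 1, n2 + 1))
    for d1 in range(n1 + 1):
        for d2 in range(n2 + 1):
            M[d1 + d2, d1, d2] = 1.0
    return M

# Toy input: A-type weight distributions of two disjoint Bell pairs, each with
# stabilizer group {II, XX, -YY, ZZ}, i.e. A = [1, 0, 3].
A1 = np.array([1.0, 0.0, 3.0])
A2 = np.array([1.0, 0.0, 3.0])

# Contracting the two weight legs with M adds the powers of the monomials, which
# for disjoint blocks is just polynomial multiplication of the enumerators.
A12 = np.einsum('dab,a,b->d', M_tensor(2, 2), A1, A2)
print(A12)                                      # [1. 0. 6. 0. 9.]
print(np.allclose(A12, np.convolve(A1, A2)))    # True
```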
The modified trace operation \(\tilde{\mathrm{Tr}}\) that reduces the tensor rank can also be performed by contracting another rank-3 tensor
\[T_{d^{\prime}dj}=\begin{cases}\delta_{d^{\prime}d}&\text{if }j=0\\ \delta_{d^{\prime}d-1}&\text{else}.\end{cases}\] (C.2)
For example, to recover the scalar enumerator from a vector enumerator (Fig. 22), we use \(A_{d^{\prime}}=T_{d^{\prime}dj}A_{d}^{j}\), where repeated indices are summed over.
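The following sketch (again our own toy example, not code from this work) builds the reduced vector enumerator of a single Bell pair by brute force, with qubit 1 as the open leg, and then applies the modified trace of Eq. (C.2) to recover the scalar enumerator \([1,0,3]\).

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

# Code space: a single Bell pair (a [[2,0]] stabilizer state), Pi = |beta><beta|.
beta = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
Pi = np.outer(beta, beta.conj())

# Reduced (diagonal) vector enumerator A^j_d with qubit 1 as the open leg:
# j labels the Pauli on the open leg, d the weight of the Pauli F on qubit 2.
A_vec = np.zeros((2, 4))
for j, f in product(range(4), repeat=2):
    E = np.kron(paulis[j], paulis[f])
    A_vec[int(f != 0), j] += abs(np.trace(E @ Pi)) ** 2

# Modified trace of Eq. (C.2): closing the open leg keeps the weight for j = 0
# (identity) and raises it by one otherwise.
A_scalar = np.zeros(3)
A_scalar[:2] += A_vec[:, 0]
A_scalar[1:] += A_vec[:, 1:].sum(axis=1)
print(A_scalar)    # [1. 0. 3.] -- the scalar enumerator of the Bell pair
```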
The method for tracing other tensor enumerators, such as the double and complete tensor weight enumerators, is largely the same.
For example, the contraction of two reduced double enumerator is
\[D_{d^{\prime}_{x},d^{\prime}_{z}}^{j_{l+1},\dots,j_{i},r_{l+1},\dots,r_{k}}\] \[= M_{d^{\prime}_{x}}^{d^{1}_{x}d^{2}_{x}}M_{d^{\prime}_{z}}^{d^{1}_{z}d^{2}_{z}}D_{d^{1}_{x},d^{1}_{z}}^{j_{1},j_{2},\dots,j_{l},\dots,j_{i}}D_{d^{2}_{x},d^{2}_{z}}^{j_{1},j_{2},\dots,j_{l},r_{l+1}\dots r_{k}}\]
where repeated indices are summed. The modified trace is
\[D_{d^{\prime}_{x},d^{\prime}_{z}}=\sum_{d_{x},d_{z},j}T_{d^{\prime}_{x}d^{\prime}_{z}d_{x}d_{z}j}D_{d_{x},d_{z}}^{j}\] (C.3)
with
\[T_{d^{\prime}_{x}d^{\prime}_{z}d_{x}d_{z}j}=\begin{cases}\delta_{d^{\prime}_{ x}d_{x}}\delta_{d^{\prime}_{z}d_{z}}&\text{if }j=0\\ \delta_{d^{\prime}_{x}-1d_{x}}\delta_{d^{\prime}_{z}d_{z}}&\text{if }j=1\\ \delta_{d^{\prime}_{x}-1d_{x}}\delta_{d^{\prime}_{z}-1d_{z}}&\text{if }j=2\\ \delta_{d^{\prime}_{x}d_{x}}\delta_{d^{\prime}_{z}-1d_{z}}&\text{if }j=3\end{cases}.\]
If \(\mathbf{u}\) carries more variables, then an additional \(M\) contraction is needed for each separate variable index10.
Footnote 10: In the Matlab code implementation, \(j=3\) and \(j=2\) are swapped in the indexing convention such that \(Y\) is mapped to the last index and \(Z\) is mapped to the second last.
For the full tensor enumerator polynomial, one has to take extra care of potential sign changes, since we have \(I\leftrightarrow I,X\leftrightarrow X,Z\leftrightarrow Z\) but \(-Y\leftrightarrow Y\) matchings. Suppressing the \(d\) index for now, we can think of each tensor enumerator index in a representation \(A^{j}\to A^{\alpha,\bar{\alpha}}\) with \(\alpha=1,\dots,q^{2}=4\). Furthermore, we prepare the Minkowski metric \(\eta_{\alpha,\beta}=diag(1,1,-1,1)\) so that tensor contractions are only performed between upper and lower indices. Indices are raised and lowered in the usual way with \(A_{\beta\bar{\beta}}=\eta_{\alpha\beta}\eta_{\bar{\alpha}\bar{\beta}}A^{\alpha\bar{\alpha}}\), and two vector enumerators are contracted by pairing the covariant vector with the contravariant one, i.e., \(A^{\alpha\bar{\alpha}}A^{\prime}_{\alpha\bar{\alpha}}\). We see the raised or lowered index does not matter for reduced enumerators because the diagonal elements of \(\eta_{\alpha\beta}\eta_{\bar{\alpha}\bar{\beta}}\) at \(\alpha=\bar{\alpha},\beta=\bar{\beta}\) only carry positive signs.
### MacWilliams Identity as a linear transformation
We derive the matrix representation of the MacWilliams identities in the polynomial basis \(\{z^{d}w^{n-d}:0\leq d\leq n\}\) to facilitate Matlab numerics.
By Corollary 5 of [5],
\[A^{\prime}(w,z) =A(w+z/q,z/q),\] \[B^{\prime}(w,z) =B(w+z/q,z/q).\]
By Theorem 3 of [5], \(A^{\prime}(w,z)=B^{\prime}(z,w)\) and \(A^{\prime}_{d}=B^{\prime}_{n-d}\), which is equivalent to the quantum MacWilliams identity (Theorem 7 of [5]):
\[B(w,z)=A\left(\tfrac{w+(q^{2}-1)z}{q},\tfrac{w-z}{q}\right).\]
We chose to work with \(A^{\prime}\) and \(B^{\prime}\) because the relation \(A^{\prime}_{d}=B^{\prime}_{n-d}\) can be easily expressed by an anti-diagonal matrix with every element equal to 1 in the polynomial basis \(\{z^{d}w^{n-d}:0\leq d\leq n\}\).
To express \(B_{d}\) in terms of \(A_{d}\), we only need to express \(A_{d}\) in terms of \(A^{\prime}_{d}\), and \(B^{\prime}_{d}\) in terms of \(B_{d}\).
Figure 22: Modified trace operation.
By Corollary 5 of [5],
\[A^{\prime}(w,z)\] \[=\sum_{d=0}^{n}A_{d}\left(\frac{z}{q}\right)^{d}\left(w+\frac{z}{q} \right)^{n-d}\] \[=\sum_{d=0}^{n}A_{d}\left(\frac{z}{q}\right)^{d}\left(\sum_{e=0}^{ n-d}\binom{n-d}{e}\left(\frac{z}{q}\right)^{n-d-e}w^{e}\right)\] \[=\sum_{d=0}^{n}\sum_{e=0}^{n-d}A_{d}\binom{n-d}{e}\left(\frac{z}{q }\right)^{d}\left(\frac{z}{q}\right)^{n-d-e}w^{e}\] \[=\sum_{d=0}^{n}\sum_{e=0}^{n-d}A_{d}\binom{n-d}{e}\left(\frac{z}{q }\right)^{n-e}w^{e}\] \[=\sum_{e=0}^{n}\sum_{d=0}^{n-e}A_{d}\binom{n-d}{e}\left(\frac{z}{q }\right)^{n-e}w^{e}\] \[=\sum_{e=0}^{n}\left(\sum_{d=0}^{n-e}A_{d}\binom{n-d}{e}\right) \left(\frac{z}{q}\right)^{n-e}w^{e}\] \[=\sum_{e^{\prime}=0}^{n}\left(\sum_{d=0}^{e^{\prime}}A_{d}\binom{ n-d}{n-e^{\prime}}\right)\left(\frac{z}{q}\right)^{e^{\prime}}w^{n-e^{\prime}}\] \[=\sum_{d=0}^{n}\left(\frac{1}{q^{d}}\sum_{m=0}^{d}A_{m}\binom{n-m }{n-d}\right)z^{d}w^{n-d}.\]
Since
\[A^{\prime}(w,z)=\sum_{d=0}^{n}A^{\prime}_{d}z^{d}w^{n-d},\]
we have
\[A^{\prime}_{d}=\frac{1}{q^{d}}\sum_{m=0}^{d}A_{m}\binom{n-m}{n-d},0\leq d\leq n.\] (C.4)
In other words,
\[A^{\prime}_{d}=\sum_{m=0}^{d}T_{dm}A_{m},\] (C.5)
where
\[T_{dm}=\frac{1}{q^{d}}\binom{n-m}{n-d},0\leq m\leq d,0\leq d\leq n.\]
Similarly, \(B^{\prime}(w,z)=B(w+z/q,z/q)\) implies that
\[B^{\prime}_{d}=\sum_{m=0}^{d}T_{dm}B_{m}.\] (C.6)
Hence
\[A_{d}=\sum_{d^{\prime}=0}^{n}(T^{-1}JT)_{dd^{\prime}}B_{d^{\prime}},0\leq d,d ^{\prime}\leq n,\] (C.7)
where
\[J=\begin{pmatrix}0&\cdots&0&1\\ 0&\cdots&1&0\\ \vdots&\ddots&\vdots&\vdots\\ 1&\cdots&0&0\end{pmatrix}_{(n+1)\times(n+1)}\] \[T=\begin{pmatrix}\binom{n}{n}&0&\cdots&0\\ \frac{1}{q}\binom{n}{n-1}&\frac{1}{q}\binom{n-1}{n-1}&\cdots&0\\ \vdots&\ddots&\vdots&\vdots\\ \frac{1}{q^{n}}\binom{n}{0}&\frac{1}{q^{n}}\binom{n-1}{0}&\cdots&\frac{1}{q^{ n}}\binom{0}{0}\end{pmatrix}_{(n+1)\times(n+1)}\]
Similarly,
\[B_{d}=\sum_{d^{\prime}=0}^{n}(T^{-1}JT)_{dd^{\prime}}A_{d^{\prime}},0\leq d,d ^{\prime}\leq n,\] (C.8)
because
\[A=T^{-1}JTB\Longleftrightarrow B=T^{-1}JTA.\]
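These matrices are straightforward to assemble numerically. The sketch below (our own illustration, following Eqs. (C.5)-(C.8)) applies the transform to the stabilizer weight distribution of the \([[5,1,3]]\) code; with this convention the output agrees with the integer normalizer counts only up to an overall constant (here a factor of \(1/2\)), so we rescale so that \(B_{0}=1\) and read off the distance as the first weight where \(B_{d}>A_{d}\).

```python
import numpy as np
from math import comb

def macwilliams_matrix(n, q=2):
    # T and J of Eq. (C.7); T^{-1} J T maps the A-type distribution to the B-type
    # one (and back) in the monomial basis {z^d w^(n-d)}.
    T = np.zeros((n + 1, n + 1))
    for d in range(n + 1):
        for m in range(d + 1):
            T[d, m] = comb(n - m, n - d) / q ** d
    J = np.fliplr(np.eye(n + 1))
    return np.linalg.solve(T, J @ T)

# Stabilizer weight distribution of the [[5,1,3]] code: the identity plus
# fifteen weight-4 stabilizer elements.
A = np.array([1, 0, 0, 0, 15, 0], dtype=float)
B = macwilliams_matrix(5) @ A

# Rescale so that B_0 = 1 and round away floating-point noise.
B = np.rint(B / B[0])
print(B, B.sum())   # [ 1.  0.  0. 30. 15. 18.], summing to 64 = 2^(n+k)
d = next(i for i in range(1, 6) if B[i] > A[i])
print(d)            # 3 -- the code distance
```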
### Connection with Farrelly, Tuckett and Stace
Ref. [18] proposed a method to compute distance in local tensor network codes, which are qubit stabilizer codes obtained from contracting other smaller stabilizer codes in a manner similar to [11]. In particular, we see that when applied to an \([[n,k]]\) stabilizer code, the tensor \(C_{w}^{l_{1},\cdots,l_{k}}\) in [18] is exactly a reduced tensor enumerator in the multi-linear form where \(w\) is precisely the degree of the monomial and \(l_{i}=0,1,2,3\) are the open indices that track the Pauli type \(I,X,Y,Z\). This corresponds to computing the tensor weight enumerator \(B(z)^{l_{1},\cdots,l_{k}}\) if we keep all the logical legs open. Similarly, \(C_{w}^{0,\cdots,0}\) is the coefficient of an \(A\)-type tensor enumerator. Indeed, \(D_{w}\) which is obtainable from \(C_{w}^{l_{1},\cdots,l_{k}}\) is precisely the tensor coefficients of the scalar enumerators \(B_{d}-A_{d}\). Both enumerators are in the usual Shor-Laflamme form.
Although both approaches rely on the tensor network method to produce weight distributions, the detailed construction differs somewhat in how the tensors in the network is implemented -- we construct the enumerator from encoding maps while [18] directly enumerate the logical operators of an encoding tensor and then compute their weights by contracting with another weight tensor that tabulates \(4^{n}\) Pauli weights. Ref. [18] also produces a tensor network (See Fig 4d) with
a double bond on each contraction where each edge has bond dimension 4. Naively this appears to lead to bond dimension 16 objects (Fig 4b,c). While such a description is sufficient, we know from the tensor enumerator formalism that it is possible to obtain the enumerator for Pauli stabilizer codes with a reduced bond dimension 4 (Thm A.1), hence enabling more efficient tensor contractions.
It is unclear how the complexity estimates for these methods compare, as none was performed for [18] except for 1d codes that are prepared by log depth circuits. However, given the similarities in their structure and their overall efficiency for tree tensor networks and holographic codes, they should be polynomially equivalent in that regime. In practice, however, we note that even a constant factor difference can be quite substantial. Therefore, a more in-depth comparison in their performance can be an interesting problem for future work. In particular, it is important to understand whether these methods are optimal with respect to different networks.
Although Ref.[18] does not discuss weight distributions for other error models, it is possible to adapt their formalism to produce double and complete weight enumerators by modifying their weight tensor. For example, to obtain the double enumerator, one can replace \(W_{w}^{g_{1},\ldots,g_{m}}\) with \(W_{w_{x},w_{x}}^{g_{1},\ldots,g_{m}}\) such that the tensor coefficient is unity when a Pauli string \(\sigma^{g_{1}}\otimes\cdots\otimes\sigma^{g_{m}}\) has X and Z weights \(w_{x},w_{z}\) respectively. A similar extension should be possible for abstract enumerators. It is currently unclear whether a similar extension is possible for generalized abstract enumerators.
Another key difference lies in the use of the MacWilliams transform in our work, which is polynomial11 in \(n\). Generally, the MacWilliams identity can help reduce computational cost. When the tensor network is efficiently contractible, and when we overlook the cost of manipulating large integers, the difference between keeping \(B\)- vs \(A\)-type tensor enumerators should be relatively insignificant. However, when the minimal cuts are large, e.g. when the tensor network represents a volume law or even some area law states, the \(B\) type tensor can become far more populated than the \(A\) type by as much as \(O(e^{k})\). For instance, in the limiting case where the cost of tensor network contraction approaches that of the brute force method12, e.g. in random stabilizer codes, we see that \(B\) is \(2^{2k}\) more expensive to compute compared to \(A\). Hence in some instances, the MacWilliams identities can help reduce a computational cost that is exponential in \(k\).
Footnote 11: The complexity roughly scales as \(O(n^{3})\) from matrix multiplication. However, this is not counting the cost needed to manipulate large integers.
The decoder [18] uses is formally similar to the usual tensor network decoder where error probability is computed for some fixed \(p\) using a tensor contraction whereas the enumerator method produces an analytical expression for the error probability as a function of \(\mathbf{u}\). The former is computationally advantageous when the error probabilities are heavily inhomogeneous and carry a strong locational dependence. The latter is more powerful for obtaining a continuous range of error probabilities when the physical errors are relatively uniform across the entire system.
Overall, the formalism based on tensor weight enumerator is more general as it applies to all quantum codes with uniform local dimensions. When specialized to the case of Pauli stabilizer codes over qubits, both methods can compute scalar and tensor enumerators associated with the code using tensor network methods. In this case, our method improves upon [18] with a reduced bond dimension and with the use of the MacWilliams identities. The former provides a polynomial speed up while the latter can provide an \(O(e^{k})\) speed up in some regimes. With the extension to biased error and general noise models, we also extend the maximum likelihood decoders for such stabilizer codes to general error channels. However, the enumerator method is less efficient in tackling highly inhomogeneous errors.
## Appendix D Error detection for general noise channels
### Non-detectable error
_Proof for Theorem 3.2_: Let us compute here the probability of incurring a non-correctable (logical) error. Suppose the error channel is \(\mathcal{E}(\rho)\), which can be written as the Kraus form in Theorem 3.2 being the tensor product of single site errors. Let the initial state be \(\rho=|\tilde{\psi}\rangle\langle\tilde{\psi}|\in L(\mathcal{C})\)
and \(\dim\mathcal{C}=K\). Then the probability of a non-detectable error is
\[p_{nd}(\rho) =\operatorname{Tr}[(\Pi-\rho)\Pi\mathcal{E}(\rho)\Pi]\] (D.1) \[=\sum_{\mathbf{i}}||(I-|\tilde{\psi}\rangle\langle\tilde{\psi}|) \Pi\mathcal{K}_{\mathbf{i}}|\tilde{\psi}\rangle||^{2},\]
where \(\Pi\) is the projection onto the code subspace. It is simply the overlap between the error state and the part of the code subspace that is orthogonal to the original codeword.
Now averaging over all initial codewords \(|\tilde{\psi}\rangle\) with respect to the normalized uniform measure \(\mu(|\tilde{\psi}\rangle)\), we have
\[p_{nd} =\sum_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}}||(I-| \tilde{\psi}\rangle\langle\tilde{\psi}|)\Pi\mathcal{K}_{\mathbf{i}}|\tilde{ \psi}\rangle||^{2}d\mu(|\tilde{\psi}\rangle)\] \[=\sum_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}} \langle\tilde{\psi}|\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi(I-|\tilde{\psi} \rangle\langle\tilde{\psi}|)\] \[\quad\times(I-|\tilde{\psi}\rangle\langle\tilde{\psi}|)\Pi \mathcal{K}_{\mathbf{i}}|\tilde{\psi}\rangle d\mu(\tilde{\psi})\] \[=\sum_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}} \langle\tilde{\psi}|\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi\mathcal{K}_{ \mathbf{i}}|\tilde{\psi}\rangle d\mu(|\tilde{\psi}\rangle)\] \[\quad-\sum_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}} \langle\tilde{\psi}|\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi|\tilde{\psi}\rangle \langle\tilde{\psi}|\Pi\mathcal{K}_{\mathbf{i}}|\tilde{\psi}\rangle d\mu(| \tilde{\psi}\rangle)\]
Similar to [56], the above integral can be evaluated. The first term is
\[\sum_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}} \langle\tilde{\psi}|\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi\mathcal{K}_{ \mathbf{i}}|\tilde{\psi}\rangle d\mu(|\tilde{\psi}\rangle)\] \[=\sum_{\mathbf{i}}\operatorname{Tr}[\mathcal{K}_{\mathbf{i}}^{ \dagger}\Pi\mathcal{K}_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}}| \tilde{\psi}\rangle\langle\tilde{\psi}|dv(|\tilde{\psi}\rangle)]\] \[=\frac{1}{K}\sum_{\mathbf{i}}\operatorname{Tr}[\mathcal{K}_{ \mathbf{i}}^{\dagger}\Pi\mathcal{K}_{\mathbf{i}}\Pi],\]
where we use Lemma 7 in [56] for the last step, which evaluates the integral.
The second term is
\[\sum_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}} \langle\tilde{\psi}|\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi|\tilde{\psi}\rangle \langle\tilde{\psi}|\Pi\mathcal{K}_{\mathbf{i}}|\tilde{\psi}\rangle d\mu(| \tilde{\psi}\rangle)\] \[=\frac{1}{K(K+1)}(\sum_{\mathbf{i}}\operatorname{Tr}[\mathcal{K}_ {\mathbf{i}}^{\dagger}\Pi\mathcal{K}_{\mathbf{i}}\Pi]\] \[\quad+\sum_{\mathbf{i}}\operatorname{Tr}[\mathcal{K}_{\mathbf{i} }^{\dagger}\Pi]\operatorname{Tr}[\mathcal{K}_{\mathbf{i}}\Pi])\]
where we integrate over the same measure and use Lemma 8 in [56]. This completes our proof for Theorem 3.2.
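As a numerical sanity check of this averaged formula, the sketch below (a toy example of our own choosing: the two-qubit repetition code under i.i.d. bit flips) evaluates the codeword-averaged non-detectable error probability both from the closed-form expression of Theorem 3.2 and by Monte Carlo over Haar-random codewords.

```python
import numpy as np
from itertools import product
from functools import reduce

rng = np.random.default_rng(1)
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Toy code space: the two-qubit repetition code span{|00>, |11>}, so K = 2.
Pi = np.diag([1.0, 0.0, 0.0, 1.0])
K = 2

# i.i.d. single-qubit bit-flip channel with flip probability p, in Kraus form.
p = 0.1
kraus1 = [np.sqrt(1 - p) * I2, np.sqrt(p) * X]
kraus = [reduce(np.kron, ops) for ops in product(kraus1, repeat=2)]

# Codeword-averaged non-detectable error probability from Theorem 3.2.
t1 = sum(np.trace(k.T @ Pi @ k @ Pi) for k in kraus)
t2 = sum(np.trace(k.T @ Pi) * np.trace(k @ Pi) for k in kraus)
p_formula = t1 / K - (t1 + t2) / (K * (K + 1))

# Monte Carlo over Haar-random codewords |psi> = a|00> + b|11>.
total, shots = 0.0, 20000
for _ in range(shots):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v = v / np.linalg.norm(v)
    psi = np.array([v[0], 0, 0, v[1]])
    rho = np.outer(psi, psi.conj())
    total += sum(np.linalg.norm((np.eye(4) - rho) @ Pi @ k @ psi) ** 2 for k in kraus)

# Both numbers should agree up to sampling error (about 2*p**2/3 for this example).
print(p_formula, total / shots)
```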
### Errors with non-trivial syndromes
_Proof of Theorem 3.3 and decoder_: When \(\mathcal{C}\) is a stabilizer code, we can talk about syndrome measurements and decoding in the usual sense. While \(\Pi\) denotes the projection onto the code subspace, i.e., measuring trivial syndromes, we can similarly ask what the probability is for measuring some other syndromes \(s\) where the state is taken to a subspace \(\Pi_{s}=E_{s}\Pi E_{s}^{\dagger}\), where \(E_{s}\) is an error with syndrome \(s\).
Again, this is given by the overlap between the state suffering from the error and some final state in the error subspace.
\[\bar{p}_{s} =\sum_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}}d\mu( |\tilde{\psi}\rangle)\operatorname{Tr}[\Pi_{s}\mathcal{K}_{\mathbf{i}}\rho \mathcal{K}_{\mathbf{i}}^{\dagger}\Pi_{s}]\] \[=\sum_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}}d\mu( |\tilde{\psi}\rangle)\operatorname{Tr}[E_{s}\Pi E_{s}^{\dagger}\mathcal{K}_{ \mathbf{i}}\rho\mathcal{K}_{\mathbf{i}}^{\dagger}E_{s}\Pi E_{s}^{\dagger}]\] \[=\sum_{\mathbf{i}}\operatorname{Tr}[\int_{|\tilde{\psi}\rangle \in\mathcal{C}}d\mu(|\tilde{\psi}\rangle)|\tilde{\psi}\rangle\langle\tilde{ \psi}|\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi_{s}\mathcal{K}_{\mathbf{i}}]\] \[=\frac{1}{K}\sum_{\mathbf{i}}\operatorname{Tr}[\Pi\mathcal{K}_{ \mathbf{i}}^{\dagger}\Pi_{s}\mathcal{K}_{\mathbf{i}}].\]
Therefore, this quantity can be easily obtained from the \(B\)-type generalized complete enumerator when we replace one of \(\Pi\) by \(\Pi_{s}\). Note that this recovers the syndrome probability with Pauli errors using the coset enumerator.
It is also useful for decoding purposes to consider the probability \(p(\tilde{L}|s)\), so as to correct the most likely logical error. The probability that \(\tilde{E}_{s}=E_{s}\tilde{L}\) occurs is
\[\bar{p}(\tilde{L}\cap s) =\sum_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}}d\mu(|\tilde{\psi}\rangle)\] \[\quad\times\operatorname{Tr}[\tilde{E}_{s}|\tilde{\psi}\rangle\langle\tilde{\psi}|\tilde{E}_{s}^{\dagger}\Pi_{s}\mathcal{K}_{\mathbf{i}}|\tilde{\psi}\rangle\langle\tilde{\psi}|\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi_{s}]\] \[=\sum_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}}d\mu(|\tilde{\psi}\rangle)\] \[\quad\times\langle\tilde{\psi}|\tilde{E}_{s}^{\dagger}\Pi_{s}\mathcal{K}_{\mathbf{i}}|\tilde{\psi}\rangle\langle\tilde{\psi}|\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi_{s}\tilde{E}_{s}|\tilde{\psi}\rangle\]
Because \(\Pi_{s}\tilde{E}_{s}|\tilde{\psi}\rangle=\tilde{E}_{s}|\tilde{\psi}\rangle\).
\[\bar{p}(\tilde{L}\cap s)\] \[= \sum_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}}d\mu(|\tilde{\psi}\rangle)\langle\tilde{\psi}|\tilde{E}_{s}^{\dagger}\mathcal{K}_{\mathbf{i}}|\tilde{\psi}\rangle\langle\tilde{\psi}|\mathcal{K}_{\mathbf{i}}^{\dagger}\tilde{E}_{s}|\tilde{\psi}\rangle\] \[= \frac{1}{K(K+1)}\Big{(}\sum_{\mathbf{i}}\mathrm{Tr}[\tilde{E}_{s}^{\dagger}\mathcal{K}_{\mathbf{i}}\Pi\mathcal{K}_{\mathbf{i}}^{\dagger}\tilde{E}_{s}\Pi]\] \[+\sum_{\mathbf{i}}\mathrm{Tr}[\tilde{E}_{s}^{\dagger}\mathcal{K}_{\mathbf{i}}\Pi]\,\mathrm{Tr}[\mathcal{K}_{\mathbf{i}}^{\dagger}\tilde{E}_{s}\Pi]\Big{)}\] \[= \frac{1}{K(K+1)}\Big{(}\sum_{\mathbf{i}}\mathrm{Tr}[\mathcal{K}_{\mathbf{i}}\Pi\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi_{s}]\] \[+\sum_{\mathbf{i}}\mathrm{Tr}[\mathcal{K}_{\mathbf{i}}\Pi\tilde{E}_{s}^{\dagger}]\,\mathrm{Tr}[\mathcal{K}_{\mathbf{i}}^{\dagger}\tilde{E}_{s}\Pi]\Big{)}.\]
Hence
\[\bar{p}(\tilde{L}|s) =\bar{p}(\tilde{L}\cap s)/\bar{p}_{s}\] \[=\frac{1}{K+1}(1+\frac{\sum_{\mathbf{i}}\mathrm{Tr}[\mathcal{K}_ {\mathbf{i}}\Pi\tilde{E}_{s}^{\dagger}]\,\mathrm{Tr}[\mathcal{K}_{\mathbf{i}}^ {\dagger}\tilde{E}_{s}\Pi]}{\sum_{\mathbf{i}}\mathrm{Tr}[\mathcal{K}_{\mathbf{ i}}\Pi\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi_{s}]}).\]
Each term in the above expression can be computed by setting \(M_{1},M_{2}\) to the appropriate values in the weight enumerator \(M_{1}=\Pi\tilde{E}_{s}^{\dagger},M_{2}=\tilde{E}_{s}\Pi\) for the \(\bar{A}\)-type enumerator and \(M_{1}=\Pi,M_{2}=\Pi_{s}\) for the \(\bar{B}\)-type enumerator.
For the purpose of building a decoder, we do not care about the overall normalization, hence computing the \(\bar{A}\)-type enumerator will be sufficient. The first term in \(\bar{p}(\tilde{L}\cap s)\) is independent of the logical operation \(\tilde{L}\), and thus does not modify our decision based on the maximum likelihood.
### General Logical Error Channel
_Non-unitary logical error:_ Under this more general channel, it is also natural to consider a more general logical error acting on a state \(\rho_{s}\) in the error subspace with syndrome \(s\), beyond the kind of coherent logical error \(\tilde{L}\) considered above. For instance, we can discuss the probability that the logical information suffers from a logical error channel in that subspace
\[\tilde{\mathcal{N}}(\tilde{\rho})=\tilde{\rho}\rightarrow\sum_{j}\tilde{\eta }_{j}\tilde{\rho}\tilde{\eta}_{j}^{\dagger},\]
after obtaining syndrome \(s\) by measuring the checks. More precisely, we find that
\[\bar{p}(\tilde{\mathcal{N}}(\cdot)\cap s)\] \[= \sum_{\mathbf{i}}\int_{|\tilde{\psi}\rangle\in\mathcal{C}}d\mu(| \tilde{\psi}\rangle)\,\mathrm{Tr}[E_{s}\tilde{\mathcal{N}}(\tilde{\rho})E_{s }^{\dagger}\Pi_{s}\mathcal{K}_{\mathbf{i}}|\tilde{\psi}\rangle\langle\tilde{ \psi}|\mathcal{K}_{\mathbf{i}}^{\dagger}\Pi_{s}]\] \[= \sum_{\mathbf{i},j}\int_{|\tilde{\psi}\rangle\in\mathcal{C}}d\mu( |\tilde{\psi}\rangle)\,\mathrm{Tr}[E_{s}\tilde{\eta}_{j}|\tilde{\psi}\rangle \langle\tilde{\psi}|\tilde{\eta}_{j}^{\dagger}E_{s}^{\dagger}\mathcal{K}_{ \mathbf{i}}|\tilde{\psi}\rangle\langle\tilde{\psi}|\mathcal{K}_{\mathbf{i}}^{ \dagger}]\] \[= \sum_{\mathbf{i},j}\int_{|\tilde{\psi}\rangle\in\mathcal{C}}d\mu( |\tilde{\psi}\rangle)\,\mathrm{Tr}[O_{j\mathbf{i}}^{s}|\tilde{\psi}\rangle \langle\tilde{\psi}|O_{j\mathbf{i}}^{s\dagger}|\tilde{\psi}\rangle\langle \tilde{\psi}|]\] \[= \frac{1}{K(K+1)}\sum_{\mathbf{i},j}\Big{(}\,\mathrm{Tr}[O_{j, \mathbf{i}}^{s\dagger}\Pi O_{j,\mathbf{i}}^{s}\Pi]\] \[+\mathrm{Tr}[O_{j,\mathbf{i}}^{s\dagger}\Pi]\,\mathrm{Tr}[O_{j, \mathbf{i}}^{s}\Pi]\Big{)}\]
where we defined \(O_{j,\mathbf{i}}^{s}=\mathcal{K}_{\mathbf{i}}^{\dagger}E_{s}\tilde{\eta}_{j}\). Note that, just like when calculating the error probability of syndrome \(s\) under depolarizing noise with \(B\)-type enumerators, the first term can be expressed as a \(B\)-type enumerator, and one can perform the sum over \(j\) by defining
\[\Pi_{\eta}^{s}=\sum_{j}E_{s}\tilde{\eta}_{j}\Pi\tilde{\eta}_{j}^{\dagger}E_{s }^{\dagger}\]
and then substitute and compute the first term as
\[\sum_{\mathbf{i}}\mathrm{Tr}[\mathcal{K}_{\mathbf{i}}\Pi\mathcal{K}_{\mathbf{i}} ^{\dagger}\Pi_{\eta}^{s}],\] (D.2)
which is basically identical to our computation of the non-detectable error probability except we set \(M_{2}=\Pi_{\eta}^{s}\) and the remaining procedures for decomposing \(\mathcal{K}_{\mathbf{i}}\) carries over identically.
For the second term, however, we have to repeat the enumerator computations for each \(j\) by summing over \(\mathbf{i}\). If we set \(\tilde{E}_{s}^{j}=E_{s}\tilde{\eta}_{j}\), then it has an identical form to the coset enumerator we analyzed earlier, except for the \(j\) dependence,
\[\mathrm{Tr}[\mathcal{K}_{\mathbf{i}}\Pi\tilde{E}_{s}^{j\dagger}]\,\mathrm{Tr}[ \mathcal{K}_{\mathbf{i}}^{\dagger}\tilde{E}_{s}^{j}\Pi].\]
For generic errors, \(j=1,\ldots,4^{k}\), so it is more relevant for \(k\) small.
For a fixed error channel, the enumerator fully captures the likelihood of all error channels by decomposing \(\tilde{\eta}_{j}\) into Paulis and varying their coefficients. It could be interesting to analyze the extrema of error probabilities with respect to these variables to find the most likely error channel. We can imagine building a decoder that seeks to undo the effect of the most likely error channel, though it is unclear when such recovery procedures exist in general. To correct such errors, we
apply first a coset element of \(E_{s}\). Then depending on the availability of the recovery map given the logical error, we (partially) reverse the effect of the logical error channel based on the relevant information of the code and syndrome measurement outcomes.
|
2301.09263 | On the solutions of $x^2= By^p+Cz^p$ and $2x^2= By^p+Cz^p$ over totally
real fields | In this article, we study the solutions of certain type over $K$ of the
Diophantine equation $x^2= By^p+Cz^p$ with prime exponent $p$, where $B$ is an
odd integer and $C$ is either an odd integer or $C=2^r$ for $r \in \mathbb{N}$.
Further, we study the non-trivial primitive solutions of the Diophantine
equation $x^2= By^p+2^rz^p$ ($r\in {1,2,4,5}$) (resp., $2x^2= By^p+2^rz^p$ with
$r \in \mathbb{N}$) with prime exponent $p$, over $K$. We also present several
purely local criteria of $K$. | Narasimha Kumar, Satyabrat Sahoo | 2023-01-23T04:25:08Z | http://arxiv.org/abs/2301.09263v2 | # On the solutions of \(x^{2}=By^{p}+Cz^{p}\) and \(2x^{2}=By^{p}+Cz^{p}\) over totally real fields
###### Abstract.
In this article, we study the solutions of a certain type over \(K\) of the Diophantine equation \(x^{2}=By^{p}+Cz^{p}\) with prime exponent \(p\), where \(B\) is an odd integer and \(C\) is either an odd integer or \(C=2^{r}\) for \(r\in\mathbb{N}\). Further, we study the non-trivial primitive solutions of the Diophantine equation \(x^{2}=By^{p}+2^{r}z^{p}\) (\(r\in\{1,2,4,5\}\)) (resp., \(2x^{2}=By^{p}+2^{r}z^{p}\) with \(r\in\mathbb{N}\)) with prime exponent \(p\), over \(K\). We also present several purely local criteria of \(K\).
Key words and phrases: Diophantine equations, Semi-stability, Irreducibility of Galois representations, Modularity of elliptic curves, Level lowering. 2010 Mathematics Subject Classification: Primary 11D41, 11R80; Secondary 11F80, 11G05, 11R04.
## 1. Introduction
Throughout this article, \(K\) and \(p\) denote a totally real field and a rational prime, respectively. The study of non-trivial solutions to Diophantine equations is one of the most interesting areas in mathematics. A prominent example is the Fermat equation \(x^{p}+y^{p}=z^{p}\) with exponent \(p\). In [25, Theorem 0.5], Wiles used the modularity of elliptic curves to show that all \(\mathbb{Z}\)-integral solutions of \(x^{p}+y^{p}=z^{p}\) are trivial. In [19, Theorem 1.3], Jarvis and Meekin showed that the same result continues to hold over \(\mathbb{Z}[\sqrt{2}]\). A similar study over \(K\) was initiated by Freitas and Siksek in [17, Theorem 3], who showed that an asymptotic version of these results holds over \(K\), by employing explicit bounds on the solutions of the \(S\)-unit equation. In [18], Deconinck generalized the work of [17] to \(Ax^{p}+By^{p}=Cz^{p}\) with \(2\nmid ABC\).
In [25, Theorem 3], Ribet showed that \(x^{p}+2^{r}y^{p}+z^{p}=0\) with exponent \(p\) has no non-trivial \(\mathbb{Z}\)-solution for \(1\leq r<p\). Over \(K\), in [17, Theorem 3.2], we show that the equation \(x^{p}+y^{p}=2^{r}z^{p}\) (\(r\in\mathbb{N}\)) has no asymptotic solution in \(W_{K}\) (cf. [17, Definition 3.2] for \(W_{K}\)). Furthermore, we show that the equation \(x^{p}+y^{p}=2^{r}z^{p}\) has no asymptotic solution in \(\mathcal{O}_{K}^{3}\) for \(r=2,3\) (cf. [17, Theorem 3.3]). The proofs of our results also depend on certain explicit bounds on the solutions of the \(S\)-unit equation.
Similarly, Ivorra in [25] studied \(\mathbb{Z}\)-solutions of \(x^{2}=y^{p}+2^{r}z^{p}\) and \(2x^{2}=y^{p}+2^{r}z^{p}\) for \(0\leq r\leq p\). In [24, Theorem 1], Siksek established that the only non-trivial primitive \(\mathbb{Z}\)-solutions of \(x^{2}=y^{p}+2^{r}z^{p}\) are \(r=3,\ x=\pm 3,\ y=z=1\), where \(p\in\mathbb{P}\) is arbitrary.
Let us now make precise what "asymptotic" means.
**Definition 1.1**.: _We say a Diophantine equation \(Ax^{2}=By^{p}+Cz^{p}\) with exponent \(p\) has no asymptotic solution in a set \(S\subseteq\mathcal{O}_{K}^{3}\), if there exists a constant \(V_{K,A,B,C}>0\) (depending on \(K,A,B,C\)) such that for primes \(p>V_{K,A,B,C}\), the equation \(Ax^{2}=By^{p}+Cz^{p}\) with exponent \(p\) has no non-trivial primitive solution in \(S\)._
In [19], Darmon and Merel showed that the equation \(x^{n}+y^{n}=z^{2}\) with exponent \(n\geq 4\) has no non-trivial primitive \(\mathbb{Z}\)-solutions. In [16, Theorem 1.1], Isik, Kara, and Ozman proved that the "asymptotic" FLT holds for \(x^{p}+y^{p}=z^{2}\) with exponent \(p\), for solutions of a certain type over \(K\), whenever the narrow class number \(h_{K}^{+}=1\) and a further hypothesis on \(K\) is satisfied.
## 2. Preliminaries
Throughout, \(P\) denotes the set of prime ideals of \(\mathcal{O}_{K}\). Let \(E\) be an elliptic curve over \(K\) with conductor \(\mathfrak{n}\), and for \(\mathfrak{q}\in P\) let \(\Delta_{\mathfrak{q}}\) denote the minimal discriminant of \(E\) at \(\mathfrak{q}\). Let
\[\mathfrak{m}_{p}:=\prod_{p\mid v_{\mathfrak{q}}(\Delta_{\mathfrak{q}}),\ \mathfrak{q}\mid \mathfrak{n}}\mathfrak{q}\text{ and }\mathfrak{n}_{p}:=\frac{\mathfrak{n}}{ \mathfrak{m}_{p}}. \tag{2.1}\]
We state a conjecture, which is an extension of the Eichler-Shimura theorem over \(\mathbb{Q}\).
**Conjecture 2.1** (Eichler-Shimura).: _Let \(f\) be a Hilbert modular newform over \(K\) of parallel weight \(2\), level \(\mathfrak{n}\), and with coefficient field \(\mathbb{Q}_{f}=\mathbb{Q}\). Then, there exists an elliptic curve \(E_{f}/K\) with conductor \(\mathfrak{n}\) having the same \(L\)-function as \(f\)._
In [10, Theorem 7.7], Darmon showed that Conjecture 2.1 holds over \(K\), if either \([K:\mathbb{Q}]\) is odd or there exists some \(\mathfrak{q}\in P\) such that \(v_{\mathfrak{q}}(\mathfrak{n})=1\). In [14, Corollary 2.2], Freitas and Siksek provided a partial answer to Conjecture 2.1 in terms of mod \(p\) Galois representations attached to \(E\).
## 3. Solutions of the Diophantine equation \(x^{2}=By^{p}+Cz^{p}\) over \(W_{k}\)
In this section, we study the solutions of the following Diophantine equation
\[x^{2}=By^{p}+Cz^{p} \tag{3.1}\]
with prime exponent \(p\geq 3\) and \(B,C\in\mathbb{Z}\). Throughout, we assume that \(B\) is odd.
* We say the equation (3.1) with exponent \(p\) is of Type I, if \(C\) is odd.
* We say the equation (3.1) with exponent \(p\) is of Type II, if \(C=2^{r}\) for some \(r\in\mathbb{N}\).
For \(n\in\mathbb{Z}\), define \(S_{K}(n):=\{\mathfrak{P}\in P:\ \mathfrak{P}|2n\}\). Let \(S_{K}:=S_{K}(1)\), \(S_{K}^{\prime}:=S_{K}(BC)\) and \(U_{K}:=\{\mathfrak{P}\in S_{K}:(3,v_{\mathfrak{P}}(2))=1\}\).
**Definition 3.1** (Trivial solution).: _We call a solution \((a,b,c)\in\mathcal{O}_{K}^{3}\) to the equation (3.1) with exponent \(p\) trivial if \(abc=0\), and non-trivial otherwise. We say \((a,b,c)\in\mathcal{O}_{K}^{3}\) is primitive if \(a,b,c\) are pairwise coprime._
**Definition 3.2**.: _Let \(W_{K}\) be the set of all non-trivial primitive solutions \((a,b,c)\in\mathcal{O}_{K}^{3}\) to the equation (3.1) with exponent \(p\) of Type I or II with \(\mathfrak{P}|bc\) for every \(\mathfrak{P}\in S_{K}\). Note that, for any \(\mathfrak{P}\in S_{K}\) and \((a,b,c)\in W_{K}\), \(\mathfrak{P}\) divides exactly one of \(b\) and \(c\)._
### Main result
For any set \(S\subseteq P\), let \(\mathcal{O}_{S}:=\{\alpha\in K:v_{\mathfrak{P}}(\alpha)\geq 0\text{ for all }\mathfrak{P}\in P\backslash S\}\) be the ring of \(S\)-integers in \(K\) and \(\mathcal{O}_{S}^{*}\) be the \(S\)-units of \(\mathcal{O}_{S}\). Let \(\mathrm{Cl}_{S}(K):=\mathrm{Cl}(K)/\langle[\mathfrak{P}]\rangle_{\mathfrak{P} \in S}\) and \(\mathrm{Cl}_{S}(K)[n]\) be its \(n\)-torsion points. We now show that the equation (3.1) with exponent \(p\) of Type I or II has no asymptotic solution in \(W_{K}\). More precisely;
**Theorem 3.3**.: _Let \(K\) be a totally real field with \(\mathrm{Cl}_{S_{K}^{\prime}}(K)[2]=1\). Suppose for every solution \((\alpha,\beta,\gamma)\in\mathcal{O}_{S_{K}^{\prime}}^{*}\times\mathcal{O}_{S_{K}^{\prime}}^{*}\times\mathcal{O}_{S_{K}^{\prime}}\) to \(\alpha+\beta=\gamma^{2}\), there exists \(\mathfrak{P}\in S_{K}\) that satisfies_
\[\left|v_{\mathfrak{P}}\left(\alpha\beta^{-1}\right)\right|\leq 6v_{ \mathfrak{P}}(2). \tag{3.2}\]
_Then, the Diophantine equation \(x^{2}=By^{p}+Cz^{p}\) with exponent \(p\) of Type I or II has no asymptotic solution in \(W_{K}\)._
By [13, Theorem 39], for any finite set \(S\subseteq P\), the equation \(\alpha+\beta=\gamma^{2}\) with \((\alpha,\beta,\gamma)\in\mathcal{O}_{S}^{*}\times\mathcal{O}_{S}^{*}\times\mathcal{O}_{S}\) has only finitely many solutions. The following proposition, which is a consequence of Theorem 3.3, will be relevant in §6.1. We say that \(S\subseteq P\) is principal if \(\mathfrak{P}\) is principal for every \(\mathfrak{P}\in S\).
**Proposition 3.4**.: _Let \(K\) be a field such that \(S_{K}^{\prime}=S_{K}\) is principal and \(2\nmid h_{K}\). Suppose for every solution \((\alpha,\gamma)\in\mathcal{O}_{S_{K}^{\prime}}^{*}\times\mathcal{O}_{S_{K}^{ \prime}}\) to \(\alpha+1=\gamma^{2}\), there exists \(\mathfrak{P}\in S_{K}\) that satisfies_
\[\left|v_{\mathfrak{P}}(\alpha)\right|\leq 6v_{\mathfrak{P}}(2). \tag{3.3}\]
_Then, the Diophantine equation \(x^{2}=By^{p}+Cz^{p}\) with exponent \(p\) of Type I or II has no asymptotic solution in \(W_{K}\)._
### Steps to prove Theorem 3.3
For any non-trivial and primitive solution \((a,b,c)\in\mathcal{O}_{K}^{3}\) to the equation (3.1) with exponent \(p\), the Frey curve \(E:=E_{a,b,c}\) is given by
\[E:=E_{a,b,c}:Y^{2}=X(X^{2}+2aX+Bb^{p}), \tag{3.4}\]
with \(c_{4}=2^{4}(Bb^{p}+4Cc^{p}),\ \Delta_{E}=2^{6}(B^{2}C)(b^{2}c)^{p}\) and \(j_{E}=2^{6}\frac{(Bb^{p}+4Cc^{p})^{3}}{B^{2}C(b^{2}c)^{p}}\), where \(j_{E}\) (resp., \(\Delta_{E}\)) denote the \(j\)-invariant (resp., discriminant) of \(E\). We now prove the modularity of the Frey curve \(E:=E_{a,b,c}\) in (3.4) associated to \((a,b,c)\in W_{K}\).
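Before turning to modularity, note that the displayed invariants, and the \(\lambda\)-expression for \(j_{E}\) used in the proof of Theorem 3.5 below, follow from the standard formulas for a Weierstrass model. The following is a small symbolic sanity check, a sketch assuming Python with sympy (not part of the argument); here \(u\) and \(w\) stand for \(Bb^{p}\) and \(Cc^{p}\), so that \(a^{2}=u+w\) on the curve.

```python
from sympy import symbols, simplify

a, u, w, lam = symbols('a u w lambda')   # u = B*b^p, w = C*c^p, with a^2 = u + w

# Equation (3.4): Y^2 = X^3 + 2a X^2 + u X, i.e. a2 = 2a, a4 = u, a6 = 0
a2, a4, a6 = 2*a, u, 0
b2, b4, b6, b8 = 4*a2, 2*a4, 4*a6, 4*a2*a6 - a4**2    # standard b-invariants (a1 = a3 = 0)
c4 = b2**2 - 24*b4
Delta = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6

on_curve = {a**2: u + w}                              # impose a^2 = B b^p + C c^p

print(simplify(c4.subs(on_curve) - 16*(u + 4*w)))     # 0 : c4 = 2^4 (B b^p + 4C c^p)
print(simplify(Delta.subs(on_curve) - 64*u**2*w))     # 0 : Delta = 2^6 B^2 C (b^2 c)^p

j = (c4**3 / Delta).subs(on_curve)
print(simplify((j - 64*(4 - lam)**3/lam**2).subs(lam, -u/w)))   # 0 : j_E = 2^6 (4-λ)^3/λ^2
```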
**Theorem 3.5**.: _Let \((a,b,c)\in W_{K}\), and let \(E:=E_{a,b,c}\) be the Frey curve attached to \((a,b,c)\) as in (3.4). Then, there exists a constant \(A=A_{K,B,C}\) (depending on \(K,B,C\)) such that for primes \(p>A\), \(E/K\) is modular._
Proof.: By [15, Theorem 5], there exist only finitely many elliptic curves over \(K\) (up to \(\bar{K}\)-isomorphism) which are not modular. Let \(j_{1},\ldots,j_{t}\in K\) be the \(j\)-invariants of these. The \(j\)-invariant \(j_{E}=2^{6}\frac{(Bb^{p}+4Cc^{p})^{3}}{B^{2}C(b^{2}c)^{p}}=2^{6}\frac{(4-\lambda(E))^{3}}{\lambda(E)^{2}}\) for \(\lambda(E)=-\frac{Bb^{p}}{Cc^{p}}\). For \(i=1,2,\ldots,t\), the equation \(j_{E}=j_{i}\) has at most three solutions \(\lambda(E)\in K\). Hence, there exist \(\lambda_{1},\lambda_{2},...,\lambda_{m}\in K\) with \(m\leq 3t\) such that \(E\) is modular for all \(\lambda(E)\notin\{\lambda_{1},\lambda_{2},...,\lambda_{m}\}\). If \(\lambda(E)=\lambda_{k}\) for some \(k\in\{1,2,\ldots,m\}\), then \(\left(\frac{b}{c}\right)^{p}=\frac{-C\lambda_{k}}{B}\). This equation determines \(p\) uniquely; denote it by \(p_{k}\). Indeed, if \(p\neq q\) are primes such that \(\left(\frac{b}{c}\right)^{p}=\left(\frac{b}{c}\right)^{q}\), then \(\frac{b}{c}\) is a root of unity, and since \(K\) is totally real we get \(b=\pm c\). Then, for \(\mathfrak{P}\in S_{K}\), \(\mathfrak{P}\mid bc\) implies that \(\mathfrak{P}\) divides both \(b\) and \(c\), hence \(\mathfrak{P}|a\), which contradicts the primitivity of \((a,b,c)\in W_{K}\). Now, the proof of the theorem follows by taking \(A=\max\{p_{1},...,p_{m}\}\).
### Reduction type
The following lemma characterizes the type of reduction of the Frey curve \(E:=E_{a,b,c}\) at primes \(\mathfrak{q}\) away from \(S_{K}^{\prime}\).
**Lemma 3.6**.: _Let \((a,b,c)\in\mathcal{O}_{K}^{3}\) be a non-trivial primitive solution to the equation (3.1) with exponent \(p\) of Type I or II, and let \(E\) be the associated Frey curve. Then at all primes \(\mathfrak{q}\notin S_{K}^{\prime}\), \(E\) is minimal, semi-stable at \(\mathfrak{q}\) and satisfies \(p|v_{\mathfrak{q}}(\Delta_{E})\). Let \(\mathfrak{n}\) be the conductor of \(E\) and \(\mathfrak{n}_{p}\) be as in (2.1). Then,_
\[\mathfrak{n}=\prod_{\mathfrak{P}\in S_{K}^{\prime}}\mathfrak{P}^{\mathfrak{P} \mathfrak{P}}\prod_{\mathfrak{q}|bc,\ \mathfrak{q}\notin S_{K}^{\prime}}\mathfrak{q},\ \mathfrak{n}_{p}=\prod_{\mathfrak{P}\in S_{K}^{\prime}} \mathfrak{P}^{r_{\mathfrak{P}}^{\prime}}, \tag{3.5}\]
_where \(0\leq r_{\mathfrak{P}}^{\prime}\leq r_{\mathfrak{P}}\) with \(r_{\mathfrak{P}}\leq 2+6v_{\mathfrak{P}}(2)\) for \(\mathfrak{P}|2\) and \(r_{\mathfrak{P}}\leq 2+3v_{\mathfrak{P}}(3)\) for \(\mathfrak{P}\nmid 2\)._
Proof.: Let \(\mathfrak{q}\in P\setminus S_{K}^{\prime}\). If \(\mathfrak{q}\not|\Delta_{E}\), then \(E\) has good reduction at \(\mathfrak{q}\) and \(p|v_{\mathfrak{q}}(\Delta_{E})=0\).
If \(\mathfrak{q}|\Delta_{E}=2^{6}B^{2}C(b^{2}c)^{p}\), then \(\mathfrak{q}\) divides precisely one of \(b\) and \(c\), since \((a,b,c)\) is primitive and \(\mathfrak{q}\nmid 2BC\). This implies that \(\mathfrak{q}\nmid c_{4}=2^{4}(Bb^{p}+4Cc^{p})\), hence \(E\) is minimal and has multiplicative reduction at \(\mathfrak{q}\). Since \(v_{\mathfrak{q}}(\Delta_{E})=pv_{\mathfrak{q}}(b^{2}c)\), \(p|v_{\mathfrak{q}}(\Delta_{E})\).
By the definition of \(\mathfrak{n}_{p}\) in (2.1), we get \(\mathfrak{q}\nmid\mathfrak{n}_{p}\) for all \(\mathfrak{q}\notin S_{K}^{\prime}\). Finally, for \(\mathfrak{P}\in S_{K}^{\prime}\), the bounds on \(r_{\mathfrak{P}}\) follow from [21, Theorem IV.10.4].
#### 3.3.1. Type of reduction with image of inertia
For any elliptic curve \(E/K\), let \(\bar{\rho}_{E,p}:G_{K}\to\operatorname{Aut}(E[p])\simeq\operatorname{GL}_{2}( \mathbb{F}_{p})\) be the residual Galois representation of \(G_{K}\), induced by the action of \(G_{K}\) on \(E[p]\), the \(p\)-torsion of \(E\). We first recall [16, Lemmas 3.4, 3.6], which will be useful for the types of the reduction of the Frey curve at \(\mathfrak{P}\in P\).
**Lemma 3.7**.: _Let \(E/K\) be an elliptic curve and \(p>5\) be a prime. For \(\mathfrak{q}\in P\) with \(\mathfrak{q}\nmid p\), \(E\) has potentially multiplicative reduction at \(\mathfrak{q}\) and \(p\nmid v_{\mathfrak{q}}(j_{E})\) if and only if \(p|\#\bar{\rho}_{E,p}(I_{\mathfrak{q}})\)._
**Lemma 3.8**.: _Let \(E/K\) be an elliptic curve and \(p\geq 3\) be a prime. Suppose \(E\) has potential good reduction at \(\mathfrak{P}\) for some \(\mathfrak{P}\in S_{K}\). Then, \(3\nmid v_{\mathfrak{P}}(\Delta_{E})\) if and only if \(3|\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})\)._
The following lemma determines the type of reduction of \(E_{a,b,c}\) at primes \(\mathfrak{q}\nmid 2pBC\).
**Lemma 3.9**.: _Let \((a,b,c)\in\mathcal{O}_{K}^{3}\) be a non-trivial primitive solution to the equation (3.1) with exponent \(p>5\) of Type I or II, and let \(E\) be the associated Frey curve. Suppose \(\mathfrak{q}\in P\) with \(\mathfrak{q}\nmid 2pBC\). Then \(p\nmid\#\bar{\rho}_{E,p}(I_{\mathfrak{q}})\)._
Proof.: By Lemma 3.7, it is enough to show that either \(v_{\mathfrak{q}}(j_{E})\geq 0\) or \(p|v_{\mathfrak{q}}(j_{E})\). Recall that \(\Delta_{E}=2^{6}(B^{2}C)(b^{2}c)^{p}\) and \(c_{4}=2^{4}(Bb^{p}+4Cc^{p})\). If \(\mathfrak{q}\nmid\Delta_{E}\), then \(E\) has good reduction at \(\mathfrak{q}\), and hence \(v_{\mathfrak{q}}(j_{E})\geq 0\). If \(\mathfrak{q}|\Delta_{E}\) then \(\mathfrak{q}|bc\), and hence \(\mathfrak{q}\) divides exactly one of \(b\) and \(c\). Therefore, \(\mathfrak{q}\nmid c_{4}\) and \(p|v_{\mathfrak{q}}(j_{E})=-pv_{\mathfrak{q}}(b^{2}c)\), which completes the proof.
We discuss the type of reduction of the Frey curve \(E_{a,b,c}\) at \(\mathfrak{P}\in S_{K}\) with \((a,b,c)\in W_{K}\).
**Lemma 3.10**.: _Let \((a,b,c)\in W_{K}\), and let \(E:=E_{a,b,c}\) be the Frey curve attached to \((a,b,c)\) as in (3.4). For \(\mathfrak{P}\in S_{K}\), if \(p>6v_{\mathfrak{P}}(2)+v_{\mathfrak{P}}(C)\), then \(v_{\mathfrak{P}}(j_{E})<0\) and \(p\nmid v_{\mathfrak{P}}(j_{E})\), equivalently \(p|\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})\)._
Proof.: Since \((a,b,c)\in W_{K}\), \(\mathfrak{P}\) divides exactly one of \(b\) and \(c\).
Suppose the equation (3.1) with exponent \(p\) is of Type I. Recall that \(j_{E}=2^{6}\frac{(Bb^{p}+4Cc^{p})^{3}}{B^{2}C(b^{2}c)^{p}}\). If \(\mathfrak{P}|b\), then \(\mathfrak{P}\nmid c\). Since \(p>6v_{\mathfrak{P}}(2)\), \(v_{\mathfrak{P}}(j_{E})=6v_{\mathfrak{P}}(2)+6v_{\mathfrak{P}}(2)-2pv_{ \mathfrak{P}}(b)=2(6v_{\mathfrak{P}}(2)-pv_{\mathfrak{P}}(b))\), \(v_{\mathfrak{P}}(j_{E})<0\) and \(p\nmid v_{\mathfrak{P}}(j_{E})\). Similar proof works for \(\mathfrak{P}|c\) as well.
Suppose the equation (3.1) with exponent \(p\) is of Type II, i.e., \(C=2^{r}\) for some \(r\in\mathbb{N}\). If \(\mathfrak{P}|b\), then \(v_{\mathfrak{P}}(j_{E})=(6-r)v_{\mathfrak{P}}(2)+3(r+2)v_{\mathfrak{P}}(2)-2pv_{\mathfrak{P}}(b)=(12+2r)v_{\mathfrak{P}}(2)-2pv_{\mathfrak{P}}(b)<0\) and \(p\nmid v_{\mathfrak{P}}(j_{E})\), since \(p>(6+r)v_{\mathfrak{P}}(2)\). If \(\mathfrak{P}|c\) then \(v_{\mathfrak{P}}(j_{E})=(6-r)v_{\mathfrak{P}}(2)-pv_{\mathfrak{P}}(c)<0\) since \(p>(6+r)v_{\mathfrak{P}}(2)\). Since \(p>(6+r)v_{\mathfrak{P}}(2)>(6-r)v_{\mathfrak{P}}(2)\geq 0\) for \(1\leq r\leq 6\) and \(-p<(-6-r)v_{\mathfrak{P}}(2)<(6-r)v_{\mathfrak{P}}(2)<0\) for \(r>6\), we get \(p\nmid v_{\mathfrak{P}}(j_{E})\). Hence, by Lemma 3.7, we get \(p\mid\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})\).
### Proof of Theorem 3.3
The proof of this theorem depends on the following result.
**Theorem 3.11**.: _Let \(K\) be a totally real field. Then, there is a constant \(V=V_{K,B,C}>0\) (depending on \(K,B,C\)) such that the following hold. Let \((a,b,c)\in W_{K}\) be a solution to the equation (3.1) with exponent \(p>V\) of Type I or II, and let \(E\) be the Frey curve as in (3.4). Then, there exists an elliptic curve \(E^{\prime}/K\) such that:_
1. \(E^{\prime}\) _has good reduction away from_ \(S^{\prime}_{K}\) _and has a non-trivial_ \(2\)_-torsion point;_
2. \(\bar{\rho}_{E,p}\sim\bar{\rho}_{E^{\prime},p}\)_, and_ \(v_{\mathfrak{P}}(j_{E^{\prime}})<0\) _for_ \(\mathfrak{P}\in S_{K}\)_._
Proof of Theorem 3.11.: By Theorem 3.5, \(E\) is modular for primes \(p>A=A_{K,B,C}\) with \(A\gg 0\). By Lemma 3.6, \(E\) is semi-stable away from \(S^{\prime}_{K}\). If necessary, we can take the Galois closure of \(K\) to ensure that \(\bar{\rho}_{E,p}\) is irreducible for \(p\gg 0\) (cf. [14, Theorem 2]).
By [14, Theorem 7], there exist a Hilbert modular newform \(f\) of parallel weight \(2\), level \(\mathfrak{n}_{p}\) and some prime \(\omega\) of \(\mathbb{Q}_{f}\) such that \(\omega|p\) and \(\bar{\rho}_{E,p}\sim\bar{\rho}_{f,\omega}\) for \(p\gg 0\), where \(\bar{\rho}_{f,\omega}\) denotes the residual Galois representation attached to \(f,\omega\). By allowing \(p\) to be sufficiently large, we can assume \(\mathbb{Q}_{f}=\mathbb{Q}\) (cf. [14, §4] for more details).
Let \(\mathfrak{P}\in S_{K}\). Then \(E\) has potentially multiplicative reduction at \(\mathfrak{P}\) and \(p|\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})\) for \(p\gg 0\) (cf. Lemma 3.10). The existence of \(E_{f}\) then follows from [14, Corollary 2.2] for all \(p\gg 0\), after excluding primes \(p\) with \(p\mid(\operatorname{Norm}_{K/\mathbb{Q}}(\mathfrak{P})\pm 1)\). Therefore, \(\bar{\rho}_{E,p}\sim\bar{\rho}_{E_{f},p}\) for some elliptic curve \(E_{f}\) with conductor \(\mathfrak{n}_{p}\) for \(p>V=V_{K,B,C}\), where \(V_{K,B,C}\) is the maximum of all the above implicit/explicit lower bounds.
* Since the conductor of \(E_{f}\) is \(\mathfrak{n}_{p}\) given in (3.5), \(E_{f}\) has good reduction away from \(S^{\prime}_{K}\). Now, arguing as in [10, page 1247], we can enlarge the constant \(V\) and by possibly replacing \(E_{f}\) with an isogenous curve, say \(E^{\prime}\), we get \(E^{\prime}/K\) has a non-trivial \(2\)-torsion point. Since \(E_{f}\sim E^{\prime}\), \(E^{\prime}\) has good reduction away from \(S^{\prime}_{K}\).
* Since \(E_{f}\) is isogenous to \(E^{\prime}\), \(\bar{\rho}_{E,p}\sim\bar{\rho}_{E_{f},p}\) implies \(\bar{\rho}_{E,p}\sim\bar{\rho}_{E^{\prime},p}\). As a result, we obtain \(p|\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})=\#\bar{\rho}_{E^{\prime},p}(I_{\mathfrak{P}})\) for any \(\mathfrak{P}\in S_{K}\). Finally, by Lemma 3.7, we have \(v_{\mathfrak{P}}(j_{E^{\prime}})<0\) for any \(\mathfrak{P}\in S_{K}\).
This completes the proof of the theorem.
We now prove Theorem 3.3; the argument is inspired by that of [10, Theorem 3].
Proof of Theorem 3.3.: Suppose \((a,b,c)\in W_{K}\) is a solution to the equation (3.1) with exponent \(p>V\) of Type I or Type II, where \(V=V_{K,B,C}\) be the constant as in Theorem 3.11. By Theorem 3.11, there exists an elliptic curve \(E^{\prime}/K\) having a non-trivial \(2\)-torsion point and good reduction away from \(S^{\prime}_{K}\). Then the elliptic curve \(E^{\prime}/K\) has a model of the form
\[E^{\prime}:y^{2}=x^{3}+cx^{2}+dx \tag{3.6}\]
for some \(c,d\in K\) with \(j\)-invariant \(j_{E^{\prime}}=2^{8}\frac{(c^{2}-3d)^{3}}{d^{2}(c^{2}-4d)}\). Since \(E^{\prime}\) has good reduction away from \(S^{\prime}_{K}\), \(j_{E^{\prime}}\in\mathcal{O}_{S^{\prime}_{K}}\).
Take \(\lambda:=\frac{c^{2}}{d}\) and \(\mu:=\lambda-4\in\mathcal{O}^{*}_{S^{\prime}_{K}}\) (cf. [10, Lemma 16(i)]). By [10, Lemma 17(i)], we get \(\lambda\mathcal{O}_{K}=I^{2}J\) for some fractional ideal \(I\) and \(S^{\prime}_{K}\)-ideal \(J\). Since \(J\) is an \(S^{\prime}_{K}\)-ideal,
\(1=[I]^{2}\in\operatorname{Cl}_{S^{\prime}_{K}}(K)\). By hypothesis \(\operatorname{Cl}_{S^{\prime}_{K}}(K)[2]=1\) which gives \(I=\gamma I_{1}\) for some \(\gamma\in\mathcal{O}_{K}\) and \(S^{\prime}_{K}\)-ideal \(I_{1}\). Thus, \(\lambda\mathcal{O}_{K}=\gamma^{2}I_{1}^{2}J\) and hence \((\frac{\lambda}{\gamma^{2}})\mathcal{O}_{K}\) is an \(S^{\prime}_{K}\)-ideal. Therefore, \(u=\frac{\lambda}{\gamma^{2}}\in\mathcal{O}^{*}_{S^{\prime}_{K}}\). Now, divide the equation \(\mu+4=\lambda\) by \(u\) to obtain \(\alpha+\beta=\gamma^{2}\), where \(\alpha=\frac{\mu}{u}\in\mathcal{O}^{*}_{S^{\prime}_{K}}\) and \(\beta=\frac{4}{u}\in\mathcal{O}^{*}_{S^{\prime}_{K}}\), which implies \(\alpha\beta^{-1}=\frac{\mu}{4}\). By (3.2), there exists \(\mathfrak{P}\in S_{K}\) with \(|v_{\mathfrak{P}}(\alpha\beta^{-1})|=|v_{\mathfrak{P}}(\frac{\mu}{4})|\leq 6v_ {\mathfrak{P}}(2)\). This means
\[-4v_{\mathfrak{P}}(2)\leq v_{\mathfrak{P}}(\mu)\leq 8v_{\mathfrak{P}}(2) \tag{3.7}\]
We now show that the bounds on \(v_{\mathfrak{P}}(\mu)\) imply that \(v_{\mathfrak{P}}(j_{E^{\prime}})\geq 0\). Writing \(j_{E^{\prime}}\) in terms of \(\mu\) yields \(j_{E^{\prime}}=2^{8}\frac{(\mu+1)^{3}}{\mu}\), which means \(v_{\mathfrak{P}}(j_{E^{\prime}})=8v_{\mathfrak{P}}(2)+3v_{\mathfrak{P}}(\mu+1)-v_{\mathfrak{P}}(\mu)\).
* If \(v_{\mathfrak{P}}(\mu)<0\), then \(v_{\mathfrak{P}}(\mu+1)=v_{\mathfrak{P}}(\mu)\). By (3.7), we get \(v_{\mathfrak{P}}(j_{E^{\prime}})\geq 0\).
* If \(v_{\mathfrak{P}}(\mu)=0\), then \(v_{\mathfrak{P}}(\mu+1)\geq 0\), hence \(v_{\mathfrak{P}}(j_{E^{\prime}})\geq 8v_{\mathfrak{P}}(2)\geq 0\).
* If \(v_{\mathfrak{P}}(\mu)>0\), then \(v_{\mathfrak{P}}(\mu+1)=0\). By (3.7), we get \(v_{\mathfrak{P}}(j_{E^{\prime}})=8v_{\mathfrak{P}}(2)-v_{\mathfrak{P}}(\mu)\geq 0\).
In all cases, we get \(v_{\mathfrak{P}}(j_{E^{\prime}})\geq 0\), which is a contradiction to Theorem 3.11. This completes the proof of the theorem.
Proof of Proposition 3.4.: By Theorem 3.3, it suffices to show that for every solution \((\alpha,\beta,\gamma)\in\mathcal{O}^{*}_{S_{K}}\times\mathcal{O}^{*}_{S_{K}} \times\mathcal{O}_{S_{K}}\) to the equation \(\alpha+\beta=\gamma^{2}\), there exists \(\mathfrak{P}\in S_{K}\) such that \(|v_{\mathfrak{P}}(\alpha\beta^{-1})|\leq 6v_{\mathfrak{P}}(2)\). Let \(\mathfrak{P}\in S_{K}\). If necessary, by scaling even powers of \(\mathfrak{P}\) and swapping \(\alpha,\beta\), we can assume \(0\leq v_{\mathfrak{P}}(\beta)\leq v_{\mathfrak{P}}(\alpha)\) with \(v_{\mathfrak{P}}(\beta)=0\) or \(1\).
1. Suppose \(v_{\mathfrak{P}}(\beta)=1\) for some \(\mathfrak{P}\in S_{K}\). If \(v_{\mathfrak{P}}(\alpha)>6v_{\mathfrak{P}}(2)>1\) then \(v_{\mathfrak{P}}(\gamma^{2})=v_{\mathfrak{P}}(\alpha+\beta)=v_{\mathfrak{P}}( \beta)=1\), which cannot occur since \(v_{\mathfrak{P}}(\gamma^{2})\) is even. The inequality \(v_{\mathfrak{P}}(\alpha)\leq 6v_{\mathfrak{P}}(2)\) implies \(\left|v_{\mathfrak{P}}(\alpha\beta^{-1})\right|\leq 6v_{\mathfrak{P}}(2)-1<6v_{ \mathfrak{P}}(2)\).
2. Suppose \(v_{\mathfrak{q}}(\beta)=0\) for all \(\mathfrak{q}\in S_{K}\), i.e., \(\beta\) is a unit in \(K\). * If \(\beta\) is a square, then divide the equation \(\alpha+\beta=\gamma^{2}\) by \(\beta\) to obtain an equation of the form \(\alpha^{\prime}+1=\gamma^{\prime 2}\), where \(\alpha^{\prime}=\alpha\beta^{-1}\in\mathcal{O}^{*}_{S^{\prime}_{K}}\) and \(\gamma^{\prime}\in\mathcal{O}_{S^{\prime}_{K}}\). By (3.3), we obtain \(|v_{\mathfrak{P}}(\alpha\beta^{-1})|=|v_{\mathfrak{P}}(\alpha^{\prime})|\leq 6v_{\mathfrak{P}}(2)\) for some \(\mathfrak{P}\in S_{K}\). * Suppose \(\beta\) is not a square. If \(v_{\mathfrak{P}}(\alpha)\leq 6v_{\mathfrak{P}}(2)\) for some \(\mathfrak{P}\in S_{K}\), then we are done. Otherwise, \(v_{\mathfrak{q}}(\alpha)>6v_{\mathfrak{q}}(2)>1\) for all \(\mathfrak{q}\in S_{K}\). This gives \(\alpha\equiv 0\pmod{2^{6}}\) and \(\gamma^{2}=\alpha+\beta\equiv\beta\pmod{2^{6}}\). Since \(v_{\mathfrak{q}}(\gamma^{2})=v_{\mathfrak{q}}(\alpha+\beta)=0\) for all \(\mathfrak{q}\in S_{K}\) and \(S_{K}=S^{\prime}_{K}\), we get \(\gamma\in\mathcal{O}_{K}\). Now, consider the field \(L=K(\theta)\), where \(\theta:=\frac{\gamma-\sqrt{\beta}}{2}\). The minimal polynomial of \(\theta\) is \(m_{\theta}(x)=x^{2}-\gamma x+\frac{\gamma^{2}-\beta}{4}\). Then \(m_{\theta}(x)\in\mathcal{O}_{K}[x]\) with discriminant \(\beta\). Therefore, \(L\) is an everywhere unramified extension of degree \(2\) over \(K\), implying \(2|h_{K}\), which contradicts our hypothesis that \(2\nmid h_{K}\).
This completes the proof of the proposition.
## 4. Solutions of the Diophantine equation \(x^{2}=By^{p}+2^{r}z^{p}\) over \(K\)
In this section, we examine the solutions of the Diophantine equation (3.1) with exponent \(p\) of Type II, i.e., \(x^{2}=By^{p}+2^{r}z^{p}\) over \(K\). Here, \(S^{\prime}_{K}=\{\mathfrak{P}\in P:\mathfrak{P}|2B\}\). Let \(h_{K}^{+}\) be the narrow class number of \(K\). We follow the notation of §3.
### Main result
We write \((ES)\) for "either \([K:\mathbb{Q}]\equiv 1\pmod{2}\) or Conjecture 2.1 holds for \(K\)". For \(r\in\mathbb{N}\), let \(S_{r}:=\{(\pm\sqrt{2^{r}+B},1,1),\ (\pm\sqrt{2^{r}-B},-1,1),\ (\pm\sqrt{-2^{r}+B},1,-1),\ (\pm\sqrt{-2^{r}-B},-1,-1)\}\). For \(r\in\{1,2,4,5\}\), we show that the equation (3.1) with exponent \(p\) of Type II has no asymptotic solution in \(\mathcal{O}^{3}_{K}\setminus S_{r}\). More precisely;
**Theorem 4.1**.: _Let \(K\) be a totally real field satisfying \((ES)\) with \(\operatorname{Cl}_{S^{\prime}_{K}}(K)[2]=1\). Suppose for every solution \((\alpha,\beta,\gamma)\in\mathcal{O}^{*}_{S^{\prime}_{K}}\times\mathcal{O}^{*}_{S^{ \prime}_{K}}\times\mathcal{O}_{S^{\prime}_{K}}\) to \(\alpha+\beta=\gamma^{2}\), there exists \(\mathfrak{P}\in U_{K}\) that satisfies_
\[\left|v_{\mathfrak{P}}(\alpha\beta^{-1})\right|\leq 6v_{\mathfrak{P}}(2)\text{ and }v_{ \mathfrak{P}}\left(\alpha\beta^{-1}\right)\equiv 0\pmod{3}. \tag{4.1}\]
_Then for \(r\in\{1,2,4,5\}\), the Diophantine equation (3.1) with exponent \(p\) of Type II has no asymptotic solution in \(\mathcal{O}_{K}^{3}\setminus S_{r}\)._
The following proposition is a consequence of Theorem 4.1 and will be relevant in §6.2.
**Proposition 4.2**.: _Let \(K\) be a field satisfying \((ES)\) with degree \(n>1\) and \(2\nmid h_{K}^{+}\). Assume \(B=\pm 1\), and \(2\) is inert in \(K\). Suppose for every solution \((\alpha,\gamma)\in\mathcal{O}_{S_{K}}^{*}\times\mathcal{O}_{S_{K}}\) to \(\alpha+1=\gamma^{2}\), there exists \(\mathfrak{P}\in U_{K}\) that satisfies_
\[|v_{\mathfrak{P}}(\alpha)|\leq 6v_{\mathfrak{P}}(2)\text{ and }v_{\mathfrak{P}} \left(\alpha\right)\equiv 0\pmod{3}. \tag{4.2}\]
_Then for \(r\in\{1,2,4,5\}\), the Diophantine equation (3.1) with exponent \(p\) of Type II has no asymptotic solution in \(\mathcal{O}_{K}^{3}\setminus S_{r}\). In particular, if \([K:\mathbb{Q}]\) is odd, then \(S_{1}=\{(\pm 1,-1,1)\}\) (resp., \(\{(\pm 1,1,1)\}\)) for \(B=1\) (resp., \(B=-1\)), and \(S_{r}=\emptyset\) for \(r=2,4,5\)._
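The last assertion can be checked by elementary arithmetic: when \([K:\mathbb{Q}]\) is odd, \(K\) contains no quadratic subfield, so the square roots appearing in \(S_{r}\) lie in \(K\) only if the corresponding integers \(B(\pm 1)+2^{r}(\pm 1)\) are perfect squares in \(\mathbb{Z}\). The following small enumeration (a sketch in Python, purely illustrative and not part of the proof) recovers the sets listed above.

```python
from math import isqrt

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

for B in (1, -1):
    for r in (1, 2, 4, 5):
        S_r = []
        for b, c in [(1, 1), (-1, 1), (1, -1), (-1, -1)]:
            m = B * b + 2 ** r * c          # a^2 = B b^p + 2^r c^p with b, c = ±1 and p odd
            if is_square(m) and m > 0:      # m = 0 would only give the trivial solution a = 0
                S_r += [(isqrt(m), b, c), (-isqrt(m), b, c)]
        print(f"B = {B:2d}, r = {r}:  S_r = {S_r}")
# Output: S_1 = {(±1,-1,1)} for B = 1, S_1 = {(±1,1,1)} for B = -1, and S_r empty for r = 2, 4, 5.
```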
### Steps to prove Theorem 4.1
We now prove the modularity of the Frey curve \(E:=E_{a,b,c}\) in (3.4) associated to any non-trivial primitive solution \((a,b,c)\in\mathcal{O}_{K}^{3}\setminus S_{r}\).
**Theorem 4.3**.: _Let \((a,b,c)\in\mathcal{O}_{K}^{3}\setminus S_{r}\) be a non-trivial primitive solution to the equation (3.1) with exponent \(p\) of Type II, and let \(E:=E_{a,b,c}\) be the associated Frey curve. Then, there exists a constant \(A=A_{K,B}\) (depending on \(K,B\)) such that for primes \(p>A\), \(E/K\) is modular._
Proof.: Arguing as in the proof of Theorem 3.5, there exist \(\lambda_{1},\lambda_{2},...,\lambda_{m}\in K\) such that \(E/K\) is modular for all \(\lambda(E)\notin\{\lambda_{1},\lambda_{2},...,\lambda_{m}\}\). If \(\lambda(E)=\lambda_{k}\) for some \(k\in\{1,2,\ldots,m\}\), then \(\left(\frac{b}{c}\right)^{p}=\frac{-2^{r}\lambda_{k}}{B}\). The above equation determines \(p\) uniquely; denote it by \(p_{k}\). Otherwise, we get \(b=\pm c\). Since \(a^{2}=Bb^{p}+2^{r}c^{p}\) and \((a,b,c)\) is primitive, we get \(b=\pm 1\) and \(c=\pm 1\), hence \((a,b,c)\in S_{r}\), which is not possible. Finally, the proof of the theorem follows by taking \(A=A_{K,B}=\max\{p_{1},...,p_{m}\}\).
#### 4.2.1. Type of reduction with image of inertia
The following lemma specifies the type of reduction of the Frey curve \(E:=E_{a,b,c}\) given in (3.4) at \(\mathfrak{P}\in U_{K}\), when \((a,b,c)\in\mathcal{O}_{K}^{3}\) and \(r\in\{1,2,4,5\}\). More precisely;
**Lemma 4.4**.: _Let \(r\in\{1,2,4,5\}\). Let \((a,b,c)\in\mathcal{O}_{K}^{3}\) be a non-trivial primitive solution to the equation (3.1) with exponent \(p>(6+r)v_{\mathfrak{P}}(2)\) of Type II, and let \(E\) be the associated Frey curve. If \(\mathfrak{P}\in U_{K}\), then either \(p|\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})\) or \(3|\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})\)._
Proof.: Recall that \(\Delta_{E}=2^{r+6}B^{2}(b^{2}c)^{p}\) and \(j_{E}=2^{6-r}\frac{(Bb^{p}+2^{r+2}c^{p})^{3}}{B^{2}(b^{2}c)^{p}}\). If \(\mathfrak{P}|bc\), then \(p|\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})\) by Lemma 3.10 and due to the fact that \(p>(6+r)v_{\mathfrak{P}}(2)\). If \(\mathfrak{P}\nmid bc\), then \(v_{\mathfrak{P}}(j_{E})=(6-r)v_{\mathfrak{P}}(2)\) and \(v_{\mathfrak{P}}(\Delta_{E})=(6+r)v_{\mathfrak{P}}(2)\). Since \(\mathfrak{P}\in U_{K}\) and \(r\in\{1,2,4,5\}\), we get \(v_{\mathfrak{P}}(j_{E})\geq 0\) and \(3\nmid v_{\mathfrak{P}}(\Delta_{E})\). Hence, by Lemma 3.8, we get \(3|\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})\).
### Proof of Theorem 4.1
The proof of this theorem depends on the following result.
**Theorem 4.5**.: _Let \(K\) be a totally real field satisfying \((ES)\), and let \(r\in\{1,2,4,5\}\). Then, there is a constant \(V=V_{K,B}>0\) (depending on \(K,B\)) such that the following hold. Let \((a,b,c)\in\mathcal{O}_{K}^{3}\setminus S_{r}\) be a non-trivial primitive solution to the equation (3.1) with exponent \(p>V\) of Type II, and let \(E\) be the Frey curve as in (3.4). Then there exists an elliptic curve \(E^{\prime}/K\) such that:_
1. \(E^{\prime}/K\) _has good reduction away from_ \(S^{\prime}_{K}\) _and has a non-trivial_ \(2\)_-torsion point, and_ \(\bar{\rho}_{E,p}\sim\bar{\rho}_{E^{\prime},p}\)_;_
2. _For_ \(\mathfrak{P}\in U_{K}\)_, either_ \(v_{\mathfrak{P}}(j_{E^{\prime}})<0\) _or_ \(3\nmid v_{\mathfrak{P}}(j_{E^{\prime}})\)_._
Proof.: Arguing as in the proof of Theorem 3.11, the first part of Theorem 4.5 follows from [11, Theorem 8], Theorem 3.5, Theorem 4.3 and Lemma 4.4. Let \(\mathfrak{P}\in U_{K}\). If \(p|\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})=\#\bar{\rho}_{E^{\prime},p}(I_{ \mathfrak{P}})\), then by Lemma 3.7, we get \(v_{\mathfrak{P}}(j_{E^{\prime}})<0\). If \(p\nmid\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})\), then by Lemma 4.4, we conclude that \(3|\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})=\#\bar{\rho}_{E^{\prime},p}(I_{\mathfrak{ P}})\). If \(v_{\mathfrak{P}}(j_{E^{\prime}})<0\), then we
are done. If \(v_{\mathfrak{P}}(j_{E^{\prime}})\geq 0\), then by Lemma 3.8, we have \(3\nmid v_{\mathfrak{P}}(\Delta_{E^{\prime}})\). Since \(j_{E^{\prime}}=\frac{c_{4}^{3}}{\Delta_{E^{\prime}}}\) and \(3\nmid v_{\mathfrak{P}}(\Delta_{E^{\prime}})\), we get \(3\nmid v_{\mathfrak{P}}(j_{E^{\prime}})\). This completes the proof of the theorem.
Proof of Theorem 4.1.: Let \(r\in\{1,2,4,5\}\). Let \((a,b,c)\in\mathcal{O}_{K}^{3}\setminus S_{r}\) be a non-trivial primitive solution to the equation (3.1) with exponent \(p>V\) of Type II, where \(V=V_{K,B}\) be the constant as in Theorem 4.5. By Theorem 4.5, there exists an elliptic curve \(E^{\prime}/K\) having a non-trivial \(2\)-torsion point and good reduction away from \(S^{\prime}_{K}\). By (4.1), there exists some \(\mathfrak{P}\in U_{K}\) that satisfies \(|v_{\mathfrak{P}}(\alpha\beta^{-1})|\leq 6v_{\mathfrak{P}}(2)\) and \(v_{\mathfrak{P}}(\alpha\beta^{-1})\equiv 0\pmod{3}\).
Now, arguing as in the proof of Theorem 3.3, we find \(v_{\mathfrak{P}}(j_{E^{\prime}})\geq 0\) by using \(|v_{\mathfrak{P}}(\alpha\beta^{-1})|\leq 6v_{\mathfrak{P}}(2)\). Recall that \(j_{E^{\prime}}=2^{8}\frac{(1+\mu)^{3}}{\mu}\) with \(\frac{\mu}{4}=\alpha\beta^{-1}\). This implies \(v_{\mathfrak{P}}(j_{E^{\prime}})\equiv 2v_{\mathfrak{P}}(2)-v_{\mathfrak{P}}( \mu)=-v_{\mathfrak{P}}(\alpha\beta^{-1})\pmod{3}\). Since \(v_{\mathfrak{P}}(\alpha\beta^{-1})\equiv 0\pmod{3}\), \(v_{\mathfrak{P}}(j_{E^{\prime}})\equiv 0\pmod{3}\) and hence \(3|v_{\mathfrak{P}}(j_{E^{\prime}})\). Therefore, \(v_{\mathfrak{P}}(j_{E^{\prime}})\geq 0\) and \(3|v_{\mathfrak{P}}(j_{E^{\prime}})\), which contradicts Theorem 4.5. This completes the proof of the theorem.
Proof of Proposition 4.2.: Since \(B=\pm 1\), and \(2\) is inert in \(K\), we find \(S^{\prime}_{K}=S_{K}\) is principal. Let \(\mathfrak{P}\in U_{K}\) be the unique prime lying above \(2\). Let \((\alpha,\beta,\gamma)\in\mathcal{O}_{S_{K}}^{*}\times\mathcal{O}_{S_{K}}^{*} \times\mathcal{O}_{S_{K}}\) be a solution to \(\alpha+\beta=\gamma^{2}\). According to Theorem 4.1, it suffices to show that \(|v_{\mathfrak{P}}(\alpha\beta^{-1})|\leq 6v_{\mathfrak{P}}(2)\) and \(v_{\mathfrak{P}}(\alpha\beta^{-1})\equiv 0\pmod{3}\). If necessary by scaling even powers of \(\mathfrak{P}\) and swapping \(\alpha,\beta\), we can assume \(0\leq v_{\mathfrak{P}}(\beta)\leq v_{\mathfrak{P}}(\alpha)\) with \(v_{\mathfrak{P}}(\beta)=0\) or \(1\).
1. Suppose \(v_{\mathfrak{P}}(\beta)=1\). If \(v_{\mathfrak{P}}(\alpha)>1\), then \(v_{\mathfrak{P}}(\gamma^{2})=v_{\mathfrak{P}}(\alpha+\beta)=v_{\mathfrak{P}}( \beta)=1\), which cannot occur because \(v_{\mathfrak{P}}(\gamma^{2})\) is even. As a result, \(v_{\mathfrak{P}}(\alpha)=1\). Thus \(|v_{\mathfrak{P}}(\alpha\beta^{-1})|=0<6v_{\mathfrak{P}}(2)\) and \(v_{\mathfrak{P}}(\alpha\beta^{-1})\equiv 0\pmod{3}\).
2. Suppose \(v_{\mathfrak{P}}(\beta)=0\), i.e., \(\beta\) is a unit in \(K\). * Assume \(\beta\) is a square. Then divide the equation \(\alpha+\beta=\gamma^{2}\) by \(\beta\) to obtain an equation of the form \(\alpha^{\prime}+1=\gamma^{\prime 2}\), where \(\alpha^{\prime}=\alpha\beta^{-1}\in\mathcal{O}_{S_{K}}^{*}\), \(\gamma^{\prime}\in\mathcal{O}_{S_{K}}\). Using (4.2), we get \(|v_{\mathfrak{P}}(\alpha\beta^{-1})|\leq 6v_{\mathfrak{P}}(2)\) and \(v_{\mathfrak{P}}(\alpha\beta^{-1})\equiv 0\pmod{3}\). * Assume \(\beta\) is not a square. Consider the field \(L=K(\sqrt{\beta})\). The minimal polynomial of \(\sqrt{\beta}\) is \(m_{\beta}(x)=x^{2}-\beta\). Then \(m_{\beta}(x)\in\mathcal{O}_{K}[x]\) with discriminant \(4\beta\). Hence \(L\) is unramified away from \(2\). By [13, Theorem 9(b)], we conclude that \(2\) is totally ramified in \(K\), which contradicts \(2\) being inert in \(K\).
This completes the proof of the proposition.
## 5. Solutions of the Diophantine equation \(2x^{2}=By^{p}+2^{r}z^{p}\) over \(K\)
In this section, we study the solutions of the following Diophantine equation
\[2x^{2}=By^{p}+2^{r}z^{p}, \tag{5.1}\]
with exponent \(p\geq 3\), \(B\) is an odd integer and \(r\in\mathbb{N}\). Here, \(S^{\prime}_{K}=\{\mathfrak{P}\in P:\mathfrak{P}|2B\}\).
**Definition 5.1** (Trivial solution).: _We call a solution \((a,b,c)\in\mathcal{O}_{K}^{3}\) to the equation (5.1) with exponent \(p\) trivial if \(abc=0\), and non-trivial otherwise._
### Main result
We now show that the equation (5.1) with exponent \(p\) has no asymptotic solution in \(\mathcal{O}_{K}^{3}\). More specifically;
**Theorem 5.2**.: _Let \(K\) be a totally real field with \(\operatorname{Cl}_{S^{\prime}_{K}}(K)[2]=1\). Suppose for every solution \((\alpha,\beta,\gamma)\in\mathcal{O}_{S^{\prime}_{K}}^{*}\times\mathcal{O}_{S^{ \prime}_{K}}^{*}\times\mathcal{O}_{S^{\prime}_{K}}\) to \(\alpha+\beta=\gamma^{2}\), there exists \(\mathfrak{P}\in S_{K}\) that satisfies_
\[\big{|}v_{\mathfrak{P}}\left(\alpha\beta^{-1}\right)\big{|}\leq 6v_{\mathfrak{P}}(2). \tag{5.2}\]
_Then, the Diophantine equation (5.1) with exponent \(p\) has no asymptotic solution in \(\mathcal{O}_{K}^{3}\)._
The following proposition is a consequence of Theorem 5.2, which will be useful in §6.1.
**Proposition 5.3**.: _Let \(K\) be a field with \(S^{\prime}_{K}=S_{K}\) is principal and \(2\nmid h_{K}\). Suppose for every solution \((\alpha,\gamma)\in\mathcal{O}^{*}_{S^{\prime}_{K}}\times\mathcal{O}_{S^{\prime}_ {K}}\) to \(\alpha+1=\gamma^{2}\), there exists \(\mathfrak{P}\in S_{K}\) that satisfies_
\[|v_{\mathfrak{P}}(\alpha)|\leq 6v_{\mathfrak{P}}(2). \tag{5.3}\]
_Then, the Diophantine equation (5.1) with exponent \(p\) has no asymptotic solution in \(\mathcal{O}^{3}_{K}\)._
**Remark 5.4**.: _There are no non-trivial primitive solutions \((a,b,c)\in\mathcal{O}^{3}_{K}\) to the equation \(2x^{2}=By^{p}+Cz^{p}\) with exponent \(p>[K:\mathbb{Q}]\) such that \(\mathfrak{P}|bc\) for every \(\mathfrak{P}\in S_{K}\), where \(B,C\) are odd integers. If not, let \((a,b,c)\) be such a solution. Since \(B,C\) are odd, \(\mathfrak{P}\) divides both \(b\) and \(c\), which implies \(\mathfrak{P}^{p}|Bb^{p}+Cc^{p}=2a^{2}\). Since \(p>[K:\mathbb{Q}]\), we have \(\mathfrak{P}|a\), which contradicts the primitivity of \((a,b,c)\)._
### Steps to prove Theorem 5.2
For any non-trivial and primitive solution \((a,b,c)\in\mathcal{O}^{3}_{K}\) to the equation (5.1) with exponent \(p\), the Frey curve \(E:=E_{a,b,c}\) is given by
\[E=E_{a,b,c}:Y^{2}=X(X^{2}-4aX+2Bb^{p}), \tag{5.4}\]
with \(c_{4}=2^{5}(Bb^{p}+2^{r+2}c^{p}),\ \Delta_{E}=2^{9+r}B^{2}(b^{2}c)^{p}\) and \(j_{E}=2^{6-r}\frac{(Bb^{p}+2^{r+2}c^{p})^{3}}{B^{2}(b^{2}c)^{p}}\).
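As with (3.4), the stated invariants of (5.4) can be verified symbolically from the standard Weierstrass formulas. Below is a small sanity check, a sketch assuming Python with sympy (not part of the argument), where \(u=Bb^{p}\) and \(w=2^{r}c^{p}\), so that \(2a^{2}=u+w\) on the curve.

```python
from sympy import symbols, simplify

a, u, w = symbols('a u w')               # u = B*b^p, w = 2^r*c^p, with 2a^2 = u + w

# Equation (5.4): Y^2 = X^3 - 4a X^2 + 2u X, i.e. a2 = -4a, a4 = 2u, a6 = 0
a2, a4 = -4*a, 2*u
b2, b4, b8 = 4*a2, 2*a4, -a4**2          # b6 = 0 since a3 = a6 = 0
c4 = b2**2 - 24*b4
Delta = -b2**2*b8 - 8*b4**3

on_curve = {a**2: (u + w)/2}                          # impose 2 a^2 = B b^p + 2^r c^p

print(simplify(c4.subs(on_curve) - 32*(u + 4*w)))     # 0 : c4 = 2^5 (B b^p + 2^{r+2} c^p)
print(simplify(Delta.subs(on_curve) - 512*u**2*w))    # 0 : Delta = 2^{9+r} B^2 (b^2 c)^p
```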
### Modularity of the Frey curve
We now prove the modularity of the Frey curve \(E:=E_{a,b,c}\) in (5.4), which is analogous to that of Theorem 3.5.
**Theorem 5.5**.: _Let \((a,b,c)\in\mathcal{O}^{3}_{K}\) be a non-trivial primitive solution to the equation (5.1) with exponent \(p>[K:\mathbb{Q}]\), and let \(E:=E_{a,b,c}\) be the associated Frey curve. Then, there exists a constant \(A=A_{K,B,r}\) (depending on \(K,B,r\)) such that for primes \(p>A\), \(E/K\) is modular._
Proof.: Arguing as in the proof of Theorem 3.5, there exist \(\lambda_{1},\lambda_{2},...,\lambda_{m}\in K\) such that \(E/K\) is modular for all \(\lambda(E)\notin\{\lambda_{1},\lambda_{2},...,\lambda_{m}\}\). If \(\lambda(E)=\lambda_{k}\) for some \(k\in\{1,2,\ldots,m\}\), then \(\left(\frac{b}{c}\right)^{p}=\frac{-2^{r}\lambda_{k}}{B}\). The above equation determines \(p\) uniquely; denote it by \(p_{k}\). Otherwise, since \(K\) is totally real, we get \(b=\pm c\). Since \(2a^{2}=Bb^{p}+2^{r}c^{p}\) for some \(r\in\mathbb{N}\) and \(B\) is odd, we get \(\mathfrak{P}|b\) for any \(\mathfrak{P}\in S_{K}\). Since \(b=\pm c\), \(\mathfrak{P}^{p}|Bb^{p}+2^{r}c^{p}=2a^{2}\). As \(p>[K:\mathbb{Q}]\), we get \(\mathfrak{P}|a\), which contradicts the primitivity of \((a,b,c)\). Finally, the proof of the theorem follows by taking \(A=\max\{p_{1},...,p_{m},[K:\mathbb{Q}]+1\}\).
### Reduction type
The following lemma describes the type of reduction of the Frey curve \(E:=E_{a,b,c}\) in (5.4) at primes \(\mathfrak{q}\) away from \(S^{\prime}_{K}\).
**Lemma 5.6**.: _Let \((a,b,c)\in\mathcal{O}^{3}_{K}\) be a non-trivial primitive solution to the equation (5.1) with exponent \(p\), and let \(E\) be the associated Frey curve. Then at all primes \(\mathfrak{q}\notin S^{\prime}_{K}\), \(E\) is minimal, semi-stable at \(\mathfrak{q}\) and satisfies \(p|v_{\mathfrak{q}}(\Delta_{E})\). Let \(\mathfrak{n}\) be the conductor of \(E\) and \(\mathfrak{n}_{p}\) be as in (2.1). Then,_
\[\mathfrak{n}=\prod_{\mathfrak{P}\in S^{\prime}_{K}}\mathfrak{P}^{r_{\mathfrak{P }}}\prod_{\mathfrak{q}|bc,\ \mathfrak{q}\notin S^{\prime}_{K}}\mathfrak{q},\ \mathfrak{n}_{p}=\prod_{\mathfrak{P}\in S^{\prime}_{K}} \mathfrak{P}^{r^{\prime}_{\mathfrak{P}}}, \tag{5.5}\]
_where \(0\leq r^{\prime}_{\mathfrak{P}}\leq r_{\mathfrak{P}}\) with \(r_{\mathfrak{P}}\leq 2+6v_{\mathfrak{P}}(2)\) for \(\mathfrak{P}|2\), and \(r_{\mathfrak{P}}\leq 2+3v_{\mathfrak{P}}(3)\) for \(\mathfrak{P}\nmid 2\)._
Proof.: The proof of Lemma 5.6 is similar to that of Lemma 3.6, except that here \(c_{4}=2^{5}(Bb^{p}+2^{r+2}c^{p})\) and \(\Delta_{E}=2^{9+r}B^{2}(b^{2}c)^{p}\).
#### 5.4.1. Type of reduction with image of inertia
The following lemma, whose proof is similar to that of Lemma 3.9, specifies the type of reduction of \(E_{a,b,c}\) at primes \(\mathfrak{q}\nmid 2pB\).
**Lemma 5.7**.: _Let \((a,b,c)\in\mathcal{O}^{3}_{K}\) be a non-trivial primitive solution to the equation (5.1) with exponent \(p>5\), and let \(E\) be the associated Frey curve. Suppose \(\mathfrak{q}\in P\) with \(\mathfrak{q}\nmid 2pB\). Then \(p\nmid\#\bar{\rho}_{E,p}(I_{\mathfrak{q}})\)._
We will now discuss the type of reduction of the Frey curve \(E_{a,b,c}\) at \(\mathfrak{P}\in S_{K}\).
**Lemma 5.8**.: _Let \((a,b,c)\in\mathcal{O}_{K}^{3}\) be a non-trivial primitive solution to the equation (5.1) with exponent \(p>\max\{(6+r)v_{\mathfrak{P}}(2),\ [K:\mathbb{Q}]\}\), and let \(E\) be the associated Frey curve. For \(\mathfrak{P}\in S_{K}\), we have \(v_{\mathfrak{P}}(j_{E})<0\) and \(p\nmid v_{\mathfrak{P}}(j_{E})\), equivalently \(p|\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})\)._
Proof.: Since \((a,b,c)\) is a solution, we get \(\mathfrak{P}|b\). As observed earlier, since \(p>[K:\mathbb{Q}]\), \(\mathfrak{P}\nmid c\). Recall \(j_{E}=2^{6-r}\frac{(Bb^{p}+2^{r+2}c^{p})^{3}}{B^{2}(b^{2}c)^{p}}\). Since \(p>(6+r)v_{\mathfrak{P}}(2)\), \(v_{\mathfrak{P}}(j_{E})=(12+2r)v_{\mathfrak{P}}(2)-2pv_{\mathfrak{P}}(b)<0\) and \(p\nmid v_{\mathfrak{P}}(j_{E})\). Hence, by Lemma 3.7, we get \(p|\#\bar{\rho}_{E,p}(I_{\mathfrak{P}})\).
### Proof of Theorem 5.2
The proof of this theorem depends on the following result.
**Theorem 5.9**.: _Let \(K\) be a totally real field. Then, there is a constant \(V=V_{K,B,r}>0\) (depending on \(K,B,r\)) such that the following hold. Let \((a,b,c)\in\mathcal{O}_{K}^{3}\) be a non-trivial primitive solution to the equation (5.1) with exponent \(p>V\), and let \(E\) be the Frey curve as in (5.4). Then, there exists an elliptic curve \(E^{\prime}/K\) such that:_
1. \(E^{\prime}/K\) _has good reduction away from_ \(S^{\prime}_{K}\) _and has a non-trivial_ \(2\)_-torsion point;_
2. \(\bar{\rho}_{E,p}\sim\bar{\rho}_{E^{\prime},p}\)_, and_ \(v_{\mathfrak{P}}(j_{E^{\prime}})<0\) _for_ \(\mathfrak{P}\in S_{K}\)_._
Proof.: Arguing as in the proof of Theorem 3.11, the proof of Theorem 5.9 follows from Theorem 5.5, Lemma 5.6, and Lemma 5.8.
Proof of Theorem 5.2.: Suppose \((a,b,c)\in\mathcal{O}_{K}^{3}\) is a non-trivial primitive solution to the equation (5.1) with exponent \(p>V\), where \(V=V_{K,B,r}\) be the constant as in Theorem 5.9. By Theorem 5.9, there exists an elliptic curve \(E^{\prime}/K\) having a non-trivial \(2\)-torsion point and good reduction away from \(S^{\prime}_{K}\). Hence \(j_{E^{\prime}}\in\mathcal{O}_{S^{\prime}_{K}}\). Arguing as in the proof of Theorem 3.3 and by (5.2), we obtain \(v_{\mathfrak{P}}(j_{E^{\prime}})\geq 0\) for some \(\mathfrak{P}\in S_{K}\), which contradicts Theorem 5.9. This completes the proof of the theorem.
Proof of Proposition 5.3.: Arguing as in the proof of Proposition 3.4, we can show that hypothesis (5.3) of Proposition 5.3 implies hypothesis (5.2) of Theorem 5.2. Hence, the proof of the proposition follows from Theorem 5.2.
## 6. Local criteria for the solutions of Diophantine equations
In this section, we present several local criteria of \(K\) which imply Theorems 3.3, 4.1, 5.2. We start this discussion with a lemma.
**Lemma 6.1**.: _Suppose the \(S^{\prime}_{K}\)-unit equation \(\lambda+\mu=1\) with \(\lambda,\mu\in\mathcal{O}_{S^{\prime}_{K}}^{*}\) has only solutions \((-1,2),(2,-1)\) and \((\frac{1}{2},\frac{1}{2})\). Then every solution \((\alpha,\gamma)\in\mathcal{O}_{S^{\prime}_{K}}^{*}\times\mathcal{O}_{S^{ \prime}_{K}}\) to the equation \(\alpha+1=\gamma^{2}\) satisfies \(|v_{\mathfrak{P}}(\alpha)|\leq 3v_{\mathfrak{P}}(2)\) and \(v_{\mathfrak{P}}(\alpha)\equiv 0\pmod{3}\) for all \(\mathfrak{P}\in S_{K}\)._
Proof.: The solution \((\alpha,\gamma)\in\mathcal{O}_{S^{\prime}_{K}}^{*}\times\mathcal{O}_{S^{\prime}_{K}}\) to the equation \(\alpha+1=\gamma^{2}\) gives rise to a solution of \(\lambda+\mu=1\) with \(\lambda,\mu\in\mathcal{O}_{S^{\prime}_{K}}\) as follows. Take \(\lambda:=\frac{\gamma+1}{2},\mu:=\frac{1-\gamma}{2}\). Since \(\gamma\in\mathcal{O}_{S^{\prime}_{K}}\), so are \(\lambda\) and \(\mu\). Since \(\alpha=-4\lambda\mu\) with \(\alpha\in\mathcal{O}_{S^{\prime}_{K}}^{*}\), we get \(\lambda,\mu\in\mathcal{O}_{S^{\prime}_{K}}^{*}\) with \(\lambda+\mu=1\). The choices \((\lambda,\mu)\in\{(-1,2),(2,-1),(\frac{1}{2},\frac{1}{2})\}\) imply \(\alpha=-1\) or \(8\). Therefore, \(|v_{\mathfrak{P}}(\alpha)|\leq 3v_{\mathfrak{P}}(2)\) and \(v_{\mathfrak{P}}(\alpha)\equiv 0\pmod{3}\) for all \(\mathfrak{P}\in S_{K}\).
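The substitution in this proof is easy to check directly; the following short sketch (assuming Python with sympy, purely illustrative) verifies that \(\lambda+\mu=1\), that \(\alpha=\gamma^{2}-1=-4\lambda\mu\), and that the three admissible \(S^{\prime}_{K}\)-unit solutions force \(\alpha\in\{-1,8\}\), in line with the conclusion of Lemma 6.1.

```python
from sympy import symbols, simplify, Rational

g = symbols('gamma')
lam, mu = (g + 1)/2, (1 - g)/2                 # the substitution used in the proof

print(simplify(lam + mu - 1))                  # 0 :  lambda + mu = 1
print(simplify(-4*lam*mu - (g**2 - 1)))        # 0 :  alpha = gamma^2 - 1 = -4*lambda*mu

for l, m in [(-1, 2), (2, -1), (Rational(1, 2), Rational(1, 2))]:
    print((l, m), "-> alpha =", -4*l*m)        # alpha = 8, 8, -1 respectively
```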
### Local criteria for Theorems 3.3, 5.2:
In this section, we give local criteria of \(K\) which imply Theorems 3.3, 5.2. Throughout this section, we assume \(B=\pm 1\) and \(C=\pm 1\) or \(2^{r}\) for some \(r\in\mathbb{N}\) to get \(S^{\prime}_{K}=S_{K}\).
**Proposition 6.2** (Quadratic).: _Let \(K=\mathbb{Q}(\sqrt{d})\) for some prime \(d\) with \(d\equiv 5\pmod{8}\). Then the conclusion of Theorem 3.3 (resp., Theorem 5.2) holds over \(K\)._
Proof.: Since \(d\equiv 5\pmod{8}\), \(K\) has discriminant \(d\), \(2\) is inert in \(K\), and hence \(S_{K}\) is principal. The assumptions on \(B,C\) give \(S_{K}^{\prime}=S_{K}\). By [15, Proposition 1.3.2] (or [14, §3.8]), we have \(2\nmid h_{K}\). By [15, Table 1, §6], the \(S_{K}\)-unit equation \(\lambda+\mu=1\) with \(\lambda,\mu\in\mathcal{O}_{S_{K}}^{*}\) has only solutions \((-1,2),(2,-1)\) and \((\frac{1}{2},\frac{1}{2})\). Now the proof of the proposition follows from Lemma 6.1 and Proposition 3.4 (resp., Proposition 5.3).
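The first step of the proof, that \(2\) is inert in \(K=\mathbb{Q}(\sqrt{d})\) when \(d\equiv 5\pmod 8\), can be illustrated computationally: the ring of integers is \(\mathbb{Z}[\theta]\) with \(\theta=\frac{1+\sqrt{d}}{2}\), and \(2\) is inert exactly when the minimal polynomial \(x^{2}-x+\frac{1-d}{4}\) stays irreducible modulo \(2\). The sketch below (assuming Python with sympy) checks this for small primes \(d\equiv 5\pmod 8\); the class-number and \(S\)-unit inputs of the proof are not checked here.

```python
from sympy import Symbol, Poly, isprime

x = Symbol('x')
for d in [p for p in range(5, 200) if isprime(p) and p % 8 == 5]:
    f = Poly(x**2 - x + (1 - d)//4, x, modulus=2)   # minimal polynomial of (1+sqrt(d))/2 mod 2
    _, factors = f.factor_list()
    # irreducible of degree 2 over GF(2)  <=>  2 is inert in Q(sqrt(d))
    assert len(factors) == 1 and factors[0][0].degree() == 2, d
print("2 is inert in Q(sqrt(d)) for every prime d = 5 mod 8 up to 200")
```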
**Proposition 6.3** (Odd degree).: _Let \(K\) be a field of degree \(n\) with \(2\nmid h_{K}\). Suppose_
1. \(q\geq 5\) _be a prime with_ \(\gcd(n,q-1)=1\)_, totally ramifies in_ \(K\)_;_
2. \(2\) _is either inert or_ \(2=\mathfrak{P}^{n}\) _for some principal ideal_ \(\mathfrak{P}\in P\)_._
_Then the conclusion of Theorem 3.3 (resp., Theorem 5.2) holds over \(K\)._
Proof.: Let \(\mathfrak{P}\in S_{K}\) be the unique prime ideal lying above \(2\). Arguing as in the proof of Lemma 6.1, every solution \((\alpha,\gamma)\in\mathcal{O}_{S_{K}^{\prime}}^{*}\times\mathcal{O}_{S_{K}^{ \prime}}\) to the equation \(\alpha+1=\gamma^{2}\) gives rise to a solution \(\lambda,\mu\in\mathcal{O}_{S_{K}^{\prime}}^{*}\) with \(\lambda+\mu=1\), where \(\lambda=\frac{\gamma+1}{2}\), \(\mu=\frac{1-\gamma}{2}\), and a relation \(\alpha=-4\lambda\mu\). Since \(S_{K}^{\prime}=S_{K}\), by [15, Lemma 4.1], we get \(\max\{|v_{\mathfrak{P}}(\lambda)|,|v_{\mathfrak{P}}(\mu)|\}<2v_{\mathfrak{P}} (2)\). In particular, \(|v_{\mathfrak{P}}(\alpha)|=|2v_{\mathfrak{P}}(2)+v_{\mathfrak{P}}(\lambda)+v_ {\mathfrak{P}}(\mu)|<6v_{\mathfrak{P}}(2)\). Now, the proof of the proposition follows from Proposition 3.4 (resp, Proposition 5.3).
### Local criteria for Theorem 4.1:
In this section, we provide local criteria of \(K\) which imply Theorem 4.1. Throughout this section, we assume that \(B=\pm 1\) and \(C=2^{r}\) with \(r=1,2,4,5\) to get \(S_{K}^{\prime}=S_{K}\).
**Proposition 6.4** (Quadratic).: _Assume that Conjecture 2.1 holds over \(K=\mathbb{Q}(\sqrt{d})\) for some prime \(d\) with \(d\equiv 5\pmod{8}\). Then the conclusion of Theorem 4.1 holds over \(K\)._
Proof.: Since \(d\) is a prime with \(d\equiv 5\pmod{8}\), \(2\nmid h_{K}^{+}\), \(2\) is inert in \(K\) and hence \(S_{K}^{\prime}=S_{K}\) is principal. Arguing as in the proof of Proposition 6.2, we find that the \(S_{K}\)-unit equation \(\lambda+\mu=1\) with \(\lambda,\mu\in\mathcal{O}_{S_{K}}^{*}\) has only solutions \((\frac{1}{2},\frac{1}{2}),(2,-1)\) and \((-1,2)\). Now the proof of the proposition follows from Lemma 6.1 and Proposition 4.2.
**Proposition 6.5** (Odd degree).: _Let \(K\) be a field of degree \(n>1\) with \(2\nmid h_{K}^{+}\). Suppose_
1. \(q\geq 5\) _be a prime with_ \(\gcd(n,q-1)=1\) _and totally ramifies in_ \(K\)_;_
2. \(2\) _is inert in_ \(K\)_._
_Then the conclusion of Theorem 4.1 holds over \(K\)._
Define \(\Lambda_{S_{K}}:=\{(\lambda,\mu):\lambda+\mu=1,\lambda,\mu\in\mathcal{O}_{S_{K}}^{*}\}\). By the discussion in [15, §5], the action of the symmetric group \(\mathfrak{S}_{3}\) on \(\mathbb{P}^{1}(K)\setminus\{0,1,\infty\}\) induces an action on \(\Lambda_{S_{K}}\) as \((\lambda,\mu)^{\sigma}:=(\lambda^{\sigma},\mu^{\sigma})\) with \(\sigma\in\mathfrak{S}_{3}\). For any \((\lambda,\mu)\in\Lambda_{S_{K}}\) and \(\mathfrak{P}\in S_{K}\), define \(m_{\lambda,\mu}(\mathfrak{P}):=\max\{|v_{\mathfrak{P}}(\lambda)|,|v_{\mathfrak{P}}(\mu)|\}\). If \(\mathfrak{P}\in S_{K}\) is unique, then we write \(m_{\lambda,\mu}\) for \(m_{\lambda,\mu}(\mathfrak{P})\).
**Lemma 6.6**.: _Suppose \(2\) is inert in \(K\) and let \(\mathfrak{P}\in S_{K}\) be the unique prime lying over \(2\). Then, for any \((\lambda,\mu)\in\Lambda_{S_{K}}\), there exists \((\lambda^{\prime},\mu^{\prime})\in\Lambda_{S_{K}}\) with \(\lambda^{\prime}\in\mathcal{O}_{K}\), \(\mu^{\prime}\in\mathcal{O}_{K}^{*}\) and \(\sigma\in\mathfrak{S}_{3}\) such that \((\lambda^{\prime},\mu^{\prime})=(\lambda,\mu)^{\sigma}\) and \(m_{\lambda,\mu}=m_{\lambda^{\prime},\mu^{\prime}}\)._
Proof.: If \(v_{\mathfrak{P}}(\lambda)=v_{\mathfrak{P}}(\mu)=0\), or \(v_{\mathfrak{P}}(\lambda)>0\) (in this case \(v_{\mathfrak{P}}(\mu)=0\)), then take \(\lambda^{\prime}=\lambda,\ \mu^{\prime}=\mu\) and \(\sigma(\lambda)=\lambda\). If \(v_{\mathfrak{P}}(\mu)>0\), then \(v_{\mathfrak{P}}(\lambda)=0\) and take \(\lambda^{\prime}=\mu,\ \mu^{\prime}=\lambda\) and \(\sigma(\lambda)=1-\lambda\). If \(v_{\mathfrak{P}}(\lambda)<0\) then \(v_{\mathfrak{P}}(\mu)=v_{\mathfrak{P}}(\lambda)=-m_{\lambda,\mu}<0\) and take \(\lambda^{\prime}=\frac{1}{\lambda}\), \(\mu^{\prime}=1-\frac{1}{\lambda}\) and \(\sigma(\lambda)=\frac{1}{\lambda}\). In all cases, we can choose \(\lambda^{\prime}\in\mathcal{O}_{K}\), \(\mu^{\prime}\in\mathcal{O}_{K}^{*}\) with \(m_{\lambda^{\prime},\mu^{\prime}}=m_{\lambda,\mu}\).
Proof of Proposition 6.5.: Let \(\mathfrak{P}\in U_{K}\) be the unique prime lying above \(2\). By [15, Lemma 4.1], we have \(m_{\lambda,\mu}<2v_{\mathfrak{P}}(2)=2\) for all \((\lambda,\mu)\in\Lambda_{S_{K}}\). By Lemma 6.6, there exists \(\lambda^{\prime}\in\mathcal{O}_{K}\), \(\mu^{\prime}\in\mathcal{O}_{K}^{*}\) such that \(m_{\lambda^{\prime},\mu^{\prime}}<2\). If \(m_{\lambda^{\prime},\mu^{\prime}}=0\) then \(\lambda^{\prime},\mu^{\prime}\in\mathcal{O}_{K}^{*}\), contradicts [15, Theorem 4]. So, \(m_{\lambda^{\prime},\mu^{\prime}}=1\). Since \(\lambda^{\prime}\in\mathcal{O}_{K},\mu^{\prime}\in\mathcal{O}_{K}^{*}\), we get \(v_{\mathfrak{P}}(\lambda^{\prime})=1\) and hence \(v_{\mathfrak{P}}(\lambda^{\prime}\mu^{\prime})=1=v_{\mathfrak{P}}(2)\). By [15, Lemma 6.2(ii)], we have
\[v_{\mathfrak{P}}(\lambda\mu)\equiv v_{\mathfrak{P}}(2)\pmod{3}. \tag{6.1}\]
We now deduce Proposition 6.5 from Proposition 4.2. Arguing as in the proof of Lemma 6.1, every solution \((\alpha,\gamma)\in\mathcal{O}_{S_{K}}^{*}\times\mathcal{O}_{S_{K}}\) to the equation \(\alpha+1=\gamma^{2}\) gives rise to an element \((\lambda,\mu)\in\Lambda_{S_{K}}\) with the relation \(\alpha=-4\lambda\mu\). Since \(m_{\lambda,\mu}<2\), we have \(|v_{\mathfrak{P}}(\alpha)|\leq 2+|v_{\mathfrak{P}}(\lambda)|+|v_{\mathfrak{P}}(\mu)|<6=6v_{\mathfrak{P}}(2)\). By (6.1), we get \(v_{\mathfrak{P}}(\alpha)\equiv 0\pmod{3}\). Note that \(n\) is odd here, since \(\gcd(n,q-1)=1\) and \(q-1\) is even; hence \(K\) satisfies \((ES)\), and Proposition 4.2 applies.
|
2303.00358 | Semisimplicity of affine cellular algebras | In this note, we prove that an affine cellular algebra $A$ is semisimple if
and only if the scheme associated to $A$ is reduced and 0-dimensional, and the
bilinear forms with respect to all layers of $A$ are isomorphisms. Moreover, if
the ground ring is a perfect field, then $A$ is semisimple if and only if it is
separable. We also give a sufficient condition for an affine cellular algebra
being Jacobson semisimple. | Yanbo Li, Bowen Sun | 2023-03-01T09:36:59Z | http://arxiv.org/abs/2303.00358v1 | # Semisimplicity of affine cellular algebras
###### Abstract.
In this note, we prove that an affine cellular algebra \(A\) is semisimple if and only if the scheme associated to \(A\) is reduced and \(0\)-dimensional, and the bilinear forms with respect to all layers of \(A\) are isomorphisms. Moreover, if the ground ring is a perfect field, then \(A\) is semisimple if and only if it is separable. We also give a sufficient condition for an affine cellular algebra being Jacobson semisimple.
Key words and phrases: Jacobson semisimple; semisimple; separable; affine cellular algebra. 2010 Mathematics Subject Classification: 13B25; 16G30; 16K40; 16N60. Corresponding Author: Bowen Sun. The work is supported by the Natural Science Foundation of Hebei Province, China (A2021501002); China Scholarship Council (202008130184); and the Natural Science Foundation of China (11871107).
hierarchy of the algebras aforementioned:
\[\{\text{semisimple}\}\subset\{\text{Jacobson\,semisimple}\}\subset\{\text{semiprime}\}\]
It is helpful to point out that if \(R\) is a left artinian ring, then \(R\) being semisimple is equivalent to being Jacobson semisimple, and is equivalent to being semiprime, too. Note that a commutative semiprime ring is also called reduced.
The main goal of this note is to prove the following theorem, which gives a necessary and sufficient condition for an affine cellular algebra \(A\) to be semisimple. Denote by \(Spec[A]\) the scheme associated to \(A\) (see Definition 3.3) and by \(\phi_{j}\) the bilinear form of the \(j\)-th layer of \(A\).
**Theorem** _Let \(K\) be a field and let \(A\) be an affine cellular \(K\)-algebra. Then \(A\) is semisimple if and only if_
1. \(Spec[A]\) _is a reduced_ \(0\)_-dimensional scheme;_
2. \(\phi_{j}\) _are invertible for all_ \(j\)_._
Moreover, if the ground ring is a perfect field, then \(A\) is semisimple if and only if it is separable. We also give a sufficient condition for an affine cellular algebra to be Jacobson semisimple.
## 2. Affine cellular algebras
In this section, we give a quick review of the definitions and some known results about affine cellular algebras which are needed in the next section. The main reference is [17].
Let \(K\) be a principal ideal domain. Given two \(K\)-modules \(V\) and \(W\), we denote by \(\tau\) the swich map \(V\otimes_{K}W\to W\otimes_{K}V\), \(v\otimes_{K}w\mapsto w\otimes_{K}v\) for \(v\in V\) and \(w\in W\). A \(K\)-algebra \(B\) is called affine if \(B=K[x_{1},\cdots,x_{t}]/I\), where \(K[x_{1},\cdots,x_{t}]\) is a polynomial ring in finitely many variables \(x_{1},\cdots,x_{t}\) and \(I\) is an ideal. A \(K\)-involution \(*\) on a \(K\)-algebra \(A\) is a \(K\)-linear anti-automorphism with \((a^{*})^{*}=a\) for all \(a\in A\).
**Definition 2.1**.: [17, Definition 2.1] _Let \(A\) be a unitary \(K\)-algebra with a \(K\)-involution \(*\). A two-sided ideal \(J\) in \(A\) is called an affine cell ideal if and only if the following data are given and the following conditions are satisfied:_
1. _The ideal_ \(J\) _is fixed by_ \(*\)_:_ \((J)^{*}=J.\)__
2. _There exist a free_ \(K\)_-module_ \(V\) _of finite rank, an affine_ \(K\)_-algebra_ \(B\) _with identity and with a_ \(K\)_-involution_ \(\sigma\) _such that_ \(\Delta:=V\otimes_{K}B\) _is an_ \(A\)_-_\(B\)_-bimodule, on which the right_ \(B\)_-module structure is induced by_ \(B_{B}\)_._
3. _There is an_ \(A\)_-_\(A\)_-bimodule isomorphism_ \(\alpha:J\to\Delta\otimes_{B}\Delta^{\prime},\) _where_ \(\Delta^{\prime}=B\otimes_{K}V\) _is a_ \(B\)_-_\(A\)_-bimodule with the left_ \(B\)_-module induced by_ \({}_{B}B\) _and with the right_ \(A\)_-module structure defined by_
\(\tau(a^{*}(v\otimes b))\) _for_ \(a\in A\)_,_ \(b\in B\) _and_ \(v\in V\)_, such that the following diagram is commutative:_
\[\begin{CD}J@>{\alpha}>{}>\Delta\otimes_{B}\Delta^{\prime}\\ @V{*}V{}V@V{}V{v_{1}\otimes b_{1}\otimes_{B}b_{2}\otimes v_{2}\mapsto v_{2} \otimes\sigma(b_{2})\otimes_{B}\sigma(b_{1})\otimes v_{1}}V\\ J@>{\alpha}>{}>\Delta\otimes_{B}\Delta^{\prime}\end{CD}\]
_The algebra_ \(A\) _with_ \(K\)_-involution_ \(*\) _is called affine cellular if and only if there is a_ \(K\)_-module decomposition_ \(A=J_{1}^{\prime}\oplus J_{2}^{\prime}\oplus\cdots J_{m}^{\prime}\) _(for some_ \(m\)_) with_ \((J_{j}^{\prime})^{*}=J_{j}^{\prime}\) _for each_ \(j\)__\((j=1,\ldots,m)\) _and such that setting_ \(J_{j}:=\bigoplus_{l=1}^{j}J_{l}^{\prime}\) _gives a chain of two-sided ideals of_ \(A\)_:_ \(0=J_{0}\subset J_{1}\subset J_{2}\subset\cdots\subset J_{m}=A\) _(each of them fixed by_ \(*\)_), and each_ \(J_{j}^{\prime}=J_{j}/J_{j-1}\) _is an affine cell ideal of_ \(A/J_{j-1}\) _(with respect to the involution induced by_ \(*\) _on the quotient)._
Clearly, if all the affine algebras \(B_{j}\) are equal to the ground ring \(K\), we recover the definition of a cellular algebra given by Koenig and Xi in [16]. Note that the original definition of a cellular algebra was given by Graham and Lehrer in [9].
For an affine cell ideal \(J\) in an algebra \(A\), the following lemma gives the basic structure of \(J\) when viewed as an algebra (without unit) in itself.
**Lemma 2.2**.: _[_17_, Proposition 2.2]_ _Let \(J\) be an affine cell ideal in a \(K\)-algebra \(A\) with an involution \(*\). We identify \(J\) with \(\Delta\otimes_{B}\Delta^{\prime}=V\otimes_{K}B\otimes_{K}V\). Then:_
1. _There is a_ \(K\)_-linear map_ \(\phi:V\otimes_{K}V\to B\) _such that_ \[(u\otimes b\otimes v)(u^{\prime}\otimes b^{\prime}\otimes v^{\prime})=u \otimes b\phi(v,u^{\prime})b^{\prime}\otimes v^{\prime}\] _for all_ \(u,u^{\prime},v,v^{\prime}\in V\) _and_ \(b,b^{\prime}\in B\)_._
2. _If_ \(I\) _is an ideal in_ \(B\) _and_ \(u,v\in V\)_, then_ \(V\otimes_{K}I\otimes_{K}V\) _is an ideal in_ \(A\)_._
Because of the importance of the bilinear form \(\phi\), we often write \(\Delta\otimes_{B}\Delta^{\prime}\) as \(\mathcal{A}(V,B,\phi)\). Let \(\{v_{1},\cdots,v_{n}\}\) be a basis of \(V\) and identify the bilinear form \(\phi\) with the matrix \(\phi=(\phi_{ij})\), where \(\phi_{ij}=\phi(v_{i},v_{j})\) (we often use the same notation for the bilinear form and its matrix in this note). Then \(J\) is isomorphic to a swich algebra \(S(M_{n}(B),\,(\phi_{ij}))\), with the definition given as follows.
**Definition 2.3**.: _[_17_, Definition 3.3]_ _Let \(\Lambda\) be a \(K\)-algebra and fix an element \(a_{0}\in\Lambda\). We define a new \(K\)-algebra \(\widetilde{\Lambda}=S(\Lambda,\,a_{0})\), called the swich algebra of \(\Lambda\) with respect to \(a_{0}\), where as a set \(\widetilde{\Lambda}=\{\widetilde{a}\mid a\in\Lambda\}\), and the algebra structure on \(\widetilde{\Lambda}\) is given by_
\[\widetilde{a}+\widetilde{b}=\widetilde{a+b},\qquad a,b\in\Lambda,\] \[\widetilde{a}\cdot\widetilde{b}=\widetilde{aa_{0}b},\qquad\ a,b\in\Lambda,\] \[\lambda\widetilde{a}=\widetilde{\lambda a},\qquad\lambda\in K,\,a \in\Lambda.\]
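As a quick orientation (this remark is ours, not from [17]): if \(a_{0}=1\), then \(S(\Lambda,1)\) is just \(\Lambda\) itself, and more generally, whenever \(a_{0}\) is invertible in \(\Lambda\), the element \(\widetilde{a_{0}^{-1}}\) is an identity for \(\widetilde{\Lambda}\), since

\[\widetilde{a_{0}^{-1}}\cdot\widetilde{b}=\widetilde{a_{0}^{-1}a_{0}b}=\widetilde{b},\qquad\widetilde{b}\cdot\widetilde{a_{0}^{-1}}=\widetilde{ba_{0}a_{0}^{-1}}=\widetilde{b},\qquad b\in\Lambda.\]

In general, however, \(\widetilde{\Lambda}\) need not be unital; for instance, \(a_{0}=0\) yields the algebra with zero multiplication.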
A swich algebra of a matrix algebra \(M_{n}(B)\) is in fact a generalized matrix algebra in the sense of [1]. By a straightforward computation, one can prove that the map \(\varphi:\widetilde{\Lambda}\rightarrow\Lambda\) defined by
\[\widetilde{a}\mapsto\varphi(\widetilde{a})=aa_{0}\]
is an algebra homomorphism, and a \(\Lambda\)-module \(M\) will become a \(\widetilde{\Lambda}\)-module \(M^{\varphi}\) via \(\varphi\). The following lemma establishes a relationship between the set of all simple modules over \(\Lambda\) and that over a swich algebra \(\widetilde{\Lambda}\).
**Lemma 2.4**.: _[_17_, Theorem 3.10]_ _Let \(\Lambda\) be a \(K\)-algebra with identity such that \(\Lambda\) is finitely generated over its centre. Then there is a bijection \(\omega\) between the set of non-isomorphic simple \(\Lambda\)-modules \(E\) with \(\widetilde{\Lambda}.E\neq 0\), and the set of all non-isomorphic simple \(\widetilde{\Lambda}\)-modules, which is given by \(E\mapsto E^{\varphi}/\{x\in E^{\varphi}\mid\widetilde{\Lambda}.x=0\}\)._
We conclude this section with a result of [17] which is needed in Section 3.
**Lemma 2.5**.: _[_17_, Theorem 3.12 (2)]_ _Let \(A\) be an affine cellular algebra with a cell chain \(0=J_{0}\subset J_{1}\subset J_{2}\subset\cdots\subset J_{m}=A\) such that each layer \(J_{j}/J_{j-1}\cong\mathcal{A}(V_{j},B_{j},\phi_{j})\). Then for \(1\leq j\leq m\), \(\phi_{j}\) is an isomorphism if and only if the determinant \(det(\phi_{st}^{(j)})\) of \(\phi_{j}\) is a unit in \(B_{j}\). In particular, if all \(\phi_{j}\) are isomorphisms, then \(A\) is isomorphic, as an affine cellular \(K\)-algebra, to \(\bigoplus_{j=1}^{m}M_{n_{j}}(B_{j})\), where \(n_{j}\) is the dimension of \(V_{j}\)._
The algebra \(\bigoplus_{j=1}^{m}M_{n_{j}}(B_{j})\) will be called the asymptotic algebra of the affine cellular algebra \(A\).
## 3. Semisimplicity
In this section, we study the semisimplicity of affine cellular algebras. We first need to review some definitions and notations from commutative algebra. The main references here are [12] and [19].
**Definition 3.1**.: _Let \(R\) be a commutative ring with identity. The spectrum of \(R\) is defined to be \(Spec(R):=\{\mathfrak{P}\subseteq R\ |\ \mathfrak{P}\ is\ a\ prime\ ideal\}\)._
It is well-known that one can put a topology on \(Spec(R)\), which is the so-called Zariski topology. Then we can give the definition of an affine scheme.
**Definition 3.2**.: _An affine scheme is a pair \((Spec(R),\mathscr{O}_{R})\) consisting of the spectrum of \(R\), equipped with the Zariski topology, together with its structure sheaf \(\mathscr{O}_{R}\). If \(R\) is a reduced ring, then \((Spec(R),\mathscr{O}_{R})\) is called reduced. The dimension of \((Spec(R),\mathscr{O}_{R})\) is defined to be the Krull dimension of \(R\)._
We refer the reader to [19, Definition 2.20] for the definition of the structure sheaf mentioned in Definition 3.2. By abuse of notations, we will often write simply \(Spec(R)\) for the affine scheme \((Spec(R),\mathscr{O}_{R})\).
Based on the above preparation, we can define the associated scheme to an affine cellular algebra, which will play a key role in this section.
**Definition 3.3**.: _Let \(A\) be an affine cellular algebra. We call_
\[Spec[A]:=Spec(\prod_{j=1}^{m}B_{j})\]
_the associated scheme to \(A\)._
Given an affine cellular algebra \(A\), we will show that some ring theoretical properties of \(A\), for example, semisimplicity, are partially determined by \(Spec[A]\).
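To illustrate the definition (an observation of ours): if the ground ring \(K\) is a field and all \(B_{j}\) are equal to \(K\), so that \(A\) is a cellular algebra in the sense of [16], then

\[Spec[A]=Spec(K^{m})=\coprod_{j=1}^{m}Spec(K),\]

which is a reduced \(0\)-dimensional scheme. Condition (1) of Theorem 3.10 below is therefore automatic in this case, and semisimplicity is governed by the bilinear forms alone, in agreement with the classical fact that a cellular algebra over a field is semisimple if and only if all its bilinear forms are nondegenerate.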
Let us first consider Jacobson semisimplicity. For this goal, we need the following lemma about a swich algebra. For simplicity of description, we stipulate that \(0\) is a zero-divisor.
**Lemma 3.4**.: _Let \(R\) be a unital ring that is finitely generated over its center, and let \(\widetilde{R}=S(R,a_{0})\) be a swich algebra of \(R\). Then \(\widetilde{R}\) is Jacobson semisimple if and only if \(R\) is Jacobson semisimple and \(a_{0}\) is not a zero-divisor._
Proof.: By Lemma 2.4, there is a bijection between the set of non-isomorphic simple \(R\)-modules \(E\) with \(\widetilde{R}.E\neq 0\), and the set of all non-isomorphic simple \(\widetilde{R}\) -modules, which is given by \(E\mapsto E^{\varphi}/N\), where \(N\) is \(\{x\in E\mid\widetilde{R}.x=0\}\). For each maximal left ideal \(\mathfrak{m}\) of \(R\), denote the corresponding simple \(R\)-module \(R/\mathfrak{m}\) by \(E_{\mathfrak{m}}\).
\((\Rightarrow)\) Suppose that \(\widetilde{R}\) is Jacobson semisimple. Then \(\widetilde{R}\) is semiprime. Consequently, we have from [2, Lemma 2.1] that \(a_{0}\) is not a zero-divisor. Take an element \(a\) of \(J(R)\). Then \(aE_{\mathfrak{m}}=0\) for arbitrary \(\mathfrak{m}\). Assume that \(\widetilde{R}.E_{\mathfrak{m}}\neq 0\). For \(x+N_{\mathfrak{m}}\in E_{\mathfrak{m}}^{\varphi}/N_{\mathfrak{m}}\), we have
\[\widetilde{a}.(x+N_{\mathfrak{m}})=\widetilde{a}.x+N_{\mathfrak{m}}=aa_{0}.x+ N_{\mathfrak{m}}=0+N_{\mathfrak{m}},\]
where the last equality holds because \(a_{0}.x\in E_{\mathfrak{m}}\) and \(aE_{\mathfrak{m}}=0\). This implies that \(\widetilde{a}\) annihilates all simple \(\widetilde{R}\)-modules, that is, \(\widetilde{a}\in J(\widetilde{R})\), and hence \(\widetilde{a}=0\). As a result, \(a=0\) and \(R\) is Jacobson semisimple.
\((\Leftarrow)\) Let \(R\) be a Jacobson semisimple ring. Suppose that \(\widetilde{a}\in J(\widetilde{R})\) and \(\mathfrak{m}\) an arbitrary maximal left ideal of \(R\). Then \(\widetilde{a}.(E_{\mathfrak{m}}^{\varphi}/N_{\mathfrak{m}})=0\). Note that \(R\) is a unital ring, and it is clear that \(1_{R}\notin\mathfrak{m}\). For arbitrary \(x\in R\), denote \(x+\mathfrak{m}\in E_{\mathfrak{m}}\) by \([x]\). We have
\[0=\widetilde{a}.([1]+N_{\mathfrak{m}})=\widetilde{a}[1]+N_{\mathfrak{m}}=[aa_{ 0}]+N_{\mathfrak{m}},\]
and we deduce \([aa_{0}]\in N_{\mathfrak{m}}\), or \(\widetilde{R}.[aa_{0}]=0\) in \(E_{\mathfrak{m}}\). In particular, \(\widetilde{1}.[aa_{0}]=0\), and thus \(a_{0}aa_{0}\in\mathfrak{m}\). As a result, \(a_{0}aa_{0}\in J(R)\) because of \(\mathfrak{m}\) being arbitrary. Now the Jacobson semisimplicity of \(R\) forces \(a_{0}aa_{0}\) to be zero. Combining this result with the fact that \(a_{0}\) is not a zero-divisor yields \(a=0\). Consequently, the radical of \(\widetilde{R}\) is zero and this completes the proof.
Now we can give a sufficient condition for an affine cellular algebra to be Jacobson semisimple.
**Theorem 3.5**.: _Let \(A\) be an affine cellular algebra. Then \(A\) is Jacobson semisimple if_
1. \(Spec[A]\) _is a reduced scheme;_
2. _none of the_ \(\phi_{j}\) _is a zero-divisor._
Proof.: The affine cellularity of \(A\) implies that a layer \(J_{j}/J_{j-1}\) is isomorphic to a swich algebra \(\widetilde{M}_{n_{j}}(B_{j})=S(M_{n_{j}}(B_{j}),\,\phi_{j})\). Since \(Spec[A]\) is a reduced scheme, each \(B_{j}\) is a reduced ring. Note that a reduced affine algebra is Jacobson semisimple. Then \(M_{n_{j}}(B_{j})\) is Jacobson semisimple because the matrix algebra over a Jacobson semisimple ring is Jacobson semisimple too. Note that \(\phi_{j}\) is not a zero-divisor. Then by Lemma 3.4, we have that \(\widetilde{M}_{n_{j}}(B_{j})\) is Jacobson semisimple.
Take an element \(r\in J(A)\) and assume that \(r=\sum_{j=1}^{m}r_{j}\), where \(r_{j}\in J_{j}^{\prime}\). Then \(r\) annihilates all simple modules. According to the representation theory of affine cellular algebras [17, Theorem 3.12], the actions of \(r_{i},\ i=1,2,\ldots,m-1\) on simple modules belonging to the top layer are all zero. Thus \(r_{m}\) annihilates all simple modules of the top layer. Now the Jacobson semisimplicity of each layer proved above forces \(r_{m}\) to be zero. By continuing this process finitely many times, we obtain \(r_{i}=0\), for \(i=1,2,\ldots,m\), that is, \(r=0\). This completes the proof.
**Corollary 3.6**.: _For a unital affine cellular algebra \(A\), if \(Spec[A]\) is a reduced scheme and \(\phi_{j}\) are not zero-divisors for all \(j\), then \(A\) is semiprime._
The following is an example showing that the sufficient criterion given in Theorem 3.5 is not necessary. The example also suggests that it is far from being a characterisation of Jacobson semisimplicity.
**Example 3.7**.: _Let \(F\) be a field and let \(K\) be the formal power series ring \(F[[x]]\). Then the Jacobson radical of \(K\) is the ideal generated by \(x\). This implies that \(K\) is not Jacobson semisimple. Let \(A=K[x]\). Then \(A\) is Jacobson semisimple ([11] Page 433 Exercise 14 (c)). We claim that \(A\) is an affine cellular algebra. In fact, we can take a cell chain of \(A\) to be \(0\subset(x)\subset A\) and define \(B_{1}=A\), \(V_{1}\) to be the free \(K\)-module with basis \(\{x\}\) and \(B_{2}=K\), \(V_{2}=K\). Since \(B_{2}=K\) is not a reduced ring, \(Spec[A]\) is not a reduced scheme._
Let us begin to study semisimplicity of an affine cellular algebra over a field \(K\) now. As is well-known, a semisimple algebra is left artinian. So we first give a sufficient and necessary condition for an affine cellular algebra to be left artinian as follows.
**Lemma 3.8**.: _Let \(A\) be an affine cellular algebra. Then \(A\) is a left artin ring if and only if \(Spec[A]\) is a \(0\)-dimensional scheme._
Proof.: If \(Spec[A]\) is a \(0\)-dimensional scheme, then the Krull dimension of every \(B_{j}\) is zero. By [12, Theorem 5.11], this is equivalent to all \(B_{j}\) being finite dimensional \(K\)-algebras, and hence the affine cellular algebra \(A\) is a finite dimensional \(K\)-algebra. So \(A\) is left artinian.
Conversely, assume that \(A\) is a left artin ring. We claim that for arbitrary \(i\in\{1,\cdots,m\}\), \(B_{i}\) is an artin ring. In fact, we have from \(A\) being left artinian that \(A/J_{i-1}\) is a left artin ring and \(J_{i}^{\prime}\) is a left artin \(A/J_{i-1}\)-module. If \(B_{i}\) is not artinian, then there exists an infinite descending chain of ideals of \(B_{i}\)
\[I_{1}\supset I_{2}\supset I_{3}\supset\cdots\supset I_{n}\supset\cdots\]
As a result, we obtain an infinite descending chain of submodules of \(J_{i}^{\prime}\)
\[V_{i}\otimes I_{1}\otimes V_{i}\supset V_{i}\otimes I_{2}\otimes V_{i}\supset V_{i}\otimes I_{3}\otimes V_{i}\supset\cdots\supset V_{i}\otimes I_{n}\otimes V_{i}\supset\cdots\]
due to Lemma 2.2, a contradiction. Hence all \(B_{i}\) are artinian, and therefore \(Spec[A]\) is a \(0\)-dimensional scheme.
Employing [12, Theorem 5.11] again, we get a direct corollary of Lemma 3.8 as follows.
**Corollary 3.9**.: _Let \(A\) be an affine cellular algebra. Then the following statements are equivalent._
1. \(A\) _is a left artin algebra._
2. \(Spec[A]\) _is a_ \(0\)_-dimensional scheme._
3. \(A\) _is a finite dimensional_ \(K\)_-algebra._
4. _Each_ \(B_{j}\) _is a finite dimensional_ \(K\)_-algebra._
Now we can give a necessary and sufficient condition for an affine cellular algebra to be semisimple.
**Theorem 3.10**.: _Let \(K\) be a field and let \(A\) be an affine cellular \(K\)-algebra. Then \(A\) is semisimple if and only if_
1. \(Spec[A]\) _is a reduced_ \(0\)_-dimensional scheme;_
2. \(\phi_{j}\) _is invertible for each_ \(j\in\{1,2,\ldots m\}\)_._
Proof.: "\(\Rightarrow\)" Let \(A\) be semisimple. Then \(A\) is both left artinian and semirpime, and thus \(Spec[A]\) is a \(0\)-dimensional scheme by Corollary 3.9. As is well-known, the quotients of a semisimple ring are semisimple too. This implies the semisimplicity of \(A/J_{j-1}\). On the other hand, an ideal of a semisimple algebra is semisimple. This gives that \(J_{j}^{\prime}\) is semisimple since \(J_{j}^{\prime}\) is an ideal in \(A/J_{j-1}\). As a result, \(B_{j}\) is a reduced ring with \(\phi_{j}\) not a zero-divisor by [2, Proposition 2.5]. Moreover, we also have from Corollary 3.9 that every \(B_{j}\) is a finite dimensional \(K\)-algebra. Recall that a finite dimensional unital algebra enjoy a special property [8, Theorem 1.2.1]: every element is either invertible or a zero-divisor. This forces \(\phi_{j}\) to be invertible.
"\(\Leftarrow\)" If \(Spec[A]\) is a reduced \(0\)-dimensional scheme and all \(\phi_{j}\) are invertible, then combining Corollary 3.6 with Corollary 3.9 implies that \(A\) is a finite dimensional semiprime algebra, and consequently, \(A\) is semisimple.
Note that if \(B_{j}\) is a finite dimensional affine \(K\)-algebra, then \(\phi_{j}\) is invertible if and only if \(det(\phi_{j})\) is a unit in \(B_{j}\). Moreover, employing Lemma 2.5 yields an easy result as follows.
**Corollary 3.11**.: _Let \(A\) be an affine cellular algebra. Then \(A\) is semisimple if and only if it is isomorphic to its asymptotic algebra and \(Spec[A]\) is a reduced 0-dimensional scheme._
To conclude the investigation of semisimplicity of an affine cellular algebra, we enhance the condition "reduced" in Theorem 3.10 to "geometrically reduced", which corresponds to a strengthened version of semisimple algebras: separable algebras.
Let us recall the definitions of a geometrically reduced ring and a separable algebra first.
**Definition 3.12**.: _Let \(K\) be a field and \(\overline{K}\) the algebraic closure of \(K\). An affine \(K\)-algebra \(A\) is called geometrically reduced if \(A\otimes\overline{K}\) is a reduced ring. A \(K\)-algebra \(A\) is said to be separable if for arbitrary finite extension field \(F\) over \(K\), \(A\otimes F\) is a semisimple \(F\)-algebra._
The definition of a separable algebra can be viewed as requiring semisimplicity to be preserved under base change. In particular, we will see that base change of an affine cellular algebra amounts to base change of the affine algebras \(B_{j}\). In order to study separable affine cellular algebras, we recall the definition of Etale algebras.
**Definition 3.13**.: _[_18_, Definition 1.5.3]_ _A finite dimensional \(K\)-algebra is said to be Etale if it is isomorphic to a finite direct sum of separable extensions of \(K\)._
The following lemma can be viewed as an equivalent definition of an Etale algebra, which implies that an Etale algebra is in fact a finite dimensional commutative separable algebra.
**Lemma 3.14**.: _[_18_, Proposition 1.5.6]_ _Let \(A\) be a finite dimensional commutative \(K\)-algebra. Then the following statements are equivalent._
1. \(A\) _is Etale._
2. \(A\otimes_{K}\overline{K}\) _is reduced._
We can give some necessary and sufficient conditions for an affine cellular algebra to be separable.
**Corollary 3.15**.: _Let \(A\) be an affine cellular \(K\)-algebra. Then the following statements are equivalent._
1. \(A\) _is a separable algebra._
2. \(Spec[A]\) _is a geometrically reduced_ \(0\)_-dimensional scheme and_ \(det(\phi_{j})\) _is invertible for all_ \(j\)_._
3. \(\prod_{j=1}^{m}B_{j}\) _is an Etale algebra and_ \(det(\phi_{j})\) _is invertible for all_ \(j\)_._
4. _For all_ \(j\)_,_ \(B_{j}\) _is an Etale algebra and_ \(det(\phi_{j})\) _is invertible._
Proof.: It follows from Lemma 3.14 that (2) is equivalent to (3), and the equivalence between (3) and (4) is clear. Then we only need to prove \((1)\Leftrightarrow(2)\).
\((1)\Rightarrow(2)\) Let \(A\) be separable. Then by Definition 3.12, \(A\) is semisimple. As a result, \(Spec[A]\) is a reduced \(0\)-dimensional scheme with \(det(\phi_{j})\) invertible by Theorem 3.10. Then we only need to prove \(Spec[A]\) is geometrically reduced, or \(\prod_{j=1}^{m}B_{j}\otimes\overline{K}\) is reduced. This is clear from the semisimplicity of \(A\otimes\overline{K}\).
\((2)\Rightarrow(1)\) Assume that \(Spec[A]\) is a geometrically reduced scheme with \(det(\phi_{j})\) invertible. We deduce by Lemma 3.14 that every \(B_{j}\) is an Etale algebra and hence a separable algebra. This implies that \(B_{j}\otimes F\) is semisimple for arbitrary finite extension field \(F\) of \(K\), and so \(Spec[A\otimes F]\) is a reduced \(0\)-dimensional scheme. In addition, it is clear that the swich matrices of \(A\otimes F\) are the same as those of \(A\) and thus all of their determinants are invertible. Therefore, \(A\otimes F\) is semisimple and this completes the proof.
Note that when \(K\) is a perfect field, a finite dimensional affine \(K\)-algebra is an Etale algebra if and only if it is reduced (see [18, Remark 1.5.8]). This leads to a direct result as follows.
**Theorem 3.16**.: _Let \(K\) be a perfect field and \(A\) an affine cellular algebra. Then \(A\) is semisimple if and only if \(A\) is separable._
**Acknowledgement**. The authors are grateful to Zeren Zheng for some helpful conversations. Part of this work was done when Li visited Institute of Algebra and Number Theory at University of Stuttgart from August 2021 to September 2022. He takes this opportunity to express his sincere thanks to the institute and Prof. S. Koenig for the hospitality during his visit.
|
2306.12182 | DAT: Data Architecture Modeling Tool for Data-Driven Applications | Data is the key to success for any Data-Driven Organization, and managing it
is considered the most challenging task. Data Architecture (DA) focuses on
describing, collecting, storing, processing, and analyzing the data to meet
business needs. In this tool demo paper, we present the DAT, a model-driven
engineering tool enabling data architects, data engineers, and other
stakeholders to describe how data flows through the system and provides a
blueprint for managing data that saves time and effort dedicated to Data
Architectures for IoT applications. We evaluated this work by modeling five
case studies, receiving expressiveness and ease of use feedback from two
companies, more than six researchers, and eighteen undergraduate students from
the software architecture course | Moamin Abughazala, Henry Muccini, Mohammad Sharaf | 2023-06-21T11:24:59Z | http://arxiv.org/abs/2306.12182v2 | # DAT: Data Architecture Modeling Tool for Data-Driven Applications
###### Abstract
Data is the key to success for any Data-Driven Organization, and managing it is considered the most challenging task. Data Architecture (DA) focuses on describing, collecting, storing, processing, and analyzing the data to meet business needs. In this tool demo paper, we present the DAT, a model-driven engineering tool enabling data architects, data engineers, and other stakeholders to describe how data flows through the system and provides a blueprint for managing data that saves time and effort dedicated to Data Architectures for IoT applications. We evaluated this work by modeling five case studies, receiving expressiveness and ease of use feedback from two companies, more than six researchers, and eighteen undergraduate students from the software architecture course.
Keywords:Data Architecture Modeling Tool Data-Driven Data Architecture
## 1 Introduction
The International Data Corporation (IDC) [4] expects that by 2025 there will be more than 175 zettabytes of valuable data for a compounded annual growth rate of 61%. Ninety zettabytes of data will be from IoT devices, and 30% of the data generated will be consumed in real-time. A _data architecture_ is an integrated set of specification artifacts used to define data requirements, guide integration, control data assets, and align data investments with business strategy. It also includes an integrated collection of master blueprints at different levels of abstraction [11].
This tool demo paper presents the _Data Architecture Modeling Tool (DAT)_, an architecture modeling tool for the model-driven engineering of data architecture for data-driven applications.
DAT (Data Architecture Modeling Tool) is a modeling tool for the data architecture of IoT applications that shows how data flows through the system and provides a blueprint for it. It allows stakeholders to describe two levels of data architecture: High-Level Architecture (HLA) and Low-Level Architecture (LLA). It focuses on representing the data from source to destination and shows formats, processing types, storage, analysis types, and how the data are consumed.
The rest of this tool demo paper is organized as follows. The methodology is presented in Section 2. The application of DAT to a real case study is described in Section 3. The DAT evaluation is presented in Section 4. Related work is discussed in Section 5, while conclusions are drawn in Section 6.
## 2 Background
The main focus of this paper is to describe the data architecture of IoT applications through the _Data Modeling Language (DAML)_. Section 2.1 reviews the ISO/IEC/IEEE 42010:2011 standard, Section 2.2 introduces the CAPS framework, Section 2.3 discusses the importance of data architecture, and Section 2.4 presents DAML and reports on the technologies used to implement the DAT.
### ISO/IEC/IEEE 42010 Architecture Description
Our work is built on the conceptual foundations of the ISO/IEC/IEEE 42010:2011, _Systems and software engineering -- Architecture description_[12] standard, to investigate the essential elements of data architecture description for IoT applications. The standard handles architecture description (AD), the practices of recording software, system, and enterprise architectures so that architectures can be understood, documented, analyzed, and realized. Architecture descriptions can take many forms, from informal to carefully specified models.
The content model for an architecture description is illustrated in Figure 1. The _Architecture viewpoint_ is a fundamental building block representing common ways of expressing recurring architectural concerns reusable across projects and organizations. It encapsulates _model kinds_ framing particular _concerns_ for a specific audience of system _stakeholders_. The concerns determine what the model kinds must be able to express: e.g., security, reliability, cost, etc. A model determines the notations, conventions, methods, and techniques. Viewpoints, defining the contents of each architecture _view_, are built up from one or more model kinds and _correspondence rules_, linking them together to maintain consistency.
### The CAPS Modeling Framework
CAPS [13] is an environment where Situational Aware Cyber-Physical Systems (SiA-CPS) can be described through software, hardware, and physical space models. CAPS identifies three main architectural viewpoints of central importance when describing a SiA-CPS: the software architecture structural and behavioral view (SAML), the hardware view (HWML), and the physical space view (SPML).
This environment is composed of the CAPS modeling framework1 and the CAPS code generation framework [17][16] that aim to support the architecture description, reasoning, design decision process, and evaluation of the CAPS architecture in terms of data traffic load, battery level and energy consumption of its nodes.
Footnote 1: CAPS: [http://caps.disim.univaq.it/](http://caps.disim.univaq.it/)
### The Importance of Data Architecture
Data architecture is important because it helps organizations manage and use their data effectively. Some specific reasons why data architecture is important include:
1. Data quality: it helps to collect, store, and use data consistently and accurately. This is important for maintaining data integrity and reliability and avoiding errors or inconsistencies impacting business operations.
2. Data security: it helps to protect data from unauthorized access or modification and ensure that it is used compliantly and ethically. This is particularly important in industries with strict regulations, such as healthcare or finance.
3. Organizational efficiency: it helps organizations better understand and manage their data, increasing efficiency and productivity. By defining the structures, policies, and standards that govern data within an organization, data architecture can help streamline processes and improve decision-making.
4. Business intelligence and analytics: it is essential for organizations to collect, store, and analyze large amounts of data. This can support better decision-making, improve customer relationships, and drive business growth.
Figure 1: Content model of an architecture description (ISO/IEC/IEEE 42010)
5. Scalability and flexibility: A well-designed data architecture can support the growth and evolution of an organization. It allows organizations to easily add new data sources, incorporate new technologies, and adapt to changing business needs.
### The DAT Tool
The DAT modeling framework 45 gives data architects the ability to define a _data view_ for data-driven IoT Applications through the DAML modeling language [1].
Footnote 4: DAT Tool Source Code can be found at [https://github.com/moamina/DAT](https://github.com/moamina/DAT)
Footnote 5: DAT Tool video demo: [https://youtu.be/DuOVDg1CL1Q](https://youtu.be/DuOVDg1CL1Q)
#### 2.4.1 Technologies.
Our tool is based on MDE. For that, we use the Eclipse Modeling Framework (EMF) [6] for building tools and applications based on a structured data model; EMF consists of three main parts. EMF Core includes a meta-model for describing the models. EMF Editors contains generic reusable classes for building editors for EMF models. Eclipse Epsilon [5] is a Java-based scripting language for model-based software engineering tasks (e.g., model-to-model transformation and model validation) that strongly supports EMF and works with UML, XML, Simulink, etc. To create graphical editors and views for the EMF models, we used Eugenia [7]. It is a tool that creates a graphical model editor by generating the .gmfgraph, .gmftool, and .gmfmap models that the GMF editor needs from a single annotated Ecore meta-model.
DAT implements the DAML meta-model to be considered a fourth view (data view) for the CAPS, as shown in Figure 2. How the DAT supports the modeling of data views and its application to actual use cases will be presented in Section 3.
Figure 2: The Data View of CAPS
**Methodology.** DAT is built on a meta-model containing a data architecture as the top root meta-class. Any **data architecture** of an IoT application can contain a set of **DataNodes** (components) and **connections**. A component is considered a computational unit with an internal state and a known interface [3]. Data nodes can interact by passing data through **data ports**. A component's internal state is denoted by the current behavior of the data representation and its values. Data representation includes _data formats, storage technologies, location, and processing type_. Every node behavior has a set of behavioral elements, denoted by actions and events, that depict the data flow within the component. A behavioral element can be executed when a previous action in the behavioral data flow has completed, or it can be triggered by an event such as **ReceiveData**. The main actions are **Generation**, **Ingestion**, **Process**, **Store**, **Analyze**, and **Consume**. An **event** is triggered in response to an external stimulus of the component. To show the data flow and the connections between events and actions, we use **links**.
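The meta-model itself is defined as an annotated Ecore model inside EMF; the Python dataclasses below are only an illustrative sketch of the containment structure described above (the class and field names are ours, not the tool's Ecore identifiers):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class ActionKind(Enum):
    GENERATION = "Generation"
    INGESTION = "Ingestion"
    PROCESS = "Process"
    STORE = "Store"
    ANALYZE = "Analyze"
    CONSUME = "Consume"


@dataclass
class BehaviorElement:
    """One action or event (e.g. ReceiveData) in a node's internal data flow."""
    name: str
    action: Optional[ActionKind] = None   # None for pure events such as ReceiveData
    data_format: Optional[str] = None     # e.g. "JSON", "XML", "Column-oriented"
    storage: Optional[str] = None         # e.g. "AWS S3", "MySQL"


@dataclass
class DataNode:
    """A computational unit with an internal state, exchanging data through ports."""
    name: str
    ports: List[str] = field(default_factory=list)
    behavior: List[BehaviorElement] = field(default_factory=list)  # linked in flow order


@dataclass
class Connection:
    """Directed data flow from a source node to a target node."""
    source: str
    target: str


@dataclass
class DataArchitecture:
    """Top-level root meta-class of a DAT model."""
    nodes: List[DataNode] = field(default_factory=list)
    connections: List[Connection] = field(default_factory=list)
```

An instance of `DataArchitecture` for, e.g., the Errors Data Pipeline of Section 3.3 would then contain nodes whose behaviors carry Generation, Ingestion, Store, and Consume actions, connected in that order.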
**Steps To Use.** The architect follows these steps to model a case with the tool:
1. Download the source code from GitHub ([https://github.com/moamina/DAT](https://github.com/moamina/DAT)) and follow the steps in the tool demo video to launch the tool.
2. Define which level of abstraction you need (High-Level or Low-Level). You could use a single Data Node at the High-Level, whereas you need to define the structure and behavior at the low level.
3. Define the main data nodes and the connections between them to determine the order of each node; for example, the data source is the first data node, ingestion is the second, and the connection shows the data flow from the source to the ingestion data node.
4. For the internal behavior, you can use data elements (low-level elements), such as data formats (JSON, XML, video, ...), and sub-operations, such as classification, data reduction, cleaning, validation, and filtering.
## 3 Real Use Cases
This section introduces the existing data architecture description used by three companies contributing to the DAT tool. We have chosen three of the five cases to present our tool in this paper.
### Operational Data Warehouse
The data warehouse (DW) depends on data from different sources within the operational company system. These data sources can be data from RDS (MySQL Relational-DB), documents-based data in DynamoDB, and real-time data streams. The DW has data batching mechanisms that perform (complete data Extract, Transform, and Load (ETL)) processes on these data sources to load into the final DW tables and data models for reporting purposes. The ETL process is built through data batches using large-scale data processing and a file system (e.g., AWS S3). Batches run in an
hourly-based fashion. The final DW data model is saved in Column-oriented format using AWS S3.
Data from RDS will wait a specific time to be extracted, transformed, and saved in the column-oriented format on file system technology (AWS S3). For the data that comes from DynamoDB streams or real-time data streams, financial details will be added to part of this data for reporting purposes. Then this data is sent to the ingestion stage, extracted, transformed into a column-oriented format, and stored on file system technology (AWS S3) that will be consumed later by the batches to be processed and saved in the final tables.
The ETL batches will check for the new files on the staging tables; whenever a new file is found, the batch will extract the data, transform it, and save the related final tables on the final DW. Once the data is ready in the final tables, it will be ready to be queried for reporting and data export purposes.
Data-consuming micro-services use a query engine (Presto) to query the data in the DW. DW consumers could be reports, dashboards, or others. Report generator micro-services provide all the reports and dashboards with data. The warehouse exporter is responsible for creating data CSV exported files based on specific data templates and sending them to external customer endpoints such as(SFTP, FTP, S3, and emails). Reporting management, the purpose of this service is to manage and maintain the end user's custom configurations and settings of their preference in the dashboards and reports layouts. Tagging management, this service is built for a specific custom report (Operational Flash Dash) that gives the end user ability to decide on and design his report hierarchy and data drill down from the manager position perspective.
### Hydre
This case 3 from Lambda+ paper's author [10], is from a (Cocktail) research project which aims to study the discourses in two domains in health and food, as well as to identify weak signals in real-time using social network data. The data come from Twitter, compute real-time insights and store data for exploratory analysis.
The case contains other components. The master dataset is implemented with file system technology (Hadoop HDFS). Raw data (tweets) are stored as lines of files, and data re-processing can be done by reading and sending each line as is in another Kafka topic. The streaming ETL uses Kafka consumers to insert data in the micro-batch. The streaming ETL applies transformations and then stores tweets in the storage component. That includes relational, graph, and time series DBMSs. These databases are used for exploratory analyzes, mainly performed with Jupyter notebooks. Alongside, the real-time insights component extracts and aggregates several information about the harvesting, such as popular hashtags or users, using Kafka Streams. It stores the results in the time series database. Although this insertion is a side-effect of the stream processing, it is an idempotent action because the count of the elements will always yield the same result with an effective once guarantee. This result is stored for each element, replacing the old value if it already exists.
### Errors Data Pipeline
This case shows the data pipeline for data errors from different printers. The error data text files come from other printers in JSON format. The data represent the error that happened in different
printers, and the data could be the version of the printer, location, ink type, software version, time, etc. The data will be sent to AWS S3 and saved in the same format. The customer could see the raw error data using a query engine. After a specific time, the data will be processed and transformed into useful information, then converted to Parquet format (column-oriented). After that, the data will be transformed to CSV format and then to a relational database format, ready to be queried by the customer.
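The format conversions in this pipeline can be sketched independently of any AWS-specific tooling. The snippet below is a hedged illustration only: the file names and column names are ours, and the real pipeline reads from and writes to S3 rather than local files.

```python
import sqlite3

import pandas as pd

# Raw printer error records arrive as JSON; file and column names here are illustrative.
errors = pd.read_json("printer_errors.json", lines=True)

# Keep the fields used for reporting and store them in a column-oriented format (Parquet).
report = errors[["printer_version", "location", "ink_type", "software_version", "time"]]
report.to_parquet("printer_errors.parquet")

# Export to CSV and load into a relational table that customers can query.
report.to_csv("printer_errors.csv", index=False)
with sqlite3.connect("errors.db") as con:
    report.to_sql("printer_errors", con, if_exists="replace", index=False)
```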
## 4 Evaluation
The DAT cases were evaluated through interviews with seven industry professionals from two companies of different domains and maturity levels, and one external researcher; Table 1 summarizes the roles of the evaluators. The evaluation section is structured in terms of agreements and suggestions for improvement.
Figure 3: The Hydre architecture
### Errors Data Pipeline
Figure 4 shows the Errors Data Pipeline using DAT. The first author presented the models and collected the practitioners' feedback.
**Agreements:** The model described the data flow from generation to destination. The model was easy to understand for new data engineers. The tool has the flexibility to change and add new nodes.
**Suggestion:** It would be good to include the data quality metrics that could apply to the data at each stage.
### Hydre
Figure 5 shows the Hydre model using DAT. The first author presented the models and collected feedback from Lambda+'s author.
**Agreements:** The model represents very well the Hydre case. The indications of when data are stored on disk are helpful, especially when working on a big data architecture with people who don't know each technology's details. The real-time and batch elements are useful as well.
**Suggestion:** The first suggestion is similar to that of the first case and relates to data interaction patterns. The second suggestion was the starting point for us to provide two levels of architecture: High-Level Architecture (HLA) and Low-Level Architecture (LLA).
### Operational Data Warehouse
Figure 6 shows the ODW model using DAT. The first author presented the models and collected the practitioners' feedback.
**Agreements:** The model was able to describe the details of the case and was easy to share and understand by different teams in other parts of the world. The model was a good communication language between team members, which means easy to avoid misinterpretations.
**Suggestion:** In the current version of the DAT, the only way to show how different components interact with each other is to send/receive data. The suggestion was to include all data interaction patterns (request/response, publish/subscribe, pull/push, and others).
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Company** & **Use cases** & **Experts Roles** \\ \hline \multirow{3}{*}{**Company A**} & Operational Data Warehouse & Big Data Team Lead \\ \cline{2-3} & & Big Data Architect \\ \cline{2-3} & Analytical Data Warehouse & Big Data Engineer \\ \cline{2-3} & & Big Data Architect \\ \hline \multirow{3}{*}{**Company B**} & \multirow{3}{*}{Data Pipeline} & Big Data Team Lead \\ \cline{2-3} & & Big Data Architect \\ \cline{1-1} \cline{2-3} & & Big Data Engineer \\ \hline
**Researchers** & Data Architectures(Lambda, Kappa) & Researcher \\ \hline
**Locally** & NdR Data Architecture & Students \\ \hline \end{tabular}
\end{table}
Table 1: Outline of use cases and roles of the evaluators
## 5 Related Work
This section reviews the studies most closely related to data-driven IoT. Raj and Bosch [15] proposed a conceptual model for a data pipeline, which contains two main components (nodes and connectors); the node represents the main abstract data node, and the connector represents the way to carry and transmit the data between nodes. The DAT provides two levels of architecture, HLA (High-Level Architecture) and LLA (Low-Level Architecture); the latter is a more detailed architecture that makes it possible to model the behavior of each node by describing sub-actions, data formats, location, processing type, etc. Borelli [2] proposed a classification of the main software components and their relationships to model a software architecture for particular IoT applications. These components represent abstract components. DAT can describe all of the mentioned components and their behavior too. Erraissi [8][9] proposed a meta-model for data sources, ingestion layers, and the Big Data visualization layer. DAT can describe the data in each layer (source or generation, ingestion, processing, storing, analyzing, and consuming). Nesi [14] provided a solution based on a set of instruments to collect data in real time, store it, and audit data flow for an IoT smart city architecture. DAT is an architecture-driven tool to show how data flow from the source to the final destination at an abstract level; it is not a technology-based tool.
Figure 4: Errors Data Pipeline
## 6 Conclusion and Future Work
This tool demo paper has presented the DAT, an architecture description and the associated modeling platform for the model-driven engineering of Data Architecture for IoT. It is implemented on top of the Eclipse Modeling Framework. It allows stakeholders to describe two levels of data architecture: High-Level Architecture (HLA) and Low-Level Architecture (LLA).
This is an initial starting point for our future work plan. First, we will finish the currently running evaluations with other companies and model different big data patterns and architectures. Second, we will integrate the DAT with other existing technologies and tools.
## 7 Acknowledgment
The authors would like to thank Prof. Giovanni Stilo, Prof. Annabelle Gillet (Lambda+), Prof. Karthik Vaidhyanathan, Mostafa Shaer, Itay, and Roi from HP Team, Mustafa Tamim and Anas Eid from Harri Team, Mudassir Malik, Apurvanand Sahay, and Arsene Indamutsa as a researcher for their contributions in the evaluation.
Figure 5: Hydre (Lambda+ Example) |
2304.04199 | Information-Theoretic Testing and Debugging of Fairness Defects in Deep
Neural Networks | The deep feedforward neural networks (DNNs) are increasingly deployed in
socioeconomic critical decision support software systems. DNNs are
exceptionally good at finding minimal, sufficient statistical patterns within
their training data. Consequently, DNNs may learn to encode decisions --
amplifying existing biases or introducing new ones -- that may disadvantage
protected individuals/groups and may stand to violate legal protections. While
the existing search based software testing approaches have been effective in
discovering fairness defects, they do not supplement these defects with
debugging aids -- such as severity and causal explanations -- crucial to help
developers triage and decide on the next course of action. Can we measure the
severity of fairness defects in DNNs? Are these defects symptomatic of improper
training or they merely reflect biases present in the training data? To answer
such questions, we present DICE: an information-theoretic testing and debugging
framework to discover and localize fairness defects in DNNs.
The key goal of DICE is to assist software developers in triaging fairness
defects by ordering them by their severity. Towards this goal, we quantify
fairness in terms of protected information (in bits) used in decision making. A
quantitative view of fairness defects not only helps in ordering these defects,
our empirical evaluation shows that it improves the search efficiency due to
resulting smoothness of the search space. Guided by the quantitative fairness,
we present a causal debugging framework to localize inadequately trained layers
and neurons responsible for fairness defects. Our experiments over ten DNNs,
developed for socially critical tasks, show that DICE efficiently characterizes
the amounts of discrimination, effectively generates discriminatory instances,
and localizes layers/neurons with significant biases. | Verya Monjezi, Ashutosh Trivedi, Gang Tan, Saeid Tizpaz-Niari | 2023-04-09T09:16:27Z | http://arxiv.org/abs/2304.04199v1 | # Information-Theoretic Testing and Debugging of Fairness Defects in Deep Neural Networks
###### Abstract
The deep feedforward neural networks (DNNs) are increasingly deployed in socioeconomic critical decision support software systems. DNNs are exceptionally good at finding minimal, sufficient statistical patterns within their training data. Consequently, DNNs may learn to encode decisions--amplifying existing biases or introducing new ones--that may disadvantage protected individuals/groups and may stand to violate legal protections. While the existing search based software testing approaches have been effective in discovering fairness defects, they do not supplement these defects with debugging aids--such as severity and causal explanations--crucial to help developers triage and decide on the next course of action. Can we measure the severity of fairness defects in DNNs? Are these defects symptomatic of improper training or they merely reflect biases present in the training data? To answer such questions, we present Dice: an information-theoretic testing and debugging framework to discover and localize fairness defects in DNNs.
The key goal of Dice is to assist software developers in triaging fairness defects by ordering them by their severity. Towards this goal, we quantify fairness in terms of protected information (in bits) used in decision making. A quantitative view of fairness defects not only helps in ordering these defects, our empirical evaluation shows that it improves the search efficiency due to resulting smoothness of the search space. Guided by the quantitative fairness, we present a causal debugging framework to localize inadequately trained layers and neurons responsible for fairness defects. Our experiments over ten DNNs, developed for socially critical tasks, show that Dice efficiently characterizes the amounts of discrimination, effectively generates discriminatory instances (vis-a-vis the state-of-the-art techniques), and localizes layers/neurons with significant biases.
## I Introduction
AI-assisted software solutions--increasingly implemented as deep neural networks [1] (DNNs)--have made substantial inroads into critical software infrastructure where they routinely assist in socio-economic and legal-critical decision making [2]. Instances of such AI-assisted software include software deciding on recidivism, software predicting benefit eligibility, and software deciding whether to audit a given taxpayer. The DNN-based software development, driven by the _principle of information bottleneck_[3], involves a delicate balancing act between over-fitting and detecting useful, parsimonious patterns. It is, therefore, not a surprise that such solutions often encode and amplify pre-existing biases in the training data. What's worse, improper training may even introduce biases not present in the training data or irrelevant to the decision making. The resulting fairness defects may not only disadvantage protected groups [4, 5, 6, 7, 8], but may stand to violate statutory requirements [9].
_This paper presents Dice, an information-theoretic testing and debugging framework for fairness defects in deep neural networks._
**Quantifying Fairness.** Concentrated efforts from the software engineering and the machine learning communities have produced a number of successful _fairness testing_ frameworks [10, 11, 12, 13]. These frameworks characterize various notions of fairness--such as group fairness [14] (decision outcome for various protected groups must be similar) and individual fairness [15] (individuals differing only on protected attributes must receive similar outcome)--and employ search-based testing to discover fairness defects. While a binary classification of fairness is helpful in discovering defects, developers may require further insights into the nature of these defects to decide on the potential "bug fix". Are some defects more severe than others? Whether these defects stem from biases present in the training data, or they are artifacts of an inadequate training? Is it possible to find an alternative explanation of the training data that does not use protected information?
_Individual discrimination_ is a well-studied [16, 11, 17, 18] causal notion of fairness: a function discriminates against an individual (input) if another individual (potentially counterfactual), differing only in the protected features, receives a more favorable outcome. We present a quantitative generalization of this notion as the _quantitative individual discrimination_ (QID). We define QID as the amount of protected information--characterized by entropy metrics such as Shannon entropy and min entropy--used in deriving an outcome. Observe that a zero value for the QID measure implies the absence of individual discrimination. The QID measure allows us to order various discriminating inputs in terms of their severity: in an application that is not supposed to base its decisions on protected information, inputs
with higher dependence indicate a more severe violation. Our first _research question_ (**RQ1**) concerns the usefulness of QID measure in finding inputs with different severity.
**Search-Based Testing.** Search-based software testing provides scalable optimization algorithms to automate the discovery of software bugs. In the context of fairness defects, the search for such bugs involves finding twin inputs exhibiting discriminatory instances. The state-of-the-art algorithms for fairness testing [19, 17, 18] explore the input space governed by a binarized feedback, resulting in a discontinuous search domain. On the other hand, QID-based search algorithms can benefit from a smooth (quantitative) feedback during the optimization, resulting in a more guided search. Our next research question (**RQ2**) is to investigate whether this theoretical promise materializes in practice in terms of discovering richer discriminating instances than the classic notions of discrimination.
**Causal Explanations.** While the discriminating instances (ordered by their severity) provide a clear evidence of fairness defects in the DNN, it is unclear whether these defects are inherent in the training data, or whether they are artifacts of the training process. Inspired by the notion of "the average causal effects" [20] and Audee framework [21] for bug localization in deep learning models, we develop a layer and neuron localization framework for fairness defects. If the cause of the defects is found to be at the input layer, it is indicative of discrimination existing in the training data. On the other hand, if we localize the cause of the defect to some internal layer, we wish to further prod the DNN to extract quantitative information about neurons and their counterfactual parameters that can mitigate the defect while maintaining the accuracy. This debugging activity informed our next research question (**RQ3**): is it possible to identify a subset of neurons and their causal effects on QID to guide a mitigation without affecting accuracy?
**Experiments.** Dice implements a search algorithm (Algorithm 1) to discover inputs that maximize QID and a causal debugging algorithm (Algorithm 2) to localize layers and neurons that causally affect the amounts of QID. Using \(10\) socio-critical DNNs from the literature of algorithmic fairness, we show that Dice finds inputs that can use significant amounts of protected information in the decision making; outperforms three state-of-the-art techniques [19, 17, 18] in generating discriminatory instances; and localizes neurons that guides a simple mitigation strategy to reduce QID down to \(15\%\) of reported initial QID with at most \(5\%\) loss of accuracy. The key contributions of this paper are:
1. **Quantitative Individual Discrimination.** We introduce an information-theoretic characterization of discrimination, dubbed quantitative individual discrimination (QID), based on Shannon entropy and Min entropy.
2. **Search-based Testing.** We present a search-based algorithm to discover circumstances under which the DNNs exhibit severe discrimination.
3. **Causal Debugging.** We develop a causal fairness debugging based on the language of interventions to localize the root cause of the fairness defects.
4. **Experimental Evaluation.** Extensive experiments over different datasets and DNN models that show feasibility, usefulness, and scalability (viz-a-viz state-of-the-art). Our framework can handle multiple protected attributes and can easily be adapted for regression tasks.
## II Preliminaries
**Fairness Terminology.** We consider decision support systems as _binary classifiers_ where a prediction label is _favorable_ if it gives a desirable outcome to an input (individual). These favorable predictions may include higher income estimations for loan, low risk of re-offending in parole assessments, and high risk of failing a class. Each dataset consists of a number of _attributes_ (such as income, experiences, prior arrests, sex, and race) and a set of _instances_ that describe the value of attributes for each individual. According to ethical and legal requirements, data-driven software should not _discriminate_ on the basis of an individual's _protected attributes_ such as sex, race, age, disability, colour, creed, national origin, religion, genetic information, marital status, and sexual orientation.
There are several well-established fairness definitions. _Group fairness_ requires the statistics of ML outcomes for different _protected groups_ to be similar [14], using metrics such as _equal opportunity difference_ (EOD), which is the difference between the true positive rates (TPR) of two protected groups. Fairness through unawareness (FTU) [15] requires removing protected attributes during training. However, FTU may provide inadequate support since protected attributes can influence the prediction via a non-protected collider attribute (e.g., race and ZIP code). Fairness through awareness (FTA) [15] is an _individual fairness_ notion that requires that two _individuals_ deemed similar (based on their non-protected attributes) are treated similarly. _Our approach is geared toward individual fairness_.
**Individual Discrimination.** Causal discrimination, first studied in Themis[10], measures the difference between two subgroups via _counterfactual_ queries. It samples individuals with the protected attributes set to \(A\) and compares the outcome to a counterfactual scenario where the protected attributes is set to \(B\). Individual discrimination (ID) is a prevalent notion that adapts counterfactual queries to find an individual such that their counterfactual with a different protected attributes receives more favorable outcome. This fairness notion is used by the state-of-the-art fairness testing to generate fairness defects [22, 19, 17, 18] and closely related to situation testing notion [23]. While standard group fairness metrics (e.g., AOD/EOD) are already quantitative, the quantitative measures do not exist for individual fairness. We propose to adapt information theoretic tools to provide quantitative measures for individual fairness.
**Information-Theoretic Concepts.** The notion of Renyi entropy [24], \(H_{\alpha}(X)\) quantifies the uncertainty (randomness) of a system responding to inputs \(X\). In particular, Shannon
entropy (\(\alpha{=}1\)) and min-entropy (\(\alpha{=}\infty\)) are two important subclasses of Renyi entropy. Shannon entropy (\(H_{1}\)) measures the expected amounts of uncertainty over finitely many events whereas min entropy (\(H_{\infty}\)) measures the uncertainty over single maximum likelihood event.
Consider a deterministic system (like pre-trained DNN) with a finite set of responses and assume that the input \(X\) is distributed uniformly. Thus, the system induces an equivalence relation over the input set \(X\) such that two inputs are equivalent if their system outputs are approximately close, i.e. \(x{\sim}x^{\prime}\) iff \(\mathit{DNN}(x)\approx_{\epsilon}\mathit{DNN}(x^{\prime})\). Let \(X_{o}\) denote the equivalence class of \(X\) with output \(o\). Then, the remaining uncertainty after observing the output of DNN over \(X\) can be written as:
\[H_{1}(X|O)=\sum_{O=o}\frac{|X_{o}|}{|X|}.\log_{2}(|X_{o}|)\qquad\quad\text{( Shannon entropy)}\]
where \(|X|\) is the cardinality of \(X\) and \(|X_{o}|\) is the size of equivalence class of output \(o\). Similarly, the min-entropy is given as
\[H_{\infty}(X|O)=\log_{2}(\frac{|X|}{|O|})\qquad\qquad\qquad\qquad\qquad\text{ (min-entropy)}\]
where \(|O|\) is the number of equivalence classes over \(X\)[25, 26, 27]. Given that the initial entropy is equal to \(\log_{2}(|X|)\) for both entropies, the amount of information from \(X\) used by the system to make decisions are
\[I_{1}(X;O) = \log_{2}(|X|)-H_{1}(X|O),\text{ and}\] \[I_{\infty}(X;O) = \log_{2}(|O|),\]
under Shannon- and min-entropies, with \(I_{1}\leq I_{\infty}\).
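Both measures depend only on the sizes of the output equivalence classes, so they are straightforward to compute. The helper below is our own illustration (not Dice's implementation) and returns \(I_{1}\) and \(I_{\infty}\) in bits from those class sizes:

```python
from math import log2


def qid(class_sizes):
    """Information (in bits) about X revealed by the outputs, given the sizes of
    the output equivalence classes over X (X is assumed to be uniform)."""
    n = sum(class_sizes)                                   # |X|
    h1 = sum(s / n * log2(s) for s in class_sizes)         # H_1(X | O)
    return log2(n) - h1, log2(len(class_sizes))            # I_1(X; O), I_inf(X; O)


qid([4, 4, 4, 4])   # -> (2.0, 2.0): four output classes of four inputs each
```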
**Quantitative Notion and Fairness.** Our approach differs from these state-of-the-art techniques [28, 22, 19, 17, 29, 18] in that it extends the individual discrimination notion with quantitative information flow that enables us to measure the amount of discrimination in the ML-based software systems. Given non-protected attributes and ML outcomes, the _Shannon entropy_ measures the expected amount of individual discrimination over all possible responses varying protected classes, whereas, _min entropy_ measures the amount over a single response from the maximum likelihood class.
**Example 1**.: _Consider a dataset with \(16\) different protected values--sex(2), race(2), and age(4)--distributed uniformly, and suppose that we have \(4\) individuals in the system. We perturb the protected attributes of each individual to generate \(16\) counterfactuals and run them through the DNN to get their prediction scores. Suppose that all \(16\) outputs fall in the same class for the first individual (absolutely fair) and in \(16\) classes of size one for the second individual (absolutely discriminatory). For the third individual, let the outputs be in \(4\) classes with \(\{4,4,4,4\}\) elements in each class (e.g., there is one output class per age group). For the fourth individual, let the outputs be in \(5\) classes with \(\{8,4,2,1,1\}\) elements in each class (e.g., if race=1 then the output is class \(1\); else, if sex=1 then the output is \(2\); else, if age={1,2} then the output is \(3\); else, there is one output class for each age={3,4})._
We work with the following notions of discrimination.
* _Individual Discrimination Notion_. The individual discrimination used by the state-of-the-art techniques can only distinguish between the first individual and the rest, but they cannot distinguish among individuals two to four. In fact, these techniques generate tens of thousands of individual discriminatory instances in a short amount of time [19, 17, 18]. However, they fail to prioritize test cases for mitigation and cannot characterize the amounts of discrimination (i.e., their severity).
* _Shannon Entropy_. Using the Shannon entropy, the initial fairness is \(4.0\) bits, the maximum possible discrimination. The remaining fairness values of the DNN are \(4.0\), \(0.0\), \(2.0\) and \(2.125\) for the first to fourth individuals, respectively. The discrimination is the difference between the initial and remaining fairness, which is \(0.0\), \(4.0\), \(2.0\) and \(1.875\) for the first to fourth individuals, respectively. It is important to note that beyond the two extreme cases, Shannon entropy deems that perturbations to the third individual (rather than the fourth) create a higher amount of discrimination.
* _Min Entropy_. The initial fairness via min entropy is also \(4\) bits. The conditional min entropy is \(\log\frac{16}{1}=4.0\), \(\log\frac{16}{16}=0.0\), \(\log\frac{16}{4}=2.0\), and \(\log\frac{16}{5}=1.7\), for the four individuals, respectively. The amounts of discrimination thus are \(0.0\), \(4.0\), \(\log 4=2.0\) and \(\log 5=2.3\), respectively. Beyond the two extreme cases where both entropies agree, the min entropy deems that perturbations to the fourth individual create a higher amount of discrimination. This is intuitive since the discrimination in the fourth case is more subtle, complex, and significant. Therefore, ML software developers might prioritize those cases characterized by the min entropy (a short computation reproducing these numbers is sketched after this list).
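To make Example 1 concrete, the short Python sketch below computes the Shannon- and min-entropy discrimination directly from the sizes of the output equivalence classes. It is only an illustration of the formulas above, not the implementation used by Dice, and the function names are ours.

```
import math

def shannon_discrimination(class_sizes):
    """Q1-style discrimination: initial entropy log2(m) minus the remaining
    Shannon entropy H1(X|O) computed from the equivalence-class sizes."""
    m = sum(class_sizes)
    remaining = sum(s / m * math.log2(s) for s in class_sizes)
    return math.log2(m) - remaining

def min_entropy_discrimination(class_sizes):
    """Q_inf-style discrimination: log2(m) - log2(m/k) = log2(k)."""
    m, k = sum(class_sizes), len(class_sizes)
    return math.log2(m) - math.log2(m / k)

# The four individuals of Example 1 (m = 16 protected values each):
for sizes in ([16], [1] * 16, [4, 4, 4, 4], [8, 4, 2, 1, 1]):
    print(sizes,
          round(shannon_discrimination(sizes), 3),
          round(min_entropy_discrimination(sizes), 3))
# Prints 0.0/0.0, 4.0/4.0, 2.0/2.0, and 1.875/2.322 bits, matching the text.
```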
## III Overview
**Dice in a Nutshell.** Figure 1 shows an overview of our framework Dice. It consists of two components: (1) an automatic test-generation mechanism based on search algorithms and (2) a debugging approach that localizes the neurons with significant impacts on fairness using a causal algorithm. First, Dice searches through the space of the input dataset to find circumstances on the non-protected attributes under which the DNN-under-test shows a significant dependency on the protected attributes in making decisions. In doing so, it works in global and local phases. In the global phase, the search explores the input space to increase the amount of discrimination in each step of the search. In the local phase, it exploits the promising seeds from the global phase to generate as many discriminatory instances as possible.
The key element of the search is a threshold-based clustering algorithm, used for computing both gradients and objective functions that provide smooth feedback. The search characterizes the quantitative individual discrimination (QID) and
returns a set of interesting inputs. Second, Dice uses those inputs to localize neurons with the largest causal effects on the amounts of discrimination. In doing so, it intervenes [30] over a set of suspicious neurons. For every neuron, our debugging approach forces the neuron to be active (\(n{>}0\)) and non-active (\(n{=}0\)) over the test cases, as long as the functional accuracy of the DNN remains in a valid range. Then, it computes the difference between the amounts of QID in these two cases to characterize the causal effect of the neuron on fairness.
Dice reports the top \(k\) neurons that have positive impacts (i.e., their activation reduces the amounts of discrimination) and the top \(k\) that have negative impacts (i.e., their activation increases the amounts of discrimination). A potential mitigation strategy is to intervene to keep a small set of neurons activated (for the positive neurons) or deactivated (for the negative neurons).
**Test Cases.** Consider the _adult census income_[31] dataset with a pre-trained model with \(6\) layers [17] to overview Dice in practice. We ran Dice for \(1\) hour and obtained \(230,593\) test cases. It discovered \(36\) clusters, up from the initial \(14\) clusters, and the amounts of QID are \(4.05\) and \(2.64\) bits for min entropy and Shannon entropy, respectively, out of a total of \(5.3\) bits of information from the protected attributes. Ordering the test cases by severity, we have \(6\) test cases with the maximum QID discrimination of \(5.3\) bits. In addition, we have \(29\) and \(112\) test cases with \(5.2\) and \(5.1\) bits of QID discrimination. The reported numbers are averaged (and rounded) over \(10\) runs.
**Localization and Mitigation.** Dice uses the generated test cases to localize layers and neurons with a significant causal contribution to the discrimination. For the census dataset, it identifies the second layer as the layer with the largest sensitivity to protected attributes. Among the neurons in this layer, Dice found that the \(15\)th neuron has the largest negative influence on fairness (the discrimination decreased by \(19.6\%\) when it is deactivated) and the \(19\)th neuron has the largest positive influence on fairness (the discrimination decreased by \(17.6\%\) when it is activated). Following this localization, a simple mitigation strategy of activating or deactivating these neurons reduces the amounts of QID discrimination by \(20\%\) with \(3\%\) accuracy loss.
**Comparison to the State-of-the-art.** We compare Dice to the state-of-the-art techniques in terms of generating individual discrimination (ID) instances (rather than the quantitative notion) per protected attribute. Our goal is to evaluate whether the clustering-based search is effective in generating discriminatory instances. We run Dice and the baselines for \(15\) minutes, and report the average of results over \(10\) runs. The baselines include Aequitas[19], ADF [17], and NeuronFair[18]. Considering sex as the protected attribute in the census dataset, Dice generated \(79.0k\) instances whereas Aequitas, ADF, and NeuronFair generated \(10.4k\), \(18.2k\), and \(21.6k\) discriminatory instances, respectively. Overall, Dice generates more ID instances in all cases, with higher success rates. However, Dice is slower in finding the first ID instance, by a few seconds on average, since our approach does not generate ID instances in the global phase. When considering the time to the first 1,000 instances, Dice significantly outperforms the state-of-the-art. We conjecture that the improvements are due to the smoother search space provided by quantitative feedback.
## IV Problem Statement
We consider DNN-based classifiers with the set of input variables \(A\) partitioned into a set of protected variables \(Z\) (such as race, sex, and age) and non-protected variables \(X\) (such as profession, income, and education). We further assume the output to consist of \(t\) prediction classes.
**Definition IV.1** (DNN: Semantics).: _A deep neural network (DNN) encodes a function \(\mathcal{D}:X\times Z\rightarrow[0,1]^{t}\) where \(X=X_{1}\times X_{2}\cdots\times X_{n}\) is the set of non-protected input variables, \(Z=Z_{1}\times Z_{2}\cdots\times Z_{r}\) is the set of protected input variables, and the output is a \(t\)-dimensional probabilistic vector corresponding to \(t\) prediction classes. The predicted label \(\mathcal{D}_{\ell}(x,z)\) of an input pair \((x,z)\) is the index of the maximum score, i.e. \(\mathcal{D}_{\ell}(x,z)=\arg\max_{i}\mathcal{D}(x,z)(i)\). We assume that the protected input variables have finite domains, and we let \(m\) be the cardinality of \(Z\)._
Fig. 1: Workflow of Dice. Given a DNN and relevant input dataset, Dice quantifies QID discrimination via testing and applies causal debugging to localize and mitigate QID discrimination.
**Definition IV.2** (DNN: Syntax).: _A DNN \(\mathcal{D}\) is parameterized by the input dimension \(n{+}r\), the output dimension \(t\), the depth of hidden layers \(N\), and the weights of its hidden layers \(W_{1},W_{2},\ldots,W_{N}\). Our goal is to test and debug a pre-trained neural network with known parameters and weights. Let \(D_{i}\) be the output of layer \(i\), which implements an affine mapping from the output of the previous layer \(D_{i-1}\) and its weights \(W_{i-1}\) for \(1\leq i\leq N\), followed by_
1. _a fixed non-linear activation unit (e.g., ReLU defined as_ \(D_{i-1}\mapsto\max\left\{W_{i-1}\cdot D_{i-1},0\right\}\)_) for_ \(1\leq i<N\)_, or_
2. _a SoftMax function that maps scores to probabilities of each class for_ \(i=N\)_._
_Let \(D_{i}^{j}\) be the output of neuron \(j\) at layer \(i\)._
**Individual Discrimination.** We say a DNN \(\mathcal{D}\) is biased based on the causal discrimination notion [16, 11, 17, 18] if
\[\exists z_{1},z_{2}\in Z,x\in X\ s.t.\ \mathcal{D}_{\ell}(x,z_{1})\neq \mathcal{D}_{\ell}(x,z_{2}),\]
for protected values \(z_{1}\neq z_{2}\). Intuitively, the idea is to find an individual such that their counterfactual with different protected attributes, such as race, receives a different outcome.
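A minimal illustration of this check is sketched below (our own code, with hypothetical helper names: `make_variant` substitutes protected values into an instance and `predict_label` returns the DNN's predicted label).

```
def is_discriminatory(x, protected_values, make_variant, predict_label):
    """Individual (causal) discrimination check: x is discriminatory if two
    counterfactuals that differ only in protected values receive different
    predicted labels from the DNN."""
    labels = {predict_label(make_variant(x, z)) for z in protected_values}
    return len(labels) > 1
```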
**Quantitative Individual Discrimination.** In the setting of fairness testing, it is often desirable to quantify the _amounts_ of bias for individuals. We define the notion of quantitative individual discrimination (QID) based on the equivalence classes induced from the output of the DNN over protected attributes. Formally, \(\mathit{QID}(Z,X=x)=\langle Z_{1},\ldots,Z_{k}\rangle\), which is the quotient space of \(Z\) characterized by the DNN outputs under an individual with non-protected value \(x\). Using this notion, we say a pair of protected values \(z,z^{\prime}\) are in the same equivalence class \(i\) (i.e., \(z,z^{\prime}\in Z_{i}\)) if and only if \(\mathcal{D}(z,x)\approx_{\epsilon}\mathcal{D}(z^{\prime},x)\).
Given that \(Z\) is uniformly distributed and \(\mathcal{D}\) is a deterministic function, we can quantify the QID notion for an individual with non-protected value \(x\) according to the Shannon and min entropy, respectively:
\[Q_{1}(Z,x)=\log_{2}(m)-\sum_{i=1}^{k}\frac{|\mathit{QID}_{i}(Z,x)|}{m}\cdot\log_{2}(|\mathit{QID}_{i}(Z,x)|)\]
\[Q_{\infty}(Z,x)=\log_{2}(m)-\log_{2}(\frac{m}{k})=\log_{2}(k).\]
where \(m\) is the cardinality of \(Z\), \(|\mathit{QID}_{i}(Z,x)|\) is the size of equivalence class \(i\), and \(k\) is the number of equivalence classes.
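The following sketch (ours, not the tool's implementation) makes the two measures concrete: it groups the \(m\) counterfactual scores of one individual into \(\epsilon\)-equivalence classes and evaluates \(Q_{\infty}\) and \(Q_{1}\). The single-pass grouping is an approximation of the constraint-based clustering used by Dice.

```
import math

def equivalence_classes(scores, eps):
    """Group the m counterfactual scores into classes; a score joins a class
    while it stays within eps of the class's first member (a single-pass
    approximation of the constraint that scores differing by more than eps
    cannot share a class)."""
    classes = []
    for s in sorted(scores):
        if classes and s - classes[-1][0] <= eps:
            classes[-1].append(s)
        else:
            classes.append([s])
    return classes

def qid(scores, eps):
    """Q_inf and Q_1 for one individual, given the scores D(z_i, x)."""
    m = len(scores)
    cls = equivalence_classes(scores, eps)
    q_inf = math.log2(len(cls))
    q_1 = math.log2(m) - sum(len(c) / m * math.log2(len(c)) for c in cls)
    return q_inf, q_1

# Eight protected-value variants of one individual, epsilon = 0.025:
print(qid([0.10, 0.11, 0.12, 0.45, 0.46, 0.80, 0.81, 0.82], eps=0.025))
```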
**Debugging/Mitigating DNN for QID.** After characterizing the amounts of discrimination via \(\mathit{QID}\), our next step is to localize a set of layers and neurons that _causally_ affect the output of the DNN to have \(k\) equivalence classes.
Causal logic [30] provides a firm foundation to reason about the causal relationships between variables. We consider a structural causal model (SCM) with exogenous variables \(U\) over the unobserved input factors, endogenous variables \(V\) over \((X,Z,D_{i}^{j})\); and the set of functions \(\mathcal{F}\) over the set \(V\) using the DNN function \(\mathcal{D}\) and exogenous variables \(U\). Using the SCM, we aim to estimate the average causal effect (ACE) [30] of neuron \(D_{i}^{j}\) on the QID.
A primary tool for performing such computation is called do logic [20]. We write do\((i,j,y)\) to indicate that the output of neuron \(j\) at layer \(i\) is intervened to stay \(y\). In doing so, we remove the incoming edges to the neuron and force the output of the neuron to take a pre-defined value \(y\), but we are not required to control back-door variables due to the feed-forward structure of the DNN. Then, the ACE of neuron \(D_{i}^{j}\) on the quantitative individual discrimination with min entropy can be written as \(E[Q_{\infty}\mid\texttt{do}(i,j,y),k,l]\), which is the expected QID after intervening on the neuron given that the non-intervened DNN characterized \(k\) classes with an accuracy of \(l\). Our goal is to find neurons with the largest causal effects on the QIDs, requiring that such interventions are faithful to the functionality of the DNN.
**Definition IV.3** (Quantitative Fairness Testing and Debugging).: _Given a deep neural network model \(\mathcal{D}\) trained over a dataset \(A\) with protected (\(Z\subset A\)) and non-protected (\(X\subset A\)) attributes; the search problem is to find a single non-protected value \(x\in X\) such that the quantitative individual discrimination (QID), for a chosen measure \(Q_{1}\) or \(Q_{\infty}\), is maximized over the \(m\) protected values \(\Sigma=\{z_{1},\ldots,z_{m}\}\). Given the inputs \((\Sigma,x)\) characterizing the maximum QID, our debugging problem is to find a minimal subset of layers \(l\subset\{1,\ldots,N\}\) and neurons \(D_{l}^{(J)}\) for \(J\subseteq\{1,\ldots,|W_{l}|\}\) such that the average causal effects of \(D_{l}^{j}\), \(j\in J\), on the QID are maximal._
## V Approach
**Characterizing Quantitative Individual Discrimination.** Given a DNN \(\mathcal{D}\) over a dataset \(A\), our goal is to characterize the worst-case QID over all possible individuals. Since min entropy characterizes the amounts of discrimination from one prediction with \(Q_{\infty}(Z,x)\geq Q_{1}(Z,x)\), it is a useful notion to prioritize the test cases. Therefore, we focus on \(Q_{\infty}\) and propose the following objective function:
\[\max_{x\in X}\ 2^{Q_{\infty}(Z,x)}+\left(1-\exp(-0.1\cdot\delta)\right)\]
where \(2^{Q_{\infty}(Z,x)}=k\) and \(\delta\) is the maximum distance between equivalence classes, normalized with the exponential function so that the second term remains between \(0\) and \(1\). This term is used to break ties when two instances characterize the same number of classes, by preferring the one with the largest distance. Overall, the goal is to find a single value of the non-protected attributes \(x\) such that the neural network model \(\mathcal{D}\) predicts many distinguishable classes of outcomes when \(x\) is paired with the \(m\) protected values. However, finding those inputs requires an exhaustive search over an exponential set of subsets of the input space, and hence is clearly intractable. We propose a gradient-guided search algorithm that aims to search the space of input variables (attributes) to maximize the number of equivalence classes and generate as many discriminatory instances as possible.
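In code, the objective is simply a re-weighting of the cluster count by a bounded tie-breaking term. The sketch below is ours; it takes the \(\epsilon\)-equivalence classes (e.g. from the earlier sketch) as input, and reading \(\delta\) as the spread between the extreme class representatives is our assumption.

```
import math

def global_objective(classes):
    """Fitness 2**Q_inf + (1 - exp(-0.1 * delta)) of one candidate x, given
    its epsilon-equivalence classes; delta is taken here as the spread
    between the extreme class representatives."""
    k = len(classes)                                   # 2**Q_inf(Z, x) = k
    delta = max(c[0] for c in classes) - min(c[0] for c in classes)
    return k + (1.0 - math.exp(-0.1 * delta))
```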
**Search Approach.** Our search strategy consists of global and local phases as in some of the prior work [17, 29, 18]. The goal of the global phase is to find the maximum quantitative individual discrimination via gradient-guided clustering. The local phase uses the promising instances to generate a maximum number of discriminatory instances (ID).
_Global Phase._ Given a current instance \(x\), the global stage first uses \(m\) different values from the space of protected attributes, while keeping the values of non-protected attributes the same. Then, it receives \(m\) prediction scores from the DNN and
partitions them into \(k\) classes. We adopt a constraint-based clustering with tolerance \(\epsilon\), where two elements cannot be in the same cluster if their scores differ by more than \(\epsilon\). Now, the critical step is to perturb the current instance over a subset of non-protected attributes in a direction that will likely increase the number of clusters induced from the perturbed instance in the next step of the global search.
In doing so, we first compute the gradients of the DNN loss function for a pair of instances (say \(a,a^{\prime}\)) in the cluster with the most elements. The intuition is that we are then more likely to split the largest cluster into \(2\) or more sub-clusters and increase the number of partitions in the next step. For the pair of samples, we use the non-protected attributes whose gradients have the same direction \(d\), since this indicates a high sensitivity of the loss function with respect to small changes on those common features of the pair. If we were to use gradients of opposite directions, we would neutralize the effects of the gradients, since we only perturb one instance over the non-protected attributes. Finally, we perturb the current sample \(x\) to generate \(x^{\prime}\) using the direction \(d\) and step size \(s_{g}\).
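A minimal sketch of one such global step is shown below (ours, not Dice's actual code). `make_variant`, `predict`, and `grad_loss` are user-supplied callables standing in for the counterfactual generator, the DNN score, and the loss gradient, and the single-pass \(\epsilon\)-grouping approximates the constraint-based clustering.

```
import numpy as np

def global_step(x, protected_values, make_variant, predict, grad_loss,
                NP, eps, s_g):
    """One global-phase step: cluster the counterfactual scores, pick a pair
    from the largest cluster, and perturb x along the non-protected features
    whose loss gradients share the same sign for both instances."""
    variants = [make_variant(x, z) for z in protected_values]
    scores = np.array([predict(v) for v in variants])

    # epsilon-grouping of the counterfactual scores (single-pass, sorted)
    order = np.argsort(scores)
    clusters, current = [], [order[0]]
    for idx in order[1:]:
        if scores[idx] - scores[current[-1]] <= eps:
            current.append(idx)
        else:
            clusters.append(current)
            current = [idx]
    clusters.append(current)

    # pick a pair from the largest cluster and keep the non-protected
    # attributes whose loss gradients point in the same direction
    largest = max(clusters, key=len)
    if len(largest) < 2:
        return np.asarray(x, dtype=float)      # nothing to split this step
    g1, g2 = grad_loss(variants[largest[0]]), grad_loss(variants[largest[1]])
    direction = np.zeros_like(np.asarray(x, dtype=float))
    for i in NP:
        if np.sign(g1[i]) == np.sign(g2[i]) and np.sign(g1[i]) != 0:
            direction[i] = np.sign(g1[i])
    return np.asarray(x, dtype=float) + s_g * direction
```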
_Local Phase._ Once we detect an instance with more than \(2\) clusters, we enter a local phase where the goal is to generate as many discriminatory instances (ID) as possible. In our quantitative approach, we say that an unfavorable decision for an individual \(x\) is discriminatory if there is a counterfactual individual \(x^{\prime}\) that received a favorable outcome. Similar to the state-of-the-art [19, 17, 18], we use a non-linear optimizer that takes an initial instance \(x\), a step function to generate the next instance in the neighborhood of the current instance, and an objective function that quantifies the discrimination of the current instance. Since our approach uses a continuous objective based on the characteristics of clusters, it enables us to guide the local search to generate discriminatory instances.
**Search Procedure.** Algorithm 1 sketches our search algorithm to quantify the amounts of bias. We first use clustering (the KMeans algorithm) to partition the data points into \(p\) groups (line 1). Next, we run the algorithm until the time-out \(T\) is reached, where in each iteration we pick a seed sample randomly from one of the \(p\) partitions (lines 2-3). Then, we proceed into the global and local phases of the search.
```
Input: Dataset \(A\), deep learning model \(\mathcal{D}\), the loss function for the DNN \(J\), protected attributes \(P\), non-protected attributes \(NP\), the number of partitions over the dataset \(p\), the step size in global perturbation \(s_{g}\), the step size in local perturbation \(s_{l}\), the maximum number of global iterations \(N_{g}\), the maximum number of local iterations \(N_{l}\), the tolerance \(\epsilon\), and time-out \(T\).
Output: Num. Clusters and (Local+Global) Test Cases.
1   \(A^{\prime}\), \(cur\leftarrow\) KMeans(\(A\), \(p\)), time()
2   while time() - \(cur<T\) do
3     \(x\), \(i\), \(k\), \(\delta\leftarrow\) pick(\(A^{\prime}\)), 0, 1, 0.0
4     while \(i<N_{g}\) do
5       \(I_{m},S_{m}\leftarrow\) Generate_Predict(\(x\), \(P\))
6       \(X_{k}\), \(\delta^{\prime}\leftarrow\) Clust(\(S_{m}\), \(\epsilon\))
7       \(a,a^{\prime}\leftarrow\) Choose_Pair_Max(\(X_{k}\))
8       \(Gs\leftarrow(\nabla J(a),\nabla J(a^{\prime}))\)
9       \(d\leftarrow\) choose_common_direct(\(Gs\), \(NP\))
10      \(x^{\prime}\leftarrow\) perturb(\(x\), \(d\), \(s_{g}\))
11      if (\(|X_{k}|>k\)) or (\(|X_{k}|=k\) and \(\delta^{\prime}>\delta\)) then
12        eval_f \(\leftarrow\lambda_{x}\) {
13          \(I^{\prime}_{m},S^{\prime}_{m}\leftarrow\) Generate_Predict(\(x\), \(P\))
14          \(X_{k^{\prime}}\leftarrow\) Clust(\(S^{\prime}_{m}\), \(\epsilon\))
15          \(\Delta\leftarrow\arg\max(X_{k^{\prime}})-\arg\min(X_{k^{\prime}})\)
16          \(local\_inps\).add(\(x\))
17          return \(-\Delta\) }
18        step_f \(\leftarrow\lambda_{x}\) perturb_local(\(x\), \(s_{l}\))
19        LBFGS(\(x\), eval_f, step_f, \(N_{l}\))
20      \(global\_inps\).add(\(a\))
21      \(k\), \(x\leftarrow\) max(\(k\), \(|X_{k}|\)), \(x^{\prime}\)
22
23  return \(k\), \(I=global\_inps\cup local\_inps\)
```
**Algorithm 1** Dice (Search)
In the local phase, we use the general-purpose optimizer LBFGS [32], which takes an initial seed \(x\), an objective function, a step function, and the maximum number of local iterations \(N_{l}\); it returns the instances generated during the optimization (lines 12-19). In the objective function, shown as eval_f (lines 12-17), we generate \(m\) instances with the same non-protected values but different protected ones (line 13). We generate prediction scores for those instances and cluster them with tolerance parameter \(\epsilon\) (line 14). Then, we compute the difference between the indices of the two clusters with the smallest and largest scores (line 15). Finally, we record the generated sample and return the difference as the evaluation of the optimizer at the current sample (lines 16-17). The step function is shown as perturb_local (line 18); it guides the optimizer to take one step in the input space. Our step function uses a random sample from a different cluster than the current sample. Then, it computes the normalized sum of gradients and perturbs the current sample using the smallest gradients to remain in its neighborhood.
**Debugging Approach.** Since it is computationally difficult to intervene over all possible neurons in a DNN, we first
adapt a layer-localization technique from the literature of DL framework debugging [33, 21] where we detect a layer with the largest sensitivity to the protected attributes. Let \(D_{i}(z,x)\) be the output of layer \(i\) over protected value \(z\) and non-protected value \(x\). Let \(\Delta_{i}(x):\mathbb{R}^{|D_{i}|}\times\mathbb{R}^{|D_{i}|}\rightarrow\mathbb{R}\) be the distance between the outputs of DNN at layer \(i\) as triggered by \(m\) different protected values and the same non-protected value \(x\), and let \(\delta_{i}\) be the \(\max_{x}\Delta_{i}(x)\). The rate of changes in the sensitivity of layer \(i\) (w.r.t protected attributes) is
\[\rho_{i}=\frac{\delta_{i}-\max_{j}\delta_{j}}{\max_{j}\delta_{j}+\epsilon},\qquad\text{with}\ \ 0\leq j<i,\]
where \(\delta_{0}=0.0\) and \(\epsilon=10^{-7}\) (to avoid division-by-zero [33, 21]). Let \(l=\arg\max_{i}\rho_{i}\) be the layer index with the maximum rate of change. Our next step is to localize neurons in the layer \(l\) that have significant positive or negative effects on fairness. Let \(V_{l}^{j}\) be the set of possible values for neuron \(j\) at layer \(l\) (recorded during the layer localization). We are interested in computing the average causal effects when the neuron \(D_{l}^{j}\) is activated vs. deactivated, noting that such interventions might affect the functionality of the DNN. Therefore, among a set of intervention values, we choose one activated value \(v_{1}\in V_{l}^{j}\) with \(v_{1}>0\) and one deactivated value \(v_{2}\in V_{l}^{j}\) with \(v_{2}\approx 0\), keeping the functional accuracy of the DNN within \(\epsilon\) of the original accuracy \(A\). We then define the average causal difference (ACD) for a neuron \(D_{l}^{j}\) as:
\[\mathbb{E}[Q_{\infty}\mid\texttt{do}(l,j,v_{1}>0),k,A]-\mathbb{E}[Q_{\infty} \mid\texttt{do}(l,j,v_{2}\approx 0),k,A],\]
where the do notation is used to force the output of neuron \(j\) at layer \(l\) to a fixed value \(v\). We then return the neuron indices with the largest positive ACD (aggravating discrimination) and the smallest negative ACD (mitigating discrimination). Let \(\hat{i}\) and \(\hat{j}\) be the layer and neuron with the largest positive \(ACD\). One simple mitigation strategy is thus to deactivate the neuron \(D_{\hat{i}}^{\hat{j}}\), expecting to reduce QID by a fraction \(ACD/k\). Similarly, activating the neuron with the smallest negative \(ACD\) is expected to reduce QID by a fraction \(|ACD|/k\).
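As an illustration of the intervention and the ACD estimate, the simplified numpy sketch below is ours (not the TensorFlow implementation of Dice); it assumes a bias-free ReLU network with at least two output classes and summarizes each prediction by its class-1 score.

```
import numpy as np

def forward(x, weights, intervention=None):
    """Forward pass of a bias-free fully connected ReLU network
    (cf. Definition IV.2).  intervention = (l, j, v) implements do(l, j, v):
    the output of neuron j at hidden layer l is forced to the value v."""
    h = np.asarray(x, dtype=float)
    for l, W in enumerate(weights[:-1]):
        h = np.maximum(W @ h, 0.0)
        if intervention is not None and intervention[0] == l:
            h[intervention[1]] = intervention[2]
    z = weights[-1] @ h
    return np.exp(z) / np.exp(z).sum()          # softmax prediction scores

def num_classes(scores, eps):
    """Number of epsilon-equivalence classes of scalar scores."""
    s = np.sort(np.asarray(scores, dtype=float))
    return 1 + int(np.sum(np.diff(s) > eps))

def avg_causal_difference(test_cases, weights, l, j, v_on, eps):
    """ACD of neuron j at layer l: E[Q_inf | do(l,j,v_on)] - E[Q_inf | do(l,j,0)],
    estimated over QID test cases; each test case holds the m counterfactual
    inputs of one individual."""
    q_on, q_off = [], []
    for variants in test_cases:
        for value, bucket in ((v_on, q_on), (0.0, q_off)):
            scores = [forward(v, weights, (l, j, value))[1] for v in variants]
            bucket.append(np.log2(num_classes(scores, eps)))
    return float(np.mean(q_on) - np.mean(q_off))
```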
**Debugging Procedure.** Algorithm 2 shows the debugging aspect of Dice. Given a set of test cases from the search algorithm, we first use a notion of distance (e.g., \(\Delta=L_{1}\)) to compute the difference between any pair of protected values \(z,z^{\prime}\) w.r.t. the outputs of layer \(l\in\{1,\ldots,N\}\) (line 1). Then, we compute the rates of change (lines 2-3) and return a layer \(l\) with the largest change (line 4). We compute various statistics on the output of every neuron \(i\) at layer \(l\), such as the minimum, maximum, average, average\(\pm\)std. dev, average\(\pm\)2\(\cdot\)std. dev, etc. (line 5). Among those values, we take the smallest and largest values such that the intervention on the neuron \(i\) at layer \(l\) has minimal impact on the accuracy of the DNN (line 6). Finally, we compute the average causal difference (line 7) and return the index of the layer, the neurons with large negative influence, and the neurons with large positive influence.
```
Input: Dataset \(A=(A_{X},A_{Z})\), \(\mathcal{D}\) with accuracy \(\mathcal{A}_{\mathcal{D}}\), test cases \(I\), the distance function \(\Delta\), the tolerance of layer localization \(\epsilon_{1}\), the tolerance of accuracy loss \(\epsilon_{2}\), and \(k\) top items.
Output: Layer Index, Negative, and Positive Neurons.
/* Layer Localization */
1  \(\delta\leftarrow\lambda_{l}\ \max\sum_{x\in I}\Delta\big{(}D_{l}(z,x),D_{l}(z^{\prime},x)\big{)}\)
2  \(\delta[0]\), \(\delta_{max}\leftarrow\) 0.0, \(\lambda_{i}\max_{j<i}\delta[j]\)
3  \(\rho\leftarrow\lambda_{i}\ \frac{\delta[i]-\delta_{max}[i]}{\delta_{max}[i]+\epsilon_{1}}\)
4  \(l\leftarrow\arg\max_{i}\rho[i]\)
/* Neuron Localization */
5  \(V_{l}\leftarrow\lambda_{i}\ \text{stats}(D_{l}^{i})\)
6  \(v_{l}\leftarrow\lambda_{i}\ \lambda_{j}\ |\mathcal{A}_{\mathcal{D}}-\mathcal{A}_{\mathcal{D}\leftarrow\texttt{do}(l,i,V_{l}[j])}|\leq\epsilon_{2}\)
7  \(ACD_{l}\leftarrow\lambda_{j}\ \mathbb{E}(k^{\prime}\mid\texttt{do}(l,j,v_{l}^{j}>0))-\mathbb{E}(k^{\prime}\mid\texttt{do}(l,j,v_{l}^{j}\approx 0))\)
8  return \(l\), \(top_{k}(\max\ ACD_{l})\), \(top_{k}(\min\ ACD_{l})\)
```
**Algorithm 2** Dice (Debugging).
## VI Experiments
**Datasets and DNN models.** We consider \(10\) socially critical datasets from the literature of algorithmic fairness. These datasets and their properties are described in Table I. For the DNN model, we used the same architecture as the literature [17, 18, 29] and trained all datasets on a six-layer fully-connected neural network with \(\langle 64,32,16,8,4,2\rangle\) neurons. We used the same hyperparameters for all training runs, with num_epochs, batch_size, and learning_rate set to \(1000\), \(128\), and \(0.01\), respectively. The accuracies of the trained models are reported in Table V.
**Technical Details.** We implemented Dice with TensorFlow v2.7.0 and scikit-learn v0.22.2. We run all the experiments on an Ubuntu 20.04.4 LTS server with an AMD Ryzen Threadripper PRO 3955WX 3.9GHz 16-core (32-thread) CPU and two NVIDIA GeForce RTX 3090 GPUs. We choose the values \(10\), \(1000\), \(0.025\), \(1\), and \(1\) for max_global, max_local, \(\epsilon\), \(s_{g}\), and \(s_{l}\) in Algorithm 1, respectively, and take the average over \(10\) runs for all experiments. In Algorithm 2, we used the \(L_{1}\)-norm, \(10^{-7}\), \(0.05\), and \(3\) for \(\Delta\), \(\epsilon_{1}\), \(\epsilon_{2}\), and \(k\), respectively.
**Research Questions.** We seek to answer the following three questions using our experimental setup.
* Can Dice characterize the amounts of information from protected attributes used for the inferences?
* Is the proposed search algorithm effective and efficient (vis-a-vis the state-of-the-art techniques) in generating individual discrimination instances?
* Can the proposed causal debugging guide us to localize and mitigate the amounts of discrimination?
* Our open-source tool Dice with all experimental subjects is publicly accessible:
### _Characterizing QID via Search (RQ1)_
An important goal is to characterize the amount of information from protected attributes used during the inference of DNN models. Table II shows the results of experiments to answer this research question. The left side of the table shows the
initial characteristics such as the number of protected values (\(m\)), the maximum possible amount of discrimination (\(Q_{I}\)) based on min(\(\epsilon^{-1},m\)), and the initial number of clusters found using samples from the dataset (\(K_{I}\)). The right side of the table shows the results after running our search for \(1\) hour. The column #\(I\) is the number of QID instances generated, and \(K_{F}\) is the maximum number of clusters discovered by Dice. The column T\({}_{K_{F}}\) is the time taken to find the maximum number of clusters from an input with initial clusters \(K_{I}\) (in seconds). The columns \(Q_{\infty}\) and \(Q_{1}\) are the quantitative individual discrimination based on min entropy and Shannon entropy, respectively. The columns #\(I_{K_{F}^{1}}\), #\(I_{K_{F}^{2}}\), and #\(I_{K_{F}^{3}}\) show the number of test cases with the highest, second-highest, and third-highest QIDs, respectively, which order the test cases by their QID severity. Overall, the results show that Dice can find \(3.4\times\) more clusters (on average) than the initial characteristics within one minute of search. The DNN for the Students dataset showed the largest increase in the number of clusters, going from \(1.9\) to \(10.9\). Dice found that the Adult Income Census dataset has the largest amount of QID, where \(4.05\) out of \(5.3\) bits (\(76.4\%\)) from the protected variables are used to make decisions. The German Credit dataset, with \(1.61\) out of \(4.0\) bits (\(40.0\%\)), showed the least amount of discrimination. For test-case prioritization, the column #\(I_{K_{F}^{1}}\) shows our approach to be useful in finding a small percentage of generated test cases with the worst-case discrimination. In \(7\) out of \(10\) experiments, Dice found fewer than \(50\) test cases with severe discrimination out of hundreds of thousands of inputs.
**Answer RQ1:** The search algorithm is effective in characterizing the amounts of discrimination via QID. Within \(1\) hour, it increased the number of clusters by \(3.4\times\) on average, and found instances that used up to \(76\%\) of the protected information (\(4.05\) out of \(5.3\) bits) to infer DNN outcomes. Dice is useful for prioritizing test cases by their severity: it generates fewer than \(50\) test cases with the maximum QID among hundreds of thousands of test cases.
### _Individual Discriminatory Instances (RQ2)_
In this section, we compare the efficiency and effectiveness of our search algorithm to the state-of-the-art techniques in searching for individual discrimination (\(ID\)) instances (as defined in Section IV). Our baselines are Aequitas[19], ADF[17], and NeuronFair[18]. We obtained the implementations of these tools from their GitHub repositories and configured them according to the prescribed settings to have the best performance. Following these techniques, we report the results for each protected attribute separately. Table III shows the results of the baselines and Dice in runs of 15 minutes. The results are averaged over \(10\) repeated runs. The column #\(ID\) is the total number of generated individual discriminatory instances. The column \(l\_s\) is the success rate of the local stage of the search. We exclude the global success rate since the goal of the global phase in our search is to maximize QID, whereas the local phase focuses on generating many \(ID\) instances. We calculate the success rate as the number of \(ID\)s found over the total number of generated samples. The columns \(T.1st\) and \(T.1k\) are the amounts of time (in seconds) taken to find the first \(ID\) instance and to generate \(1,000\) individual discriminatory instances (note: \(N/A\) in column \(T.1k\) means that the tool did not generate \(1,000\) \(ID\)s within the experiment timeout of \(900\) seconds on average over 10 runs).
The results show that Dice outperforms the state-of-the-art in generating many ID instances. In particular, Dice finds \(27.1\times\), \(16.0\times\), and \(16.0\times\) more \(ID\)s in the best case compared to Aequitas, ADF, and NeuronFair, respectively. Dice also generates \(3.2\times\), \(2.3\times\), and \(2.6\times\) more \(ID\)s in the worst case compared to Aequitas, ADF, and NeuronFair, respectively. The success rates of the local search are \(20.6\%\), \(33.0\%\), \(29.6\%\), and \(78.2\%\) on average for Aequitas, ADF, NeuronFair, and Dice, respectively. For the time taken to find the first \(ID\), Aequitas achieves the best result with an average of \(0.03\) (s), whereas it took Dice \(1.46\) (s) on average to find the first \(ID\). On average, Dice takes the lowest time to generate \(1000\) \(ID\)s at \(57.2\) (s), while ADF
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
**Dataset** & **\#Instances** & **\#Features** & \begin{tabular}{l} **Protected Groups** \\ _Name_ \\ \end{tabular} & \begin{tabular}{l} **Num.** **Protected** \\ _Values_ (m) \\ \end{tabular} & \multicolumn{2}{c|}{**Outcome Label**} \\ \cline{4-7} \begin{tabular}{l} Adli \\ _Census_ \\ \end{tabular} & \multirow{2}{*}{\(32,561\)} & \multirow{2}{*}{\(13\)} & Sex & \(2\) & \multirow{2}{*}{90} & \multirow{2}{*}{High Income} & \multirow{2}{*}{Low Income} \\ \cline{4-7} Income[31] & & & Age & & & \\ \cline{4-7} \begin{tabular}{l} _Census_ \\ \end{tabular} & \multirow{2}{*}{\(7,214\)} & \multirow{2}{*}{\(12\)} & Sex & \(2\) & \multirow{2}{*}{12} & \multirow{2}{*}{Did not Reoffend} & \multirow{2}{*}{Roeffend} \\ \cline{4-7} \begin{tabular}{l} _Census_ \\ \end{tabular} & & & & Age & & & \\ \hline \multirow{2}{*}{_Census_[34]} & \multirow{2}{*}{\(7,214\)} & \multirow{2}{*}{\(12\)} & Sex & \(2\) & \multirow{2}{*}{16} & \multirow{2}{*}{Good Credit} & \multirow{2}{*}{Bad Credit} \\ \cline{4-7} \begin{tabular}{l} _German_ \\ Credit[35] \\ \end{tabular} & & & Age & & & \\ \hline _Default_ & \multirow{2}{*}{\(13,636\)} & \multirow{2}{*}{\(23\)} & Sex & \(2\) & \multirow{2}{*}{12} & \multirow{2}{*}{Default} & \multirow{2}{*}{Not Default} \\ \cline{4-7} \begin{tabular}{l} _Hear_ \\ Health[37] \\ \end{tabular} & & & Age & & & \\ \hline _Bank Marketing[38]_ & \(45,211\) & \(16\) & Age & \(9\) & \(9\) & Subscriber & Non-subscriber \\ \hline _Dicebies[39]_ & \(768\) & \(8\) & Age & \(9\) & \(9\) & Positive & Negative \\ \hline _Shalomens_ & \multirow{2}{*}{\(1044\)} & \multirow{2}{*}{\(32\)} & Sex & \(2\) & \multirow{2}{*}{16} & \multirow{2}{*}{Pass} & \multirow{2}{*}{Not Pass} \\ \cline{4-7} \begin{tabular}{l} Performance[40] \\ \end{tabular} & & & Age & & & \\ \hline \multirow{2}{*}{MEPS15 [41]} & \multirow{2}{*}{\(15,830\)} & \multirow{2}{*}{\(137\)} & Race & \(2\) & \multirow{2}{*}{36} & \multirow{2}{*}{Utilized Benefits} & \multirow{2}{*}{Not Utilized Benefits} \\ \cline{4-7} \begin{tabular}{l} _Dice_ \\ \end{tabular} & & & Sex & & & \\ \hline \multirow{2}{*}{MEPS16 [41]} & \multirow{2}{*}{\(15,675\)} & \multirow{2}{*}{\(137\)} & Age & \multirow{2}{*}{36} & \multirow{2}{*}{Utilized Benefits} & \multirow{2}{*}{Not Utilized Benefits} \\ \cline{4-7} \begin{tabular}{l} _Dice_ \\ \end{tabular} & & & Sex & & \\ \cline{4-7}
\begin{tabular}{l} _Dice_ \\ \end{tabular} & & & Sex & & & \\ \hline \end{tabular}
\end{table} TABLE I: Datasets used in our experiments.
took \(179.1\) (s), NeuronFair took \(135.4\) (s), and Aequitas took the longest time at \(197.7\) (s). Overall, our experiments indicate that Dice is effective in generating \(ID\) instances compared to the three state-of-the-art techniques, largely due to the smoothness of the feedback during the local search.
**Answer RQ2:** Our experiments demonstrate that Dice outperforms the state-of-the-art fairness testing techniques [19, 17, 18]. In the best case, our approach found \(20\times\) more individual discrimination (ID) instances than these techniques, with almost \(3\times\) higher success rates on average. However, we found that Dice is slower than those techniques in finding the first ID instance, on the order of a few seconds.
### _Causal Debugging of DNNs for Fairness (RQ3)_
We perform experiments over the DNN models to study whether the proposed causal debugging approach is useful in identifying layers and neurons that significantly affect the amounts of discrimination as characterized by \(QID\). Table IV shows the results of the experiments (averaged over \(10\) independent runs). The first two columns show the localized layer and its influence (i.e., \(l\) and \(\rho\) in Algorithm 2). The next six columns show the top \(3\) neurons with a positive influence on fairness (i.e., activating those neurons reduces the amount of discrimination based on \(Q_{\infty}\)). The last six columns show the top \(3\) neurons with a negative influence on fairness (i.e., activating those neurons increases the amount of discrimination based on \(Q_{\infty}\)). The layer index \(2\) is localized more frequently than the other layers, while layers \(3\), \(4\), and \(5\) are each localized once. Overall, the average causal difference (ACD) ranges from \(4\%\) to \(55\%\) for neurons with positive fairness effects and from \(0.6\%\) to \(18.3\%\) for neurons with negative fairness effects.
Guided by localization, Dice intervenes to activate neurons with positive fairness influence or de-activate those with negative influence. Table V shows the results of this mitigation strategy. The columns \(A\) and \(K\) show the accuracy and the number of clusters (averaged over a set of random test cases) reported by Dice before mitigation over the DNN model. The columns \(A^{=0}\) and \(K^{=0}\) are accuracy and the number of clusters reported after mitigating the DNN model by _de-activating_ the neuron with the highest negative fairness impacts (as suggested by Neuron\({}_{1}^{-}\) in Table IV). Similarly, the columns \(A^{>0}\) and \(K^{>0}\) are accuracy and the number of clusters reported after mitigating the DNN model by _activating_ the neuron with the highest positive fairness impacts (as suggested by Neuron\({}_{1}^{+}\) in Table IV). The results indicate that the activation interventions can reduce QID discrimination by at least \(5\%\) with \(3\%\) loss of accuracy and up to \(64.3\%\) with \(2\%\) loss of accuracy. The de-activation, on the other hand, can improve the fairness by at least \(6\%\) with \(1\%\) loss of accuracy and up to \(27\%\) with \(2\%\) loss.
**Answer RQ3:** The debugging approach implemented in Dice identified neurons that have at least \(5\%\) and up to \(55\%\) positive causal effects on the fairness and those which have at least \(0.6\%\) and up to \(18.3\%\) negative causal effects. A mitigation strategy followed by the localization can reduce the amounts of discrimination by at least \(6\%\) and up to \(64.3\%\) with less than 5% loss of accuracy.
## VII Discussion
_Limitation_. In this work, we consider the full set of protected values and perturb them to generate counterfactuals. Various perturbations of protected attributes may yield unrealistic counterfactuals and contribute towards false positives (an over-approximation of discrimination). This limitation can be mitigated by supplying domain-specific constraints (Age\(<\)YY \(\Longrightarrow\) NOT(married)): we already apply some common-sense constraints (e.g., to ensure a valid range of age). In addition, similar to any dynamic testing method, our approach might miss discriminatory inputs and is prone to false negatives. The probability of missing relevant inputs can be contained under suitable statistical testing (e.g., Bayes factors). In addition, our debugging approach is similar to pin-pointing suspicious code fragments and is based on causal reasoning about a neuron's effect on decision making rather than on correlation. However, it is not meant to furnish explanations or interpretations of black-box DNN functions.
_Threats to Validity_. To address internal validity and ensure our findings do not lead to invalid conclusions, we follow established guidelines and take the average of repeated experiments. To ensure that our results are generalizable and to address external validity, we perform our experiments on \(10\) DNN models taken from the literature of fairness testing. However,
\begin{table}
\begin{tabular}{|l|c|c||c|c|c|c|c|c|c|c|} \hline
**Dataset** & \(m\) & \(Q_{I}\) & \(K_{I}\) & \#\(I\) & \(K_{F}\) & T\({}_{K_{F}}\) & \(Q_{\infty}\) & \(Q_{1}\) & \#\(I_{K_{F}^{1}}\) & \#\(I_{K_{F}^{2}}\) & \#\(I_{K_{F}^{3}}\) \\ \hline Census & \(90\) & \(5.3\) & \(13.54\) & \(230,593\) & \(35.61\) & \(21.04\) & \(4.05\) & \(2.64\) & \(6.0\) & \(28.6\) & \(111.6\) \\ \hline Compas & \(12\) & \(3.6\) & \(3.12\) & \(157,968\) & \(10.24\) & \(6.50\) & \(1.81\) & \(1.40\) & \(35.2\) & \(338.7\) & \(1,016.9\) \\ \hline German & \(16\) & \(4.0\) & \(2.34\) & \(245,915\) & \(9.56\) & \(13.14\) & \(1.61\) & \(1.10\) & \(6.6\) & \(16.2\) & \(54.8\) \\ \hline Default & \(12\) & \(3.6\) & \(5.58\) & \(258,105\) & \(11.26\) & \(10.94\) & \(2.10\) & \(1.78\) & \(3,528.8\) & \(9,847.2\) & \(9,771.0\) \\ \hline Heart & \(14\) & \(3.8\) & \(4.54\) & \(270,029\) & \(10.01\) & \(11.88\) & \(2.31\) & \(1.80\) & \(21.7\) & \(135.2\) & \(579.7\) \\ \hline Bank & \(9\) & \(3.2\) & \(1.45\) & \(172,686\) & \(8.93\) & \(3.68\) & \(2.25\) & \(1.98\) & \(5,118.5\) & \(13,513.3\) & \(20,438\) \\ \hline Diabetes & \(10\) & \(3.3\) & \(2.39\) & \(504,414\) & \(7.90\) & \(0.016\) & \(1.40\) & \(1.11\) & \(89.7\) & \(609.6\) & \(2,310.1\) \\ \hline Students & \(16\) & \(4\) & \(1.90\) & \(133,221\) & \(10.90\) & \(14\) & \(1.93\) & \(1.35\) & \(16.0\) & \(130.7\) & \(128.7\) \\ \hline MEPS15 & \(36\) & \(5.2\) & \(7.03\) & \(19,673\) & \(18.52\) & \(31.62\) & \(2.61\) & \(1.62\) & \(2.6\) & \(3.5\) & \(6.0\) \\ \hline MEPS16 & \(36\) & \(5.2\) & \(9.06\) & \(14,266\) & \(19.25\) & \(49.16\) & \(2.21\) & \(1.52\) & \(2.0\) & \(3.5\) & \(6.0\) \\ \hline \end{tabular}
\end{table} TABLE II: Dice characterizes \(QID\) for \(10\) datasets and DNNs in \(1\) hour run (results are the average of \(10\) runs).
it is an open problem whether these datasets and DNN models are sufficiently representative for fairness testing.
## VIII Related Work
**Fairness Testing of ML systems.** Themis [10] presents a causal discrimination notion where they measure the difference between the fairness metric of two subgroups by _counterfactual_ queries; i.e., they sample individuals with the protected attributes set to A and compare the outcome to a counterfactual scenario where the protected attributes are set to B. Symbolic generation (SC) [28, 22] presents a black-box testing approach that approximates the ML model with decision trees and leverages symbolic execution over the tree structure to find individual discrimination (ID). AEQUITAS [19] uses a two-step approach that first samples instances uniformly at random from the input dataset to find discriminatory instances and then locally perturbs those instances to further generate biased test cases. ExpGA [16] proposed a genetic algorithm (GA) to generate ID instances in natural language processing. The proposed technique uses a prior knowledge graph to guide the perturbation of protected attributes in NLP tasks. While these techniques are black-box, they potentially suffer from the lack of local guidance during the search. ADF [17] utilized the gradient of the loss function as guidance in generating ID instances. The global phase explores the input space to find a diverse set of individual discriminatory instances, whereas the local phase exploits each instance to generate many individual discriminatory (ID) instances in its neighborhood. EIDIG [29] follows similar ideas to ADF, but uses different computations of gradients. First, it uses the gradients of the output (rather than the loss function) to reduce the computation cost at each iteration. Second, it uses the momentum of gradients in
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline
**Dataset** & **Layer Index** & **Layer Influence** & **Neuron\({}^{+}\)** & **ACD\({}^{+}_{i}\)** & **Neuron\({}^{+}_{i}\)** & **ACD\({}^{+}_{i}\)** & **Neuron\({}^{+}_{i}\)** & **ACD\({}^{+}_{i}\)** & **Neuron\({}^{+}_{i}\)** & **ACD\({}^{+}_{i}\)** & **Neuron\({}^{+}_{i}\)** & **ACD\({}^{+}_{i}\)** & **Neuron\({}^{+}_{i}\)** \\ \hline Census & \(2\) & \(9.01\) & \(N_{19}\) & \(1.09\) & \(N_{2}\) & \(0.150\) & \(N_{12}\) & \(1.02\) & \(1.33\) & \(0.18\) & \(N_{24}\) & \(0.163\) & \(N_{14}\) & \(0.102\) \\ \hline Campus & \(2\) & \(2.07\) & \(N_{25}\) & \(0.16\) & \(N_{28}\) & \(0.419\) & \(N_{27}\) & \(0.366\) & \(N_{27}\) & \(N_{14}\) & \(N_{26}\) & \(1.1,238\) \\ \hline German & \(5\) & \(1.79\) & \(N_{3}\) & \(0.050\) & \(N_{1}\) & \(0.013\) & \(N/A\) & \(N/A\) & \(N/A\) & \(N/A\) & \(N/A\) & \(N/A\) \\ \hline Defali & \(2\) & \(27.58\) & \(N_{0}\) & \(0.039\) & \(N_{27}\) & \(0.022\) & \(N_{19}\) & \(0.014\) & \(N_{2}\) & \(0.031\) & \(N_{13}\) & \(0.027\) & \(N_{10}\) & \(0.109\) \\ \hline HEPS15 & \(0.898\) & \(0.35\) & \(N/A\) & \(N/A\) & \(N/A\) & \(N/A\) & \(N/A\) & \(N/A\) & \(N/A\) & \(N/A\) & \(N/A\) \\ \hline Barak & \(3\) & \(6.62\) & \(N_{0}\) & \(0.495\) & \(N_{2}\) & \(0.178\) & \(N_{1}\) & \(0.091\) & \(N_{6}\) & \(0.057\) & \(N_{11}\) & \(0.014\) & \(N/A\) & \(N/A\) \\ \hline Diabetes & \(2\) & \(1.67\) & \(N_{19}\) & \(0.041\) & \(N_{26}\) & \(0.035\) & \(N_{26}\) & \(0.031\) & \(N_{2}\) & \(0.042\) & \(N_{29}\) & \(0.001\) & \(N_{27}\) & \(N/A\) & \(N/A\) \\ \hline Students & \(2\) & \(4.01\) & \(N_{22}\) & \(0.550\) & \(N_{24}\) & \(0.442\) & \(N_{28}\) & \(0.229\) & \(N_{4}\) & \(0.084\) & \(N_{18}\) & \(0.055\) & \(N_{28}\) & \(0.026\) \\ \hline MIPS15 & \(2\) & \(35.44\) & \(N_{24}\) & \(0.230\) & \(N_{26}\) & \(0.167\) & \(N_{14}\) & \(0.160\) & \(N/A\) & \(N/A\) & \(N/A\) & \(N/A\) & \(N/A\) \\ \hline MIPS16 & \(2\) & \(47.46\) & \(N_{8}\) & \(0.147\) & \(N_{11}\) & \(0.138\) & \(N_{24}\) & \(0.144\) & \(N_{30}\) & \(0.006\) & \(N_{30}\) & \(N_{22}\) & \(0.001\) \\ \hline \end{tabular}
\end{table} TABLE V: \(A\) is accuracy, \(K\) is the average number of clusters from test cases; \(A^{=0}\) is the accuracy after deactivating the neuron with the highest negative fairness impacts; \(K^{=0}\) is the average number of clusters after the deactivation; \(A^{>0}\) is the accuracy after activation the neuron with the highest positive fairness impacts; \(K^{>0}\) is the average number of clusters after the activation; and \(T_{I}\) is the amount of computation times for localization and mitigation in seconds.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
**Dataset** & **Pred.** & \multicolumn{3}{c|}{**Advoc. [17]**} & \multicolumn{3}{c|}{**Adv. [17]**} & \multicolumn{3}{c|}{**Neuron\({}^{+}_{i}\)**} & \multicolumn{3}{c|}{**ACD\({}^{+}_{i}\)**} & \multicolumn{3}{c|}{**ACD\({}^{+}_{i}\)**} & \multicolumn{3}{c|}{**ACD\({}^{+}_{i}\)**} & \multicolumn{3}{c|}{**ACD\({}^{+}_{i}\)**} \\ \hline & **avg** & \(9.01\) & \(1.03\) & \(1.03\) & \(0.02\) & **69.53** & \(1.13\) & \(1.26\) & **69.53** & \(7.13\) & \(7.12\) & **69.27** & \(1.03\) & **69.77** & \(7.16\) & **69.77** & \(7.16\) & **69.77** \\ \hline Census & **avg** & \(8.73\) & \(2.21\) & \(0.02\) & \(0.02\) & \(115.58\) & \(0.18\) & \(12.63\) & \(1.03\) & \(0.50\) & \(5.21\) & \(7.24\) & \(12.18\) & \(19.93\) & \(0.08\) & \(0.38\) & \(38.68\) & \(79.93\) & \(79.23\) & \(74.20\) & \(0.15\) & \(0.50\) & \(58.98\) \\ \hline Defali & **avg** & \(8.73\) & \(2.21\) & \(0.22\) & \(0.02\) & \(115.48\) & \(12.18\) & \(15.83\) & \(42.00\) & \(0.53\) & \(33.07\) & \(0.33\) & \(1.11\) & \(54.8\) & \(0.48\) & \(0.48\) & \(0.49\) & \(0.39\) & \(79.12\) & \(1.20\) & \(0.34\) & \(0.46\) & \(0.56\) & \(0.58\) & \(0.58\) \\ \hline Defali & **avg** & \(8.73\) & \(2.21\) & \(0.22\) & \(0.02\) & \(115.48\) & \(12.16\) & \(15.83\) & \(53.40\) & \(0.63\) & \(0.63\) & \(0.39\) & \(
global phase to avoid local optima. NeuronFair[18] extends ADF and EIDIG to support unstructured data (e.g., image, text, speech, etc.) where the protected attributes might not be well-defined. In addition, NeuronFair is guided by the DNN's internal neuron states (e.g., the pattern of activation and deactivation) and their activation difference. Beyond the capability of these techniques, Dice quantifies the amounts of discrimination, enables software developers to prioritize test cases, and searches multiple protected attributes at one time.
Beyond the scope of this paper, a body of prior work [42, 43, 44, 23, 45, 46] considered testing for group fairness. Fairway[43] mitigates biases after finding suitable ML algorithm configurations. In doing so, they used a multi-objective optimization (FLASH) [47]. Parfait-ML [45] searches the hyperparameter space of classic ML algorithms via a gray-box evolutionary algorithm to characterize the magnitude of biases from the hyperparameter configuration.
**Debugging of Deep Neural Networks.** Cradle[33] traced the execution graph of a DNN model over two different deep-learning frameworks and used the differences in the outcomes to localize which backend functions might cause a bug. However, since Cradle did not use causal analysis, it showed a high rate of false positives. Audee[48] used a similar approach, but it leveraged causal-testing methods. In particular, it designed strategies to intervene in the DNN models and tracked how the intervention affected the observed inconsistencies. We adapted the layer localization of Cradle and Audee, but our causal localization is developed using do logic for a meta-property (fairness). Audee used a simple perturbation of neuron values for functional correctness (i.e., any inconsistency shows a bug) without considering the accuracy or the severity of neuron contributions to a bug.
**In-process Mitigation.** A body of work considers in-process algorithms to mitigate biases in ML predictions [49, 50, 51]. Adversarial debiasing [49] and Prejudice remover [50] improve fairness by adding constraints to model parameters or the loss function. Exponentiated gradient [51] uses a meta-learning algorithm to infer a family of classifiers that maximizes accuracy and fairness. Different from these approaches, we develop a mitigation approach that is specialized to handle neural networks for individual fairness. This setting allows us to exploit the layer-based structure of NNs toward causal reasoning and mitigation. We believe that our approach can be extended with in-process mitigation techniques to maximize fairness in DNN-based decision support systems.
**Formal Methods.** We believe that this paper can connect to the rich literature of formal verification and its applications. Here, we provide two examples. FairSquare[52] certifies a fair decision-making process in probabilistic programs using a novel verification technique called the weighted-volume-computation algorithm. SFTREE[53] formulated the problem of inferring a fair decision tree as a mixed integer linear program and applies constraint solvers iteratively to find solutions.
**Fairness in income, wealth, and taxation.** We develop a fairness testing and debugging approach that is uniquely geared toward handling regression problems. Therefore, our approach can be useful to study and address biases in income and wealth distributions [54] across race and gender. Furthermore, our approach can be useful to study fairness in taxation (e.g., vertical and horizontal equity [55, 56]). We leave further study in these directions to future work.
## IX Conclusion
DNN-based software solutions are increasingly being used in socio-critical applications where a bug in their design may lead to discriminatory behavior. In this paper, we presented Dice: an information-theoretic model to characterize the amounts of protected information used in DNN-based decision making. Our experiments showed that the search and debugging algorithms, based on the quantitative landscape, are effective in discovering and localizing fairness defects.
**Acknowledgement.** The authors thank the anonymous ICSE reviewers for their time and invaluable feedback to improve this paper. This research was partially supported by NSF under grant DGE-2043250 and UTEP College of Engineering under startup package.
|
2309.02514 | Scale without Conformal Invariance in Dipolar Ferromagnets | We revisit critical phenomena in isotropic ferromagnets with strong dipolar
interactions. The corresponding RG fixed point - dipolar fixed point - was
first studied in 1973 by Aharony and Fisher. It is distinct from the Heisenberg
fixed point, although the critical exponents are close. On the theoretical
side, we discuss scale invariance without conformal invariance realized by this
fixed point. We elucidate the non-renormalization of the virial current due to
a shift symmetry, and show that the same mechanism is at work in all other
known local fixed points which are scale but not conformal invariant. On the
phenomenological side, we discuss the relative strength of dipolar and
short-range interactions. In some materials, like the europium compounds,
dipolar interactions are strong, and the critical behavior is dipolar. In
others, like Fe or Ni, dipolar interactions are weaker, and the Heisenberg
critical behavior in a range of temperatures is followed by the dipolar
behavior closer to the critical point. Some of these effects have been seen
experimentally. | Aleix Gimenez-Grau, Yu Nakayama, Slava Rychkov | 2023-09-05T18:02:55Z | http://arxiv.org/abs/2309.02514v2 | # Scale without Conformal Invariance in Dipolar Ferromagnets
###### Abstract
We revisit critical phenomena in isotropic ferromagnets with strong dipolar interactions. The corresponding RG fixed point - dipolar fixed point - was first studied in 1973 by Aharony and Fisher. It is distinct from the Heisenberg fixed point, although the critical exponents are close. On the theoretical side, we discuss scale invariance without conformal invariance realized by this fixed point. We elucidate the non-renormalization of the virial current due to a shift symmetry, and show that the same mechanism is at work in all other known local fixed points which are scale but not conformal invariant. On the phenomenological side, we discuss the relative strength of dipolar and short-range interactions. In some materials, like the europium compounds, dipolar interactions are strong, and the critical behavior is dipolar. In others, like Fe or Ni, dipolar interactions are weaker, and the Heisenberg critical behavior in a range of temperatures is followed by the dipolar behavior closer to the critical point. Some of these effects have been seen experimentally.
###### Contents
* 1 Introduction
* 2 Dipolar fixed point: Aharony-Fisher theory
* 2.1 Manifestly local description
* 3 Phenomenology and experiments
* 3.1 Microscopic derivations of effective theory
* 3.2 Experimental evidence of the dipolar fixed point behavior
* 4 Scale invariance without conformal invariance
* 4.1 Two-point function argument
* 4.2 Stress tensor argument
* 4.3 Virial current dimension and the shift symmetry
* 4.4 Other consequences of shift symmetry
* 5 Other interacting models with scale without conformal invariance
* 5.1 Landau-gauge massless QED in \(d=4-\varepsilon\)
* 5.2 Landau-gauge Banks-Zaks fixed point in \(d=4\)
* 5.3 Fixed points not in the Landau gauge
* 5.4 Crystalline membrane theory
* 5.5 Gaussian curvature interaction model
* 5.6 Higher derivative shift symmetric scalar
* 6 Conclusions
* A Demagnetizing factor
* B Experimental data
* C Microscopic model
* C.1 Model
* C.2 Hubbard-Stratonovich transformation
* C.3 Comparison to Europium compounds
* D Trace of stress tensor in dipolar model
* D.1 Basic notation
* D.2 Composite operators
* D.3 Finiteness of stress tensor
* D.4 Building scaling operators
* D.5 Summary
* D.6 Scaling Ward identity
## 1 Introduction
Most renormalization group fixed points of relevance to physics are conformally invariant. Here we will describe an experimentally observable continuous phase transition that is rotationally, translationally and scale invariant without being conformal. This exceptional phase transition occurs in isotropic ferromagnets possessing non-negligible dipolar interactions. Standard examples are the europium compounds EuO and EuS. Aharony and Fisher pointed out in 1973 [1; 2; 3]1 that the Curie point \(T=T_{c}\) of such magnets is not the usual Heisenberg fixed point, but a different one, possessing slightly different values of critical exponents. The most dramatic difference is that the longitudinal fluctuations of the order parameter are suppressed at this new fixed point. This effect has been experimentally observed using polarized neutron scattering [5].
Footnote 1: See also the review in [4].
While most aspects of the physics of dipolar ferromagnets were well understood in the 1970s, the observation that the phase transition is scale without conformal appears to be new (it was first made in [6]). We know of only one other such interacting experimentally relevant example, furnished by the theory of fluctuating membranes, as recently discussed in [7]. Other interacting field theories showing scale without conformal include gauge-fixed versions of gauge theories [8], which have not yet found experimental applications. Non-interacting examples of scale without conformal invariance include the theory of elasticity [9] and the Maxwell theory in 5d [10; 11]. Holographic constructions of scale invariance without conformal invariance were studied in [12; 13; 14; 15; 16].
One issue that arises when discussing scale without conformal invariance in interacting models is the non-renormalization of the virial current - the vector operator \(V_{i}\) whose divergence appears in the trace of the stress tensor: \(T_{ii}=-\partial_{i}V_{i}\). The scaling dimension of the virial current should be exactly \(d-1\). It has been sometimes argued that this makes scale without conformal invariance unlikely, since non-conserved currents will generically pick up an anomalous dimension [11; 17]. How do dipolar magnets evade this? The reason is a shift symmetry acting on the field \(U\), the Lagrange multiplier enforcing the transverse condition \(\partial_{i}\phi_{i}=0\), where \(\phi_{i}\) is the order parameter and at the same time the shift symmetry current. The virial current has the form \(U\phi_{i}\), so that it transforms under the shift symmetry into the shift symmetry current. This relation implies that the scaling dimension of the virial current is protected from loop corrections due to interactions, although the shift symmetry current does get an anomalous dimension. This argument is one of our main new results, first presented in [18].
Furthermore, we show that shift symmetry is also responsible for all previously known interacting theories realizing scale without conformal invariance. A simple way to connect shift symmetry and lack of conformal invariance is as follows. Say the shift symmetry acts on a local operator as \(\mathcal{O}(x)\rightarrow\mathcal{O}(x)+c\), where \(c\) is a constant. Dimensional analysis
implies that the scaling dimension of the shift current is \(\Delta_{J}=d-1-\Delta_{\mathcal{O}}\). However, if \(J_{\mu}\) is a primary operator, then conservation and conformal invariance would imply the shift current has dimension \(\Delta_{J}=d-1\), leading to a contradiction. Although there is a workaround for free theories, we conclude that a generic interacting theory with such a shift symmetry can only be scale, not conformally, invariant. Regarding the virial current, we show that there exists an operator of the schematic form \(V_{\mu}\sim\mathcal{O}J_{\mu}\) which has dimension exactly \(\Delta_{V}=d-1\) with no loop corrections even in the presence of interactions. In the main text we present these arguments in more detail, and in particular we apply them to the membrane theory of Ref. [7]. Ref. [7] previously proved the non-renormalization of the virial current in the membrane theory, via a different argument which also used a shift symmetry but in a less direct fashion.
In high-energy physics literature, the question of scale invariance vs conformal invariance is usually discussed for unitary theories [19; 20; 21]. Everywhere in this paper we work in Euclidean signature, and unitarity is used synonymously with reflection positivity. From a wider field-theoretical perspective, unitarity is not _in itself_ a requirement for conformal invariance. Statistical physics furnishes many experimentally relevant critical theories which lack unitarity and yet are conformal, percolation and self-avoiding walks being two examples. As we will discuss, the dipolar fixed point, as well as all other fixed points realizing scale without conformal invariance with the help of the shift symmetry, are not unitary.
**Outline.** We start with Section 2 where we introduce dipolar effects in the Heisenberg model, using several equivalent descriptions which are useful later in the paper. After that the reader may proceed along two independent routes:
1. **Pheno route.** Continue to Section 3, and the associated Appendices A-C, where we review experiments that found evidence for dipolar effects in critical ferromagnets, and provide a method to estimate which materials should exhibit important dipolar effects near the fixed point. We hope that this section will stimulate further computations of observable effects in this model, and their experimental studies.
2. **Theory route.** Skip Section 3 and go directly to Sections 4 and 5, and the associated Appendix D. In Section 4 we argue that the dipolar fixed point is scale but not conformally invariant. We give two arguments - one from the two-point function \(\langle\phi_{i}\phi_{j}\rangle\), and one from the trace of the stress tensor. The latter argument leads us to explain the non-renormalization of the virial current, which we do relying on the shift symmetry of the model. We then show in Section 5 that the same arguments with minor modifications apply to all previously known examples of interacting scale but not conformal invariant fixed points.
We conclude in Section 6 with a summary and some possible future directions.
## 2 Dipolar fixed point: Aharony-Fisher theory
Consider a three-dimensional (3D) isotropic ferromagnet in the vicinity of its Curie temperature. Relation to actual materials will be discussed in Section 3. Here we write down the Landau-Ginzburg-Wilson (LGW) effective Hamiltonian describing fluctuations of the
three-component order parameter \(\phi_{i}\), \(i=1,2,3\). The usual short-range Hamiltonian is
\[\int d^{3}x\,\left(\frac{1}{2}\partial_{i}\phi_{j}\partial_{i}\phi_{j}+\frac{1}{ 2}m^{2}(\phi_{i}\phi_{i})+\frac{\lambda}{4}(\phi_{i}\phi_{i})^{2}\right)\,. \tag{1}\]
This Hamiltonian gives rise to a renormalization group fixed point known as the Heisenberg fixed point (or the Wilson-Fisher \(O(3)\) fixed point). This fixed point can be studied using the \(\varepsilon\)-expansion [22],2 the numerical conformal bootstrap [25; 26; 27; 28], and Monte Carlo simulations (see e.g. [29]).
Footnote 2: The literature being vast, more references can be found in [23; 24].
However, any realistic 3D ferromagnet in addition to the short-range interactions described by the above Hamiltonian will contain a long-range dipolar interaction term
\[V_{\rm dip}=v\int d^{3}x\int d^{3}y\,U_{ij}(x-y)\phi_{i}(x)\phi_{j}(y)\,, \tag{2}\]
where
\[U_{ij}(x)=-\partial_{x_{i}}\partial_{x_{j}}\frac{1}{|x|}=\frac{\delta_{ij}-3 \hat{x}_{i}\hat{x}_{j}}{|x|^{3}}\,. \tag{3}\]
In momentum space we have
\[V_{\rm dip}=4\pi v\int\frac{d^{3}q}{(2\pi)^{3}}\,\,\frac{q_{i}q_{j}}{q^{2}} \phi_{i}(q)\phi_{j}(-q)\,. \tag{4}\]
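For the reader's convenience, (4) follows from the standard Fourier transform of the Coulomb kernel (no new input here):
\[\int d^{3}x\;e^{-iq\cdot x}\,\frac{1}{|x|}=\frac{4\pi}{q^{2}}\qquad\Longrightarrow\qquad\int d^{3}x\;e^{-iq\cdot x}\,U_{ij}(x)=-(iq_{i})(iq_{j})\,\frac{4\pi}{q^{2}}=\frac{4\pi\,q_{i}q_{j}}{q^{2}}\,.\]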
The term \(V_{\rm dip}\) appears because the field \(\phi(x)\), proportional3 to the coarse-grained magnetization at \(x\), generates magnetic field throughout the space, which in turn couples to the field \(\phi(y)\). In Section 2.1 below we show how (2) arises, with a positive coupling \(v\), when integrating out the magnetic field.
Footnote 3: In Section 3, we will change the normalization of the field \(\phi\) so that it _equals_ the coarse-grained magnetization. Then the coefficient \(v=1/2\). Here we find it more convenient to work in the normalization where the kinetic term \(\frac{1}{2}\partial_{i}\phi_{j}\partial_{i}\phi_{j}\) is canonically normalized.
The symmetry of the LGW Hamiltonian (1) was spatial \(O(3)\) times internal \(O(3)\). The dipolar interaction breaks this symmetry to the diagonal subgroup \(O(3)\), under which \(\phi_{i}\) transforms as a vector.4
Footnote 4: The mixing between space and internal symmetry is the reason we use Latin indices \(i,j=1,\ldots,d\) for the dipolar model. For all other models, we use notation \(\mu,\nu=1,\ldots,d\) for spacetime indices.
As mentioned, the term (2) will always be there for any material, because we can't turn off the Maxwell equations in an experiment (although we can do this in a Monte Carlo simulation). The best we can hope for is that the coupling \(v\) is small. The actual size of the coupling \(v\) at the microscopic scale depends on the material and, if \(v\) is small, the Heisenberg fixed point could be a good description in a range of distances and reduced temperatures, see Section 3. But at sufficiently long distances and sufficiently close to the critical point, which may or may not be experimentally resolvable in practice, the pure Heisenberg description breaks down and \(v\) needs to be taken into account.
Indeed the term \(V_{\rm dip}\) is strongly relevant - it has the same dimension as the mass term in (1). Moreover, it is the only nonlocal term in the Hamiltonian, so it will not get direct
renormalization contributions from local couplings.5 In the deep infrared (IR), the effective coupling \(v\) will grow to \(+\infty\). The effect of this will be quite dramatic: at the IR fixed point all longitudinal fluctuations of the order parameter will be suppressed
Footnote 5: There will be indirect renormalization effect due to wavefunction renormalization, which is small because the anomalous dimension of \(\phi\) is small.
\[\partial_{i}\phi_{i}=0\qquad\text{(at the IR fixed point)}\,. \tag{5}\]
(Note that the integrand in (4) can be written as \(|q_{i}\phi_{i}(q)|^{2}/q^{2}\).) The fixed point for the remaining transverse fluctuations can still be studied in the \(\varepsilon\)-expansion, but the usual procedure should be modified. If one is interested in fixed-point physics only, one can study the renormalization group (RG) flow of LGW Hamiltonian (1) imposing the constraint (5) all along the flow. This means that one is working with the transverse propagator
\[\langle\phi_{i}(q)\phi_{j}(-q)\rangle=\frac{\delta_{ij}-q_{i}q_{j}/q^{2}}{q^{2 }+m^{2}} \tag{6}\]
instead of the usual scalar field propagator. The propagator remains transverse along the RG flow. One works in \(d=4-\varepsilon\) dimensions, in which case \(\phi_{i}\) is a \(d\)-component order parameter. As usual, one computes the beta function for the quartic coupling, but since the diagrams are computed with a different propagator, the actual beta-function coefficients are a bit different. The quartic coupling flows to an IR fixed point, which we call "dipolar" (it was called "isotropic dipolar" in [2]). The anomalous dimensions of operators \(\phi_{i}\), \(\phi^{2}\), \((\phi^{2})^{2}\), etc, and the usual critical exponents can be computed as power series in \(\varepsilon\). These computations have been carried out in [2; 3] to order \(\varepsilon^{2}\), see Table II in [30], p. 388. Although these series are different from those for the Heisenberg case, the numerical values extrapolated to \(\varepsilon=1\) come out close. Recently, critical exponents were computed at three loops directly in \(d=3\)[31], confirming the closeness to the Heisenberg values.
The critical exponents being close to Heisenberg (\(H\)), the most dramatic feature of the dipolar (\(D\)) fixed point remains the suppression (5) of the longitudinal fluctuations of the order parameter. The critical two-point (2pt) function of the order parameter takes the form:
\[\langle\phi_{i}(q)\phi_{j}(-q)\rangle = \frac{\delta_{ij}-q_{i}q_{j}/q^{2}}{|q|^{2-\eta_{D}}}, \tag{7}\]
to be compared with \(\langle\phi_{i}(q)\phi_{j}(-q)\rangle=\delta_{ij}/|q|^{2-\eta_{H}}\) at the Heisenberg fixed point. This suppression has been seen with polarized neutron scattering, see Section 3. As we will see in Section 4, the 2pt function is compatible with conformal invariance at the Heisenberg but not at the dipolar fixed point.
_Remark 2.1_.: In this paper, we will be neglecting the cubic perturbation \(\sum_{i=1}^{3}\phi_{i}^{4}\) of the Hamiltonian (1). Such a perturbation, which may arise due to spin-orbit coupling, is usually assumed to be small in ferromagnets. It is known that this perturbation is (very weakly) relevant at the 3D Heisenberg fixed point [23, Sec. 11.3], [28; 32].
### Manifestly local description
In the previous discussion, we perturbed the LGW Hamiltonian by a non-local dipolar interaction, and we argued that the infrared behavior is local, because dipole interactions simply impose the local constraint (5). In this section, we reach the same conclusion starting from a manifestly local Hamiltonian.
As anticipated below equation (2), the field \(\phi_{i}(x)\) is proportional to the coarse-grained magnetization, and as such, it couples to the dynamical magnetic field \(B_{i}\). The precise form of the coupling is
\[\int d^{d}x\left(\frac{1}{2}\partial_{i}\phi_{j}\partial_{i}\phi_{j}-zB_{i} \phi_{i}+\frac{B_{i}^{2}}{8\pi}+\frac{m_{0}^{2}}{2}(\phi_{i}\phi_{i})+\frac{ \lambda}{4}(\phi_{i}\phi_{i})^{2}\right)\,, \tag{8}\]
where \(B_{i}=(\nabla\times A)_{i}\) is the magnetic field. The coefficient \(z\) is the proportionality factor in \(\phi_{i}=zM_{i}\) where \(M_{i}\) is the coarse-grained magnetization (we will set \(z=1\) in Section 3 and Appendix A). Because the vector potential \(A_{i}\) appears quadratically, we can solve its equations of motion (EOM) exactly and plug them back into the action. The result is the sum of (1) and (2), upon identifying
\[m^{2}=m_{0}^{2}-4\pi z^{2}\,,\qquad v=\frac{z^{2}}{2}\,. \tag{9}\]
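A quick momentum-space check of (9), using nothing beyond (8): the \(A_{i}\) equation of motion sets \(B_{i}=4\pi z\,\phi_{i}^{T}\), with \(\phi_{i}^{T}\) the transverse part of \(\phi_{i}\); substituting back into the \(B\)-dependent terms gives, mode by mode,
\[-zB_{i}\phi_{i}+\frac{B_{i}^{2}}{8\pi}\;\longrightarrow\;-2\pi z^{2}\,\phi_{i}^{T}\phi_{i}^{T}=-2\pi z^{2}\,\phi_{i}\phi_{i}+2\pi z^{2}\,\frac{|q_{i}\phi_{i}(q)|^{2}}{q^{2}}\,,\]
which reproduces the mass shift \(m^{2}=m_{0}^{2}-4\pi z^{2}\) and the dipolar term (4) with \(4\pi v=2\pi z^{2}\), i.e. \(v=z^{2}/2\).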
The Hamiltonian (8) can also be used to show the suppression of longitudinal fluctuations. We rewrite (8) by imposing the Bianchi identity via a Lagrange multiplier \(U\):
\[\int d^{d}x\left(\frac{1}{2}\partial_{i}\phi_{j}\partial_{i}\phi_{j}-zB_{i} \phi_{i}+\frac{B_{i}^{2}}{8\pi}+\frac{m_{0}^{2}}{2}(\phi_{i}\phi_{i})+\frac{ \lambda}{4}(\phi_{i}\phi_{i})^{2}-U\partial_{i}B_{i}\right)\,. \tag{10}\]
Note that the field \(U\) coincides, up to rescaling, with the magnetic potential called \(U\) in App. A. Integrating out the unconstrained \(B_{i}\) we obtain
\[\int d^{d}x\left(\frac{1}{2}\partial_{i}\phi_{j}\partial_{i}\phi_{j}-2\pi( \partial_{i}U-z\phi_{i})^{2}+\frac{m_{0}^{2}}{2}(\phi_{i}\phi_{i})+\frac{ \lambda}{4}(\phi_{i}\phi_{i})^{2}\right). \tag{11}\]
Dropping the term \((\partial_{i}U)^{2}\), irrelevant in the long-wavelength limit, we obtain the effective Hamiltonian
\[\mathcal{H}=\int d^{d}x\left(\frac{1}{2}\partial_{i}\phi_{j}\partial_{i}\phi_{ j}-U\partial_{i}\phi_{i}+\frac{m^{2}}{2}(\phi_{i}\phi_{i})+\frac{\lambda}{4}( \phi_{i}\phi_{i})^{2}\right)\,, \tag{12}\]
up to a rescaling of \(U\). This shows that the role of \(U\) is to impose the local constraint (5).
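Explicitly, the step from (11) to (12): expanding the square and dropping \((\partial_{i}U)^{2}\),
\[-2\pi(\partial_{i}U-z\phi_{i})^{2}\;\longrightarrow\;4\pi z\,\partial_{i}U\,\phi_{i}-2\pi z^{2}\,\phi_{i}\phi_{i}=-4\pi z\,U\,\partial_{i}\phi_{i}-2\pi z^{2}\,\phi_{i}\phi_{i}+\text{total derivative}\,;\]
the \(\phi_{i}\phi_{i}\) piece shifts the mass as in (9), and rescaling \(U\to U/(4\pi z)\) produces (12).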
The scale invariant dipolar fixed point is obtained by fine-tuning \(m^{2}\) to the critical value. From now on we work with the fine-tuned mass, which in dimensional regularization corresponds to \(m^{2}=0\). The RG flow of \(\lambda\) is attracted toward the fixed-point value \(\lambda_{*}\).
Let us briefly compare theory (12) to theory (1) with constraint (5) imposed "by hand". Correlation functions of \(\phi_{i}\) are the same in both theories. In theory (12) we have an additional local field \(U\). It should not be too surprising that such an additional local field could be added to the theory. In fact, the original microscopic theory had the magnetic
field \(B_{i}\), and correlation functions of \(U\) can be traced back to the (long wavelength limit of) correlators of \(B_{i}\). The non-interacting 2pt functions of \(U\) with itself and with \(\phi_{i}\) are given by
\[\langle U(q)U(-q)\rangle_{0}=-1\qquad\langle U(q)\phi_{i}(-q)\rangle_{0}=iq_{i} /q^{2}\,. \tag{13}\]
In perturbation theory, the field \(U\) appears only at the external legs since there are no vertices involving it. Because the propagator \(\langle\phi_{i}\phi_{j}\rangle_{0}\) is transverse, the 2pt function \(\langle U\phi_{i}\rangle\) is not renormalized. So the anomalous dimension of \(U\) will be the opposite to that of \(\phi_{i}\):
\[\Delta_{\phi}=(d-2)/2+\gamma_{\phi},\qquad\Delta_{U}=d/2-\gamma_{\phi}\,. \tag{14}\]
This is easy to understand: the wavefunction renormalization of \(\phi_{i}\) and of \(U\) comes from the \(\phi_{i}\) self-energy \(\Pi_{ij}\) (the sum of 1PI irreducible diagrams). In the \(\phi_{i}\) case, \(\Pi_{ij}\) is iterated, while for \(U\) only the linear in \(\Pi_{ij}\) term contributes, due to the transversality of \(\langle\phi_{i}\phi_{j}\rangle_{0}\).
## 3 Phenomenology and experiments6
Footnote 6: Readers interested primarily in scale without conformal may proceed directly to Section 4.
Experimentally, dipolar behavior has been reported in some ferromagnets (EuO, EuS), while in others, like Ni and Fe, it is harder to see, and in fact they are usually assumed to exhibit Heisenberg behavior. Here we would like to discuss why this is so, and how one can guess a priori which behavior to expect from a given material, depending on the range of temperatures and length scales used to probe the system.
For this discussion, it helps to normalize the field \(\phi\) so that it equals the (coarse-grained) microscopic magnetization:
\[\phi_{i}=M_{i}\,. \tag{15}\]
The finite-temperature partition function is given by
\[Z=\int D\phi\,e^{-\beta{\cal H}[\phi]}\,, \tag{16}\]
where the Hamiltonian, including the dipolar term, is given by:
\[{\cal H}[\phi]=\int d^{3}x\left(\frac{1}{2}a(\partial_{i}\phi_{j})^{2}+\frac{1 }{2}b\phi_{i}^{2}\right)+\frac{1}{2}\int d^{3}x\int d^{3}y\;U_{ij}(x-y)\phi_{i }(x)\phi_{j}(y)\,. \tag{17}\]
We are omitting here the quartic interaction term, which will not play a role in the present discussion. Notably, in the chosen normalization of \(\phi\), the coefficient of the dipolar term is completely fixed.7 In the presence of an external magnetic field \(B^{(0)}\), the Hamiltonian should be perturbed by \(-B^{(0)}_{i}\phi_{i}\), whose normalization is also fixed. This is important for discussing susceptibility measurements.
Footnote 7: We obtain this term by integrating out \(B_{i}\) from (8) with \(z=1\), see Eq. (9). See also App. A.
The inverse propagator of \(\phi\) is given, up to overall rescaling, by
\[G^{-1}_{ij}(q)\propto(q^{2}+\xi^{-2})\delta_{ij}+q_{d}^{2}\,\frac{q_{i}q_{j}}{ q^{2}}\,, \tag{18}\]
where we defined two important quantities:
\[\xi=(a/b)^{1/2}\,,\qquad q_{d}=(4\pi/a)^{1/2}\,. \tag{19}\]
Here \(\xi\) is the correlation length, which goes to infinity when \(b\to 0\), i.e. as the critical point is approached, and \(q_{d}\) is the dipolar wavevector, which determines the range where dipolar effects become important. The propagator, obtained by inverting (18), is given by
\[G_{ij}(q)\propto\frac{1}{q^{2}+\xi^{-2}}\left(\delta_{ij}-\frac{q_{i}q_{j}}{q^ {2}}\right)+\frac{1}{q^{2}+\xi^{-2}+q_{d}^{2}}\frac{q_{i}q_{j}}{q^{2}}\,. \tag{20}\]
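The inversion is immediate in terms of the orthogonal projectors \(P^{T}_{ij}=\delta_{ij}-q_{i}q_{j}/q^{2}\) and \(P^{L}_{ij}=q_{i}q_{j}/q^{2}\): writing (18) as
\[G^{-1}_{ij}\propto(q^{2}+\xi^{-2})\,P^{T}_{ij}+(q^{2}+\xi^{-2}+q_{d}^{2})\,P^{L}_{ij}\,,\]
each projector is simply inverted separately, which is how (20) is obtained.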
We thus have two regimes, distinguishing between the short-range (Heisenberg) and the dipolar behavior:
\[\text{Short range: }\ \xi^{-2}+q^{2}\gg q_{d}^{2}\quad\Rightarrow\quad G_{ij}(q)\propto\frac{\delta_{ij}}{q^{2}+\xi^{-2}}\,, \tag{21}\]
\[\text{Dipolar: }\ \xi^{-2}+q^{2}\ll q_{d}^{2}\quad\Rightarrow\quad G_{ij}(q)\propto\frac{1}{q^{2}+\xi^{-2}}\left(\delta_{ij}-\frac{q_{i}q_{j}}{q^{2}}\right)\,. \tag{22}\]
It is only in the second regime that the propagator (which can be studied e.g. using polarized neutron scattering) shows longitudinal suppression.
Thus, we see that to access the dipolar regime experimentally, two conditions have to be satisfied. First, the correlation length \(\xi\) must be sufficiently large: \(\xi^{-1}\ll q_{d}\). This, according to (19), translates into \(b\ll 4\pi\). In other words, we must be sufficiently close to the critical point located at \(b=0\). In addition, the scattered neutrons have to be sufficiently soft: \(q\ll q_{d}\).
Let us focus on the criterion \(b\ll 4\pi\). To be useful, this criterion has to be translated as a constraint on the reduced temperature \(t=(T-T_{c})/T_{c}\).
For a given material, we can determine the constants \(a,b\) in the LGW Hamiltonian by doing experiments far away from \(T_{c}\), when neglect of the quartic interaction in (17) is justified. The constant \(b\) can be determined by measuring the magnetization in an applied external uniform magnetic field \(B^{(0)}\) at \(t=(T-T_{c})/T_{c}=O(1)\) (i.e. far away from the transition point) and using the relation
\[\phi_{i}=\frac{B_{i}^{(0)}}{b+4\pi D_{i}}\,, \tag{23}\]
where \(D_{i}\) is the demagnetizing factor, depending on the shape of the sample (see App. A). The constant \(a\) can be determined by measuring the correlation length \(\xi\) (e.g. via neutron scattering) and using (19).
Once we determine \(b\) at \(t=O(1)\), we can extrapolate it to \(t\ll 1\). In the Gaussian approximation as in (17), \(b\) would be proportional to \(t\). In the critical region it is more appropriate to use the relation corrected for the presence of critical exponents:
\[b=C^{-1}t^{\gamma}, \tag{24}\]
where \(\gamma\approx 1.4\) is the Heisenberg susceptibility exponent (see Eq. (21)) and \(C\) is a dimensionless constant. Let us define \(t_{d}\) to be the temperature such that \(b(t_{d})=4\pi\), i.e.
\[t_{d}=(4\pi C)^{1/\gamma}. \tag{25}\]
Then the dipolar behavior may be seen for \(|t|\ll t_{d}\) while for larger \(t\) we expect to see the Heisenberg behavior. The same criterion to determine \(t_{d}\) was proposed in [33].
Similarly, \(\xi\) depends on \(t\) according to:
\[\xi=(a/b)^{1/2}=f^{+}t^{-\nu}, \tag{20}\]
where \(f^{+}\) is a constant and \(\nu\approx 0.7\) is the correlation length exponent. Eqs. (21) and (20) are consistent if \(\gamma=2\nu\), which is not exactly true, but is approximately true because \(\eta\) is small. Such an approximation is acceptable here, as we are aiming for an order of magnitude estimate.
In Table 1 we give, for a few materials, values of \(C\), \(t_{d}\), \(f^{+}\) and \(q_{d}\) extracted from experiments.
We extract two conclusions from this table. First, the values of \(t_{d}\) are much larger for EuS and EuO than for Fe and Ni. Thus, we expect that EuS and EuO will show only dipolar behavior, for \(t\ll t_{d}\). On the other hand, as \(t\) is lowered, Fe and Ni are expected to show Heisenberg behavior for \(t_{d}\ll t\ll 1\), followed by a crossover to dipolar behavior for \(t\ll t_{d}\). See Fig. 1.
Second, \(q_{d}\) is also much smaller for Fe and Ni than for EuS and EuO. Thus, much softer polarized neutrons will have to be used to see the suppression of longitudinal fluctuations of the order parameter.
### Microscopic derivations of effective theory
For Eu compounds, which are ferromagnetic insulators with well-localized magnetic moments, one can also give an independent estimate for parameters \(a\) and \(b\) starting from the microscopic Heisenberg model and performing Hubbard-Stratonovich transformation. The inputs in this computation are the critical temperature, the magnitude of the individual magnetic moments, and the lattice constant. This gives, for EuS and EuO, \(a\) and \(b\) in reasonable agreement with the estimates in Table 1 extracted from the measurements of \(\xi\) and \(\chi\) (see App. C).
Early literature [1] attempted to estimate the strength of dipolar effects in ferromagnetic metals such as Fe and Ni using similar microscopic arguments. Those estimates come out very different from Table 1, and we believe they cannot be trusted (reference [33] also
| | EuS | EuO | Fe | Ni |
| --- | --- | --- | --- | --- |
| \(C\), \(10^{-3}\) | 16 | 5.2 | 0.13 | 0.040 |
| \(t_{d}\) | 0.31 | 0.14 | 0.010 | 0.0044 |
| \(f^{+}\), \(\mathrm{\AA}\) | 1.8 | 1.6 | 0.91 | 1.27 |
| \(q_{d}\), \(\mathrm{\AA}^{-1}\) | 0.24 | 0.16 | 0.045 | 0.018 |

Table 1: We use \(t_{d}\approx(4\pi C)^{1/1.4}\) and \(q_{d}=(4\pi C)^{1/2}/f^{+}\). We give a detailed account of sources for this data in Appendix B.
finds estimates for dipolar strength of Fe and Ni in tension with [1]). Indeed, the Heisenberg model description is not correct for Fe and Ni at the microscopic level, since their magnetic moments are not localized but are carried by electrons in the conduction bands (itinerant magnetism [34]). For such materials, direct microscopic estimates of \(a\) and \(b\) are bound to be much harder than for ferromagnetic insulators. On the other hand, the method leading to Table 1 should be universally applicable.
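As a cross-check, the derived rows of Table 1 (\(t_{d}\) and \(q_{d}\)) follow directly from the quoted \(C\) and \(f^{+}\) via the formulas in the table caption; a minimal sketch, using only numbers already given in Table 1:

```python
import math

# Inputs copied from Table 1: C (the quoted numbers are in units of 10^-3)
# and f+ (in Angstrom).
materials = {
    "EuS": {"C": 16e-3,    "f+": 1.8},
    "EuO": {"C": 5.2e-3,   "f+": 1.6},
    "Fe":  {"C": 0.13e-3,  "f+": 0.91},
    "Ni":  {"C": 0.040e-3, "f+": 1.27},
}

gamma = 1.4  # approximate Heisenberg susceptibility exponent, as in the caption

for name, p in materials.items():
    x = 4 * math.pi * p["C"]
    t_d = x ** (1 / gamma)        # t_d ~ (4*pi*C)^(1/gamma)
    q_d = math.sqrt(x) / p["f+"]  # q_d = (4*pi*C)^(1/2) / f+, in Angstrom^-1
    print(f"{name}: t_d ~ {t_d:.2g}, q_d ~ {q_d:.2g} / Angstrom")

# The printed values agree with the t_d and q_d rows of Table 1,
# up to rounding of the quoted inputs.
```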
The microscopic derivation given in App. C, while not directly applicable to itinerant magnets, does illustrate an important point: the unit normalization of the \(-\phi\cdot\vec{B}^{(0)}\) coupling is inevitably related to the fixed normalization of the dipolar interaction term given in (17). One could still ask, for the sake of the argument: what if we change the coefficient of the \(\phi\cdot U\cdot\phi\) term from \(1/2\) to \(e/2\), with \(e\) a new parameter? One change is that (23) would then become (see also (16))
\[\phi=\frac{B^{(0)}}{b+4\pi D^{\rm eff}}=b^{-1}H^{t},\qquad D^{\rm eff}=eD, \qquad H^{t}=B^{(0)}-4\pi D^{\rm eff}\phi\,. \tag{17}\]
When measuring susceptibility, the value of \(D^{\rm eff}\) is important when fitting the data. Thus any deviation of \(D^{\rm eff}\) from the purely geometrically determined \(D\) could be ascribed to \(e\neq 1\). To our knowledge, no significant deviation is observed. E.g. the works [35; 36] used the geometric \(D=4\pi/3\) for spherical samples. Ref. [37] reports \(N=0.32\) (\(D=4\pi N\)) for their spherical sample (sample No.1).
Figure 1: The RG flow diagram with four fixed points: \(G\) - Gaussian, \(G^{\prime}\) - Gaussian dipolar, \(H\) - Heisenberg, \(D\) - dipolar. Materials with \(t_{d}\sim 1\), like EuS and EuO, correspond to trajectories like 1 which flow straight to \(D\). Materials with \(t_{d}\ll 1\), like Fe and Ni, are supposed to correspond to trajectories like 2 which first approach \(H\) and then flow to \(D\). The shown trajectories correspond to the exact critical temperature \(t=0\) (critical flow). For \(t\) slightly different from 0, the RG flow initially tracks the critical trajectory, and then deviates from it. Depending on the value of \(t\), this deviation may happen, for type 2 trajectories, when the flow is near \(H\) or near \(D\). This implies the crossover behavior mentioned in the main text.
### Experimental evidence of the dipolar fixed point behavior
The most dramatic effect of the dipolar fixed point is the suppression of longitudinal fluctuations of the order parameter. This effect can be seen by scattering polarized neutrons off the critical sample. For the longitudinal polarization, and for \(t\ll t_{d}\), \(q\ll q_{d}\), the scattering cross-section should be suppressed. This effect was observed in Ref. [38] for EuS and EuO. For one of these materials (EuS) they also explored \(q\sim q_{d}\) and saw that the longitudinal suppression disappears, in accord with the theory.
We next discuss measurements of the critical exponent \(\gamma\) governing the asymptotic scaling behavior of the susceptibility \(\chi\propto t^{-\gamma}\). At the Heisenberg fixed point, we have (using \(\gamma=\nu(2-\eta)\) and the latest Monte Carlo measurements of \(\nu\), \(\eta\) from [29])8
Footnote 8: The latest conformal bootstrap result [28] is in agreement but less precise \(\gamma=1.3964(9)\).
\[\gamma_{H}=1.39635(20)\,. \tag{23}\]
On the other hand for the dipolar fixed point the three-loop calculations of [31] predict a smaller but very close value:
\[\gamma_{D}=1.381(8)\,. \tag{24}\]
Ref. [39] measured \(\gamma=1.387(36)\) for EuO and \(\gamma=1.399(40)\) for EuS, in agreement with the theoretical value of \(\gamma_{D}\).
It is interesting to consider the effective susceptibility exponent:
\[\gamma_{\rm eff}=-\frac{d\log\chi}{d\log t}\,. \tag{25}\]
For materials with \(t_{d}\ll 1\), which show a crossover behavior from Heisenberg to dipolar, the effective exponent should be close to \(\gamma_{H}\) for \(t_{d}\ll t\ll 1\) and close to \(\gamma_{D}\) for \(t\ll t_{d}\), but it can deviate from these at \(t\sim t_{d}\). The functional dependence of this deviation on \(t/t_{d}\) is universal, as it is controlled by the RG trajectory connecting the \(H\) and \(D\) fixed points (Fig. 1). This was computed to second order in the \(\varepsilon\)-expansion by Bruce, Kosterlitz and Nelson [40; 41], with the result that \(\gamma_{\rm eff}\) should show a pronounced dip at \(t\sim t_{d}\). This dip can be interpreted as follows. Let us write
\[\chi\approx X_{H}t^{-\gamma_{H}}\qquad(t_{d}\ll t\ll 1),\qquad\chi\approx X_{D}t ^{-\gamma_{D}}\qquad(t\ll t_{d}). \tag{26}\]
Although we have \(\gamma_{H}\approx\gamma_{D}\), the prefactors \(X_{H}\) and \(X_{D}\) do not have to be equal. The ratio \(X_{D}/X_{H}\) is universal. If \(X_{D}/X_{H}<1\), \(\gamma_{\rm eff}\) will show a dip. This effect was confirmed experimentally by measurements on amorphous ferromagnets [42; 43], reviewed in [44].
_Remark 3.1_.: We would like to comment on the relatively recent susceptibility measurements in Ni [37], which cover \(5\times 10^{-4}<t<1.5\times 10^{-2}\). The estimate of \(t_{d}\) for Ni in Table 1 (and also in [33]) falls in the middle of this interval. However, the observations of [37] are inconsistent with this. Indeed, Ref. [37] sees no sign of the Heisenberg to dipolar crossover; in fact, their \(\gamma_{\rm eff}\) decreases monotonically as \(t\) increases. The authors of [37] assume that their values of \(t\) belong to the \(H\) critical region, and attribute the variation of \(\gamma\) to corrections to scaling near the \(H\) fixed point. Their experimental \(\gamma\), extrapolated to
\(t=0\), is \(\gamma=1.340(10)\), significantly lower than the theoretical value (3.14). To explain this discrepancy, they hypothesize a long-range exchange interaction modifying the universality class. The theoretical origin of this ad hoc interaction is unclear.
Experimental \(\gamma\) lower than theory is typical for measurements in isotropic itinerant magnets (see e.g. [45], table 5). It would be interesting to understand why.
## 4 Scale invariance without conformal invariance
We will now argue that the dipolar fixed point is scale invariant but not conformally invariant. We will give two arguments, one based on the 2pt function of \(\phi_{i}\) and another on the form of the stress tensor and the existence of the virial current (including an explanation for its non-renormalization due to a shift symmetry). These discussions assume the \(\varepsilon\)-expansion in \(d=4-\varepsilon\) dimensions, but some of the arguments such as non-renormalization of the virial current will be non-perturbative.
In Section 5 we list several other interacting models having scale without conformal invariance and identify shift symmetry as a general feature protecting the virial current dimension of all such currently known models.
### Two-point function argument
The simplest way to observe that the theory is not conformally invariant is to look at the 2pt function of \(\phi_{i}\). This type of argument goes back to [11, 20], and was also used in [7].
The 2pt function of \(\phi_{i}\) with scaling dimension \(\Delta_{\phi}\) is given by
\[\langle\phi_{i}(x)\phi_{j}(0)\rangle=\frac{A}{|x|^{2\Delta_{\phi}}}\left( \delta_{ij}-\alpha\frac{x_{i}x_{j}}{x^{2}}\right)\;,\qquad\alpha=\frac{2 \Delta_{\phi}}{2\Delta_{\phi}-(d-1)}\,, \tag{4.1}\]
where \(\alpha\) is fixed by the transversality condition \(\partial_{i}\phi_{i}=0\). Unless \(\Delta_{\phi}=d-1\), this is different from the 2pt function of a primary vector field, which has \(\alpha=2\). In \(\varepsilon\)-expansion \(\Delta_{\phi}=d-1\) can be excluded; indeed we have [3]
\[\Delta_{\phi}=\frac{d-2}{2}+\gamma_{\phi},\qquad\gamma_{\phi}=\frac{10}{867} \varepsilon^{2}+O(\varepsilon^{3})\,, \tag{4.2}\]
and the anomalous dimension \(\gamma_{\phi}\) remains tiny in any \(3\leqslant d\leqslant 4\); the recent \(d=3\) calculation [31] found \(\eta_{\rm dip}=2\gamma_{\phi}=0.033(8)\). Hence \(\phi_{i}\) cannot be a conformal primary. Being the lowest dimension field of the theory, it cannot be a descendant either, finishing the proof.
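For completeness, the value of \(\alpha\) quoted in (4.1) is fixed as follows: acting with \(\partial_{i}\) and using \(\partial_{i}|x|^{-2\Delta_{\phi}}=-2\Delta_{\phi}\,x_{i}|x|^{-2\Delta_{\phi}-2}\), one finds
\[\partial_{i}\langle\phi_{i}(x)\phi_{j}(0)\rangle=\frac{A\,x_{j}}{|x|^{2\Delta_{\phi}+2}}\Big[-2\Delta_{\phi}+\alpha\big(2\Delta_{\phi}-(d-1)\big)\Big]\,,\]
which vanishes precisely for the value of \(\alpha\) given in (4.1).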
As mentioned in the introduction, often questions of scale and conformal invariance are studied imposing unitarity [19, 20, 21, 46, 47]. In this respect, it is worth pointing out that the dipolar fixed point is not unitary. Consider the 2pt function (4.1) with the separation being in the \(x_{1}\) direction, i.e. \(x=(1,0,0,\ldots)\). In this configuration we have, from (4.1):
\[\langle\phi_{1}(x)\phi_{1}(0)\rangle=A(1-\alpha)\,, \tag{4.3}\] \[\langle\phi_{i}(x)\phi_{i}(0)\rangle=A\qquad(i\neq 1)\,, \tag{4.4}\]
while reflection positivity demands that
\[\langle\phi_{1}(x)\phi_{1}(0)\rangle\leqslant 0\,, \tag{4.5}\] \[\langle\phi_{i}(x)\phi_{i}(0)\rangle\geqslant 0\qquad(i\neq 1)\,. \tag{4.6}\]
We see that these constraints require \(\alpha\geqslant 1\), that is \(\Delta_{\phi}\geqslant\frac{d-1}{2}\), or equivalently \(\gamma_{\phi}\geqslant 1/2\), which is excluded by the above perturbative estimates of this anomalous dimension.9
Footnote 9: This argument also shows that the transverse vector unparticle proposed originally by Georgi in [48] is ruled out in the range \(\Delta_{\phi}<\frac{3}{2}\) even if we assume the existence of scale-invariant but non-conformal field theories in \(d=4\).
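Concretely (a rough numerical illustration, using the central value of \(\eta_{\rm dip}\) quoted above): in \(d=3\), \(\gamma_{\phi}\approx 0.017\) gives
\[\Delta_{\phi}\approx 0.52\,,\qquad\alpha=\frac{2\Delta_{\phi}}{2\Delta_{\phi}-2}\approx-1.1\,,\]
far below the reflection-positivity bound \(\alpha\geqslant 1\).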
_Remark 4.1_.: As we have seen in Section 2.1, the dipolar model Hamiltonian (2.12) can be obtained by coupling the \(O(3)\) model to the fluctuating magnetic field, (2.8), integrating out the magnetic field and taking the low energy limit. The \(O(3)\) model and the magnetic field Hamiltonian are separately unitary. So how can their coupling produce a non-unitary theory?10 The answer is that unitarity is broken by the coupling term, \(\phi_{i}B_{i}\) in (2.8), which treats \(\phi_{i}\) as a vector field, while it was a scalar multiplet in the \(O(3)\) model.
_Remark 4.2_.: An equivalent way to study the unitarity of the 2pt function (4.1) is by considering the Wightman spectral density in momentum space [49], which can be obtained as the imaginary part of the Schwinger function in momentum space (2.7), continued from Euclidean to Lorentzian. We have:
Footnote 10: We thank Juan Maldacena for raising this question and answering it.
\[\langle\phi_{i}(q)\phi_{j}(-q)\rangle\propto\theta(q^{0})\theta(q^{2})\frac{1} {(q^{2})^{2-\gamma_{\phi}}}\left[q^{2}\eta_{ij}-q_{i}q_{j}\right]\,, \tag{4.7}\]
where we use Lorentzian notation with the mostly minus metric. The Wightman function is supported in the forward cone \(q^{0}\geqslant 0\), \(q^{2}\geqslant 0\).
Multiplying (4.7) by external wavefunctions \(\chi_{i}(q)\) and \(\chi_{j}^{*}(-q)\) and integrating over \(q\), unitarity requires that the answer should be nonnegative. The expression in brackets gives
\[q^{2}\chi.\chi^{*}-|\chi.q|^{2}\,, \tag{4.8}\]
which is non-negative inside the forward cone, as can be seen going to the rest frame \(\vec{q}=0\), where it becomes \(q_{0}^{2}|\vec{\chi}|^{2}\). The lack of unitarity of this theory comes not from the negativity of the Wightman function inside the forward cone (it is positive), but from the behavior of the integrand near the null cone. Indeed a positive distribution must be a _measure_, i.e. integrable. The strongest constraint comes from approaching the cone transversally and imposes the constraint (cf [49], Eq. (4.7)):
\[\text{unitarity}\Longrightarrow\gamma_{\phi}\geqslant 1, \tag{4.9}\]
which is even stronger than the condition \(\gamma_{\phi}\geqslant 1/2\) found above.11 However, the \(x\)-space argument provided only a necessary condition, since we only examined reflection positivity for the field \(\phi_{i}\). Considering derivatives of \(\phi_{i}\), it should be possible to improve the \(x\)-space argument and rule out the range \(1/2\leqslant\gamma_{\phi}<1\).
Footnote 11: Without extra assumptions, one cannot improve the bound further due to the existence of a concrete example: take a free massless scalar \(\varphi\) and consider \(\phi_{i}=\partial_{i}\varphi\) (so that \(\gamma=1\) in (4.7)). It is conserved because of the equation of motion, and the 2pt function must be consistent with the unitarity bound.
_Remark 4.3_.: Note that the position space correlator (4.1) becomes singular (\(\alpha=\infty\)) for \(\Delta_{\phi}=\frac{d-1}{2}\). One wonders if this could lead to a nonperturbative argument that \(\Delta_{\phi}\) cannot cross \(\frac{d-1}{2}\) between UV and IR. However, there does not seem to be a simple way to show this. Using an ansatz allowing for violations of scale invariance at intermediate scales:
\[\langle\phi_{i}(x)\phi_{j}(0)\rangle=f(x^{2})\left(\delta_{ij}+g(x^{2})\frac{x _{i}x_{j}}{x^{2}}\right)\,, \tag{4.10}\]
the constraint \(\partial_{i}\phi_{i}=0\) imposes:
\[\frac{dg}{d\rho}=\frac{d\log f}{d\rho}+\left[\frac{1-d}{2}-\frac{d\log f}{d \rho}\right]g\,,\quad\rho=x^{2}\,. \tag{4.11}\]
If \(\frac{d\log f}{d\rho}=(1-d)/2\) for some \(\rho=\rho_{0}\), the second term in the r.h.s. vanishes. This implies that \(\frac{dg}{d\rho}\) must be nonzero at such \(\rho\), but does not signal any particular singularity in integrating the equation.
### Stress tensor argument
The second, classic [19], way to understand whether a fixed point is scale invariant with or without conformal invariance, proceeds via the properties of the trace of the stress tensor, which is a response to the background metric. According to Polchinski's analysis [19], a local fixed point is scale invariant if the trace of the stress tensor \(T_{ij}\)12 is given by the divergence of a vector operator \(V_{i}\), referred to as the virial current:
Footnote 12: We always assume that the stress tensor is symmetric to make the rotational invariance manifest.
\[T_{ii}=-\partial_{i}V_{i}\,. \tag{4.12}\]
Furthermore, the fixed point is conformally invariant (in \(d>2\), which is our case of interest here) if in addition to (4.12), the virial current is given by a divergence of a local operator, namely
\[V_{i}=\partial_{j}\mathcal{O}_{ij}\,, \tag{4.13}\]
where \(\mathcal{O}_{ij}\) can be assumed symmetric without loss of generality. We will call virial currents satisfying this condition _improvable_. If it holds, an "improved" stress tensor can be found which is traceless [19].
To discuss the stress tensor in our model, we start from the local effective Hamiltonian (2.12), which eliminates the longitudinal fluctuations of the order parameter via the Lagrange multiplier. We will see below that our fixed point has a virial current \(V_{i}\propto U\phi_{i}+\ldots\) where \(\ldots\) stands for improvable terms.
Here we present the gist of the argument, postponing to Appendix D a more detailed treatment of the effects of renormalization. To compute the stress tensor in \(d=4-\varepsilon\) dimensions, it is convenient to redefine \(U\to U+\frac{1}{2}\partial_{i}\phi_{i}\), which gives an effective action equivalent to (2.12):
\[\widetilde{\mathcal{H}}=\int d^{d}x\left(\frac{1}{4}f_{ij}^{2}+\phi_{i} \partial_{i}U+\frac{\lambda}{4}(\phi_{i}\phi_{i})^{2}\right)\,. \tag{4.14}\]
Here \(f_{ij}=\partial_{i}\phi_{j}-\partial_{j}\phi_{i}\), and we have already assumed that \(m^{2}\) is at the critical value, which is \(m^{2}=0\) in dimensional regularization. The EOM are
\[\partial_{i}f_{ij}=\partial_{j}U+\lambda(\phi_{i}^{2})\phi_{j}\,\qquad \partial_{i}\phi_{i}=0\,. \tag{4.15}\]
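To spell out the equivalence between (2.12) and (4.14): under \(U\to U+\frac{1}{2}\partial_{k}\phi_{k}\), and up to total derivatives,
\[\frac{1}{2}\,\partial_{i}\phi_{j}\partial_{i}\phi_{j}-\Big(U+\tfrac{1}{2}\partial_{k}\phi_{k}\Big)\partial_{i}\phi_{i}=\frac{1}{2}\,\partial_{i}\phi_{j}\partial_{i}\phi_{j}-\frac{1}{2}(\partial_{i}\phi_{i})^{2}-U\partial_{i}\phi_{i}\simeq\frac{1}{4}f_{ij}^{2}+\phi_{i}\partial_{i}U\,,\]
using \(\frac{1}{4}f_{ij}^{2}=\frac{1}{2}\partial_{i}\phi_{j}\partial_{i}\phi_{j}-\frac{1}{2}\partial_{i}\phi_{j}\partial_{j}\phi_{i}\) and integration by parts.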
The (classical) stress tensor is computed as usual by varying the background metric. The merit of using \(f_{ij}\) is that we have no covariant derivative in the action.13 We only give here the expression for the trace of the stress tensor (see App. D for the full stress tensor):
Footnote 13: We vary \(\int d^{d}x\sqrt{g}\left(\frac{1}{2}g^{ij}g^{kl}f_{ik}f_{jl}+g^{ij}\partial_{i}U\phi_{j}+\frac{\lambda}{4}(g^{ij}\phi_{i}\phi_{j})^{2}\right)\). Stress tensors computed from (2.12) and (4.14) differ by improvable and EOM terms.
\[T_{ii} =\frac{\varepsilon}{4}\big{(}f_{ij}^{2}+\lambda(\phi_{i}^{2})^{2 }\big{)}+(2-d)\phi_{i}\partial_{i}U\] \[=-\frac{\varepsilon}{4}\lambda(\phi_{i}^{2})^{2}-\frac{d}{2} \partial_{i}\left(\phi_{i}U\right)+\frac{\varepsilon}{2}\partial_{i}\partial _{j}\left(\frac{1}{2}\phi^{2}\delta_{ij}-\phi_{i}\phi_{j}\right)\,. \tag{4.16}\]
In the second line, we used the EOM to express the trace as a sum of a quartic term, a virial current and an improvable term. To include the effects of renormalization, we should express the trace in terms of renormalized fields. We carry this out in Appendix D, and here just give the main results. For the quartic term, renormalization amounts to the replacement \(\varepsilon\lambda(\phi_{i}^{2})^{2}\to-\beta(\lambda)(\phi_{i}^{2})^{2}-4 \gamma_{\phi}\partial_{i}\left(\phi_{i}U\right)+\ldots\), with \(\ldots\) improvable terms. At the fixed point, the beta-function vanishes \(\beta(\lambda)=0\) and the quartic term drops out. We finally obtain
\[T_{ii}\left|{}_{\text{fixed point}}=-\partial_{i}V_{i},\qquad V _{i}=V_{i}^{(0)}+\partial_{j}\mathcal{O}_{ij}\,,\qquad V_{i}^{(0)}=\Delta_{U} \,U\phi_{i}\,. \tag{4.17}\]
The coefficient \(\Delta_{U}\) in the last equation is shifted from the classical value \(d/2\) to the renormalized value \(\Delta_{U}=d/2-\gamma_{\phi}\).14 The part \(V_{i}^{(0)}\) of the virial current is not conserved: using EOM (4.15), we have \(\partial_{i}V_{i}^{(0)}=\Delta_{U}\,\partial_{i}U\phi_{i}\neq 0\). It is also not improvable, as it is built of fields that carry no derivatives, while according to (4.13), improvable virial currents should contain at least one derivative.
Footnote 14: The formal derivation of this finite shift can be found in Appendix D. Here we would like to point out the following amusing connection. We know that \(\Delta_{\phi}=d-1-\Delta_{U}\). If \(\Delta_{U}\) were zero (which does not happen in the dipolar model, because \(\gamma_{\phi}\) is tiny), \(\Delta_{\phi}\) would become consistent with the conformal symmetry, as we have seen in Section 4.1. We see that this goes hand in hand with the vanishing of the unimprovable part \(V^{(0)}\) of the virial current in (4.17).
Thus we arrive at the same conclusion as in Section 4.1: the dipolar fixed point is an interacting scale invariant fixed point without conformal invariance.
### Virial current dimension and the shift symmetry
This result begs the following question. Eq. (4.12) means that the virial current operator \(V_{i}\) has scaling dimension exactly \(d-1\), since the stress tensor has scaling dimension \(d\).15
In other words, its scaling dimension is not renormalized from the canonical dimension \(d-1\). Usually, the only vector operators that are not renormalized are the conserved currents, making scale without conformal invariance generically impossible in the presence of interactions [11; 17]. Yet \(V_{i}\) is definitely not conserved. There must be something non-generic about the dipolar fixed point, allowing \(V_{i}\) to not renormalize in the presence of interactions.
This non-generic feature is the _shift symmetry_ of model (12). The shift symmetry acts on the fundamental field \(U\) by \(U(x)\to U(x)+u\) where \(u\) is a constant. The origin of this symmetry is the Bianchi identity of the \(B\) field.
The shift symmetry is a global symmetry group, \(G_{\rm shift}=\mathbb{R}\). In particular, it commutes with Poincare. We can also consider the shift symmetry charge \(Q\), which by definition acts on the fundamental field as
\[[Q,U(x)]=1\qquad\text{(i.e. $\delta_{Q}U=1$)}\,. \tag{4.18}\]
Therefore the scaling dimension of \(Q\) is given by
\[\Delta_{Q}=-\Delta_{U}\,. \tag{4.19}\]
The conserved current generating the shift symmetry is \(\phi_{i}\). The charge \(Q\) can be obtained as a surface integral of the conserved current, \(Q=\int d\Sigma_{i}\,\phi_{i}\), from which we get an alternative expression for its scaling dimension:
\[\Delta_{Q}=\Delta_{\phi}-(d-1)\,. \tag{4.20}\]
The two expressions for \(\Delta_{Q}\) are consistent by (14). This also provides an alternative way of understanding (14).
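Spelling out this consistency check:
\[\Delta_{Q}=-\Delta_{U}=\Delta_{\phi}-(d-1)\qquad\Longleftrightarrow\qquad\Delta_{U}=d-1-\Delta_{\phi}=d-1-\Big(\frac{d-2}{2}+\gamma_{\phi}\Big)=\frac{d}{2}-\gamma_{\phi}\,,\]
in agreement with (14).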
Eq. (4.19) can also be equivalently written as a commutation relation between \(Q\) and the dilatation generator \(D\):
\[[D,Q]=-\Delta_{U}Q\,. \tag{4.21}\]
Since \(\Delta_{U}>0\) in our model, this equation means that \(Q\) acts as a lowering operator for the scaling dimension. Scaling operators in the dipolar fixed point will come in infinite "shift multiplets":
\[\{\mathcal{O}_{0},\mathcal{O}_{1},\ldots\}\,, \tag{4.22}\]
where
\[[Q,\mathcal{O}_{n}]\,=\,\mathcal{O}_{n-1}\quad(n\geqslant 1)\,,\qquad[Q, \mathcal{O}_{0}]=0 \tag{4.23}\]
and
\[\Delta(\mathcal{O}_{n})=\Delta(\mathcal{O}_{0})+n\Delta_{U}. \tag{4.24}\]
Generally, in free theory we have \(\mathcal{O}_{n}=\frac{1}{n!}U^{n}\mathcal{O}_{0}\). However, our construction of shift multiplets does not rely on perturbation theory, and the scaling dimensions (4.24) are valid non-perturbatively.
Let us focus on the shift multiplet constructed on top of \(\mathcal{O}_{0}=\phi_{i}\). We have:
\[\Delta(\mathcal{O}_{1})=\Delta_{\phi}+\Delta_{U}=d-1. \tag{101}\]
I.e. \(\mathcal{O}_{1}\) has scaling dimension \(d-1\), which is what we need! However, it's not a conserved current. Indeed, in free theory we have
\[(\mathcal{O}_{1})_{i}=U\phi_{i}. \tag{102}\]
At the interacting IR fixed point we will have
\[(\mathcal{O}_{1})_{i}=U\phi_{i}+\ldots \tag{103}\]
where \(\ldots\) are terms of 4d scaling dimension 3 which have schematic form \(\partial\phi^{2}\). These terms need to be added to \(U\phi_{i}\) to make it a good scaling operator. Note that \(\ldots\) terms do not involve \(U\) because the mixing matrix of \(U\phi_{i}\) and \(\partial\phi^{2}\) is triangular: \(\partial\phi^{2}\), being neutral under the shift symmetry, cannot generate \(U\phi_{i}\) under RG flow.16
Footnote 16: The operator \(\partial_{i}U\) also has 4d scaling dimension 3, but being \(\mathbb{Z}_{2}\) odd it does not appear in \(\ldots\).
We have shown that the dipolar fixed point contains a vector operator \((\mathcal{O}_{1})_{i}\) of the form (103), having scaling dimension exactly \(d-1\). This achieves the main goal of this section - to show how the shift symmetry may naturally provide non-conserved operators of this scaling dimension.
The virial current \(V_{i}\) is also of the form (103), up to \(\Delta_{U}\) rescaling. Let us consider the stress tensor, which has a well-defined scaling dimension (\(d\)), so that the corresponding virial current, which we denote \(V^{\rm scale}\), also has a well-defined scaling dimension (\(d-1\)). This also fixes the \(\ldots\) terms in the virial current. It can be shown that the \(\ldots\) terms in \(V^{\rm scale}\) and \(\mathcal{O}_{1}\) are the same (up to \(\Delta_{U}\) rescaling), that is:
\[V^{\rm scale}=\Delta_{U}\mathcal{O}_{1}\,. \tag{104}\]
This should not be surprising - it is already difficult to have _one_ non-conserved vector of scaling dimension exactly \(d-1\); to have _two_ would be inexplicable. Thus we will not verify this explicitly here.
We will see in Section 5 that shift symmetries are at work not just for the dipolar fixed point but for all known interacting models of scale without conformal invariance. In all of them, the virial current dimension can be seen to be protected by a shift symmetry.
_Remark 4.4_.: For completeness, we give here the original version of the argument for the non-renormalization of the virial current [18]. There, the main idea was to exploit the shift symmetry by considering the 2pt functions \(\langle V_{j}(x_{2})\phi_{k}(x_{3})\rangle\) and \(\langle\phi_{j}(x_{2})\phi_{k}(x_{3})\rangle\), which are related by shift symmetry:
\[\delta_{Q}\langle V_{j}(x_{2})\phi_{k}(x_{3})\rangle=\Delta_{U}\langle\phi_{j }(x_{2})\phi_{k}(x_{3})\rangle\,. \tag{105}\]
This relation can be written as the following Ward-Takahashi identity involving the divergence of \(\phi_{i}\), which is the shift symmetry current:
\[\langle\partial_{i}\phi_{i}(x_{1})V_{j}(x_{2})\phi_{k}(x_{3})\rangle=\Delta_{ U}\delta^{d}(x_{1}-x_{2})\langle\phi_{j}(x_{2})\phi_{k}(x_{3})\rangle. \tag{106}\]
Equating the scaling dimensions of both sides gives
\[\Delta_{\phi}+1+\Delta_{V}+\Delta_{\phi}=d+2\Delta_{\phi}\implies\Delta_{V}=d-1\,. \tag{108}\]
This argument for the nonrenormalization of \(\Delta_{V}\) is basically equivalent to the argument given in the main text. Although it is shorter, at first glance it may appear a bit ad hoc. The argument in the main text, based on equations (105), (106), is hopefully useful to understand the inner workings of this mechanism.
_Remark 4.5_.: It is amusing to recall that the usual argument for the non-renormalization of the conserved current dimension is also based on a Ward-Takahashi identity. Namely, we have, for a conserved current \(J\) associated with a linearly realized global symmetry, a Ward-Takahashi identity of the schematic form \(\langle\partial_{i}J_{i}(x_{1})\varphi(x_{2})\ldots\rangle\propto\delta^{d}( x_{1}-x_{2})\langle\varphi(x_{2})\ldots\rangle\), which implies \(\Delta_{J}=d-1\).
### Other consequences of shift symmetry
Let us discuss an additional role of the shift symmetry concerning the \(U^{2}\) operator. This operator is classically marginal. Taking into account interactions, it will become weakly relevant (see below), and could destabilize the dipolar fixed point. However, since the operator is charged under the shift symmetry, it is not generated by the RG, and so the fixed point is protected.
Let us discuss the dimension of \(U^{2}\) in more detail. Let us denote by \([U^{2}]\) the renormalized \(U^{2}\) operator, which differs from \(U^{2}\) by pieces of the schematic form \((\partial\phi)^{2}\) and \(\partial(\phi U)\), with which it can mix under renormalization. The operator \([U^{2}]\) can be equivalently defined as the first excited member \(\mathcal{O}_{1}\) of the shift multiplet (4.22) built on top of \(\mathcal{O}_{0}=U\). Therefore, we have
\[\Delta_{[U^{2}]}=2\Delta_{U}=2(d-1-\Delta_{\phi})=d-2\gamma_{\phi}\,, \tag{109}\]
where the first equation follows from (4.24). Since \(\gamma_{\phi}\) is positive, the \(U^{2}\) deformation is indeed relevant, as stated above.
As we said, \(U^{2}\) is not going to be generated by the RG starting from (12). But what will happen if we add it by hand, thus breaking the shift symmetry? It is reasonable to guess that this deformation will start a flow which, if the mass term is properly perturbed as well, will eventually take us back to the \(O(d)\) Wilson-Fisher fixed point, where the theory is conformally invariant. We can test this guess for consistency by looking at what happens at the end point of this flow, when the Wilson-Fisher fixed point is approached. Around the \(O(d)\) Wilson-Fisher fixed point (or \(O(3)\) Heisenberg fixed point in three dimensions), the flow will be induced by the leading deformation which breaks the \(O(d)\) spatial times \(O(d)\) internal symmetry of Wilson-Fisher to the diagonal \(O(d)\). This deformation \(\mathcal{O}\) can be constructed from the Wilson-Fisher primary \(R_{\mu\nu,ij}\) of the schematic form \(\phi_{i}\partial_{\mu}\partial_{\nu}\phi_{j}\), which is spin two in both space-time and \(O(d)\), by contracting indices appropriately: \(\mathcal{O}=\delta_{\mu i}\delta_{\nu j}R_{\mu\nu,ij}\). It is known that in \(d=4-\varepsilon\)[22]
\[\Delta_{\mathcal{O}}=d+\frac{d}{(3(d+8))^{2}}\varepsilon^{2}\,. \tag{110}\]
This anomalous dimension has to be positive because this primary is not conserved, and unitarity implies that a non-conserved spin-two operator has dimension strictly greater than \(d\). So \(\mathcal{O}\) is indeed irrelevant, as it should be if the flow leads from the dipolar to the Wilson-Fisher fixed point.
We see that once we break the shift symmetry, we flow to a fixed point which is conformal. This may be traced back to the need to have the virial current of dimension \(d-1\). Once the shift symmetry is broken, there is nothing that protects the dimension of the virial current in an interacting theory. We will see in the next section that all known scale without conformal fixed points have a shift symmetry.
_Remark 4.6_.: At the beginning of the paper, we discussed a flow that takes us from Wilson-Fisher to dipolar via a non-local deformation (2). The non-local deformation could be obtained, as in Section 2.1, by coupling Wilson-Fisher to another local sector (magnetic field). This additional local sector is not reproduced when we get back from dipolar to Wilson-Fisher via a local deformation breaking the shift symmetry, as discussed above. Thus the RG flow is not circular.
## 5 Other interacting models with scale without conformal invariance
In this section, we discuss several other interacting scale without conformal models. An important feature of all these models is that they have a shift symmetry that acts on some fundamental field as \(\mathcal{O}(x)\to\mathcal{O}(x)+c\), with \(c\) a constant. It is this symmetry that prevents scale invariance from getting enhanced to conformal invariance.
To see that shift-invariant models are not conformal one simply looks at the 2pt function of the shift current. This is in fact the strategy of Section 4.1, because \(\phi_{i}\) is the shift current in the dipolar model. Throughout this section we call \(J_{\mu}\) the shift current, and we call \(Q=\int d\Sigma_{\mu}J_{\mu}\) its charge. In this notation, conservation of the shift current fixes its 2pt function
\[\langle J_{\mu}(x)J_{\nu}(0)\rangle=\frac{A}{|x|^{2\Delta_{J}}} \left(\delta_{\mu\nu}-\frac{2\Delta_{J}}{2\Delta_{J}-(d-1)}\frac{x_{\mu}x_{ \nu}}{x^{2}}\right)\,. \tag{5.1}\]
If \(J_{\mu}\) is a primary field, that is if \(J_{\mu}\) is not a total derivative of another local operator, then this is compatible with conformal invariance only when \(\Delta_{J}=d-1\). However, the way shift symmetry acts on \(\mathcal{O}\) implies
\[[Q,\mathcal{O}(x)]=\delta_{Q}\mathcal{O}(x)=1\qquad\Longrightarrow \qquad\Delta_{J}=d-1-\Delta_{\mathcal{O}}\,. \tag{5.2}\]
As a result, if \(J_{\mu}\) is a primary field and \(\Delta_{\mathcal{O}}\neq 0\), the fixed point is scale but not conformally invariant.17 This elementary argument explains why all models below, and more generally any interacting shift-invariant theory, are scale but not conformally invariant.
Footnote 17: The only subtlety is that the shift current could be a descendant. Since \(\delta_{Q}\mathcal{O}=1\) we get the 2pt function \(\langle J_{\mu}(q)\mathcal{O}(-q)\rangle=iq_{\mu}/q^{2}\), and conformal invariance would require \(J_{\mu}\) to be a descendant of \(\mathcal{O}\). In this case \(J\sim\partial^{n}\mathcal{O}\) and \(\Delta_{J}=\Delta_{\mathcal{O}}+n\). Compatibility with (5.2) requires \(\Delta_{\mathcal{O}}=\frac{d-1-n}{2}\) for some integer \(n\). This scenario is realized in free theories with Lagrangian \(\mathcal{L}=\frac{1}{2}(\partial_{\mu_{1}}\dots\partial_{\mu_{m}}\mathcal{O})^{2}\). However, in interacting theories generically \(\Delta_{\mathcal{O}}\neq\frac{d-1-n}{2}\) and only scale invariance will be realized.
This still begs the question of why all these models have a virial current \(V_{\mu}\) with dimension exactly \(\Delta_{V}=d-1\). The main idea is the same as in Section 4.3, namely that shift symmetry provides a candidate virial current \(\mathcal{V}_{\mu}\) with the right dimension. The property that defines \(\mathcal{V}_{\mu}\) is that under shift-symmetry it maps to the current \(J_{\mu}\), namely
\[[Q,\mathcal{V}_{\mu}]=\delta_{Q}\mathcal{V}_{\mu}=J_{\mu}\quad \Longrightarrow\quad\Delta_{\mathcal{V}}=d-1\,. \tag{5.3}\]
The implication \(\Delta_{\mathcal{V}}=d-1\) is valid assuming that \(\mathcal{V}_{\mu}\) is a good scaling operator (i.e. that it has a well-defined scaling dimension). We stress that \(\mathcal{V}_{\mu}\) is a _candidate_ virial current. To guarantee that only scale invariance is present, it is still necessary to check that this \(\mathcal{V}_{\mu}\) is the true virial current \(V_{\mu}\), i.e. that it does appear in the trace of the stress tensor, and moreover that it is not improvable. The goal of the rest of this section is to show in detail how this mechanism works for several models of interest. The argument in (5.2)-(5.3) trivially applies to all these models, and shall not be repeated below.
### Landau-gauge massless QED in \(d=4-\varepsilon\)
We start by reviewing the examples of [8]. Historically, these were the first interacting examples of scale without conformal invariance. The first example is the Landau-gauge massless QED in \(d=4-\varepsilon\) dimensions. It is known that massless QED (a \(U(1)\) gauge field + a massless fermion) flows to a fixed point in \(d=4-\varepsilon\) dimensions, which is conformal in the gauge-invariant sector [50; 51]. Let us consider the gauge-fixed action for the same flow, in the Landau gauge, implemented via a Lagrange multiplier:
\[S=\int d^{d}x\left(-\frac{1}{4e^{2}}(\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu})^{2}+B\partial_{\mu}A_{\mu}+i\bar{\psi}D_{\mu}\gamma_{\mu}\psi+\text{ghosts}\right). \tag{5.4}\]
Usually, the gauge fixed action is treated as a formal device to do computations for the original theory. Here, following [8], we consider it as a field theory in its own right. It is a well-defined field theory, albeit non-unitary. The decoupled ghost sector is necessary for BRST invariance.
The bosonic part of the action (5.4) bears some similarity with the dipolar model action (4.14), except for the absence of the quartic term \((A_{\mu}A_{\mu})^{2}\). Of course, this term was absent in the gauge theory from which (5.4) originated. At the level of theory (5.4), this term is forbidden by the BRST invariance of (5.4). In the dipolar model (4.14), there was no BRST invariance, and the quartic term was allowed.
Now, the main points of [8] are:
* Theory (5.4) has a fixed point at the same value of the gauge coupling as the original gauge-invariant theory.
* This fixed point is scale invariant but not conformally invariant, with the non-zero virial current \(V_{\mu}=(d-2)BA_{\mu}\).
* There is no contradiction with the conformal invariance of the gauge theory fixed point. Indeed, the virial current together with the decoupled ghost contribution is BRST trivial, so after taking the BRST cohomology, the theory does become conformally invariant.
We can now see that the dimension of the virial current in this model is protected by the same mechanism (5.3). The shift symmetry \(Q\) is \(B\to B+b\) for \(b\) constant; it is generated by the current \(J_{\mu}=A_{\mu}\). We have \(\delta_{Q}V_{\mu}\propto J_{\mu}\). Thus \(\Delta_{V}=d-1\), as pertains to the virial current.
In [8], the non-renormalization of the virial current was proved in a slightly different manner by using the BRST symmetry. Our derivation here is more direct because we do not refer to the decoupled ghost sector.
### Landau-gauge Banks-Zaks fixed point in \(d=4\)
The next example from [8] applies the same idea to non-abelian gauge theories in 4d (as opposed to \(d=4-\varepsilon\)). In 4d, we know infinitely many examples of gauge theories with massless matter that exhibit non-trivial fixed points, such as the Banks-Zaks fixed points or the \({\cal N}=4\) super Yang-Mills theory. These fixed points show conformal invariance in the gauge-invariant sector, but the corresponding gauge-fixed theories show scale invariance without conformal invariance [8]. However, after taking the BRST cohomology and thus restricting to the gauge-invariant sector, these scale-invariant fixed points become conformal.
In more detail, consider the Yang-Mills theory with massless matter. Here as in the previous section we restrict to the Landau gauge (see the next section for an example not in the Landau gauge). The action is:
\[S=\int d^{4}x\left(-\frac{1}{4g^{2}}F^{a}_{\mu\nu}F^{a}_{\mu\nu}+B^{a}\partial _{\mu}A^{a}_{\mu}+i\bar{c}^{a}\partial_{\mu}D_{\mu}c^{a}+\text{matter}\right)\;. \tag{5.5}\]
Let us assume that the gauge coupling, as well as the other matter coupling constants if any, flow to a fixed point. In this general setup, the analysis of the stress tensor trace shows [8] that theory (5.5) is scale invariant but not conformal. The virial current is
\[V_{\mu}=\Delta_{B}(B^{a}A^{a}_{\mu}+i\bar{c}^{a}D_{\mu}c^{a})\,. \tag{5.6}\]
Here the coefficient in the virial current is shifted from the classical value of 2 to \(\Delta_{B}=2-\gamma_{A}\). While the full theory is only scale invariant, conformal invariance would be recovered were we to restrict to the BRST-invariant subsector. This is because the above virial current is BRST trivial:
\[V_{\mu}=\{Q_{\text{BRST}},\Delta_{B}\bar{c}^{a}A^{a}_{\mu}\}\,. \tag{5.7}\]
Thus, within the BRST cohomology, the energy-momentum tensor becomes traceless. This is why scale invariance of the gauge-fixed theory [8] is not in contradiction with conformal invariance of the corresponding gauge-invariant fixed point.
Now let us discuss how the dimension of the virial current is protected. In addition to the BRST invariance, theory (5.5) has _two_ shift symmetries \(B^{a}\to B^{a}+\lambda^{a}\) and \(\bar{c}^{a}\to\bar{c}^{a}+\bar{b}^{a}\) with constant \(\lambda^{a},\bar{b}^{a}\).18 Although we won't need it, we note the algebra satisfied by their charges:
Footnote 18: The second shift symmetry exists because the ghost \(c^{a}\) is independent of the anti-ghost \(\bar{c}^{a}\). Not all textbooks treat this point properly.
\[[Q_{\text{BRST}},Q_{B}]=Q_{\bar{c}},\qquad\{Q_{\text{BRST}},Q_{\bar{c}}\}=0, \qquad[Q_{B},Q_{\bar{c}}]=0\,. \tag{5.8}\]
The currents of the \(B\) and \(\bar{c}\) symmetries are the fields \(A_{\mu}\) and \(D_{\mu}c\). Moreover we have:
\[[Q_{B},BA_{\mu}]=A_{\mu},\quad[Q_{\bar{c}},\bar{c}\,D_{\mu}c]=D_{\mu}c\,. \tag{111}\]
Both these equations are of the form (108). Applying the general argument, we conclude that theory (109) contains _two_ fields of dimension \(d-1\), namely the level-1 fields of the \(B\)-shift and \(\bar{c}\)-shift symmetry multiplets built on top of the corresponding shift currents \(A_{\mu}\) and \(D_{\mu}c\). The virial current (110) is their particular linear combination, so it also has dimension \(d-1\).
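For instance, for the \(B\)-shift multiplet the dimension count is
\[\Delta_{BA_{\mu}}=\Delta_{A_{\mu}}-\Delta_{Q_{B}}=\Delta_{A_{\mu}}-\big(\Delta_{A_{\mu}}-(d-1)\big)=d-1\,,\]
using \([Q_{B},BA_{\mu}]=A_{\mu}\) and \(\Delta_{Q_{B}}=\Delta_{A_{\mu}}-(d-1)\) for the charge of the current \(A_{\mu}\); the \(\bar{c}\)-shift multiplet works the same way with \(A_{\mu}\to D_{\mu}c\).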
This example produces infinitely many interacting scale invariant but non-conformal field theories where the dimension of the virial current is protected by the shift symmetries.
### Fixed points not in the Landau gauge
The third example from [8] is a generalization of (5.5) from the Landau gauge to a general \(\xi\) gauge, i.e. adding the term \(-\frac{1}{2\xi}(B^{a})^{2}\) to the action. The gauge parameter \(\xi\) is then treated as a dimensionless coupling that runs under the RG flow. The Landau-gauge value \(\xi=\infty\) is always a fixed point. Depending on the theory, there may exist other fixed points of the gauge parameter \(\xi\). In QED, there is no fixed point other than the Landau gauge, but in non-Abelian theories such fixed points were found in [8].
The virial current is still given by Eqs. (5.6), (5.7). We would like to explain why its dimension is \(d-1\). Since \(\xi\neq\infty\), we no longer have the shift symmetry of \(B^{a}\), so the argument from the previous subsection does not apply. However, we can still give a robust argument, using the shift symmetry of \(\bar{c}^{a}\) in combination with the BRST invariance.
The argument is based on the following four equations:19
Footnote 19: Unlike in the previous subsection, the coefficient \(\Delta_{B}=2-\gamma_{A}\) of the virial current (5.6) remains at its classical value 2 here, because \(\gamma_{A}=0\) at this fixed point. This is because the form of the beta-function equation for \(\alpha=1/\xi\) is [52] \(\beta_{\alpha}=\alpha\gamma_{A}\). So the fixed points with \(\alpha=0\), like in the previous subsection, may have \(\gamma_{A}\neq 0\), while the fixed points with \(\alpha\neq 0\) must have \(\gamma_{A}=0\) [8].
\[V_{\mu} =[Q_{\rm BRST},\Delta_{B}\,\bar{c}\,A_{\mu}], \tag{112}\] \[A_{\mu} =\{Q_{\bar{c}},\bar{c}\,A_{\mu}\},\] (113) \[iD_{\mu}c =[Q_{\rm BRST},A_{\mu}],\] (114) \[\Delta_{Q_{\bar{c}}} =\Delta_{D_{\mu}c}-d+1, \tag{115}\]
where the first equation is (5.7), and the last equation follows since \(D_{\mu}c\) is the \(\bar{c}\)-shift current. Now we have:
\[\Delta_{V} \stackrel{(112)}{=}\Delta_{Q_{\rm BRST}}+\Delta_{\bar{c}A}\] \[\stackrel{(113)}{=}\Delta_{Q_{\rm BRST}}+\Delta_{A}-\Delta_{Q_{\bar{c}}}\] \[\stackrel{(114)}{=}\Delta_{D_{\mu}c}-\Delta_{Q_{\bar{c}}}\] \[\stackrel{(115)}{=}d-1\,. \tag{116}\]
The non-renormalization of the virial current operator at the Banks-Zaks fixed point was first addressed in [53] and reviewed in [8] (see also [54] in \(4-\varepsilon\) dimensions), based on the BRST analysis. We find the argument presented here much simpler and more transparent. The argument of this subsection also applies to the Landau-gauge fixed point of the previous subsection; we presented that case separately because the simpler argument based on the \(B\)-shift symmetry is available there.
### Crystalline membrane theory
We now turn to crystalline membrane theory, which describes a \(d\)-dimensional membrane fluctuating around its equilibrium flat configuration in the ambient \(D\)-dimensional space. The most physically interesting case is \(D=3\), \(d=2\). The Hamiltonian contains two fundamental fields \(u_{\mu}\) and \(h_{a}\), with indices \(\mu=1,\ldots,d\) parallel and \(a=1,\ldots,D-d\) orthogonal to the membrane [55]
\[\mathcal{H}=\frac{1}{2}\int d^{d}x\left[(\partial^{2}h_{a})^{2}+\lambda(u_{\mu \mu})^{2}+2\mu u_{\mu\nu}u_{\mu\nu}\right]\,, \tag{5.15}\]
where \(u_{\mu\nu}=\frac{1}{2}(\partial_{\mu}u_{\nu}+\partial_{\nu}u_{\mu}+\partial_ {\mu}h_{a}\partial_{\nu}h_{a})\). If we set \(h_{a}=0\), the membrane model reduces to the theory of elasticity, a Gaussian theory which was observed to be scale invariant but not conformal invariant by Riva and Cardy [9]. Below we focus on the interacting case. In this case there is an IR fixed point at non-zero values of the couplings \(\lambda\), \(\mu\), which can be studied in a perturbative expansion in \(d=4-\varepsilon\). This is very interesting, as it implies that long-distance correlations of membranes are characterized by nontrivial critical exponents. Moreover, Mauri and Katsnelson [7; 56]20 have recently shown that this fixed point is only scale invariant but not conformal. Therefore, together with the dipolar fixed point discussed in this paper, it is one of only two currently known experimentally relevant non-Gaussian examples of scale without conformal invariance.
Footnote 20: See also these works for a thorough review of prior work on the membrane fixed point.
Note that model (5.15), as all models described above, has a shift symmetry, which takes the form \(u_{\mu}\to u_{\mu}+\varepsilon_{\mu}\) with \(\varepsilon_{\mu}\) constant. The importance of this shift symmetry for controlling the renormalization structure of the model was already emphasized in [7]. Ref. [7] also discussed the non-renormalization of the virial current dimension, and shift symmetry played a role in that discussion as well, along with other considerations. Here we wish to show that \(\Delta_{V}=d-1\) can be understood in model (5.15), as in all previously described models, as a _direct_ consequence of shift symmetry, via our general mechanism. We will see however that in this model the mechanism operates with a twist compared to the simple Eq. (5.3).
The first step of the argument is to express the trace of the stress tensor and to find a virial current. One finds that the stress tensor contains terms proportional to the beta-functions, which vanish at the fixed point, as well as a virial current term [7]:
\[T_{\mu\mu}\big{|}_{\text{fixed point}}=-\partial_{\mu}V_{\mu}\,,\qquad V_{\mu }=V_{\mu}^{(0)}+\partial_{\nu}\mathcal{O}_{\mu\nu},\qquad V_{\mu}^{(0)}=k_{1}u _{\mu}u_{\nu\nu}+k_{2}u_{\nu}u_{\mu\nu}\,, \tag{5.16}\]
where the precise form of the improvable part \(\partial_{\nu}\mathcal{O}_{\mu\nu}\), and the values of the constants \(k_{1},k_{2}\) will not be important for us.
Next we would like to connect the unimprovable virial current \(V_{\mu}^{(0)}\) to the shift symmetry current, which is given by the expression
\[J_{\mu\nu}=\lambda u_{\alpha\alpha}\delta_{\mu\nu}+2\mu u_{\mu\nu}\,. \tag{111}\]
As in the dipolar model, the shift current in the membrane model acquires a non-trivial scaling dimension \(\Delta_{J}\). We can consider the trace part of the shift current and the traceless symmetric part. Although naively they do not mix under RG, they both should have the same scaling dimension, as a consequence of conservation of \(J\).21 We conclude that
Footnote 21: This is analogous to the trace and the symmetric traceless part of the stress tensor having the same scaling dimension at the dipolar fixed point, see the discussion in Appendix D.4.
\[\Delta(u_{\alpha\alpha})=\Delta(u_{\mu\nu}-\text{trace})=\Delta_{J}\,. \tag{112}\]
To run the general argument for \(\Delta_{V^{(0)}}=d-1\), we consider \([Q_{\nu},V_{\mu}^{(0)}]\). In Eq. (108), this was equal to the shift current itself. In the membrane model, because of the coefficients \(k_{1},k_{2}\), this is a linear combination of two fields in (112) which however both have the same dimension as the shift current (this is the twist alluded to above). Therefore the algebra works out the same, and we conclude that \(\Delta_{V^{(0)}}=d-1\).
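Explicitly, since \(u_{\mu\nu}\) is invariant under the constant shift \(u_{\mu}\to u_{\mu}+\varepsilon_{\mu}\),
\[[Q_{\nu},V_{\mu}^{(0)}]=k_{1}\,\delta_{\mu\nu}\,u_{\alpha\alpha}+k_{2}\,u_{\mu\nu}\,,\]
and both operators on the right-hand side have dimension \(\Delta_{J}\) by (112).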
### Gaussian curvature interaction model
Finally, let us consider the Gaussian curvature interaction (GCI) model [57], which provides an alternative description of crystalline membranes. This model is obtained in \(d=2\) by integrating out \(u_{\mu}\) in (5.15), and decoupling the resulting non-local interactions by introducing a Hubbard-Stratonovich field \(\chi\). The resulting effective Hamiltonian can be continued to arbitrary \(d\), and reads
\[\mathcal{H}=\int d^{d}x\left(\frac{1}{2}(\partial^{2}h_{a})^{2}+\frac{1}{2v}( \partial^{2}\chi)^{2}+\frac{i}{2}\chi(\partial^{2}h_{a}\partial^{2}h_{a}- \partial_{\mu}\partial_{\nu}h_{a}\partial_{\mu}\partial_{\nu}h_{a})\right)\,, \tag{113}\]
with \(v\) the coupling constant. The Hamiltonian for \(v=0\) reduces to two copies of biharmonic theory, and it is thus conformal [58]. Instead, when \(v\neq 0\) the coupling flows to a fixed point. This fixed point for \(d=2\) is equivalent to the one considered in the previous subsection, but for generic \(d\) it is distinct. Ref. [7] proved that this new fixed point also realizes scale without conformal invariance in any \(d\). They also discussed why the virial current dimension does not get renormalized.
The GCI model has a shift symmetry acting on the field \(\chi\) as
\[\chi\to\chi+a+b_{\mu}x_{\mu}\,, \tag{114}\]
where \(a\) is a constant scalar and \(b_{\mu}\) is a constant vector. We will refer to these as a "constant shift symmetry" and "linear shift symmetry".22 The linear shift symmetry is a new feature of the model (113) which was not present in other models discussed above.23 This shift
symmetry played a role, indirectly, in the discussion of the non-renormalization of \(\Delta_{V}\) in [7], as they connected it to improved UV properties of the model, so that certain loop diagrams were finite.
Here we would also like to connect the non-renormalization of \(\Delta_{V}\) to the linear shift symmetry. However, unlike [7], we would like to give an algebraic argument in the spirit of Eq. (111).
The virial current of the model has the form
\[V_{\mu}=k_{1}(\partial_{\mu}\chi)(\partial_{\nu}h_{a}\partial_{\nu}h_{a})+k_{2 }(\partial_{\nu}\chi)(\partial_{\nu}h_{a}\partial_{\mu}h_{a})+\partial_{\nu} \mathcal{O}_{\mu\nu}\,. \tag{115}\]
The coefficients \(k_{1}\) and \(k_{2}\) are determined at the fixed point after renormalization (see [7] for their renormalized expressions; we do not need them in what follows). The \(\mathcal{O}_{\mu\nu}\) in the improvable part of the virial current is given by
\[\mathcal{O}_{\mu\nu} =k_{3}\,\partial_{\mu}\chi\partial_{\nu}\chi+k_{4}\delta_{\mu\nu} (\partial_{\sigma}\chi\partial_{\sigma}\chi)\] \[+r_{1}\delta_{\mu\nu}\partial^{2}\chi+r_{2}\partial_{\mu} \partial_{\nu}\chi+r_{3}\partial_{\mu}h_{a}\partial_{\nu}h_{a}+r_{4}\delta_ {\mu\nu}(\partial_{\sigma}h_{a}\partial_{\sigma}h_{a})\,. \tag{116}\]
Without further conditions, all coefficients here are arbitrary.
Under the linear shift symmetry, the virial current changes by \(b_{\nu}[Q_{\nu},V_{\mu}]\), where
\[[Q_{\nu},V_{\mu}]=k_{1}\delta_{\mu\nu}(\partial_{\rho}h_{a}\partial_{\rho}h_{a}) +k_{2}(\partial_{\nu}h_{a}\partial_{\mu}h_{a})+k_{3}(\delta_{\mu\nu}\partial^{ 2}\chi+\partial_{\mu}\partial_{\nu}\chi)+2k_{4}\partial_{\mu}\partial_{\nu}\chi. \tag{117}\]
Note that \(\partial_{\nu}\mathcal{O}_{\mu\nu}\) terms corresponding to the second line of (116) are linear shift-invariant and don't contribute.
We would like to relate the pieces in the r.h.s. of this equation to the pieces of the linear shift symmetry current. The latter is given by
\[J_{\mu\nu}=x_{\nu}\partial_{\rho}K_{\mu\rho}-K_{\mu\nu}\,, \tag{118}\]
where \(Q_{\nu}=\int d\Sigma^{\mu}J_{\mu\nu}\) is the corresponding charge. The symmetric 2-tensor field \(K_{\mu\nu}\) enters the EOM for \(\chi\), which can be written as
\[\partial_{\mu}\partial_{\nu}K_{\mu\nu}=0\,. \tag{119}\]
This implies the conservation of \(J_{\mu\nu}\).
The explicit form of the field \(K_{\mu\nu}\), satisfying the "partial conservation law"24 (119) is (classically)
Footnote 24: Using the terminology of [62].
\[K_{\mu\nu}=\frac{1}{v}\partial_{\mu}\partial_{\nu}\chi-\frac{i}{2}(\delta_{ \mu\nu}\partial_{\rho}h_{a}\partial_{\rho}h_{a}-\partial_{\mu}h_{a}\partial_{ \nu}h_{a})+A(\partial_{\mu}\partial_{\nu}\chi-\delta_{\mu\nu}\partial^{2} \chi). \tag{120}\]
Here the coefficient \(A\) of the "improvement term" is arbitrary. We wish to fix it so that \(K_{\mu\nu}\) is a good scaling operator. In the following, we do not need to know the explicit value of \(A\). Note that, like for the shift current (109) from the previous section, the partial conservation of \(K_{\mu\nu}\) implies that the whole of \(K_{\mu\nu}\) will have the same scaling dimension,
i.e. the trace part \({\cal K}=K_{\mu\mu}\) and the traceless symmetric part \({\cal K}_{\mu\nu}\) of \(K_{\mu\nu}\) have the same dimension \(\Delta_{K}\).
We now consider vector operators on the first level of the linear shift symmetry multiplets built on top of \({\cal K}\) and \({\cal K}_{\mu\nu}\). These are defined as the operators \({\cal V}_{1,\mu}\) and \({\cal V}_{2,\mu}\) which have a well-defined scaling dimension and satisfy the equations:
\[[Q_{\nu},{\cal V}_{1,\mu}]=\delta_{\mu\nu}{\cal K}\,,\qquad[Q_{\nu},{\cal V}_{2,\mu}]={\cal K}_{\mu\nu}\,. \tag{108}\]
Since \({\cal V}_{1,\mu}\) and \({\cal V}_{2,\mu}\) are scaling operators, we conclude, by Eq. (104), that they both have dimension \(d-1\). Thus any linear combination
\[{\cal V}_{\mu}=p_{1}{\cal V}_{1,\mu}+p_{2}{\cal V}_{2,\mu} \tag{109}\]
is a candidate virial current.
What is the relation of this construction to the true virial current \(V_{\mu}\) given above, which transforms under \(Q_{\nu}\) as (105)? Let us choose the constants \(p_{1},p_{2}\) in \({\cal V}_{\mu}\) and the improvable terms \(k_{3},k_{4}\) in \(V_{\mu}\) so that
\[[Q_{\nu},V_{\mu}-{\cal V}_{\mu}]=0\,. \tag{110}\]
To achieve this, we first determine \(p_{1}\) and \(p_{2}\) from \(k_{1}\) and \(k_{2}\) by comparing the \(h\) dependent terms of (107) and (105). Then, we fix the shift non-invariant improvable terms in \(V_{\mu}\) (i.e. \(k_{3}\) and \(k_{4}\)) by comparing the \(\chi\) dependent terms. The shift-invariant part of \(V_{\mu}\) is left undetermined.
Now, Eq. (110) shows that the difference \(v_{\mu}:=V_{\mu}-{\cal V}_{\mu}\) is linear shift invariant. Inspecting all vector operators of the appropriate classical scaling dimension, of schematic form \(\partial^{3}\chi\), \(\partial\chi\partial^{2}\chi\) and \(\partial h\partial^{2}h\), it turns out that all such terms are improvable [7, Eq. (106)], namely of the form \(\partial_{\nu}{\cal O}_{\mu\nu}\) with \({\cal O}_{\mu\nu}\) in the second line of (104). Hence, we can improve the stress tensor, so that the improved virial current is \(V^{\prime}_{\mu}=V_{\mu}-v_{\mu}\). By construction, this final virial current satisfies \(V^{\prime}_{\mu}={\cal V}_{\mu}\), and hence \(\Delta_{V^{\prime}}=d-1\), completing the argument.
### Higher derivative shift symmetric scalar
So far in this section, we have discussed scale invariant but non-conformal theories proposed in the literature, verifying the non-renormalization of the virial current. As a further application, with novel predictions, let us study an interacting theory of a higher-derivative shift-symmetric scalar. We consider the action studied in [63]25
Footnote 25: A multi-component generalization of this model may be related to the membrane theories discussed above. See [64] for more details. The following discussion applies to their models, too.
\[S=\int d^{d}x\left(\frac{1}{2}(\partial^{2}\varphi)^{2}+g(\partial_{\mu} \varphi\partial_{\mu}\varphi)^{2}\right). \tag{111}\]
It is invariant under the constant shift \(\varphi\to\varphi+c\). In \(d=4\), the action is conformal invariant classically (this is broken by the RG flow), and in any dimension it is conformal invariant at the non-interacting fixed point \(g_{*}=0\). It is interesting to see whether the interacting fixed point in \(d=4-\varepsilon\) dimensions is scale invariant or conformal invariant. Note that the theory is non-unitary.
The existence of a non-trivial fixed point, located at \(g_{*}=O(\varepsilon)\) at one loop, was confirmed by the perturbative calculation of [63]. The scaling dimension of \(\varphi\) is \(\Delta_{\varphi}=\frac{d-4}{2}+\eta\), where the anomalous dimension starts at three loops: \(\eta=\frac{1}{25}\varepsilon^{3}+O(\varepsilon^{4})\) [63, Eq. (III.47)]. Is this interacting fixed point conformal invariant?
One of the results in [63] was that the fixed point is conformal up to one loop in perturbation theory. We will show here that it is only scale invariant once higher orders are taken into account. The crucial observation is that this theory has a shift symmetry generated by the conserved shift current \(J_{\mu}=\partial_{\mu}\partial^{2}\varphi-4g\partial_{\mu}\varphi\partial_{ \nu}\varphi\partial_{\nu}\varphi\). Note that unless \(g=0\) at the fixed point, the entire \(J_{\mu}\) cannot be written as a derivative of other local operators. From our general shift-symmetry argument, the dimension of \(J_{\mu}\) satisfies \(\Delta_{J}+\Delta_{\varphi}=d-1\), which implies \(\Delta_{J}=\frac{d+2}{2}-\eta\). The scaling dimension \(\Delta_{J}\) is _not_ \(d-1\) (unless \(d=4\)), violating a necessary condition for a conserved _primary_ current in conformal field theories. We conclude that the interacting fixed point cannot be conformal within perturbation theory.
Regarding the trace of the stress tensor, a calculation analogous to Section 4.2 shows there is a virial current given by \(V_{\mu}=-\Delta_{\varphi}\,\varphi J_{\mu}\). Thanks to the shift symmetry, our general argument shows that the scaling dimension of \(V_{\mu}\) is protected to be \(d-1\) exactly, while this operator is not conserved at the interacting fixed point. A general lesson here is that at interacting fixed points with shift symmetry, it is more natural that they are only scale invariant rather than conformal invariant.
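In this model the protection mechanism is again immediate: \(J_{\mu}\) is built only out of derivatives of \(\varphi\) and is therefore shift invariant, so (normalizing \([Q,\varphi]=1\))
\[[Q,V_{\mu}]=-\Delta_{\varphi}\,J_{\mu}\;\propto\;J_{\mu}\quad\Longrightarrow\quad\Delta_{V}=d-1\,.\]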
## 6 Conclusions
In this paper we discussed a fascinating RG fixed point which deserves to be more widely known - the dipolar fixed point of Aharony and Fisher, describing the phase transition in isotropic ferromagnets with strong dipole-dipole forces. Our interest in this fixed point was sparked by the realization that it provides an example of an interacting theory that is scale but not conformally invariant. Such examples are rare [8], and experimentally relevant ones are even rarer, the only other one occurring in the physics of fluctuating membranes [7].
One of the most pleasing conclusions of our work is a new insight into the role of a shift symmetry in protecting the virial current dimension from loop corrections due to interactions. Since the virial current \(V_{i}\) is mapped by the shift symmetry charge into the shift symmetry current, we naturally obtain \(\Delta_{V}=d-1\). Furthermore, by going through the list of other known interacting scale without conformal models, we found that all of them have a shift symmetry and protect the virial current dimension via the same mechanism or its small variation. While we do not have a proof, could it be that shift symmetry is a necessary feature of such models?
The shift symmetry is always spontaneously broken in the sense that there exists an operator \(\mathcal{O}\) whose variation is a constant (so that \(\langle\delta\mathcal{O}\rangle\neq 0\)). Usually, we expect a spontaneously broken global symmetry to lead to a massless Nambu-Goldstone boson, but in our examples this does not happen. Instead, the infrared theory is a scale-invariant fixed point that is non-trivially interacting, with anomalous dimensions. It is instructive to understand how the Nambu-Goldstone boson is avoided. The key point is that this is only possible in a free theory or in a non-unitary model. Indeed, all our examples were non-unitary.
In unitary (relativistic) quantum field theories, the momentum space 2pt function of the spontaneously broken current \(\langle J_{i}(q){\cal O}(-q)\rangle=i\frac{q_{i}}{q^{2}}\) implies the existence of a massless Nambu-Goldstone boson by inserting the complete momentum eigenstates. In particular, we then predict the existence of a \(1/q^{2}\) pole in the \(\langle{\cal O}(q){\cal O}(-q)\rangle\) 2pt function. This means that the IR theory is that of a free massless boson (which indeed has a shift symmetry).
On the contrary, the dipolar fixed point, where \({\cal O}\) is given by \(U\), avoids the existence of a \(1/q^{2}\) pole in \(\langle{\cal O}(q){\cal O}(-q)\rangle\). It is allowed to do so because it is non-unitary. Technically, the Lorentzian continuation should give rise to the structure of a Hilbert space with an indefinite metric, i.e. \(1=\sum_{n}|n\rangle\langle n|\) is replaced with \(1=\sum_{n,m}|n\rangle\eta^{nm}\langle m|\), where \(\eta^{nm}\) is not positive definite. The appearance of the indefinite metric \(\eta^{nm}\) avoids the usual argument.
It is still an open question if scale invariance implies conformal invariance in _interacting and unitary_ quantum field theories in three dimensions.26 As argued above, such theories cannot have a shift symmetry, so if they exist, the virial current dimension should be protected by another mechanism.
Footnote 26: In 4d, Refs. [21; 47] proved that a unitary scale-invariant theory _without dimension 2 scalars_ must be conformal. With dimension 2 scalars, they showed that the theory is either conformal, or the trace of the stress tensor must have the form \(T_{\mu\mu}=\partial^{2}{\cal A}+{\cal B}\) where \({\cal A}\) is a dimension 2 scalar, and \({\cal B}\) is a generalized free field of dimension 4. The latter loophole is still open to the best of our knowledge.
Going back to the dipolar fixed point, although some of its features were observed (as we reviewed in Section 3), more experimental studies are welcome. The dipolar critical exponents are relatively poorly known compared to the Heisenberg fixed point. Perturbative results are available only at three loops [31]. On the nonperturbative side, we mention preliminary computations of critical exponents using the functional RG [18]. Since the fixed point is not conformal, the conformal bootstrap [65] does not apply. This is then a good concrete model to think about developing bootstrap techniques in the absence of conformal invariance. Indeed, we still have the operator product expansion (OPE). In addition to scale invariance, the model possesses a shift symmetry. The shift multiplets, a notion which we introduced in Section 4.3, have a particular structure, and may play a role similar to the conformal multiplets in setting up the bootstrap calculation. It is important to ascertain if different operators in the same shift multiplets have their OPE coefficients related. A further hurdle is the lack of unitarity, hence an analog of Gliozzi's method [66] will be called for, instead of techniques based on positivity [67]. This is a hard but very interesting problem.
###### Acknowledgments.
SR thanks Viacheslav Krivorol for discussions related to Section 3. We thank Eric Perlmutter for a prescient question about an axiomatic definition of shift symmetry. SR and AGG
are supported by the Simons Foundation grant 733758 (Simons Bootstrap Collaboration) and AGG is also supported by the Simons Foundation grant 915279 (IHES). The work by YN is in part supported by JSPS KAKENHI Grant Number 21K03581.
## Appendix A Demagnetizing factor
In this appendix we explain Eq. (3.9) in the main text, needed to interpret all experimental papers measuring magnetic susceptibility. We normalize \(\phi\) as in (3.1), i.e. \(\phi=M\).
When a sample is put in an external magnetic field \(B_{i}^{(0)}\), which we assume uniform, it gets magnetized. Magnetization inside the sample \(\phi_{i}(x)\), \(x\in\Omega\), can be found by minimizing the Hamiltonian:
\[\int_{\Omega}\left(\frac{1}{2}a(\partial\phi_{i})^{2}+\frac{1}{2}b\phi_{i}^{2 }-\phi_{i}B_{i}^{(0)}\right)+\frac{1}{2}\iint_{x,y\in\Omega}U_{ij}(x-y)\phi_{i }(x)\phi_{j}(y),\] (A.1)
where the integration is over the sample \(\Omega\). Suppose we are above \(T_{c}\), then \(b>0\) and the quadratic form depending on \(\phi_{i}\) is positive definite, so the minimizer (with Neumann boundary conditions) exists and is unique. It solves the classical equation of motion:
\[-a\partial^{2}\phi_{i}+b\phi_{i}-B_{i}^{(0)}+\int_{y\in\Omega}U_{ij}(x-y)\phi _{j}(y)=0.\] (A.2)
The most important particular case arises when the magnetization is constant, in which case the first term in (A.2) drops out. There is a consistency condition for this to happen: the integral
\[\int_{y\in\Omega}U_{ij}(x-y)=4\pi D_{ij}\qquad(x\in\Omega)\] (A.3)
must be \(x\)-independent. Famously, this happens if the sample is ellipsoidal; the tensor \(D_{ij}\) is then diagonal, \(D_{ij}=\delta_{ij}D_{i}\), in the basis of the ellipsoid axes. The factors \(D_{i}\) are called demagnetizing factors. For an ellipsoid \(\frac{x_{1}^{2}}{a_{1}^{2}}+\frac{x_{2}^{2}}{a_{2}^{2}}+\frac{x_{3}^{2}}{a_{3} ^{2}}\leqslant 1\) they are given by (see e.g. section 4.18 in [68]; the solution relies on ellipsoidal coordinates, which are nicely reviewed in §4 of [69]):
\[D_{i}=\frac{a_{1}a_{2}a_{3}}{2}\int_{0}^{\infty}\frac{ds}{(s+a_{i}^{2})\sqrt{( s+a_{1}^{2})(s+a_{2}^{2})(s+a_{3}^{2})}}\;.\] (A.4)
In older literature [70; 71], the demagnetizing factors were expressed in terms of elliptic integrals and tabulated. However, for practical purposes it is easier nowadays to numerically integrate the definition in (A.4). In general we have a relation \(D_{1}+D_{2}+D_{3}=1\)[71], in particular \(D=1/3\) for the sphere.
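For instance, a minimal numerical sketch of (A.4) (using scipy; the non-spherical semi-axes below are arbitrary illustrative values) reproduces \(D=1/3\) for the sphere and the sum rule \(D_{1}+D_{2}+D_{3}=1\):

```python
import numpy as np
from scipy.integrate import quad

def demag_factors(a1, a2, a3):
    """Demagnetizing factors of an ellipsoid with semi-axes a1, a2, a3, Eq. (A.4)."""
    axes = np.array([a1, a2, a3], dtype=float)
    pref = a1 * a2 * a3 / 2.0

    def integrand(s, ai):
        return 1.0 / ((s + ai**2) * np.sqrt(np.prod(s + axes**2)))

    return np.array([pref * quad(integrand, 0.0, np.inf, args=(ai,))[0] for ai in axes])

print(demag_factors(1.0, 1.0, 1.0))        # sphere: [1/3, 1/3, 1/3]
D = demag_factors(1.0, 2.0, 5.0)           # an arbitrary ellipsoid
print(D, D.sum())                          # the sum rule gives D.sum() = 1
```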
It follows from the above discussion that for ellipsoids the magnetization is constant and is given by Eq. (3.9). If the shape is not ellipsoidal then one has to solve Eq. (A.2), including the first term, so the discussion becomes more complicated. In experiments one usually uses spherical samples. Sometimes cylindrical samples are used, but instead of solving Eq. (A.2) one approximates them by ellipsoids and pretends that the magnetization is constant.
In textbooks, magnetization phenomena are usually discussed in terms of the \(H\) field. Let us see how this is related to the above. Let \(B_{i}\) be the magnetic field produced by the magnetization \(\phi_{i}\), and \(B^{(0)}+B\) be the total field. We can consider the Hamiltonian in which the magnetic field \(B\) is explicit (see Eq. (8), where we have to rescale \(\phi_{i}\) to set \(z=1\), and drop the quartic term):
\[\int_{\Omega}\left(\frac{1}{2}a(\partial\phi)^{2}+\frac{1}{2}\tilde{b}\phi^{2} -\phi(B^{(0)}+B)+\frac{1}{8\pi}B^{2}\right), \tag{100}\]
where \(B=\nabla\times A\). Varying this Hamiltonian over \(A\) we get the equation
\[\nabla\times(B-4\pi\phi)=0, \tag{101}\]
which is usually written as \(\nabla\times H=0\), introducing the field \(H=B-4\pi\phi\).
We can also integrate out \(B\) from (100), and get an effective Hamiltonian just in terms of the \(\phi\) field. It is easy to show that this Hamiltonian takes the form (A.1) with an important mass shift (see Eq. (9) where we need to put \(z=1\))
\[b=\tilde{b}-4\pi. \tag{102}\]
The equation for \(H\) is usually solved by introducing magnetic potential \(U\) so that \(H=-\nabla U\). The condition that \(B\) is solenoidal then gives
\[\nabla^{2}U=4\pi\nabla\cdot\phi, \tag{103}\]
and see e.g. [72] for appropriate boundary conditions that \(U\) must satisfy on the boundary of the sample. For constant magnetization \(\phi\), one finds
\[U(x)=\int_{\partial\Omega}d^{2}x^{\prime}\frac{n^{\prime}\cdot\phi}{\left|x-x^ {\prime}\right|}, \tag{104}\]
which gives the \(H\) field inside the sample:
\[H_{i}(x)=-4\pi D_{ij}(x)\phi_{j},\qquad x\in\Omega, \tag{105}\]
\[D_{ij}(x)=\frac{1}{4\pi}\int_{\partial\Omega}d^{2}x^{\prime}\,\partial_{x_{i}} \frac{1}{\left|x-x^{\prime}\right|}\,n^{\prime}_{j}=\frac{1}{4\pi}\int_{\Omega} d^{3}x^{\prime}\,U_{ij}(x-x^{\prime}), \tag{106}\]
by Stokes' theorem. As we already discussed, for an ellipsoidal sample the last integral does not depend on \(x\) and agrees with the demagnetizing factor defined in (A.3).
Finally, let us derive Eq. (3.9). Extremizing the Hamiltonian (100) over \(\phi\) we get, assuming \(\phi\) and \(B\) are constant inside the sample:
\[\tilde{b}\phi_{i}=B_{i}^{(0)}+B_{i} \tag{107}\]
Substituting into this equation \(B=H+4\pi\phi\) and \(H_{i}=-4\pi D_{i}\phi_{i}\), and using the mass shift relation (102), we get precisely (3.9). Defining the total \(H\) field \(H^{t}=B^{(0)}+H\), Eq. (3.9) can also be written as
\[\phi=b^{-1}H^{t}. \tag{108}\]
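Spelling out the intermediate algebra (no sum over \(i\)):
\[\tilde{b}\,\phi_{i}=B_{i}^{(0)}+H_{i}+4\pi\phi_{i}=B_{i}^{(0)}-4\pi D_{i}\phi_{i}+4\pi\phi_{i}\quad\Longrightarrow\quad b\,\phi_{i}=B_{i}^{(0)}-4\pi D_{i}\phi_{i}=H_{i}^{t}\,.\]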
## Appendix B Experimental data
In this appendix we describe the experimental data used to extract Table 1.
Amplitude \(C\). This amplitude is measured from the static susceptibility above the Curie temperature \(\chi=\frac{\partial M}{\partial H^{t}}|_{H^{t}=0}\). Here \(H^{t}\) is the total \(H\) field in the sample, i.e. \(H^{t}=H^{(0)}+H\), where \(H^{(0)}=B^{(0)}\) is the applied field, and \(H=-4\pi DM\) is the field associated with the magnetization of the sample, see Appendix A for details. The susceptibility varies as \(\chi=Ct^{-\gamma}\) near the Curie point, which is consistent with (3.10) using Eq. (A.13). In this paper we work in Gaussian units, and must keep careful track of factors of \(4\pi\) compared to the literature.
EuS: We use [35] for the susceptibility near \(T_{c}=16.56\) K. They report their results in terms of normalized magnetization \(m=M/M_{0}\) and applied field \(h=H/H_{c}\). Thus, our amplitude \(C\) is related to the value \(\Gamma\) they report as \(C_{\rm EuS}=\Gamma M_{0}/H_{c}\), with the measured values \((\Gamma,4\pi M_{0},H_{c})_{\rm EuS}=(0.45,\,15.4\,\ldots)\)
### Model
Consider a system of atoms in a cubic lattice of type sc, bcc or fcc. The atom at point \(x\) has dipole moment \(\vec{m}=(m^{i})_{i=1,2,3}\) of magnitude \(|\vec{m}|=\mu\). The dipoles interact with a dynamical magnetic field \(\vec{B}\), so the partition function is
\[Z=\int D\vec{B}\,\prod_{x}d^{3}m_{x}\,\delta\big{(}\vec{m}_{x}^{2}-\mu^{2}\big{)} \exp\big{(}-\beta\mathcal{H}[m,B]\big{)}\,. \tag{108}\]
Recall that the energy felt by a dipole in a magnetic field is \(-\vec{m}\cdot\vec{B}\). At site \(x\) the total magnetic field is \(\vec{B}_{x}^{t}\equiv\vec{B}^{(0)}(x)+\vec{B}(x)\), where \(\vec{B}^{(0)}\) is a background field and \(\vec{B}\) is dynamical. In total, the electromagnetic part of the Hamiltonian is
\[\mathcal{H}_{\text{EM}}=-\sum_{x}\vec{m}_{x}\cdot\vec{B}_{x}^{t}+\frac{1}{8\pi }\int d^{3}x\,\vec{B}(x)^{2}\,. \tag{109}\]
Besides electromagnetic interactions, the dipoles experience short-range ferromagnetic interactions, which can be modeled with the Hamiltonian
\[\mathcal{H}_{\text{short-range}}=\frac{J}{4c}\sum_{x,\delta}(\vec{m}_{x}- \vec{m}_{x+\delta})^{2}-J\theta\sum_{x}\vec{m}_{x}^{2}\,. \tag{110}\]
Here \(J\) is the interaction strength, \(\delta\) runs over nearest neighbors, and \(c\) is the number of nearest neighbors. Because of the integration measure (108), the term proportional to \(\theta\) only changes the partition function by an overall normalization. We then expect that sensible predictions of our model should be independent of \(\theta\). We choose \(\theta>1\), which ensures that if we rewrite the interactions as
\[\mathcal{H}_{\text{short-range}}=-\frac{1}{2}\sum_{x,y}\vec{m}_{x}K_{xy}\vec{ m}_{y}\,, \tag{111}\]
then the quadratic form \(K_{xy}\) is positive definite, a property necessary for the Hubbard-Stratonovich transformation.
### Hubbard-Stratonovich transformation
To compare with the Hamiltonian (10), we need to express the partition function in terms of a coarse-grained magnetization that is not restricted by \(|\vec{m}|=\mu\). This is achieved with a Hubbard-Stratonovich transformation28
Footnote 28: See e.g. [78] for a detailed introduction.
\[Z=\int\prod_{x}d^{3}\lambda_{x}\,d^{3}m_{x}\,\delta\big{(}m_{x}^{2}-\mu^{2} \big{)}\exp\left(-\frac{\beta}{2}\sum_{x,y}\vec{\lambda}_{x}K_{xy}^{-1}\vec{ \lambda}_{y}+\beta\sum_{x}\vec{m}_{x}\cdot(\vec{\lambda}_{x}+\vec{B}_{x}^{t} )\right)\,. \tag{112}\]
For clarity, we ignore terms that only change the normalization of the partition function. We also omit the action for the dynamical magnetic field \(\vec{B}\), but we shall restore it at the end. Now we integrate over \(\vec{m}\) using
\[\frac{1}{4\pi}\int d^{3}m\,\delta\big{(}\vec{m}^{2}-1\big{)}\exp\big{(}\vec{m} \cdot\vec{v}\,\big{)}=\frac{\sinh|\vec{v}|}{|\vec{v}|}=\exp\left(\frac{1}{6} \vec{v}^{2}-\frac{1}{180}(\vec{v}^{2})^{2}+\ldots\right)\,, \tag{113}\]
where \(\vec{v}=\beta\mu(\vec{\lambda}+\vec{B}^{t})\). Since we are interested in a mean-field theory analysis, we drop quartic powers of \(\vec{v}\) and higher, so
\[\mathcal{H}=\frac{1}{2}\sum_{x,y}\vec{\lambda}_{x}K_{xy}^{-1}\vec{\lambda}_{y}- \frac{\beta\mu^{2}}{6}\sum_{x}(\vec{\lambda}_{x}+\vec{B}_{x}^{t})^{2}\,. \tag{102}\]
This depends on the field \(\vec{\lambda}\) that takes arbitrary real values, so we can interpret \(\vec{\lambda}\) as a coarse-grained field related to the magnetization, with the precise relation given below. That the last term in (102) comes with a negative coefficient is physically reasonable--lowering the temperature should have an ordering effect.
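As a quick consistency check of the expansion in (113), here is a minimal sympy sketch (with \(x\) standing for \(|\vec{v}|\)):

```python
import sympy as sp

x = sp.symbols('x')
# log of the single-dipole "partition function" sinh(x)/x, expanded at small x
print(sp.series(sp.log(sp.sinh(x) / x), x, 0, 6))
# -> x**2/6 - x**4/180 + O(x**6), matching the quadratic and quartic terms in (113)
```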
Now we should evaluate the inverse of the quadratic form \(K\). Before calculating the inverse, note that for \(\vec{\lambda}\) varying slowly compared to the lattice size, \(K\) acts as29
Footnote 29: We use the relation \(\sum_{\delta}(q\cdot\delta)(p\cdot\delta)=\frac{c}{3}\,\mathrm{a}^{2}\,q \cdot p\), which is valid for cubic lattices. Here \(c\) is the coordination number of the lattice, and the distance to nearest neighbors is \(|\delta|=\mathrm{a}\).
\[\sum_{x,y}\vec{\lambda}_{x}K_{xy}\vec{\lambda}_{y}=\int d^{3}x\left(\frac{2J \theta}{V}\vec{\lambda}^{2}-\frac{J\mathrm{a}^{2}}{6V}(\partial_{i}\vec{ \lambda})^{2}+O(\partial^{4}\lambda^{2})\right)\,. \tag{103}\]
In the previous equation, \(\mathrm{a}\) is the nearest-neighbor distance and \(V\) is the volume of the unit cell, which appears in the continuum limit \(\sum_{x}\to\int\frac{d^{3}x}{V}\). Now we can invert \(K\) using (103) and treating the kinetic term as a perturbation, so \((I-\varepsilon A)^{-1}\approx I+\varepsilon A\). This approximation is valid for sufficiently slow fluctuations, or in other words, the higher-order terms are irrelevant in the RG sense. Combining all the ingredients, we arrive at
\[\mathcal{H}=\int d^{3}x\left(\frac{1}{2}c_{1}(\partial_{i}\vec{\lambda})^{2}+ \frac{1}{2}c_{2}\vec{\lambda}^{2}-\frac{1}{2}c_{3}\big{(}\vec{\lambda}+\vec{ B}^{t}\big{)}^{2}\right)\,, \tag{104}\]
where \(c_{i}\) are given by
\[c_{1}=\frac{\mathrm{a}^{2}}{24\theta^{2}JV}\,,\qquad c_{2}=\frac{1}{2\theta JV }\,,\qquad c_{3}=\frac{\beta\mu^{2}}{3V}\,. \tag{105}\]
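The lattice identity quoted in footnote 29 is also easy to check numerically; a minimal sketch for the fcc case (12 nearest neighbors at \((\pm 1,\pm 1,0)\,\mathrm{a}'/2\) and permutations, so \(c=12\) and \(\mathrm{a}=\mathrm{a}'/\sqrt{2}\)):

```python
import itertools
import numpy as np

ap = 1.0  # fcc lattice constant a' (arbitrary units)
# the 12 nearest neighbors of a site in the fcc lattice
deltas = [np.array(v) * ap / 2
          for v in itertools.product((-1, 0, 1), repeat=3)
          if sorted(map(abs, v)) == [0, 1, 1]]
c, a = len(deltas), ap / np.sqrt(2)        # coordination number and nearest-neighbor distance
S = sum(np.outer(d, d) for d in deltas)    # sum over neighbors of delta_i delta_j
print(c, np.allclose(S, (c / 3) * a**2 * np.eye(3)))   # 12 True
```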
Now we redefine
\[\vec{\lambda}=p\vec{B}^{t}+q\vec{\phi}\,,\quad p=-1/(1+\sqrt{c_{2}/c_{3}})\,, \quad q=1/\sqrt{c_{2}c_{3}}\,. \tag{106}\]
Here \(p\) is chosen so that the term \((\vec{B}^{t})^{2}\) has zero coupling, and \(q\) so that the linear term \(-\vec{B}^{t}\cdot\vec{\phi}\) is unit normalized. This ensures that \(\vec{\phi}\) is the correct coarse-grained magnetization, because it satisfies \(\langle\vec{\phi}\rangle=\frac{1}{\beta}\frac{\delta\log Z}{\delta B(0)}\). The action in terms of \(\vec{\phi}\) takes the form
\[\mathcal{H}=\int d^{3}x\left(\frac{c_{1}}{2c_{2}c_{3}}(\partial_{i}\vec{\phi}) ^{2}+\frac{c_{2}-c_{3}}{2c_{2}c_{3}}\,\vec{\phi}^{2}-\vec{B}^{t}\cdot\vec{ \phi}\right)\,. \tag{107}\]
Here we ignored terms \(\partial_{i}\vec{B}^{t}=\partial_{i}\vec{B}^{(0)}+\partial_{i}\vec{B}\), justified for a sufficiently homogeneous external field and, for \(\partial_{i}\vec{B}\), because upon integrating out the magnetic field \(\vec{B}\), these terms will generate higher derivative interactions, which are more irrelevant in the RG sense.
Finally, we add the action \(\frac{1}{8\pi}\int\vec{B}^{2}\) for the dynamical magnetic field and integrate it out, as explained in Appendix A. This generates the long-range term \(\phi.U.\phi\), and adds a \(-4\pi\) correction to the mass. The effective Hamiltonian comes out to be
\[\mathcal{H}=\int d^{3}x\left(\frac{1}{2}a(\partial_{i}\vec{\phi})^{2}+\frac{1}{ 2}b\vec{\phi}^{2}-\vec{B}^{(0)}\cdot\vec{\phi}\right)+\frac{1}{2}\int d^{3}x\,d ^{3}y\,\phi^{i}(x)\phi^{j}(y)U_{ij}(x-y)\,, \tag{111}\]
where
\[a=\frac{\mathrm{a}^{2}V}{4\beta\theta\mu^{2}}\,,\qquad b=\frac{3V}{\beta\mu^{2 }}-2\theta JV-4\pi\,. \tag{112}\]
We see that our calculation gave an effective Hamiltonian of the same form as (10), (109) considered above. Note that the coefficient \(b\propto\beta^{-1}\) at \(\beta\ll 1\). This is in agreement with Curie's law, that the susceptibility \(\chi\propto 1/T\) at high temperatures.
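The algebra leading from (105) and the redefinition (106) to the amplitudes in (112) is straightforward to verify with sympy; a minimal sketch (the symbol names are ours):

```python
import sympy as sp

a_lat, V, J, theta, beta, mu = sp.symbols('a V J theta beta mu', positive=True)

# Eq. (105): coefficients of the lambda Hamiltonian
c1 = a_lat**2 / (24 * theta**2 * J * V)
c2 = 1 / (2 * theta * J * V)
c3 = beta * mu**2 / (3 * V)

# coefficients read off from Eq. (107), with the -4*pi mass shift from integrating out B
a_coeff = c1 / (c2 * c3)
b_coeff = (c2 - c3) / (c2 * c3) - 4 * sp.pi

print(sp.simplify(a_coeff - a_lat**2 * V / (4 * beta * theta * mu**2)))               # 0
print(sp.simplify(b_coeff - (3 * V / (beta * mu**2) - 2 * theta * J * V - 4 * sp.pi)))  # 0
```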
The identification of the correct coarse-grained magnetization \(\phi\) in terms of \(\lambda\) was crucial in the above line of reasoning. The derivation is robust and would work for any quadratic Hamiltonian density of the form \(k_{1}(\partial\lambda)^{2}+k_{2}\lambda^{2}+2k_{3}\lambda\cdot B^{t}+k_{4}(B^ {t})^{2}\) as long as the quadratic form \(k_{2}x^{2}+2k_{3}xy+k_{4}y^{2}\) is not sign-definite.
### Comparison to Europium compounds
We now apply these results to the ferromagnetic insulators EuS and EuO. These compounds form an fcc lattice of the rock salt type, so the nearest-neighbor distance is related to the lattice constant \(\mathrm{a}^{\prime}\) by \(\mathrm{a}=\mathrm{a}^{\prime}/\sqrt{2}\), and the volume of the unit cell is \(V=\mathrm{a}^{3}/\sqrt{2}\) (Fig. 2). As experimental inputs, we will use the lattice constant and the critical temperature of these materials:
\[\mathrm{EuS}:\quad\mathrm{a}^{\prime}=5.96\,\mathrm{\AA}\,,\quad T _{c}=16\,\mathrm{K},\] \[\mathrm{EuO}:\quad\mathrm{a}^{\prime}=5.14\,\mathrm{\AA}\,,\quad T _{c}=69\,\mathrm{K}. \tag{113}\]
In the europium compounds, the Eu\({}^{2+}\) ions are responsible for the dipole interactions. The europium ion has zero orbital angular momentum and spin \(S=7/2\), so the Landé \(g\)-factor is \(g=2\). The magnitude of the dipole moment is then \(\mu=g\mu_{B}\sqrt{S(S+1)}\), where \(\mu_{B}\) is the Bohr magneton.
Figure 2: Fcc lattice structure of EuX, X=S,O.
We start by estimating the coupling \(J\). For this we use that \(b(T_{c})=0\), where \(b\) is given in (112). We obtain:
\[J=\frac{3k_{B}T_{c}}{2\theta\mu^{2}}-\frac{2\pi}{\theta V}\,. \tag{107}\]
As a side comment, note that we can introduce a parameter \(\hat{g}\) that measures the relative shift of the critical temperature due to dipolar effects, namely:
\[\hat{g}=\frac{T_{c}-T_{c}^{\text{Heis.}}}{T_{c}}=\frac{4\pi\mu^{2}}{3k_{B}T_{c }V}\,, \tag{108}\]
where \(T_{c}^{\text{Heis.}}\) is defined as the temperature at which \(b^{\text{Heis.}}\) vanishes, \(b^{\text{Heis.}}\) being obtained by dropping from the expression for \(b\) the \(4\pi\) term due to dipolar effects. It is in this sense that the quantity \(\hat{g}\) was introduced by Aharony and Fisher [1], who used it to estimate the size of dipolar effects. (Note that the expression for \(\hat{g}\) in [1] is a factor of 3 smaller than ours, while another paper by the same authors [2] leads to a \(\hat{g}\) that is a factor of 3 larger than ours; see Remark C.1 below.) Using the experimental values (113), we evaluate
\[\hat{g}_{\text{EuS}}=0.19\,,\qquad\hat{g}_{\text{EuO}}=0.07\,. \tag{109}\]
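These numbers are easy to reproduce; a minimal sketch in Gaussian (CGS) units, using the inputs (113) and \(\mu=g\mu_{B}\sqrt{S(S+1)}\):

```python
import numpy as np

k_B  = 1.380649e-16       # Boltzmann constant, erg/K
mu_B = 9.2740100783e-21   # Bohr magneton, erg/G
g, S = 2.0, 3.5
mu = g * mu_B * np.sqrt(S * (S + 1))       # dipole moment of the Eu^2+ ion

for name, a_prime, T_c in [("EuS", 5.96e-8, 16.0), ("EuO", 5.14e-8, 69.0)]:  # cm, K
    V = a_prime**3 / 4                     # volume per Eu ion in the fcc lattice
    g_hat = 4 * np.pi * mu**2 / (3 * k_B * T_c * V)   # Eq. (108)
    print(name, round(g_hat, 2))
# -> EuS 0.19, EuO 0.07, as in (109)
```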
Knowing \(J\) we can find \(a\) and \(b\) in (112). Expanding near \(T_{c}\), we obtain
\[b=C^{-1}t,\qquad\xi=(a/b)^{1/2}=f^{+}t^{-1/2}, \tag{110}\]
with the following expressions for the critical amplitudes
\[C=\frac{\mu^{2}}{3k_{B}T_{c}V}\,,\qquad f^{+}=\frac{\text{a}}{2\sqrt{3\theta} }\,. \tag{111}\]
Recall that \(\theta>1\) is an arbitrary parameter that we introduced to make the quadratic form in (105) positive definite. If we could compute the partition function exactly, then \(\theta\) would only affect the overall normalization without changing physical observables. The fact that \(f^{+}\) depends on \(\theta\) suggests that this prediction has to be taken with a grain of salt, unlike the predictions for \(\hat{g}\) and \(T_{c}\), which are robust. Keeping this caveat in mind, we choose an arbitrary order-one value for \(\theta\). For example, using \(\theta=2\) we find
EuS: \[C=15\cdot 10^{-3}\,, f^{+}=0.77\,\text{\AA}\,,\] (112) EuO: \[C=5.6\cdot 10^{-3}\,, f^{+}=0.72\,\text{\AA}\,.\] (113)
The predictions for \(C\) are within \(<10\%\) error compared to the experimental values in Table 1, while the estimates for \(f^{+}\) are off by a factor of 2.
_Remark C.1_.: Reference [2] passes from the microscopic theory to the effective theory using an alternative method, which goes back to [22, 79]. Their method amounts to replacing the Heisenberg model integration measure, with its restriction \(\vec{m}^{2}=\mu^{2}\), by a centered Gaussian of width of order \(\mu\):
\[\delta(\vec{m}^{2}-\mu^{2})\to\exp\left(-\frac{w}{2\mu^{2}}\vec{m}^{2}+\dots\right)\,, \tag{114}\]
where \(w=1\) in [2], but we keep it general to see how it affects the final result. The \(\ldots\) includes some quartic interaction terms, which are not important for the present discussion.
After the replacement, the partition function becomes a Gaussian integral:
\[Z\to\int D\vec{B}\,\prod_{x}d^{3}m_{x}\,\exp\left(-\beta\mathcal{H}[m,B]-\frac{w }{2\mu^{2}}\sum_{x}\vec{m}_{x}^{2}+\ldots\right)\,. \tag{108}\]
It is now straightforward to take the continuum limit of the Hamiltonian using (107), and to integrate out the dynamical \(\vec{B}\) field. In this approach, the continuum limit of \(\vec{m}\) is directly the coarse-grained magnetization, with no additional rescaling needed. The effective Hamiltonian is of the form (10), (107), with effective parameters (AF stands for Aharony-Fisher)
\[a_{\rm AF}=\frac{\mathrm{a}^{2}JV}{6}\,,\qquad b_{\rm AF}=\frac{wV}{\beta\mu^{2 }}-2\theta JV-4\pi\,. \tag{109}\]
Here we also allowed for the ambiguity \(\theta\), of the same origin as in Section C.2. Ref. [2] does not discuss it and uses directly \(\theta=1/2\).
We now trade the short-range coupling \(J\) for the critical temperature \(T_{c}\). In solving for \(J\), we treat \(w\) and \(\theta\) as numbers independent of \(J\) and \(\beta\). All in all, we find
\[C_{\rm AF}=\frac{\mu^{2}}{wk_{B}T_{c}V}\,,\qquad f_{\rm AF}^{+}=\frac{\mathrm{a }}{2\sqrt{3\theta}}\,\sqrt{1-\hat{g}_{\rm AF}}\,,\qquad\hat{g}_{\rm AF}=\frac{ 4\pi\mu^{2}}{wk_{B}T_{c}V}\,. \tag{110}\]
Comparing these equations to (106) and (107), we see that the method of [2] does not agree with ours for the value \(w=1\) used in [2], while it would agree for \(w=3\), up to \(O(\hat{g})\) corrections in \(f^{+}\), which are small. Even this partial agreement is quite surprising, given that the replacement in (108) is rather ad hoc. It cannot be considered in any sense an _approximation_, as it does not even preserve the rough shape of the potential. Our approach based on the Hubbard-Stratonovich transformation seems better justified.
## Appendix D Trace of stress tensor in dipolar model
In this appendix we renormalize the dipolar model (105), using methods analogous to [80], putting the discussion of sections 4.2 and 4.3 on solid ground. To summarize the strategy, we construct a renormalized stress tensor, a renormalized virial current, and a renormalized shift charge. Furthermore, we show it is possible to improve both the stress tensor and the virial current to make them good scaling operators. Finally, we show that under shift symmetry the virial current maps to \(\phi_{i}\). By the discussion in Section 4.3, this implies the virial current has dimension \(\Delta_{V}=d-1\).
### Basic notation
We work with the Hamiltonian (105), except that we rename \(\lambda\to\lambda_{0}\). Throughout this appendix, \(\phi_{i}\) and \(U\) are bare fields and \(\lambda_{0}\) is the bare coupling. We denote renormalized
fields \([\mathcal{O}]\) and renormalized coupling \(\lambda\), namely
\[\phi_{i}=Z_{\phi}[\phi_{i}]\,,\quad U=Z_{U}[U]\,,\quad\lambda_{0}= \mu^{\varepsilon}\lambda\left(1+\frac{a_{11}\lambda+a_{12}\lambda^{2}+\dots}{ \varepsilon}+\frac{a_{22}\lambda^{2}+\dots}{\varepsilon^{2}}+\dots\right)\,. \tag{104}\]
The renormalization factors \(Z_{\mathcal{O}}\) and \(a_{ij}\) follow from requiring finiteness of all correlation functions of renormalized operators
\[G_{N,M}(x_{1},\dots,x_{N};y_{1},\dots,y_{M})=\langle[\phi_{i_{1}}](x_{1})\dots [\phi_{i_{N}}](x_{N})\,[U](y_{1})\dots[U](y_{M})\rangle\,. \tag{105}\]
From the fact that \(\langle\phi_{i}(x)\,U(y)\rangle\) does not receive perturbative corrections, as discussed below equation (13), we conclude that \(Z_{U}=Z_{\phi}^{-1}\).
The beta function is \(\beta(\lambda)=\mu\frac{d\lambda}{d\mu}\) and the anomalous dimensions are \(\gamma_{\mathcal{O}}=\mu\frac{d\log Z_{\mathcal{O}}}{d\mu}\). Unlike in the main text, we keep track of equations of motion (EOM)
\[E_{i}\equiv\frac{\delta\tilde{\mathcal{H}}}{\delta\phi_{i}}=- \partial_{j}f_{ji}+\partial_{i}U+\lambda_{0}(\phi_{k}^{2})\phi_{i}\,,\qquad E \equiv\frac{\delta\tilde{\mathcal{H}}}{\delta U}=-\partial_{i}\phi_{i}\,, \tag{106}\]
since they are necessary to show that correlation functions satisfy scaling Ward identities at the fixed point.
The stress tensor, defined by the formula \(T^{ij}=-2\frac{\delta\tilde{\mathcal{H}}}{\delta g_{ij}}\) (see note 13), reads
\[T_{ij}=f_{ik}f_{jk}+\phi_{i}\partial_{j}U+\phi_{j}\partial_{i}U+ \lambda_{0}\phi_{k}^{2}\phi_{i}\phi_{j}-\delta_{ij}\left(\frac{1}{4}f_{kl}^{2 }+\phi_{k}\partial_{k}U+\frac{\lambda_{0}}{4}\phi^{4}\right)\,. \tag{107}\]
As expected, it is conserved up to EOM:
\[\partial_{i}T_{ij}=-E_{i}\,\partial_{j}\phi_{i}-E\,\partial_{j}U+\partial_{i} \big{(}E_{i}\,\phi_{j}\big{)}\,. \tag{108}\]
The trace of the stress tensor works out to be (compare Eq. (4.16))
\[T_{ii} =-\frac{\varepsilon}{4}\lambda_{0}\phi^{4}+\frac{\varepsilon}{2} \,\phi_{i}E_{i}-\frac{d}{2}\,UE-\partial_{i}V_{i}\,, \tag{109}\] \[V_{i} =\frac{d}{2}\phi_{i}U+\frac{\varepsilon}{2}\phi_{i}E-\frac{ \varepsilon}{2}\partial_{j}\left(\frac{1}{2}\phi^{2}\delta_{ij}-\phi_{i}\phi_ {j}\right)\,. \tag{110}\]
Expressed in terms of bare fields, \(T_{ij}\) should be thought of as a bare stress tensor. Below we will discuss how to make it finite. The first step is to express \(T_{ij}\) and \(V_{i}\) in terms of renormalized couplings and renormalized operators.
### Composite operators
To organize the operators, note that the Hamiltonian (4.14) enjoys a \(\mathbb{Z}_{2}\) symmetry, under which \(\phi_{i}\) and \(U\) are both odd. There is also a shift symmetry \(U(x)\to U(x)+u\) for constant \(u\), with associated current \(\phi_{i}\). We discussed this extra symmetry in Section 4.3. The renormalized shift charge is defined by integrating the renormalized current \([\phi_{i}]\). Thus we have the relation
\[Q=Z_{\phi}[Q] \tag{111}\]
between the bare and renormalized shift charges.
Note that operators neutral under shift symmetry only mix with neutral operators, while operators charged under shift symmetry can mix with everything. Once all operators that can mix under renormalization are identified, then the precise renormalization factors should be determined from the requirement that correlators with composite operator insertions should be finite. More precisely, Green's functions of the form
\[G_{N,M}\big{(}x_{1},\ldots,x_{N};y_{1},\ldots,y_{M};[\mathcal{O} ](x)\big{)}=\langle[\phi_{i_{1}}](x_{1})\ldots[U](y_{1})\ldots[\mathcal{O}](x )\rangle\,, \tag{111}\]
should be finite. For EOM terms this criterion immediately shows that they do not renormalize
\[\phi_{i}E_{i}=[\phi_{i}E_{i}]\,,\qquad UE=[UE]\,. \tag{112}\]
This is because the insertions of \(\phi_{i}E_{i}\) and of \(UE\) into correlation functions simply generate \(\delta\)-functions at the positions of \(\phi_{i}\)'s and of \(U\)'s, respectively, as in [80], Eq. (111). A similar argument shows that
\[\phi_{i}E=(Z_{\phi})^{2}[\phi_{i}E]\,. \tag{113}\]
Let us start renormalizing the simplest composite operators, namely \(\phi_{i}^{2}\) and \(\Phi_{ij}:=\phi_{i}\phi_{j}-\frac{\delta_{ij}}{d}\phi_{k}^{2}\). Note that these are the only scalar and symmetric traceless operators even under \(\mathbb{Z}_{2}\), neutral under the shift symmetry, and of 4d dimension \(\Delta_{4d}=2\). As a result, they can only get multiplicatively renormalized \(\phi_{i}^{2}=Z_{\phi^{2}}[\phi_{i}^{2}]\), \(\Phi_{ij}=Z_{\Phi}[\Phi_{ij}]\).
The next case of interest is the renormalization of \(U\phi_{i}\). In this case, mixing occurs with vector operators that are \(\mathbb{Z}_{2}\)-even, either charged or neutral under shift symmetry, and of 4d dimension \(\Delta_{4d}=3\). A basis of linearly independent operators is \(\{U\phi_{i},\phi_{i}\partial_{j}\phi_{j},\partial_{i}\phi^{2},\partial_{j}( \phi_{i}\phi_{j})\}\). So \(U\phi_{i}\) will be a linear combination of the corresponding renormalized operators:
\[U\phi_{i}=(1+\hat{c}_{1})\left[U\phi_{i}\right]+\hat{c}_{2} \left[\phi_{i}E\right]+\hat{c}_{3}\,\partial_{i}\big{[}\phi^{2}\big{]}+\hat{c }_{4}\,\partial_{j}\big{[}\Phi_{ij}\big{]}\,. \tag{114}\]
Here \(\hat{c}_{i}=\hat{c}_{i}(\lambda,\varepsilon)\) are counterterms that make the renormalized operators finite, which have an ascending series of poles in \(\varepsilon\) starting at \(O(\varepsilon^{-1})\). All such quantities below will carry a hat. The last two terms are total derivatives, so they only modify the virial current by improvement terms. We can further argue that \(\hat{c}_{1}=0\). For this let us act by \([Q]\) on both sides of (114). Using (110) we obtain:
\[[\phi_{i}]=(1+\hat{c}_{1})\left[[Q],\big{[}U\phi_{i}\big{]}\right]\,, \tag{115}\]
where we used the fact that all operators but the first in the r.h.s. of (114) are shift-invariant. Since the l.h.s. of (115) is finite, the r.h.s. must be finite as well, hence \(\hat{c}_{1}=0\).
The other case of interest is the renormalization of \(\phi^{4}\). In this case, we need a basis of \(\Delta_{4d}=4\) operators, which are shift neutral and \(\mathbb{Z}_{2}\) even:
\[\{\phi^{4},\phi_{i}\partial_{i}U,\phi_{i}\partial_{i}\partial_{j }\phi_{j},\phi_{i}\partial^{2}\phi_{i},\partial_{i}(\phi_{i}\partial_{j}\phi_ {j}),\partial^{2}\phi^{2},\partial_{i}\partial_{j}(\phi_{i}\phi_{j})\}\,. \tag{116}\]
Note that one linear combination of the operators in (116) corresponds to \(\phi_{i}E_{i}\), where \(E_{i}\) is the EOM for \(\phi_{i}\). Similarly, we identify terms containing \(\partial_{i}\phi_{i}\) with \(E\), the EOM for \(U\). Finally, the term \(\phi_{i}\partial_{i}U\) is related to the finite operator \(UE\) by integration by parts, so it must have a finite integral. As a result, it can only get infinite contributions that are total derivatives:
\[\phi_{i}\partial_{i}U=[\phi_{i}\partial_{i}U]+\hat{q}_{2}\,\partial_{i}[\phi_{i }E]+\hat{q}_{3}\,\partial^{2}[\phi^{2}]+\hat{q}_{4}\,\partial_{i}\partial_{j}[ \Phi_{ij}]\,. \tag{146}\]
Again \(\hat{q}_{i}=\hat{q}_{i}(\lambda,\varepsilon)\) are ascending series of poles in \(\varepsilon\) starting at \(O(\varepsilon^{-1})\). Comparing to (114), we can rearrange the equation as
\[[\phi_{i}\partial_{i}U]-[EU]-\partial_{i}[U\phi_{i}]=(\hat{c}_{2}-\hat{q}_{2} )\,\partial_{i}[\phi_{i}E]+(\hat{c}_{3}-\hat{q}_{3})\,\partial^{2}[\phi^{2}] +(\hat{c}_{4}-\hat{q}_{4})\,\partial_{i}\partial_{j}[\Phi_{ij}]\,. \tag{147}\]
Since the left-hand side is finite and the right-hand side goes like \(O(\varepsilon^{-1})\), the right-hand side must vanish. We conclude that renormalization preserves integration by parts
\[[\phi_{i}\partial_{i}U]=[EU]+\partial_{i}[U\phi_{i}]\,, \tag{148}\]
a fact that will be useful below.
After these technical remarks, we see that the most general form of the renormalized quartic field is
\[\frac{\lambda_{0}}{4}\phi^{4}=\Big{(}1+\hat{k}_{1}\Big{)}\,\frac {\mu^{\varepsilon}\lambda}{4}\big{[}\phi^{4}\big{]}+\hat{k}_{2}\,[\phi_{i}E_ {i}]+\hat{k}_{3}\,[\phi_{i}\partial_{i}U]+\hat{k}_{4}\,[\phi_{i}\partial_{i}E ]+\hat{k}_{5}\,\partial_{i}[\phi_{i}E]\] \[+\hat{k}_{6}\,\partial^{2}[\phi^{2}]+\hat{k}_{7}\,\partial_{i} \partial_{j}[\Phi_{ij}]\,. \tag{149}\]
We will see in a second that \(\hat{k}_{1},\dots,\hat{k}_{4}\) only have a simple pole in \(\varepsilon\), which is a consequence of requiring that \(\partial_{\lambda}G_{N,M}\) is finite. A similar trick was used in [80], Eq. (3.19). Using the chain rule, the derivative \(\partial_{\lambda_{0}}G_{N,M}\) inserts an integrated bare operator \(\int\phi^{4}\). As a result of the integration, all total derivative terms drop out, hence \(\hat{k}_{5},\hat{k}_{6},\hat{k}_{7}\) cannot be determined by this trick but need an explicit computation of the divergence which we will not do.
As to the trick, the precise calculation is analogous to [80], and one finds30
Footnote 30: We point out a minor difference in our notation from [80]. Ref. [80] denotes by \(\beta(\lambda)\) the 4d part of the beta function, while the beta function in \(d=4-\varepsilon\) is given by \(\mu\frac{d}{d\mu}\lambda|_{\text{Brown}}=-\varepsilon\lambda+\beta(\lambda)\). On the other hand, we denote by \(\beta(\lambda)\) the full beta function in \(d=4-\varepsilon\): \(\mu\frac{d}{d\mu}\lambda|_{\text{here}}=\beta(\lambda)\).
\[\hat{k}_{1}=-\frac{\beta(\lambda)+\varepsilon\lambda}{\lambda\,\varepsilon}\,, \qquad\hat{k}_{2}=-\hat{k}_{3}=\frac{\gamma_{\phi}}{\varepsilon}\,,\qquad\hat {k}_{4}=0\,, \tag{150}\]
which also used integration by parts as in (148). All in all, the renormalization of the quartic operator reads
\[\frac{\lambda_{0}}{4}\phi^{4}=-\frac{\mu^{\varepsilon}\beta(\lambda)}{4 \varepsilon}\big{[}\phi^{4}\big{]}+\frac{\gamma_{\phi}}{\varepsilon}\,[\phi_{ i}E_{i}]-\frac{\gamma_{\phi}}{\varepsilon}\,[\phi_{i}\partial_{i}U]+\,\hat{k}_{5}, \hat{k}_{6},\hat{k}_{7}\text{ terms}\,. \tag{151}\]
Using this equation and (114), we can finally express the trace (109) of the stress tensor in terms of renormalized operators. The result looks more elegant using a further integration by parts with (148), giving
\[T_{ii}=\frac{\mu^{\varepsilon}\beta(\lambda)}{4}[\phi^{4}]-(\Delta_{\phi}-1)\, [\phi_{i}E_{i}]-\Delta_{U}[UE]-\Delta_{U}\partial_{i}[U\phi_{i}]-p\,\partial _{i}[\phi_{i}E]-\partial_{i}\partial_{j}\mathcal{O}_{ij}\,. \tag{152}\]
The scaling dimensions are defined in (14), the improvement is \(\mathcal{O}_{ij}=a[\phi^{2}]\delta_{ij}+b[\Phi_{ij}]\), and furthermore
\[p=\frac{d}{2}\hat{c}_{2}+\frac{\varepsilon}{2}(Z_{\phi})^{2}+ \varepsilon\hat{k}_{5}\,, \tag{114}\] \[a=\frac{d}{2}\hat{c}_{3}-\frac{\varepsilon}{2}\left(\frac{1}{2} -\frac{1}{d}\right)Z_{\phi^{2}}+\varepsilon\hat{k}_{6},\qquad b=\frac{d}{2} \hat{c}_{4}+\frac{\varepsilon}{2}Z_{\Phi}+\varepsilon\hat{k}_{7}\,. \tag{115}\]
In expressing \(p\) we also used (113). It will follow from the next subsection that \(p\) is finite (this is not clear from above).
### Finiteness of stress tensor
Let us discuss the structure of counterterms for the stress tensor and the virial current. As already mentioned \(T_{ij}\) is a bare stress tensor, and it is not in general finite. We see that its trace (114), expressed in terms of renormalized fields, involves coefficients \(p,a,b\) which are potentially singular as \(\varepsilon\to 0\).
However, all fields in the divergence of \(T_{ij}\) (111), proportional to EOM, are in fact finite (compare [80], (3.29)):
\[\partial_{i}T_{ij}\sim\text{EOM}\quad\text{(finite)}\,. \tag{116}\]
This strongly constrains possible divergences of \(T_{ij}\).
Similarly to how we expressed \(T_{ii}\) in terms of finite operators, we could do the same for the remaining symmetric traceless part of \(T_{ij}\), see e.g. the analysis in [80] for the \(\phi^{4}\) case. Note that the symmetric traceless and trace parts of \(T_{ij}\) do not mix under renormalization, so that analysis would not affect the renormalization of \(T_{ii}\) that we already discussed. To save time, we will avoid renormalizing here the symmetric traceless part of \(T_{ij}\) explicitly. However, by now it should not be a surprise that this can be done.
We can then split \(T_{ij}\) into a finite piece, which we call \([T_{ij}]\), and a divergent piece that we call \(\hat{R}_{ij}\):
\[T_{ij}=[T_{ij}]+\hat{R}_{ij}\,. \tag{117}\]
The requirement that \(\partial_{i}(T_{ij}-[T_{ij}])\) should be finite implies that \(\partial_{i}\hat{R}_{ij}=0\), where this conservation does not rely on EOM. To begin with, this implies that the Poincare charges \(P_{i}\), \(M_{ij}\) constructed by integrating \(T_{ij}\) and \([T_{ij}]\) coincide. The divergent piece \(\hat{R}_{ij}\) drops out from them - the Poincare charges are finite.
Furthermore, \(\hat{R}_{ij}\) as any symmetric 2-tensor field satisfying the condition \(\partial_{i}\hat{R}_{ij}=0\) can be written in the form:
\[\hat{R}_{ij}=\partial_{k}\partial_{l}\hat{Y}_{[ik][jl]}\,, \tag{118}\]
where \(\hat{Y}_{[ik][jl]}\) has symmetries of the Riemann tensor, i.e. is a field antisymmetric in \(ik\) and in \(jl\) and symmetric under the exchange of these two groups of indices. For \(c\)-number tensor fields i.e. smooth mappings \(\mathbb{R}^{d}\to\mathbb{R}\) this follows by using the Poincare lemma twice, together with the (anti)symmetry of the involved fields, see Exercise 5 of Chapter 4 in
[81].31 One could worry that perhaps \(\hat{Y}\) is a non-local function of the fields (e.g. [83], Eq. (4.7) and below). However, this worry is unfounded. The point is that the Poincare lemma remains valid in the space of local field, a fact known as "algebraic Poincare lemma" (see [84], Theorem 4.2). Hence the same argument shows that \(\hat{Y}_{[ik][jl]}\) is a local field, i.e. can be built out of products of \(U\), \(\phi_{i}\) and their derivatives.
Footnote 31: It is also a partial case of the (dualized) generalized Poincaré lemma for mixed-symmetry tensors (see [82], Eq. (7)).
We can check this explicitly for our dipolar model. We can make the most general ansatz for \(\hat{R}_{ij}\) consisting of all rank-two symmetric operators of \(4d\) dimension \(\Delta_{4d}=4\) being \(\mathbb{Z}_{2}\) even and shift invariant. Requiring conservation, we find that \(\hat{Y}_{[ik][jl]}\) is a linear combination of two building blocks
\[(\delta_{il}\delta_{jk}-\delta_{ij}\delta_{kl})\phi^{2}\,,\qquad\delta_{il} \Phi_{jk}\pm(\text{3 permutations})\,,\] (D.27)
and it is indeed local.
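As a quick check, take the first building block in (D.27): it gives
\[\hat{R}_{ij}=\partial_{k}\partial_{l}\big[(\delta_{il}\delta_{jk}-\delta_{ij}\delta_{kl})\phi^{2}\big]=\partial_{i}\partial_{j}\phi^{2}-\delta_{ij}\,\partial^{2}\phi^{2}\,,\qquad\partial_{i}\hat{R}_{ij}=\partial^{2}\partial_{j}\phi^{2}-\partial_{j}\partial^{2}\phi^{2}=0\,,\]
so the resulting contribution to \(\hat{R}_{ij}\) is symmetric and conserved identically, without using the EOM; the \(\Phi_{jk}\) building block works analogously.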
An important consequence of equation (D.26) is that all divergent contributions to the bare stress tensor must contain two total derivatives. Since the \(p\) term in (D.21) contains only one total derivative, its coefficient \(p\) must be finite.
### Building scaling operators
Up to now, we have explained how to obtain a finite stress tensor and virial current. We now show how to make them good scaling operators. For the sake of clarity, in this section we drop square brackets around renormalized operators, e.g. \([T_{ij}]\to T_{ij}\), although all operators are finite.
The argument to make the stress tensor a scaling operator is well known [19],[14; 21]. One starts from the most general form of the commutation of the dilatation operator \(D\) and the stress tensor:
\[[D,T_{ij}]=x_{m}\partial_{m}T_{ij}+d\,T_{ij}+y_{a}\partial_{k}\partial_{l}Y^{ a}_{ikjl}\,.\] (D.28)
Here \(Y^{a}_{ikjl}\) is a complete set of operators with the symmetries of the Riemann tensor (excluding operators such that \(\partial_{k}\partial_{l}Y_{ikjl}=0\)) such that \(\partial_{k}\partial_{l}Y_{ikjl}\) can mix with the stress tensor. In perturbation theory these are operators of 4d scaling dimension 2.
The operators \(Y_{ikjl}\) themselves generically mix under dilatation
\[[D,Y^{a}_{ikjl}]=x_{m}\partial_{m}Y^{a}_{ikjl}+\Delta_{ab}Y^{b}_{ikjl}\,.\] (D.29)
With this information in mind, we can perform a finite improvement32
Footnote 32: Ref. [19] has a mistake in the sign before \(\Delta\) in the following equation. The correct sign, here as in [14; 21], requires changes in the subsequent argument.
\[T_{ij}\to T_{ij}+y^{a}(d-2-\Delta)^{-1}_{ab}\partial_{k}\partial_{l}Y^{b}_{ ikjl}\,,\] (D.30)
such that the new stress tensor is a good scaling operator, or in other words
\[[D,T_{ij}]=x_{m}\partial_{m}T_{ij}+d\,T_{ij}\,.\] (D.31)
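The algebra behind this statement is short. From (D.29), \([D,\partial_{k}\partial_{l}Y^{a}_{ikjl}]=x_{m}\partial_{m}\partial_{k}\partial_{l}Y^{a}_{ikjl}+(\Delta+2)_{ab}\,\partial_{k}\partial_{l}Y^{b}_{ikjl}\), each derivative raising the dimension by one. Writing the improvement coefficient as \(c_{b}=y^{a}(d-2-\Delta)^{-1}_{ab}\), the improved tensor obeys
\[[D,T_{ij}+c_{b}\,\partial_{k}\partial_{l}Y^{b}_{ikjl}]=x_{m}\partial_{m}\big(T_{ij}+c_{b}\,\partial_{k}\partial_{l}Y^{b}_{ikjl}\big)+d\,T_{ij}+\big[y_{c}+c_{b}(\Delta+2)_{bc}\big]\,\partial_{k}\partial_{l}Y^{c}_{ikjl}\,,\]
and \(y_{c}+c_{b}(\Delta+2)_{bc}=c_{b}\big[(d-2-\Delta)+(\Delta+2)\big]_{bc}=d\,c_{c}\), which is exactly what is needed for (D.31).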
The only caveat is that improvement (D.30) is valid provided the matrix \(\Delta_{ab}\) does not have any eigenvalue \(\Delta=d-2\). In our theory, there are two candidate improvements (101). The dimension \(\Delta_{\phi^{2}}\) is known at two loops [30], while we computed \(\Delta_{\Phi_{ij}}\) at one loop using conformal perturbation theory
\[\Delta_{\phi^{2}}=2-\frac{8\varepsilon}{17}+\frac{1441\varepsilon^{2}}{14739}+O (\varepsilon^{3})\,,\qquad\Delta_{\Phi_{ij}}=2-\frac{44\varepsilon}{51}+O( \varepsilon^{2})\,. \tag{115}\]
Since neither dimension is exactly \(d-2\), we can always improve \(T_{ij}\) to be a good scaling operator.
Let's imagine we already performed improvement (D.30). Then taking the trace of (D.31) gives the most general consistent commutation relation for the virial current33
Footnote 33: In more general theories possessing conserved global symmetry currents of dimension \(d-1\), those could also appear in the right-hand side. Then, one may not be able to define a good scaling virial current operator. Physically, this would be the effect of mixing between scale transformation and the global symmetry transformation under the RG flow. In our dipolar model, there are no conserved global symmetry currents of dimension \(d-1\), the only conserved current being the shift symmetry current \(\phi_{i}\), which cannot appear in the right-hand side because of the \(\mathbb{Z}_{2}\) symmetry and because it does not have the right classical dimension.
\[[D,V_{i}]=x_{m}\partial_{m}V_{i}+(d-1)V_{i}+w_{a}\partial_{j}A^{a}_{ij}\,. \tag{116}\]
Here \(A^{a}_{ij}=-A^{a}_{ji}\) is a basis of antisymmetric operators. As before, the basis behaves under dilations as
\[[D,A^{a}_{ij}]=x_{m}\partial_{m}A^{a}_{ij}+\widehat{\Delta}_{ab} A^{b}_{ij}\,. \tag{117}\]
With this information, we can make \(V_{i}\) a good scaling operator using the freedom to transform \(V_{i}\) in a way that preserves \(T_{ii}=-\partial_{i}V_{i}\). The right improvement is
\[V_{i}\to V_{i}+w^{a}(d-2-\widehat{\Delta})^{-1}_{ab} \partial_{j}A^{b}_{ij}\,. \tag{118}\]
In the case of the dipolar model, there is no candidate antisymmetric tensor \(A^{a}_{ij}\) with the right dimension, so the virial current improvement is unnecessary, and we do not need to discuss whether \(\widehat{\Delta}_{ab}\) has eigenvalues \(d-2\). However, for other models this discussion might be necessary.
### Summary
To wrap up the discussion, sections 2 and 3 show that we can find a finite stress tensor \([T_{ij}]\) that generates the Poincare symmetry charges. Furthermore, Section 4 shows that it is possible to choose suitable improvements such that both \([T_{ij}]\) and \([V_{i}]\) are scaling operators, of dimensions \(d\) and \(d-1\). Combining the results, the trace of the stress tensor is
\[\delta_{ij}[T_{ij}] =\frac{\mu^{\varepsilon}\beta(\lambda)}{4}[\phi^{4}]-(\Delta_{\phi}-1)\left[\phi_{i}E_{i}\right]-\Delta_{U}[UE]-\partial_{i}[V_{i}]\,, \tag{119}\] \[[V_{i}] =\Delta_{U}[U\phi_{i}]+p\,[\phi_{i}E]+q\,\partial_{i}[\phi^{2}]+r\,\partial_{j}[\Phi_{ij}]\,. \tag{120}\]
Recall that \(p\), \(q\) and \(r\) are not determined from our analysis, except they are finite constants. These operators have all the desired properties discussed in Section 4.3. Indeed, the trace at the fixed point contains the virial current, and the virial current is mapped to \([\phi_{i}]\) under shift symmetry:
\[\delta_{ij}[T_{ij}]\big{|}_{\text{fixed point}}=-\partial_{i}[V_{i}]+\text{ EOM}\,,\qquad\big{[}[Q],[V_{i}]\big{]}\propto[\phi_{i}]\,. \tag{103}\]
As explained in Section 4.3, the latter equation is responsible for explaining the "paradox" of why the virial current does not acquire anomalous dimension.
### Scaling Ward identity
Although it is not strictly necessary for us, before concluding the appendix we derive the Ward identity for scale invariance. We construct the scale current \(D_{i}=x^{j}[T_{ij}]+[V_{i}]\), which satisfies the conservation equation
\[\partial_{i}D_{i} =\delta^{ij}[T_{ij}]+\partial_{i}[V_{i}]+x^{j}\partial_{i}[T_{ij}] \tag{104}\]
The last total-derivative term, which was generated by integrating by parts, does not contribute to the Ward identities, which follow from
\[\int d^{d}x\,G_{N,M}\big{(}x_{1},\ldots,x_{N};y_{1},\ldots,y_{M};\partial_{i}D _{i}(x)\big{)}=0\,. \tag{105}\]
This is evaluated by recalling that the EOM acts in correlation functions as
\[\Big{\langle}\frac{\delta\tilde{\mathcal{H}}}{\delta\mathcal{O}(x)}\mathcal{ O}_{1}(x_{1})\ldots\mathcal{O}_{n}(x_{n})\Big{\rangle}=\sum_{i=1}^{n}\Big{\langle} \mathcal{O}_{1}(x_{1})\ldots\frac{\delta\mathcal{O}_{i}(x_{i})}{\delta \mathcal{O}(x)}\ldots\mathcal{O}_{n}(x_{n})\Big{\rangle}\,, \tag{106}\]
where \(\mathcal{O}\) is either of \(\phi_{i}\) or \(U\). At the end of the day, we find the expected scaling Ward identity:
\[\Bigg{[}\!\sum_{a=1}^{n}\bigg{(}x_{a}^{i}\frac{\partial}{\partial x_{a}^{i}} +\Delta_{\phi}\bigg{)}+\sum_{b=1}^{m}\bigg{(}y_{b}^{i}\frac{\partial}{ \partial y_{b}^{i}}+\Delta_{U}\bigg{)}\Bigg{]}G_{N,M}=\frac{\mu^{\varepsilon} \beta(\lambda)}{4}\int d^{d}x\,G_{N,M}\big{(}[\phi^{4}](x)\big{)}\,. \tag{107}\]
For compactness we dropped the arguments \(x_{a}^{i}\) and \(y_{b}^{i}\) on the correlators \(G_{N,M}\).
|
2301.02336 | Exploring Levels of Control for a Navigation Assistant for Blind
Travelers | Only a small percentage of blind and low-vision people use traditional
mobility aids such as a cane or a guide dog. Various assistive technologies
have been proposed to address the limitations of traditional mobility aids.
These devices often give either the user or the device majority of the control.
In this work, we explore how varying levels of control affect the users' sense
of agency, trust in the device, confidence, and successful navigation. We
present Glide, a novel mobility aid with two modes for control: Glide-directed
and User-directed. We employ Glide in a study (N=9) in which blind or
low-vision participants used both modes to navigate through an indoor
environment. Overall, participants found that Glide was easy to use and learn.
Most participants trusted Glide despite its current limitations, and their
confidence and performance increased as they continued to use Glide. Users'
control mode preference varied in different situations; no single mode "won" in
all situations. | Vinitha Ranganeni, Mike Sinclair, Eyal Ofek, Amos Miller, Jonathan Campbell, Andrey Kolobov, Edward Cutrell | 2023-01-05T23:55:49Z | http://arxiv.org/abs/2301.02336v1 | # Exploring Levels of Control
###### Abstract
Only a small percentage of blind and low-vision people use traditional mobility aids such as a cane or a guide dog. Various assistive technologies have been proposed to address the limitations of traditional mobility aids. These devices often give either the user or the device majority of the control. In this work, we explore how varying levels of control affect the users' sense of agency, trust in the device, confidence, and successful navigation. We present Glide, a novel mobility aid with two modes for control: Glide-directed and User-directed. We employ Glide in a study (N=9) in which blind or low-vision participants used both modes to navigate through an indoor environment. Overall, participants found that Glide was easy to use and learn. Most participants trusted Glide despite its current limitations, and their confidence and performance increased as they continued to use Glide. Users' control mode preferences varied in different situations; no single mode "won" in all situations.
assistive navigation, robotics, user study
## CCS CONCEPTS
\(\bullet\) **Accessible technologies: Computer systems.**
**ACM Reference Format:**
Vinitha Ranganeni, Mike Sinclair, Eyal Ofek, Amos Miller, Jonathan Campbell, Andrey Kolobov, and Edward Cutrell. 2023. Exploring Levels of Control for a Navigation Assistant for Blind Travelers. In _Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI '23), March 13-16, 2023, Stockholm, Sweden. ACM, New York, NY, USA, 9 pages. [https://doi.org/10.1145/3568162.3578630](https://doi.org/10.1145/3568162.3578630)
## 1. Introduction
As of 2016, approximately 7 million people in the US reported being blind or having low vision (BLVI) (Kanganeni et al., 2017). For these people, white canes and guide dogs are the only primary mobility aids. White canes come in direct contact with the area immediately ahead of their user, enabling the user to sense the environment, assess differences in materials, and detect obstacles. Guide dogs lead the user around obstacles and are capable of global navigation in familiar settings. The user can interact with the dog and suggest different walking directions. Despite empowering many people, these mobility aids require long training to master and use confidently, and even then the risk of losing one's way remains substantial. The difficulty of safe navigation can also adversely impact a person's _self_-confidence. As a result, of the estimated 7 million BLVI in the US, **only 2% to 8% use a white cane**(Kanganeni et al., 2017). **Only about 2% of the remaining individuals use guide dogs**(Kanganeni et al., 2017). About 90% of BLVI have a high dependency on sighted assistance and/or confine their lives to a limited set of locations and activities, in many cases solely to their homes. Prior efforts to automate blind navigation, such as smartphone-based navigation applications and motorized robots, so far have failed to change this reality.
In this work, we conjecture that a mobility aid's helpfulness depends on a heretofore understudied factor -- the level of control it offers to its user (Fig. 1). White canes and smartphone navigation programs leave full navigational control to the person, but the amount of information they give to the user may not be enough for local decision making. Guide dogs and motorized robots are closer to the other end of the control spectrum, but a BLVI person following them may feel a loss of agency in the process. To experiment with different levels of shared control, we developed **Glide** (Fig. 2), a novel mobility aid designed to safely steer users to their destination with a haptic vocabulary that conveys a tactile sense of the surface it rolls on while enabling the user to set the walking pace. Glide is designed to be light and portable. Besides the practicality of users carrying it over stairs or bringing it with them, users can manipulate the device to their liking to increase the sense of control.
Glide uses passive kinetic guidance through a pole with a handle connected to a small mobile platform with steerable and brakeable wheels (Fig. 2). The wheels are non-motorized and require the user to push the device in front of them. Glide's sensors allow it to identify the user's location and both static and dynamic obstacles. Glide uses this information to guide the user around obstacles or engage the brakes to make the user stop. Glide's handle connects the user's grip to the wheeled platform, conveying the ground's tactile information. The handle is equipped with an array of haptic actuators that serve as another channel of communication with the user (e.g., to slow down when approaching the goal). This enhances the user's understanding of their surroundings and helps them build a mental map of their environment.
We experimented with two modes of operation of Glide, using increasing levels of autonomy (Fig. 3). In the **Glide-directed** mode, the user pushes Glide forward, while Glide steers their walking direction to their desired destination while avoiding obstacles. In the **User-directed** mode, Glide waits for the user's directional input at decision points, such as junctions in a hallway, and then steers them in their desired direction of travel to the next decision point while avoiding obstacles. We tested Glide in an indoor office building, where turns are mostly limited to 90 degrees, with straight corridors between them.
We conducted a user study with nine BLVI people to evaluate the users' progression through the two control modes and the overall user experience. More specifically, we wanted to understand the users' level of trust in Glide, their level of confidence when using Glide, Glide's ease of use, learnability and whether users' performance improved as they used Glide.
Our findings show that Glide was easy to learn, and users found both modes easy to use. Most participants trusted Glide to avoid obstacles but limitations in the system's current ability to react quickly impacted other participants' trust in the device. Overall, most users were confident when using Glide and their confidence level increased as they continued to use the device. Additionally, users' preferences for modes varied. Most users' preference for a mode was situational. Few users stated that they strongly preferred one mode over the other.
## 2. Related Works
We discuss Orientation & Mobility training (O&M), conventional mobility aids and their limitations. We then discuss various assistive technologies that have been developed to address the limitations of traditional mobility aids.
Figure 2. A breakdown of the components of Glide and its processor. The user can twist the handle to indicate desired direction of travel (input to torque sensor) and pushes the robot forward (input to encoder). Glide outputs haptic feedback, steers the user and applies/releases the brakes.
### Orientation & Mobility Training
The purpose of Orientation & Mobility training (O&M) is to teach people with visual impairments how to travel safely through a variety of environments (Beng et al., 2017). This training helps people learn how to use other senses to gain new information about their surroundings and navigate safely. Some techniques taught include maintaining a straight line of travel and classifying objects into obstacles, clues, landmarks, or hazards. For a more detailed discussion on O&M training, see (Beng et al., 2017; Chen et al., 2018; Chen et al., 2019).
O&M skills are used in conjunction with primary mobility aids such as a white can or guide dog, each of which requires substantial training to use effectively. Typically, users must be trained for more than 100 hours to become skilled with the white cane. Effective use of a guide dog depends on the user being competent with a cane and having established O&M skills. It can take up to 6 months to start working with a new guide dog, and their working life is typically 6 to 7 years.
There are additional limitations, mostly due to the local nature of the aids. While a cane can be used to sense the immediate vicinity of a user, it relies on the user's familiarity with the environment to navigate to a destination. A dog can help guide the user to a limited number of known locations while avoiding obstacles.
### Assistive Technologies for Blind Users
Over the years researchers have developed a variety of assistive technologies to aid people with visual impairments by detecting and avoiding obstacles, improving orientation and virtual wayfinding (Han et al., 2017). Explorations have included many diverse types of devices with varying levels of autonomy.
The most readily available assistive technologies supplement O&M skills. These include commercial smartphone applications such as Google Maps 1, which visually identifies landmarks, provides directions, and helps the user regain orientation, and Soundscape 2, which helps the user navigate using spatial audio that enhances their awareness of their surroundings and the direction to their destination. Both applications use GPS for localization and hence are limited to outdoor navigation. Researchers have also presented various types of navigation systems that provide turn-by-turn navigation assistance to help blind users walk to their destination (Beng et al., 2017; Chen et al., 2018; Chen et al., 2019). Most of these systems, however, are unaware of obstacles that were not in the initial map of the environment. To help blind users avoid collisions, traditional mobility aids have been augmented with sensors to detect obstacles with non-contact sensing. WeWalk 3 is a commercial device that attaches to a cane, detects obstacles, and provides GPS navigation. Previous augmented white canes detected obstacles in front of the user (Chen et al., 2018) or at trunk and head level (Han et al., 2017). While these devices can alert a pedestrian of obstacles, the user must still avoid obstacles by themselves.
Footnote 1: [https://www.google.com/maps/preview](https://www.google.com/maps/preview)
Footnote 2: [https://www.microsoft.com/en-us/research/product/soundscape/](https://www.microsoft.com/en-us/research/product/soundscape/)
Footnote 3: [https://wwwalk.io/en/](https://wwwalk.io/en/)
Researchers have proposed replacing traditional mobility aids with wearable devices that alert the user of obstacles and navigate them to their destination using haptic (Kayukawa et al., 2019) or audio (Chen et al., 2018) feedback. These devices achieve hands-free navigation but require custom interfaces and can be too heavy or cumbersome to wear or hold. Robotic navigational aids can be an alternative to wearable devices. CaBot (Chen et al., 2018) is a fully autonomous suitcase-like robot that guides users to a destination. This system, however, lacks shared control with the user, as the user follows a motorized robot's direction and pace. Kayukawa et. al (2019) developed a similar platform but the user can choose to enable an autonomous mode that navigates them with speech or directional guidance around obstacles.
Figure 3. (Top) The Glide-directed mode shows the user being guided around a corner while avoiding a pillar. (Bottom) In the User-directed mode the user twists the handle to the left and Glide guides the user along a left turn.
### Navigation Through Intersections
Navigation through intersections has been previously explored by, e.g., Kuribayashi et. al. (Kuribayashi et al., 2018), who proposed a smartphone-based app that provides an obstacle-avoiding path and intersection detection. This work, however, requires the user to hold both a phone and cane at the same time and may not be suitable for all blind travelers. Lacey et. al. (Lacey et al., 2018) proposed PAM-AID, a "smart walker" that aims to assist the elderly BLVI to walk safely indoors. They use a Bayesian network approach that combines sensor information with user input (three buttons for moving forward, left, or right) which activate autonomous robot control in the desired direction. While this is similar to our User-directed mode, the authors do not explore varying levels of control.
### Cane-like Navigation Assistants
GuideCane (GuideCane, 2010) and the Robotic Cane (Bogorty et al., 2013) each propose a pole attached to a mobile robot base with passive wheels, allowing the user to push the device as it steers around obstacles. Neither device provides autonomous navigation to a set goal, but both guide the user in avoiding obstacles. The user specifies the desired walking direction by pressing directional buttons on the cane.
Augmented Cane (Guide et al., 2011) is a white cane with an omni-wheel at the tip that allows the user to control the forward speed while the device steers. This device provides obstacle avoidance, indoor/outdoor navigation, and object localization, yet it is reported as heavy and does not allow the user to input their desired direction of travel. The Co-Robotic Cane (Cane, 2012) has a rolling tip and two operational modes: an active mode that steers the user to the desired direction of travel while avoiding obstacles and a passive mode where the device behaves as a white cane but also provides speech feedback on the desired travel direction and obstacle information. The device detects the human intent and automatically switches between modes but does not provide autonomous indoor navigation.
Inspired by these works, we propose the Glide navigation assistant: portable and lightweight, with a flexible control scheme enabling both autonomous indoor navigation and obstacle avoidance, as well as manual pace control and intuitive walking-direction control by twisting the handle. We use Glide to explore distinct levels of control for navigation aids.
## 3. The Glide System Design
### Hardware
Glide is a robotic device with a user-centered design. We designed a passive robot, pushed by the user, to reduce the physical workload while carrying a sufficient payload. This design also saves a significant amount of power and weight by not having to carry the extra battery power required to power the wheels. Additionally, users' walking pace may rapidly change based on their environmental stimuli. Our goal for this design is to eliminate the sense of being dragged by the device by giving the user control over their movement. Another important design choice was making Glide lightweight and portable. This enables users to carry it if needed (e.g. stairs), and easily pull, rotate and direct it as they wish.
Glide has a pole connected to a small platform, that rolls on the floor, using steerable, brake-able, and unpowered wheels. The user holds the handle connected to the pole (Fig. 2). The user pushes Glide in front of them, and can feel, through the handle, the wheels following any irregularities in the terrain. There is also a linear array of six vibrotactile actuators (ERM) in the handle to render intuitive haptic symbols to the user's hand. In contrast to a cane form factor (Guide et al., 2011), Glide's base, roughly 9-by-9 inches in size, rolls on the ground and supports the majority of the weight. The total weight of Glide is approximately 3 pounds.
Glide's sensors enable it to navigate through the environment and detect obstacles. Glide is equipped with an IMU and wheel encoders for computing odometry, a Realsense D435i camera for sensing obstacles, servos for braking and steering, vibrotactile actuators for providing haptic feedback and a Jetson Xavier NX for onboard processing. Glide also has a Teensy 4.1 Arduino processor for handling a torque sensor that is used for sensing the user's directional input by twisting the handle and the haptic actuators.
Figure 4. Explored shared-control schemes. Glide-directed mode: (left) The user walks, pushing Glide, and their walking direction is set by Glide to follow the blue path to the goal point. User-directed mode: (right) The user is asked to provide Glide a direction at decision points (colored paths shows options available to the user).
Rendering feedback to the user in the form of simple vibrotactile spatial patterns in the handle can help reduce cognitive load (Krause et al., 2017), is robust to noisy environments, and does not load the user's auditory sensing. There are six vibrotactile actuators spread across the handle with three vibration patterns: the left three actuators vibrate when the user twists the handle to the left, the right three vibrate when the user twists the handle to the right and all the vibrators will actuate to indicate that the user should slow down. Many other intuitive haptic symbols are possible, depending on the situation.
To summarize, the user must push Glide for it to move (input to the encoder) and can input their desired direction of travel by twisting the handle (input to torque sensor). Glide will steer the user (output to steering servo), engage or disengage brakes (binary output to brake servos) and provide haptic feedback to the user (output to vibrotactile actuators).
### Navigation System Design
Glide uses ROS2 (Robot Operating System), an open-source platform that provides software for robot control. We specifically use Nav2 within ROS2 for navigation. Glide computes odometry by fusing sensor data from wheel encoders, an IMU and a RGBD camera odometry node. It uses the adaptive Monte Carlo localization (AMCL) package for localization and Regulated Pure Pursuit planner for local planning. We rely on a pre-rendered floor plan of our building as a global map. While Nav2 allows for a global planner to be integrated, for the purpose of the study we generated the global plans offline. While Glide's navigation software runs in a closed loop system, the inclusion of a human in the loop requires a special design. The device cannot move by itself, and requires the user to push it. If the user inputs their desired direction of travel, the device's navigation system has to change its global plan to take the user's feedback into account (see Fig. 2).
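To make the local-planning step concrete, the following simplified Python sketch shows the kind of computation a pure-pursuit controller performs to turn the current pose and the global plan into a steering command for Glide's front wheels. It is only an illustration of the idea behind Nav2's Regulated Pure Pursuit plugin, not our implementation, and the lookahead distance and wheelbase are placeholder values.

```python
import math

def pure_pursuit_steering(pose, path, lookahead=0.6, wheelbase=0.3):
    """Compute a steering angle (radians) that bends the walking direction toward the plan.

    pose: (x, y, heading) of Glide in the map frame.
    path: list of (x, y) waypoints from the global plan.
    lookahead and wheelbase are illustrative values in meters, not Glide's real parameters.
    """
    x, y, heading = pose

    # Pick the first waypoint at least `lookahead` meters away from the robot.
    target = path[-1]
    for wx, wy in path:
        if math.hypot(wx - x, wy - y) >= lookahead:
            target = (wx, wy)
            break

    # Express the target in the robot frame.
    dx, dy = target[0] - x, target[1] - y
    local_x = math.cos(-heading) * dx - math.sin(-heading) * dy
    local_y = math.sin(-heading) * dx + math.cos(-heading) * dy

    if local_x <= 0.0:
        # Target is behind the robot (e.g. the user overshot a turn): steer hard toward it.
        return math.copysign(math.radians(30), local_y)

    # Pure-pursuit curvature and the corresponding front-wheel angle (bicycle model).
    curvature = 2.0 * local_y / (local_x ** 2 + local_y ** 2)
    return math.atan(wheelbase * curvature)
```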
## 4. Modes of Glide
### Glide-Directed
One skill taught during O&M training is route planning: learning how to get information about your destination and how to get there. A part of this skill is first building a mental map of your environment. Initial navigation through unfamiliar environments, without a mental map, while using a cane or guide dog can be challenging. The Glide-directed experience intends to alleviate that challenge by guiding a user from their current location to their destination while avoiding obstacles. In this mode, Glide generates a global plan to the goal. As the user pushes Glide, the local controller steers them to follow the global plan and avoid any obstacles. When the user is 2 meters from the goal, six vibrotactile actuators provide feedback to the user to slow down and the brakes engage when the user has reached the goal. Note, this mode is similar to the functionality provided by the Augmented Cane and CaBot; however, our feedback modalities are different.
Note, for the purpose of the study we preset the destination. User goal setting is outside the scope of this work as our goal was to evaluate the effectiveness of Glide steering the user to their destination, not find the optimal interaction modality for goal setting. In future work, we will explore interaction modalities for goal setting.
### User-Directed
Another skill taught during O&M training is independent movement: using landmarks and clues to help the person know where they are along a particular route. This helps people with visual impairments learn new routes. In this mode, we verbally inform the user of which directions are available when they reach a junction (in the future, Glide will provide this information). By providing a description of the junction, the user can learn unfamiliar routes and later choose to navigate with their primary mobility aid instead of Glide. This mode is similar to the GuideCane, Robotic Cane and Co-Robotic Cane in that the user has control over their desired direction of travel but different in that it allows users to navigate through indoor environments to specific known destinations.
\begin{table}
\begin{tabular}{c c c c c c}
**ID** & **Age** & **Gender** & **Vision Level** & **Impairment Duration** & **Primary Mobility Aid(s)** \\ \hline
**P1** & 40 & Male & No functional vision & \textgreater{}10 years & White cane \\ \hline
**P2** & 56 & Female & No functional vision; some light perception & \textgreater{}10 years & White cane and guide dog \\ \hline
**P3** & 65 & Male & No vision & \textgreater{}10 years & White cane and guide dog \\ \hline
**P4** & 56 & Male & No vision & \textgreater{}10 years & White cane and guide dog \\ \hline
**P5** & 45 & Male & No vision & \textgreater{}10 years & White cane \\ \hline
**P6** & 19 & Female & Legally blind; no peripheral vision; poor depth perception; no night vision & \textgreater{}10 years & White cane \\ \hline
**P7** & 80 & Male & No functional vision & \textgreater{}10 years & Walker \\ \hline
**P8** & 27 & Female & No functional vision; some light perception & \textgreater{}10 years & White cane \\ \hline
**P9** & 30 & Female & No vision & 5 years & White cane \\ \hline \end{tabular}
\end{table}
Table 1. Demographic information of study participants
When the user reaches a junction, Glide will engage the brakes, causing the user to stop. The user can then select one of three directions: left, right, or forward. The user can twist the handle left or right to indicate which direction they would like to turn. We map the values from the torque sensor on the handle to two discrete directions (left and right) during the handle twist. When Glide receives a handle twist command, it will return a haptic notification from the left three vibrotactile actuators for a left twist or the right three for a right twist, acknowledging the user input. After the vibrotactile feedback is provided, the brakes disengage and Glide plans a global path to the next junction. The user can begin walking and the controller will steer them to follow the global plan. There are three types of junctions the user can encounter (an illustrative sketch of this junction logic follows the list):
* **T-junction**: Turning left or right are the only options. The brakes will remain engaged until the user twists the handle left or right.
* **L-junction**: Going straight and either left or right are the only options. The user can begin walking forward without an explicit input as Glide defaults to a forward global plan to the next junction (if one exists). If the user twists the handle in a direction where there is no feasible path, the brakes will engage for a fixed duration and then disengage allowing user to walk forward if they would like to.
* **Four-way junction**: The user can go straight, left or right.
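The sketch below illustrates, in simplified Python, how this junction-handling logic could be organized. The function and device names (`engage_brakes`, `read_handle_twist`, `pulse_haptics`, `plan_to_next_junction`) are hypothetical placeholders for the torque-sensor, brake-servo, haptic, and planning interfaces described above, and the brake-hold duration is an arbitrary value; the sketch is illustrative rather than our actual implementation.

```python
from enum import Enum

class Junction(Enum):
    T_JUNCTION = "T"        # left/right only
    L_JUNCTION = "L"        # forward plus one of left/right
    FOUR_WAY = "4-way"      # forward, left, or right

def on_junction_reached(junction, feasible, glide):
    """Brake at a junction, wait for a handle twist, and plan the next corridor segment.

    `feasible` is the set of directions with a valid path, e.g. {"left", "forward"}.
    `glide` is a hypothetical device object wrapping the brakes, haptics, torque sensor,
    and planner.
    """
    glide.engage_brakes()

    # A T-junction requires an explicit twist; other junctions default to "forward"
    # if the user gives no input and simply starts pushing again.
    must_twist = junction is Junction.T_JUNCTION
    twist = glide.read_handle_twist(block=must_twist)   # "left", "right", or None

    if twist in feasible:
        glide.pulse_haptics(side=twist)      # acknowledge the chosen side on the handle
        direction = twist
    elif twist is not None:
        # Twist toward a direction with no feasible path (e.g. into a wall at an
        # L-junction): hold the brakes for a fixed duration, then allow forward travel.
        glide.hold_brakes(seconds=2.0)
        direction = "forward"
    else:
        direction = "forward"

    glide.plan_to_next_junction(direction)   # new global plan up to the next junction
    glide.release_brakes()                   # the user resumes pushing; the controller steers
    return direction
```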
## 5. User Study
We conducted a user study with 9 BLVI participants. The main goals were to 1) understand if users trusted Glide, 2) understand if users were confident when using Glide, 3) evaluate if users' performance increased over time (i.e., reduction in the number of errors) and 4) understand if Glide was easy-to-use and learn.
### Participants
We recruited 9 participants who are blind or low-vision to be part of this study (Table 1). The inclusion criteria to be a participant in this study was to have no functional vision. Most participants described themselves as confident travelers on familiar routes _(Mdn=5, SD=0.48)_ and not confident travelers on unfamiliar routes _(Mdn=2, SD=0.66)_, ranging from _Not at all Confident_ (1) to _Extremely Confident_ (5). We acknowledge that a sample size of 9 is too small for a quantitative study so we conducted a qualitative study that focuses on relaying the unquantifiable experiences of the participants.
### Study Design
Our study was approved by our Institutional Review Board (IRB). We first obtained informed consent from all participants, provided an overview of the study, and explained how to operate Glide. We informed the user that an experimenter would always be close by to guarantee their safety and would only intervene if necessary.
We divided the study into three sections. In the first section our goal was to evaluate the Glide-directed mode, where the user was guided by Glide from a fixed starting position to predefined goal destination. Each user completed the walking course three times (Fig. 4). Due to a current limitation of the system, we advised participants to walk at a slow pace. We told them to "walk as though they were placing one foot in front of the other."
We recorded whether they completed the course and the time it took them to complete the course. Additionally, we measured the number of errors. This included the number of times the participant became misaligned with Glide (i.e. the user was not standing directly centered behind Glide) and the number of potential collisions (an experimenter intervened before a collision occurred). After the participants completed the course three times, we asked them to state their agreement with a series of statements (Fig. 5) and answer the NASA task load index (TLX) on a 7-point Likert scale. Additionally, they answered three open-ended questions: (1) _What did you like most about this mode?_ (2) _What did you like least about the mode?_ (3) _When would you see yourself using this mode?_
In the second section we evaluated the User-directed mode. We had the user complete three courses of their choosing (Fig. 4). There were three possible destinations: lounge, work area, and kitchen. At each junction we informed the user which directions they could go to get to various destinations. For example, we would say "Go straight to get to the work area or kitchen or turn left to get to the work area or lounge". From there, the user could select their desired direction by twisting the handle, receiving the haptic feedback and begin walking. We recorded the same metrics and answers to open-ended questions as we did in the first section.
In the last section participants filled out an exit questionnaire about their experience using the various modes and their recommendations for improving Glide. Note, we did not counterbalance the sections because our goal was not to compare the two modes but to do a general analysis about each individual mode.
## 6. Findings
**Trust in Glide**-We define trust as the user's assessment of the reliability of the system. About 70% of participants trusted Glide to guide them around obstacles (Fig. 5) in the Glide-directed mode and approximately 60% of participants trusted Glide (Fig. 5) in the User-directed mode. In general, users trusted Glide as its controller was able to safely steer the user around obstacles and avoid walls in most scenarios. A current limitation of the system is that the user must be centered behind Glide and walk at a slower pace to give the controller enough time and space to steer effectively. If a user was mis-aligned or walking too fast, the controller was sometimes unable to steer users back towards the global plan. Some participants noted that they did not like it when Glide ran into or brushed up against walls; this affected their trust in Glide.
\begin{table}
\begin{tabular}{l c c c c c}
**Mode** & **Trial** & **Avg** & **SD** & **Min** & **Max** \\ \hline
**Glide-directed** & 1 & 4.84 & 3.32 & 3.25 & 6.17 \\ \hline
**Glide-directed** & 2 & 4.62 & 1.39 & 3.78 & 6.67 \\ \hline
**Glide-directed** & 3 & 4.37 & 0.89 & 3.01 & 6.17 \\ \hline
**User-directed** & 1 & 3.18 & 1.09 & 2.27 & 5.0 \\ \hline
**User-directed** & 2 & 2.95 & 1.19 & 1.97 & 3.57 \\ \hline
**User-directed** & 3 & 3.15 & 1.07 & 2.52 & 4.0 \\ \hline \end{tabular}
\end{table}
Table 2. Trial time (min) across participants for each mode.
\begin{table}
\begin{tabular}{l c c c c c}
**Mode** & **Trial** & **Avg** & **SD** & **Min** & **Max** \\ \hline
**Glide-directed** & 1 & 3.33 & 3.32 & 0 & 10 \\ \hline
**Glide-directed** & 2 & 2.25 & 1.39 & 0 & 4 \\ \hline
**Glide-directed** & 3 & 1.25 & 0.87 & 0 & 3 \\ \hline
**User-directed** & 1 & 0.44 & 0.73 & 0 & 2 \\ \hline
**User-directed** & 2 & 0.25 & 0.46 & 0 & 1 \\ \hline
**User-directed** & 3 & 0.71 & 0.76 & 0 & 2 \\ \hline \end{tabular}
\end{table}
Table 3. Number of errors across participants for each mode.
**Confidence when Using Glide-**We define confidence as the user's assessment in their own abilities to use the system effectively. Approximately 70% of participants agreed that Glide inspired confidence when they were walking in the Glide-directed mode and a little more than 75% agreed in the User-directed mode (Fig. 5). One participant noted a learning curve between the Glide-directed and User-directed mode, showing an increase in confidence in the User-directed mode: _"The [Glide-directed] mode was mostly learning how to use Glide and I felt more prepared in the [User-directed] mode."_ Additionally, allowing the user to choose which direction they went, by twisting the handle, in the User-directed mode increased confidence: _"The [User-directed] mode made me motivated to move faster than the [Glide-directed] mode. The [Glide-directed] mode made me feel more hesitant because I didn't know where I was going."_ More specifically, at a system level, the twist in the handle processed by the torque sensor and the haptic feedback the system provided to acknowledge the user input increased confidence.
**Learnability, Ease-of-Use, and Comfortability-**Approximately 95% of participants agreed that Glide was easy to learn in the Glide-directed mode and 100% agreed in the User-directed mode (Fig. 5). 100% of participants agreed that Glide was easy to use in the Glide-directed mode and approximately 95% agreed in the User-directed mode (Fig. 5). Participants thought Glide was intuitive and they liked the ease of motion: _"It didn't present any difficulty in the motion; it didn't feel forced, it felt natural."_ About 95% of participants agreed that they could tell when Glide was turning and could follow accordingly in the Glide-directed mode and 100% agreed in the User-directed mode (Fig. 5). Many participants pointed out that they liked the way Glide turned: _"I liked the turning, it was easy for me to tell when I had to turn. The first time I tried it I didn't even realize it was giving me a signal, I instinctively turned with it."_, _"I liked that it slowly turned instead of an instant 90-degree angle."_ At at system level, the shape of the global plan influenced the behavior that the participants are describing. The global plan had a large turning radius allowing for more gradual turns. Additionally, approximately 75% of participants agreed that they felt comfortable with the path Glide set in the Glide-directed mode and 100% agreed in the User-directed mode (Fig. 5). More specifically, the user's ability to choose the direction of the global plans Glide set between junctions in the User-directed mode resulted in increased comfort.
**Performance-**We measure performance based on the number of errors and the time taken to complete a trial. An error is when the user becomes mis-aligned with Glide resulting in a potential collision. We show the number of errors and trial completion times for both modes across participants in Tables 2 and 3. The number of errors across both modes and trial completion times in the Glide-directed mode decreased as the users continued to use Glide. We cannot make a comparison between the trial times in the User-directed mode as users selected routes which were all different lengths. Overall, participants' performance improved over time because of their ability to quickly learn how to use Glide for the aforementioned reasons.
**Control over Movement-**We designed Glide with passive wheels to give users control over their walking pace. We noticed that some users adjusted their speed based on how Glide was steering. For example, if Glide was turning, some users would walk slower than when walking in a straight line. Additionally, if Glide brushed up against the wall users could stop, back up and reorient Glide. One user in particular was able to sense how close they were to obstacles through echolocation and would slow down if they were close to an obstacle. Unlike a motorized robot, this design allows users to adjust their speed to their liking and also stop and/or reorient if they feel unsafe.
**Feedback from Glide-**As participants continued to use Glide, they became more comfortable with the slowdown haptics and braking. Approximately 75% of participants agreed that they could easily tell when they had to stop or slow down in the Glide-directed mode and about 90% agreed in the User-directed mode (Fig. 5). 100% of the participants agreed that twisting the handle to turn in the User-directed mode made sense to them. Overall, most participants noted that they liked the twisting gesture. One participant said, _"I thought it was intuitive to twist the handle, I didn't have to put in too much effort or change the orientation of the device. I liked the haptic feedback I got before the brakes, so I knew to slow down."_
Figure 5. Participant agreement statement about learnability, ease of use, level of comfort, trust, and confidence for the Glide-directed mode (left) and User-directed mode (right).
**Task Workload**-The task load index was assessed after each mode was explored and medians across participants are shown in Fig. 6. Overall, the physical and temporal demand across modes was low and participants thought they performed well with both modes. Participants thought that the User-directed mode was more mentally demanding than the Glide-directed mode: "_The [User-directed] mode was more challenging because I had to make decisions_", "_I did not have to think in the [Glide-directed] mode but in the [User-directed] mode I had to think more and make more choices._"
**Uses for Glide-directed Mode**-Four participants said they would use the Glide-directed mode in an unknown or crowded area: "_It is useful in crowded unknown situations. Unlike a cane where you collide with obstacles to know where they are, this mode just avoids obstacles for you_", _Instead of using assistance to get guided to a conference room in an unfamiliar hotel, Glide could guide me._", "_I would use it when I am in a crowded area like a park, festival or farmer's market._", "_I can see myself using it when going into restaurants._" One participant said that they would never use it and another two participants said that they would use it all the time. One participant suggested combining Glide with a shopping cart so they could push groceries back home. One participant said they were not sure when they would use this mode.
**Uses for User-directed Mode**-Four participants said they would use the User-directed mode indoors. Two of those participants specifically pointed out they would use it in unfamiliar indoor environments: "_I would use it in unfamiliar environments and if I wanted to specific rooms or locations independently._", "_I would use it indoors when receiving verbal feedback of where I can go._" One participant said that they would use it outdoors when crossing streets. But another participant pointed out that using this mode outdoors would be difficult: "_I generally need to follow google maps when going somewhere. I don't know how I would manage to hold both Glide and my phone. I think I would need to exert more effort when outdoors._" One participant said they would use Glide when they wanted to choose the route. Another participant said they would use Glide when they did not want to interact with another person or guide dog. One participant said they are not sure when they would use this mode.
## 7. Limitations & Future Work
**Speed**-We asked users to walk at a slow pace to handle a latency issue with the controller we used. If the user walked too fast, the controller was not able to react quickly enough. The users would get too close to a wall and the controller could not find a suitable trajectory to avoid the wall. In these situations, we had to ask the user to stop and back up. Many users commented that they would like to be able to walk faster.
**Alignment with Glide**-Participants commented that it was difficult to stay directly behind Glide. One user said, "_I want to try holding the robot next to me instead of in front._" Glide needs to be able to handle various positions of users with respect to it. Additionally, when the users were not centered behind Glide, they would often veer towards the walls. Glide would compensate by turning the wheels away from the wall, but this would result in a "zig-zagging" motion.
**Global Map**-Currently Glide relies on having a global map of the environment. In future iterations we want Glide to be able to map and navigate its environment at the same time.
**Complex Environments**-Glide can currently only operate in a single indoor floor plan that is flat. We want Glide to be able to operate in more complex environments with overhangs, stairs, elevators, ramps, etc. Additionally, we want Glide to be able to operate outdoors.
**Interaction Modalities**-We acknowledge there are some limitations in our work as we did not compare against existing interaction approaches (e.g. audio-based interaction). However, we limited the interaction mechanisms to not overwhelm the users. Despite our efforts, some users still noted that there was a higher mental workload when using the User-directed mode, which has the most interaction capabilities, and preferred the Glide-directed mode (Fig. 6).
## 8. Discussion & Conclusion
This paper explores various levels of control of assistive navigation that Glide offers. It confirms that Glide is easy to use and easy to learn. Most users trusted Glide but some of the limitations of the current system impacted the remaining users trust in Glide. As users continued to use Glide, they became more confident and their performance improved.
One key insight is that users' preferences in modes and their level of autonomy varied. Additionally, users also would use different modes based on the situation they are in. A future direction to explore is customization. It is not possible to design a single mode that encompasses all user preferences. Users should be able to select between modes that they would like to use.
Furthermore, users had varying opinions about what kinds of interactions and feedback they wanted from Glide. For example, some users said they wanted audio feedback but another user said they would prefer a more complex haptic vocabulary over audio feedback. There is no single interaction technique or type of feedback that is more useful. In future work we plan to experiment with
Figure 6. NASA-TLX ratings by participants ranging from (1) Low to (7) High. The red line is the median.
other interaction techniques and feedback and allow the user to select between the various options.
|
2303.12984 | LMCodec: A Low Bitrate Speech Codec With Causal Transformer Models | We introduce LMCodec, a causal neural speech codec that provides high quality
audio at very low bitrates. The backbone of the system is a causal
convolutional codec that encodes audio into a hierarchy of coarse-to-fine
tokens using residual vector quantization. LMCodec trains a Transformer
language model to predict the fine tokens from the coarse ones in a generative
fashion, allowing for the transmission of fewer codes. A second Transformer
predicts the uncertainty of the next codes given the past transmitted codes,
and is used to perform conditional entropy coding. A MUSHRA subjective test was
conducted and shows that the quality is comparable to reference codecs at
higher bitrates. Example audio is available at
https://mjenrungrot.github.io/chrome-media-audio-papers/publications/lmcodec. | Teerapat Jenrungrot, Michael Chinen, W. Bastiaan Kleijn, Jan Skoglund, Zalán Borsos, Neil Zeghidour, Marco Tagliasacchi | 2023-03-23T01:27:38Z | http://arxiv.org/abs/2303.12984v1 | # LMCodec: A Low Bitrate Speech Codec with Causal Transformer Models
###### Abstract
We introduce LMCodec, a causal neural speech codec that provides high quality audio at very low bitrates. The backbone of the system is a causal convolutional codec that encodes audio into a hierarchy of coarse-to-fine tokens using residual vector quantization. LMCodec trains a Transformer language model to predict the fine tokens from the coarse ones in a generative fashion, allowing for the transmission of fewer codes. A second Transformer predicts the uncertainty of the next codes given the past transmitted codes, and is used to perform conditional entropy coding. A MUSHRA subjective test was conducted and shows that the quality is comparable to reference codecs at higher bitrates. Example audio is available at https://mjenru ngrot.github.io/chrome-media-audio-papers/publ cations/lmcodec.
Teerapat Jernungrot\({}^{1}\), Michael Chinen\({}^{2}\), W. Bastiaan Kleijn\({}^{2,3}\), Jan Skoglund\({}^{2}\), Zalan Borosos\({}^{2}\), Neil Zeghidour\({}^{2}\), Marco Tagliasacchi\({}^{2}\)+\({}^{1}\)University of Washington, Seattle
\({}^{2}\)Google
\({}^{3}\)School of Engineering and Computer Science, Victoria University of Wellington
Footnote †: This work was done during a research internship at Google.
**Index Terms**: speech coding, Transformers, self-supervised learning, generative adversarial networks.
## 1 Introduction
Speech coding, which consists of compressing speech signals to a limited number of bits with minimal distortion, is at the core of communication technologies such as mobile telephony or voice over IP (VoIP). Opus [1] and EVS [2] are state-of-the-art speech coding techniques that combine traditional coding tools, such as Linear Predictive Coding (LPC), Code Excited Linear Prediction (CELP), and Modified Discrete Cosine Transformation (MDCT), to achieve high coding efficiency over different content types and bitrates. These waveform and parametric codecs rely on psychoacoustic expertise to design signal processing pipelines with maximal coding efficiency. Yet, while fast and interpretable, such handcrafted pipelines only represent a fraction of the potential models for a speech codec.
This has motivated data-driven approaches to train neural networks to perform speech coding. These networks leverage large amounts of training data while relaxing the assumptions made on the type of transformations applied by the system [3, 4, 5, 6, 7, 8, 9, 10]. In particular, the SoundStream neural codec combines a causal convolutional architecture with a residual vector quantizer. This quantization method produces a hierarchy of coarse-to-fine codes, and allows for efficient compression while providing bitrate scalability. As a result, SoundStream at \(3\) kbps matches the quality of Opus at \(12\) kbps. However, the quality of most codecs, be they handcrafted or trained, degrades significantly at bitrates lower than \(3\) kbps.
In this work, we introduce LMCodec, a low bitrate speech codec that combines recent advances in neural audio coding and audio generative modeling. LMCodec uses autoregressive Transformers [11] on SoundStream tokens to (i) model the entropy of the distribution of coarse tokens and (ii) predict fine tokens from the coarse ones. At inference, LMCodec extracts the codes of a SoundStream model from the input waveform. However, instead of sending all codes to the receiver like a SoundStream codec would do, LMCodec only transmits entropy-coded coarse tokens. On the receiver side, a generative language model is used to predict fine tokens from the coarse ones, and a SoundStream decoder then reconstructs audio from the complete token sequence.
LMCodec takes inspiration from the AudioLM [12] generative model, which also predicts fine SoundStream tokens from coarse ones. However, unlike AudioLM, LMCodec does low bitrate compression rather than generative modeling, and to do so leverages AudioLM both as a generative model and an entropy model. Other Transformer-based models for low bitrate coding have been proposed [7, 13]. The codec in [13] enriches SoundStream with embeddings extracted from a self-supervised speech representation model [14] and achieves speech compression at a rate of 600 bps. [7] synthesizes speech from a combination of phonetic, pitch and speaker representations to achieve 365 bps. Unlike these models, LMCodec is a fully causal model, which is thus amenable to online encoding and decoding. Our primary contribution is the design of a new neural speech codec, which achieves state-of-the-art results outperforming many previous codecs operating at three to four times the rates according to subjective human evaluation metrics.
Subjective evaluations demonstrate how LMCodec allows for low bitrate speech coding with minimal distortion, with LMCodec at approximately \(1\)-\(1.5\) kbps matching the performance of Opus at \(12\) kbps. We furthermore analyze the failure modes of our system, as well as the discrepancies in bit allocations between speech and non-speech sections of an audio signal.
## 2 Proposed Model
In this section, we describe our proposed speech codec consisting of four components: an encoder, a residual quantizer, an AudioLM
Figure 1: Overall pipeline of the proposed codec.
block, and a decoder. The encoder, residual quantizer, and decoder follow similar structures to SoundStream. At a high level, the encoder takes raw speech in the time domain as an input and extracts low-rate features that contain sufficient information to reconstruct the speech. The residual quantizer finds discrete representations of the inherently continuous encoded features. AudioLM poses the modeling of the quantized discrete representation as a language modeling problem and estimates the probability distribution of the next discrete audio token given previous audio tokens. Finally, the decoder reconstructs the input speech signal from the discrete encoded features.
### SoundStream
We now briefly describe the SoundStream model [10] that we used for creating high-quality audio tokens.
#### 2.1.1 Encoder
Given a raw speech signal \(\mathbf{x}\in[-1,1]^{T}\) of length \(T\), the encoder \(\mathcal{E}:[-1,1]^{T}\rightarrow\mathbb{R}^{T_{e}\times N_{e}}\) creates a sequence of embeddings of length \(T_{e}\ll T\), each with dimension \(N_{e}\). In our proposed model, the encoder takes raw waveform speech sampled at \(16\,\mathrm{kHz}\) as input and generates \(N_{e}=128\) dimensional speech features with a frame rate of \(50\,\mathrm{Hz}\). The architecture of the encoder is fully convolutional based on causal 1D convolutions. Hence, the algorithmic delay is determined by the overall striding factor (i.e., \(T/T_{e}=320\) samples or \(20\,\mathrm{ms}\)).
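To make the frame-rate bookkeeping concrete, the following PyTorch sketch builds a toy causal convolutional encoder with an overall stride of \(320\). It is not the actual SoundStream architecture: the per-layer strides \((2,4,5,8)\), channel widths, and kernel sizes are assumptions chosen only so that one second of \(16\,\mathrm{kHz}\) audio maps to \(50\) embeddings of dimension \(N_{e}=128\).

```python
# A minimal sketch of a causal strided convolutional encoder (assumed layout,
# not the SoundStream implementation used in the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    def __init__(self, c_in, c_out, kernel_size, stride):
        super().__init__()
        self.pad = kernel_size - stride          # left padding keeps the layer causal
        self.conv = nn.Conv1d(c_in, c_out, kernel_size, stride=stride)

    def forward(self, x):
        return self.conv(F.pad(x, (self.pad, 0)))

class CausalEncoder(nn.Module):
    def __init__(self, n_e=128):
        super().__init__()
        chans, strides = [32, 64, 96, n_e], [2, 4, 5, 8]   # overall stride 2*4*5*8 = 320
        layers, c_prev = [], 1
        for c, s in zip(chans, strides):
            layers += [CausalConv1d(c_prev, c, kernel_size=2 * s, stride=s), nn.ELU()]
            c_prev = c
        self.net = nn.Sequential(*layers)

    def forward(self, x):                        # x: (batch, 1, T) waveform in [-1, 1]
        return self.net(x).transpose(1, 2)       # (batch, T // 320, 128)

enc = CausalEncoder()
wav = torch.randn(1, 1, 16000).clamp(-1, 1)      # one second at 16 kHz
print(enc(wav).shape)                            # torch.Size([1, 50, 128]), i.e. 50 Hz frames
```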
#### 2.1.2 Residual Vector Quantizer (RVQ)
Transmission of continuous speech features over low-bandwidth channels is achieved via vector quantizers (VQs) [10], where the features are turned into discrete representations while introducing minimal distortion. Given the encoded features \(\mathbf{e}\in\mathbb{R}^{T_{e}\times N_{e}}\), the residual quantizer \(\mathcal{Q}:\mathbb{R}^{T_{e}\times N_{e}}\rightarrow\{0,\dots,N_{c}-1\}^{T_{e}\times N_{q}}\) computes the corresponding discrete representation of \(\mathbf{e}\) (and its inverse mapping), where \(N_{q}\) is the number of quantizers and \(N_{c}\) is the codebook size of a single quantizer. In our proposed model, we always use a codebook of size \(N_{c}=2^{10}\) and vary the number of layers in the residual VQ: \(N_{q}\in\{3,4,6,12,24\}\).
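As a toy illustration of residual vector quantization (not the trained quantizer used here), the numpy sketch below encodes each frame with \(N_{q}\) layers, where every layer quantizes the residual left by the previous ones against its own codebook of \(N_{c}\) vectors; the random codebooks are placeholders for learned ones.

```python
# A toy residual vector quantizer: per layer, pick the nearest codeword and pass
# the residual to the next layer.  Codebooks here are random placeholders.
import numpy as np

def rvq_encode(e, codebooks):
    """e: (T_e, N_e) embeddings; codebooks: list of (N_c, N_e) arrays."""
    residual, codes = e.copy(), []
    for cb in codebooks:
        d = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)  # (T_e, N_c) distances
        idx = d.argmin(axis=1)
        codes.append(idx)
        residual = residual - cb[idx]
    return np.stack(codes, axis=1)                                  # (T_e, N_q) integer codes

def rvq_decode(codes, codebooks):
    return sum(cb[codes[:, q]] for q, cb in enumerate(codebooks))

rng = np.random.default_rng(0)
T_e, N_e, N_q, N_c = 50, 128, 12, 1024
codebooks = [rng.normal(size=(N_c, N_e)) / np.sqrt(q + 1) for q in range(N_q)]
e = rng.normal(size=(T_e, N_e))
codes = rvq_encode(e, codebooks)
print(codes.shape, "reconstruction MSE:",
      float(((e - rvq_decode(codes, codebooks)) ** 2).mean()))
```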
#### 2.1.3 Decoder
The decoder \(\mathcal{D}:\mathbb{R}^{T_{e}\times N_{e}}\rightarrow[-1,1]^{T}\) synthesizes the original speech signal from the post-quantized embeddings. In our work, we adopt the CNN-based decoder method trained with adversarial loss in addition to losses on waveform and spectral domains. The architecture of the decoder is similar to that of the encoder, with a transposed convolutional layer to upsample the output. The adversarial training framework relies on two types of discriminators: waveform domain and short time Fourier Transform (STFT) domain discriminators.
### AudioLM
In this subsection, we describe the problem of language modeling of SoundStream tokens. Adding a language model in the bottleneck enables interesting modeling tasks, including modeling the distribution of future SoundStream tokens (Section 2.2.1) or tokens at different VQ layers (Section 2.2.2).
For the rest of this paper, let \(N_{\mathcal{C}}\) and \(N_{\mathcal{F}}\) denote the number of quantizers for the coarse-level and fine-level AudioLMs, respectively. Figure 1 shows the overall architecture of our proposed model, in which we use \(N_{\mathcal{C}}=4\) and \(N_{\mathcal{F}}=8\). In our experiments, we use various combinations of \((N_{\mathcal{C}},N_{\mathcal{F}})\) ranging from \(N_{\mathcal{C}}+N_{\mathcal{F}}=3\) to \(N_{\mathcal{C}}+N_{\mathcal{F}}=24\). Additionally, let \(c_{k}^{(n)}\) denote the SoundStream token at frame \(n\) and VQ layer \(k\).
#### 2.2.1 Coarse-level AudioLM
The goal of the coarse-level AudioLM is to model the distribution of the next coarse SoundStream tokens. Specifically, we are interested in modeling the conditional distribution of the next SoundStream tokens given the past information
\[p_{\mathcal{C}}\Big{(}c_{k}^{(n)}\Bigm{|}\underbrace{c_{k-1}^{(n)},\dots,c_{1}^{(n)}}_{\text{coarse-level current frame}},\underbrace{c_{N_{\mathcal{C}}}^{(n-1)},\dots,c_{1}^{(1)}}_{\text{past information}}\Big{)} \tag{1}\]
for \(k\in\{1,\dots,N_{\mathcal{C}}\}\).
Given the distribution of the future SoundStream tokens, we build a codec by using lossless Entropy Coding (Section 2.3). More specifically, the discrete probability distribution of SoundStream tokens can be estimated both at the sender and the receiver sides, and we use this to drive an entropy codec. Note that in our proposed method, we only need to transmit \(N_{\mathcal{C}}\) tokens per single audio frame. The remaining \(N_{\mathcal{F}}\) tokens are generated at the receiver side only as described in the next section.
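To illustrate how the shared model drives the rate, the sketch below estimates the entropy-coded cost of a coarse token stream as \(\sum_{t}-\log_{2}p(c_{t}\mid\text{past})\). The `dummy_model_probs` function is only a stand-in for the coarse-level Transformer (not the real model), and the flattened token ordering and printed numbers are illustrative only.

```python
# Ideal (entropy-bound) coding cost of a token stream under a shared causal model.
# `dummy_model_probs` is a placeholder for the coarse-level AudioLM.
import numpy as np

def dummy_model_probs(history, n_codebook=1024):
    p = np.full(n_codebook, 1.0 / n_codebook)
    if history:                                   # pretend the model is confident
        p[history[-1]] += 9.0                     # about repeating the last token
        p /= p.sum()
    return p

def estimated_bits(tokens, model_probs=dummy_model_probs):
    history, total = [], 0.0
    for t in tokens:
        p = model_probs(history)
        total += -np.log2(p[t])                   # ideal entropy-coded cost of token t
        history.append(t)
    return total

rng = np.random.default_rng(0)
history = []
for _ in range(200):                              # e.g. 4 coarse codes x 50 frames, flattened
    p = dummy_model_probs(history)
    history.append(int(rng.choice(len(p), p=p)))  # sample a stream the model codes well
print("raw:", 200 * 10, "bits   entropy-coded estimate:",
      round(estimated_bits(history), 1), "bits")
```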
Figure 3: Objective evaluation of different LMCodec models. (left) LMCodec with a fixed number of RVQ layers (i.e., \(N_{\mathcal{C}}+N_{\mathcal{F}}=12\)) on various standard metrics. (right) LMCodec with \(N_{\mathcal{C}}+N_{\mathcal{F}}\in\{6,12,24\}\) on SSL-MOS [15]. Numbers next to the markers refer to the number of coarse-level codes \(N_{\mathcal{C}}\).
Figure 2: MUSHRA-like subjective evaluation of state-of-the-art codecs with medium and low bitrates. LMCodec-\(x/y\) refers to our model with \(N_{\mathcal{C}}=x\) and \(N_{\mathcal{C}}+N_{\mathcal{F}}=y\). wav2vec [13] is a recent neural codec based on SoundStream and Transformer.
#### 2.2.2 Fine-level AudioLM
Similar to the coarse-level AudioLM, the fine-level AudioLM predicts the top VQ layers given the information about bottom VQ layers in addition to the past information. Specifically, we are interested in modeling the distribution of the fine-level SoundStream tokens conditioned on the coarse-level tokens and the past information:
\[p_{\mathcal{F}}\Big{(}c_{k}^{(n)}\Bigm{|}\underbrace{c_{k-1}^{(n)},\ldots,c_{N_{\mathcal{C}}+1}^{(n)}}_{\text{fine-level current frame}},\underbrace{c_{N_{\mathcal{C}}}^{(n)},\ldots,c_{1}^{(n)}}_{\text{coarse-level current frame}},\underbrace{c_{N_{\mathcal{C}}+N_{\mathcal{F}}}^{(n-1)},\ldots,c_{1}^{(1)}}_{\text{past information}}\Big{)} \tag{2}\]
for \(k\in\{N_{\mathcal{C}}+1,\ldots,N_{\mathcal{C}}+N_{\mathcal{F}}\}\). Note that our model is causal, in contrast to AudioLM.
Since we only transmit the coarse-level tokens, we model the distribution of the fine-level tokens by assuming that we have access to ground-truth coarse-level SoundStream tokens. We note that, while [12] also proposes a similar fine-level AudioLM stage, our contribution here is the causal formulation of the task, which makes our approach more suitable and amenable to online decoding.
### Entropy Coding (EC)
Given the distribution of coarse-level SoundStream tokens, we transmit data by using entropy coding, a lossless data compression technique. In this work, we provide experimental results using Huffman coding, in addition to the estimated entropy rate. We treat each code from the residual VQs separately and do not perform any grouping to reduce the upper bound on the bitrate.
We first note that our proposed codec requires only sending coarse-level SoundStream tokens using entropy coding. Specifically, given raw audio, LMCodec first encodes audio into SoundStream tokens and models the probability distribution of the next SoundStream tokens, driving the entropy codec. Note that the discrete probability distribution of SoundStream tokens can be estimated both at the sender and the receiver sides, so the receiver can losslessly reconstruct the coarse tokens. To generate audio output from only coarse-level tokens, we use a fine-level AudioLM to synthesize fine-level tokens from the transmitted coarse-level tokens and then generate audio from both coarse-level and fine-level tokens using SoundStream decoder.
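The gap between the entropy bound and the Huffman bitrates reported later (Tables 1-2) can be illustrated with the following sketch, which builds a Huffman code for a single predicted token distribution and compares the expected code length with the entropy. The 64-symbol toy distribution is an assumption made only for illustration; in the codec the alphabet has \(N_{c}=1024\) symbols and the distribution changes at every step.

```python
# Huffman code lengths for one predicted distribution vs. its entropy (toy alphabet).
import heapq, itertools
import numpy as np

def huffman_lengths(probs):
    """Code length (in bits) of every symbol for the given distribution."""
    counter = itertools.count()                   # tie-breaker for equal weights
    heap = [(p, next(counter), [s]) for s, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:                         # every merge adds one bit to the group
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, next(counter), s1 + s2))
    return lengths

rng = np.random.default_rng(0)
p = rng.dirichlet(np.full(64, 0.1))               # a skewed toy token distribution
L = huffman_lengths(p.tolist())
print("entropy        :", round(float(-(p * np.log2(p)).sum()), 3), "bits/token")
print("Huffman average:", round(float(sum(pi * li for pi, li in zip(p, L))), 3), "bits/token")
```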
### Training Strategy
We adopt a 2-stage training paradigm. First, we train only the encoder, quantizer, and decoder. Then, we freeze the weights of these components and train only the AudioLM components. We train the coarse-level and fine-level AudioLM models separately.
#### 2.4.1 Loss Functions
We trained the SoundStream model using the standard adversarial loss, feature matching loss, reconstruction loss, and quantization loss according to [10]. In training AudioLM models, we use the standard cross-entropy loss for language modeling over the vocabulary space.
#### 2.4.2 Training configurations
To create our codec modules, we adapted the architectures of the encoder, quantizer, generator, and discriminators used in SoundStream [10], and the AudioLM models from T5X. Both AudioLM models are decoder-only models based on the t5.1.1 base model (with approximately 250 million parameters).
The SoundStream model is trained on \(16\,\mathrm{kHz}\) audio from the LibriVox dataset [16] for 1M steps. Both coarse-level and fine-level AudioLM models are trained on \(16\,\mathrm{kHz}\) audio from the Libri-Light dataset [17] for 1M steps with a batch size of 32 and sequence length of 1024 SoundStream tokens with Adafactor optimizer [18] with a decay rate of 0.8.
We trained multiple coarse-level and fine-level AudioLM models to achieve varieties of bitrates. The bitrates are calculated based on the entropy coding of codes from coarse-level AudioLM.
## 3 Evaluation
To demonstrate the performance of our proposed method, we evaluate LMCodec using both objective and subjective evaluations. For objective evaluation, we report the accuracy of LMCodec future token prediction and objective metrics including ViSQOL [19], WARP-Q [20], SSL-MOS [15], WER, and CER together with bitrate based on the test split from the clean LibriSpeech dataset [21].
For subjective evaluation, we perform two MUSHRA-like [22] subjective tests to compare the audio quality with standard state-of-the-art speech codecs at medium bitrate (i.e., \(1\) kbps to \(12\) kbps) and low rate (i.e., \(0.5\) kbps to \(1.5\) kbps). The tests were conducted respectively on 91 and 94 crowd-sourced raters using headphones over 32 clean utterances from VCTK dataset [23]. Raters who did not score the reference above 80 at least 80% of the time were discarded, as were raters who rated more than 75% of non-reference samples 80 or above. 40 raters for the medium rate test and 33 raters for the low rate test met this requirement.
As shown in Figure 2, the raters found that LMCodec-4/6 with 4 quantizers at 1.1 kbps performs significantly better than 12 kbps Opus. LMCodec-8/12 with 8 quantizers at \(2.6\) kbps has comparable performance to SoundStream at \(6\) kbps. The low-rate MUSHRA test compares recent transformer neural codecs and lower bitrate SoundStream models. The raters preferred LMCodec to the transformer models from [13] and SoundStream at the same rate.
### Discussion
Table 1 shows the accuracy of the future token prediction and the bitrate performance of LMCodec on the test split of clean LibriSpeech [21]. For accuracy, we note that perfect accuracy would mean the model knows exactly what the next tokens are; the accuracies in Table 1 are far from perfect, yet the audio quality remains good, which suggests that the fine-level AudioLM does not necessarily need to synthesize the correct code to produce reasonable audio output. The bitrates are computed based on the future tokens' distributions obtained from LMCodec. For Huffman coding, we use the ground truth tokens encoded with the Huffman algorithm. Additionally, we note that the distributions of future tokens are updated every
Figure 4: Distribution of codes prediction for inputs from the non-voice section and inputs from the middle of phonemes
timestep based on the model, unlike other entropy codecs that operate with fixed distributions. As a result, the Huffman bitrate may sometimes be lower than the bitrate derived from the entropy.
In this section, we additionally discuss some of the interesting audio effects from LMCodec. We suggest that readers listen to some of the audio samples from our model. In particular, our model with only one quantizer is able to produce a reasonable human voice with some babbling effects. The amount of babbling is reduced as the number of quantizers used in the codec increases. This suggests that there is some underlying hierarchical structure in SoundStream tokens, and that the proposed codec can potentially operate at very low bitrates, provided that the coarse-to-fine prediction is accurate.
In Figure 4, we visualize the distribution of code prediction from the AudioLM model when the input is at the middle of a phoneme and between phonemes. We also found that the model is very confident if the audio input is in the middle of a phoneme, as the language model is able to learn the underlying linguistic behavior of the utterances. On the other hand, the model has lower confidence in predicting the next token when reaching silence sections, suggesting that our proposed causal model is unable to predict upcoming words very well. This confirms the babbling effect that we observed in the audio output from our proposed codec, which increases as we restrict the amount of information to describe each frame (e.g., by transmitting fewer codes or dropping frames).
Figure 2 shows the comparison of LMCodec with low-rate and medium-rate audio codecs. In particular, we find that LMCodec-4/6 performs better than SoundStream with 3 quantizers at \(1.5\) kbps but slightly worse than SoundStream with 12 quantizers at 6 kbps, which is on par with LMCodec-8/12. We note that LMCodec-4/6 and LMCodec-8/12 are based on SoundStream with 6 and 12 quantizers respectively. Our results suggest that LMCodec effectively takes advantage of entropy coding and of synthesizing reasonable fine-level codes from coarse-level codes. When compared with SoundStream at a similar rate, LMCodec consistently outperforms it.
### Voice Activity Detection (VAD)
In this section, we show the performance of LMCodec applied only on audio regions with voice activity. We use an open-source RNNoise model [24], which uses Mel-Frequency Cepstral Coefficients (MFCC) and outputs the probability of voice activity for every \(10\,\mathrm{ms}\) frame. Since the frame size of SoundStream tokens is \(20\,\mathrm{ms}\), we run RNNoise on 2 consecutive 10-ms frames and declare that the 20-ms SoundStream frame has voice activity if and only if the probability that the 2 consecutive frames contain voice is over 0.8.
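The frame-merging rule can be written down directly. In the sketch below we read "the probability that 2 consecutive frames have voice" as the product of the two 10-ms RNNoise probabilities (assuming independence); this is one possible reading, and requiring each probability to exceed the threshold individually would be another.

```python
# A small sketch (our reading, not the authors' code) of mapping per-10-ms RNNoise
# voice probabilities to per-20-ms SoundStream frame decisions with threshold 0.8.
def soundstream_frame_vad(probs_10ms, threshold=0.8):
    n_frames = len(probs_10ms) // 2
    return [probs_10ms[2 * i] * probs_10ms[2 * i + 1] > threshold
            for i in range(n_frames)]

print(soundstream_frame_vad([0.95, 0.9, 0.85, 0.4, 0.99, 0.99]))  # [True, False, True]
```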
Table 2 shows the bitrate of LMCodec on two scenarios: (i) transmitting only voices and (ii) transmitting entire speech signals but using zero bits for non-voices. We report the bitrate derived from the entropy and the bitrate based on Huffman coding. We note the first scenario has slightly lower bitrates as compared to bitrates from Table 1 because the entropy for non-speech signals is usually higher than the entropy for speech signals. Additionally, the second scenario provides the lower bound estimate of bitrates when transmitting very low bits for non-voice signals similar to Opus with variable bitrate scheme.
### Objective Evaluation
We present an objective evaluation on the audio examples from the VCTK dataset [23] in Figure 3. First, we demonstrate that the word error rate (WER) and character error rate (CER) decrease as the number of quantizers used in LMCodec increases, up to around 4-6 quantizers, suggesting that the semantic content is stored in the coarse tokens. To evaluate WER and CER, we use two ASR models from the AWS Transcribe service and a Conformer model [14] trained on LibriSpeech [21]. Second, ViSQOL [19] and WARP-Q [20], metrics designed for neural speech codecs, increase and decrease respectively, implying that the fine tokens are responsible for fine-grained acoustic details. Third, SSL-MOS [15] shows that the overall speech quality improves by increasing the number of quantizers.
Despite the neural speech codec metrics ViSQOL and WARP-Q indicating worse performance at about 4-6 quantizers, our listening test shows very high quality audio results with a small number of quantizers. This suggests that the language model of LMCodec is able to model the distribution of the fine tokens given the coarse tokens reasonably well even if the synthesized fine tokens are different from the ground truth ones. This penalizes metrics like ViSQOL and WARP-Q, as they primarily rely on the comparison between the synthesized audio and its corresponding ground truth reference audio.
When comparing LMCodec with different total number of quantizers, we first note that the upper bound performance of LMCodec with 6 quantizers is lower than the upper bound performance of LMCodec with 12 or 24 quantizers. However, LMCodec with a lower total number of quantizers reaches better performance faster than LMCodec with a higher total number of quantizers.
## 4 Conclusion
Our experiments show that the proposed codec significantly outperforms the original neural speech codec with respect to the quality of synthesized speech when operating in the ultra-low bitrate regime. In addition, the subjective experiments indicate comparable to or better perceptual speech quality compared to conventional codecs operating at higher rates.
\begin{table}
\begin{tabular}{l l l l} \hline \((N_{\mathcal{C}},\,N_{\mathcal{F}})\) & **Accuracy** & **Entropy** & **Huffman** \\ \hline \hline \((2,1)\) & 15.5\% & 534.0 bps & 542.5 bps \\ \((3,1)\) & 14.3\% & 837.1 bps & 845.7 bps \\ \((4,2)\) & 13.1\% & 1163.9 bps & 1173.5 bps \\ \hline \((1,11)\) & 16.1\% & 262.8 bps & 262.6 bps \\ \((2,10)\) & 15.7\% & 533.5 bps & 540.7 bps \\ \((3,9)\) & 14.9\% & 844.6 bps & 847.4 bps \\ \((4,8)\) & 13.4\% & 1154.2 bps & 1174.3 bps \\ \((6,6)\) & 11.9\% & 1853.7 bps & 1861.2 bps \\ \((8,4)\) & 10.6\% & 2561.8 bps & 2577.6 bps \\ \((10,2)\) & 9.7\% & 3300.0 bps & 3324.8 bps \\ \((12,0)\) & 8.9\% & 4094.5 bps & 4092.1 bps \\ \hline \end{tabular}
\end{table}
Table 1: Accuracy and bitrates. Bitrate without entropy coding is equivalent to 500 bps per quantizer (i.e., \(6\) kbps for 12 quantizers). Given the space limit, we only present the numerical results for LMCodec with 12 RVQ layers and LMCodec models shown in Figure 2.
\begin{table}
\begin{tabular}{l l l l l} \hline \((N_{\mathcal{C}},\,N_{\mathcal{F}})\) & \multicolumn{2}{c}{Transmitting only voices} & \multicolumn{2}{c}{Transmitting non-voices with zero bits} \\ & **Entropy** & **Huffman** & **Entropy** & **Huffman** \\ \hline \hline \((2,1)\) & 545.6 bps & 554.1 bps & 303.1 bps & 307.9 bps \\ \((3,1)\) & 850.5 bps & 858.6 bps & 472.1 bps & 476.7 bps \\ \((4,2)\) & 1165.6 bps & 1173.7 bps & 647.2 bps & 561.7 bps \\ \((1,11)\) & 268.3 bps & 268.7 bps & 149.3 bps & 149.5 bps \\ \((2,10)\) & 523.7 bps & 530.0 bps & 290.8 bps & 294.3 bps \\ \((3,9)\) & 816.5 bps & 819.1 bps & 453.2 bps & 454.7 bps \\ \((4,8)\) & 1108.5 bps & 1129.7 bps & 615.2 bps & 627.0 bps \\ \((6,6)\) & 1772.5 bps & 1783.6 bps & 985.5 bps & 990.2 bps \\ \((8,4)\) & 2457.3 bps & 2471.5 bps & 1363.5 bps & 1371.4 bps \\ \((10,2)\) & 3170.9 bps & 3196.7 bps & 1763.4 bps & 1777.8 bps \\ \((12,0)\) & 3958.2 bps & 3951.5 bps & 2207.6 bps & 2203.9 bps \\ \hline \end{tabular}
\end{table}
Table 2: Coding performance of LMCodec with VAD. |
2304.06956 | Non-Existence of S-Integrable Three-Point Partial Difference Equations
in the Lattice Plane | Determining if an (1+1)-differential-difference equation is integrable or not
(in the sense of possessing an infinite number of symmetries) can be reduced to
the study of the dependence of the equation on the lattice points, according to
Yamilov's theorem. We shall apply this result to a class of
differential-difference equations obtained as partial continuous limits of
3-points difference equations in the plane and conclude that they cannot be
integrable. | Decio Levi, Miguel A. Rodríguez | 2023-04-14T07:01:46Z | http://arxiv.org/abs/2304.06956v2 | # S-Integrable three-point partial difference equations on the lattice plane
###### Abstract
Determining whether a \((1+1)\)-differential-difference equation is integrable or not (in the sense of possessing an infinite number of symmetries) can be reduced to the study of the dependence of the equation on the lattice points, according to Yamilov's Theorem. We shall apply this result to a class of differential-difference equations obtained as partial continuous limits of 3-point difference equations in the plane and conclude that they cannot be integrable.
## 1 Introduction
Partial difference equations have always played an important role in physics, and this has been especially noticeable in recent decades. On the one hand, discrete systems seem to lie at the base of the fundamental laws of physics (as in quantum gravity [19]); on the other hand, with the increasing use of computers, discretizations play an increasing role in physical applications for solving differential equations numerically [7], possibly also preserving some of the main properties of the differential equations, in particular their symmetries [13].
These equations are characterized by the number of lattice points they involve. For instance, partial difference equations on four points in the plane have been studied at length [2, 6, 12, 14], being the simplest class for which the evolution can be invertible. Their continuous limit corresponds to hyperbolic partial differential equations. We can distinguish two classes of integrable equations: those which are linearizable, C-integrable equations in Calogero's notation [3], and equations
whose integrability requires the solution of a scattering problem (characterized by the compatibility of two linear problems for an auxiliary wave function), S-integrable equations in Calogero's notation.
Partial difference equations defined on three points, which appear as particular situations in triangular lattices, have not received such detailed attention (see [18] for a thorough study of the linearizable case), although triangulation is a key tool in differential geometry and such equations allow one to define many discrete models [1, 5, 15, 16, 17]. Often, however, in the references mentioned above, the partial difference equations defined on a triangular lattice involve more than three lattice points.
Boundary value problems can be defined for equations involving only three lattice points. In Fig. 1 we present just an example of the initial data and the corresponding set of points (a more complete set of possible configurations is given in [18]).
We consider an equation relating the values of the field \(u\) at the points \((m,n)\), \((m,n+1)\), and \((m+1,n)\) (given three points there is only one relative position, disregarding orthogonality or step sizes). Generically, if we write the equation as:
\[f(u_{n,m},u_{n+1,m},u_{n,m+1})=0 \tag{1.1}\]
we can define the value in a point as a function of the values in the other two points, for instance:
\[u_{n,m+1}=g_{1}(u_{n,m},u_{n+1,m}) \tag{1.2}\]
that is, if we give the initial values along \(r_{1}\) (see Fig. 1), we could compute (solving the equation) the values of the field in the upper half plane. We can (again generically) isolate \(u_{n+1,m}\):
\[u_{n+1,m}=g_{2}(u_{n,m},u_{n,m+1}) \tag{1.3}\]
Then, giving the initial values along the line \(r_{2}\) we obtain the values of the field in the right half plane. Finally,
\[u_{n,m}=g_{3}(u_{n,m+1},u_{n+1,m}) \tag{1.4}\]
Figure 1: An example of points related by an equation defined on three points and the choice of initial conditions (see the main text for a discussion of these graphics).
If we provide the initial values along the line \(r_{3}\), the solution of the equation will be obtained in a left descending staircase. Other possible schemes can be used for this kind of lattices and equations (see [9] for a description of different initial-boundary value problems).
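As a toy illustration of this evolution scheme (not taken from the paper), the sketch below iterates \(u_{n,m+1}=g_{1}(u_{n,m},u_{n+1,m})\) row by row starting from data on the line \(r_{1}\); with finitely many initial values each new row loses one site on the right, so \(N\) initial values determine a triangular region of the upper half plane. The particular \(g_{1}\), taken from equation (2.9b) below, is only an example.

```python
# Evolve a three-point equation u_{n,m+1} = g1(u_{n,m}, u_{n+1,m}) from one row of data.
def evolve_rows(initial_row, g1, n_rows):
    rows = [list(initial_row)]
    for _ in range(n_rows):
        prev = rows[-1]
        rows.append([g1(prev[n], prev[n + 1]) for n in range(len(prev) - 1)])
    return rows

g1 = lambda u00, u10: u10 / u00          # from (2.9b): u_{n,m+1} = u_{n+1,m} / u_{n,m}
rows = evolve_rows([1.0, 2.0, 4.0, 8.0, 16.0], g1, n_rows=3)
for m, row in enumerate(rows):
    print("m =", m, [round(v, 3) for v in row])
```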
Multilinear equations can also be considered as particular cases. In fact, nonlinear equations defined on three lattice points are simpler than those on four points. The number of parameters in these equations is less than in the four points case, since the nonlinearities are at most cubic. As shown in [18], the classification problem for linearizable multilinear partial difference equations can be carried out up to the end and provide many examples of nonlinear three-point partial difference equations.
Let us write a generic equation on three lattice points as
\[\mathcal{E}_{n,m}(u_{n,m},u_{n+1,m},u_{n,m+1})=0. \tag{1.5}\]
If the independent variables are written as \(x_{n}=nh_{n}\) and \(t_{m}=mh_{m}\), in terms of the discrete indices \(n\) and \(m\) and of the lattice parameters \(h_{n}\) and \(h_{m}\) we can get from (1.5) in the continuous limit, when \(h_{n},h_{m}\to 0\) and \(n,m\to\infty\), equations of the form
\[u_{t}=f(u)u_{x}+g(u)u_{xx},\qquad u=u(x,t). \tag{1.6}\]
as, for instance, the continuous Burgers equation [10].
The analysis of the C-integrable nonlinear partial differential equations (linearizable via a transformation) can be found in [4]. In the continuous case the class of possible transformations is richer than in the discrete one as we can have transformations of the independent and dependent variables. In the discrete case we have to restrict ourselves to just the transformations of the dependent variable unless we consider transformable lattices, i.e., partial difference schemes (see [13] and references therein quoted).
Contrary to the case of C-integrable equations, the situation in the case of S-integrable three-point partial difference equation is not known, up to our knowledge.
In this sense, Yamilov's Theorem constitutes a useful tool in the discussion of the integrability of differential-difference equations (see for instance [11] for an example of its application in a particular case).
The content of this article is as follows. In Section 2 we review the main results on the linearization of partial difference equations defined on three lattice points either by point transformations or by Hopf-Cole transformations [18]. Then in Section 3 by using a theorem introduced by Yamilov (see [13]) we show that a partial difference equation defined on three points cannot be S-integrable. The general result is then confirmed by considering multilinear equations where the calculations are explicit. Section 4 is devoted to some concluding remarks and a proposal of future researches.
## 2 C-Integrable three-point partial difference equations.
Here we review the results on linearizable three-point partial difference equations presented in [18]. To simplify the presentation, as we will limit ourselves to autonomous equations, we will not need to index the complex dependent variable by its lattice point but just by its relative position with respect to the reference point \(n,m\), i.e. \(u_{n,m}=u_{00}\). Then, equation (1.5) can be written as
\[\mathcal{E}\left(u_{00},u_{10},u_{01}\right)=0. \tag{2.1}\]
In some cases, we can use an autonomous point transformation (for a non-constant function \(f\))
\[\tilde{u}_{00}=f\left(u_{00}\right) \tag{2.2}\]
to linearize (2.1), providing the linear equation
\[a\tilde{u}_{00}+b\tilde{u}_{10}+c\tilde{u}_{01}+d=0, \tag{2.3}\]
with \(a\), \(b\), \(c\) and \(d\) being complex coefficients.
### Linearizability
The following theorem, see [18], provides a solution to this problem:
**Theorem 2.1** (Theorem 2, [18]).: _Necessary and sufficient condition for the linearizability by a point transformation (2.2) of an equation belonging to the class (2.1) is that the equation can be written in the form_
\[u_{00}=\mathcal{H}^{-1}\left(-\beta\mathcal{H}\left(u_{10}\right)-\alpha \mathcal{H}\left(u_{01}\right)-\gamma\right), \tag{2.4}\]
_where \(\mathcal{H}(x)\) is an arbitrary function of its argument, \(\mathcal{H}^{-1}\) is its inverse function and \(\alpha\), \(\beta\), \(\gamma\) are arbitrary integration constants. This equation is linearizable by the point transformation \(\tilde{u}_{00}=\mathcal{H}\left(u_{00}\right)\) to the equation_
\[\tilde{u}_{00}+\beta\tilde{u}_{10}+\alpha\tilde{u}_{01}+\gamma=0. \tag{2.5}\]
We can also linearize (2.1) by the Cole-Hopf transformation:
\[\tilde{u}_{01}=f\left(u_{00}\right)\tilde{u}_{00}, \tag{2.6}\]
which transforms the linear equation
\[a\tilde{u}_{00}+b\tilde{u}_{10}+c\tilde{u}_{01}=0, \tag{2.7}\]
into (2.1), where \(a\), \(b\) and \(c\) are complex coefficients with \((a,b,c)\neq(0,0,0)\). We refer to [18] for details on the conditions over the equation to be linearizable under this transformation.
### Classification of linearizable multilinear equations.
We can use the above results to classify multilinear difference equations depending on three points in a two dimensional lattice:
\[au_{00}+bu_{01}+cu_{10}+du_{00}u_{10}+eu_{00}u_{01}+fu_{10}u_{01}+gu_{00}u_{10}u_{ 01}+h=0 \tag{2.8}\]
where \(a,\ldots,h\) are arbitrary complex parameters. We can state the following theorem [18]:
**Theorem 2.2** (Theorem 3, [18]).: _Apart from the equations which are Mobius equivalent to a linear equation, the only equations belonging to the class (2.8) which are linearizable by a point transformation are, up to a Mobius transformation of the dependent variable (eventually composed with an exchange of the independent variables \(n\leftrightarrow m\)), the following three_
\[u_{00}u_{10}u_{01}-1 =0, \tag{2.9a}\] \[u_{00}u_{01}-u_{10} =0,\] (2.9b) \[u_{10}u_{01}-u_{00} =0. \tag{2.9c}\]
_Eqs. (2.9a, 2.9b, 2.9c) linearize by the transformation \(\tilde{u}_{00}=\log u_{00}\), with \(\log\) always standing for the principal branch of the complex logarithmic function, respectively to the equations_
\[\tilde{u}_{00}+\tilde{u}_{10}+\tilde{u}_{01} =2\pi\mathrm{i}z, \tag{2.10a}\] \[\tilde{u}_{00}-\tilde{u}_{10}+\tilde{u}_{01} =2\pi\mathrm{i}z,\] (2.10b) \[-\tilde{u}_{00}+\tilde{u}_{10}+\tilde{u}_{01} =2\pi\mathrm{i}z. \tag{2.10c}\]
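A quick numerical sanity check (not from the paper) of the first of these linearizations: if \(u_{00}u_{10}u_{01}=1\) as in (2.9a), the principal logarithms sum to an integer multiple of \(2\pi\mathrm{i}\), as in (2.10a).

```python
# Verify that complex solutions of (2.9a) map to solutions of (2.10a) under u -> log u.
import cmath, random

random.seed(1)
for _ in range(5):
    u10 = cmath.exp(complex(random.uniform(-1, 1), random.uniform(-3, 3)))
    u01 = cmath.exp(complex(random.uniform(-1, 1), random.uniform(-3, 3)))
    u00 = 1 / (u10 * u01)                       # enforce u00*u10*u01 = 1, i.e. (2.9a)
    s = cmath.log(u00) + cmath.log(u10) + cmath.log(u01)
    z = s.imag / (2 * cmath.pi)
    print(f"sum = {s.real:+.3f}{s.imag:+.3f}i,  z = {z:+.3f}")   # z is an integer
```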
Let us now consider the linearization via Hopf-Cole transformations (2.6). As we classify up to a Mobius transformation, we can always set \(g_{00}\left(u_{00}\right)=u_{00}\). We can now state the following theorem:
**Theorem 2.3** (Theorem 5, [18]).: _The class of complex autonomous multilinear discrete equations defined on three points which is linearizable to a homogeneous linear equation by a Cole-Hopf transformation \(\tilde{u}_{01}=u_{00}\tilde{u}_{00}\), is given, up to a Mobius transformation of the dependent variable (eventually composed with an exchange of the independent variables \(n\leftrightarrow m\)), by the following equation_
\[\frac{1+u_{00}}{u_{00}}-\frac{1+u_{01}}{u_{10}}=0.\] (2.11a) _This equation linearizes to the equation_ \[\tilde{u}_{00}+\tilde{u}_{10}+\tilde{u}_{01}=0. \tag{2.11b}\]
## 3 Yamilov's Theorem and the S-integrability of three-point partial difference equations.
In [13] (Section: "Why the Shape of Integrable Equations on the Lattice is Symmetric"), the following theorem is proposed and proved.
**Theorem 3.1** (Yamilov).: _If an equation of the form_
\[\dot{u}_{n}=f_{n}=f(u_{n+N},u_{n+N-1},\dots,u_{n+M})\,, \tag{3.1a}\] \[N\geq M\,,\qquad\frac{\partial f_{n}}{\partial u_{n+N}}\frac{\partial f_{n}}{\partial u_{n+M}}\neq 0\,, \tag{3.1b}\]
_possesses a conservation law of order \(m\), such that_
\[m>\min(|N|,|M|)\, \tag{3.2}\]
_then_
\[N=-M, \tag{3.3}\]
_and \(N\geq 0\)._
This theorem states a necessary condition for a differential-difference equation to be S-integrable, based on the fact that, while a differential-difference equation with an infinity of symmetries is either S- or C-integrable, if it does not possess a conservation law of sufficiently high order it cannot be S-integrable. As the theorem gives only a necessary condition, an equation which satisfies it may still be non-integrable. However, an S-integrable equation must necessarily satisfy it.
In the following we will use this theorem to show that, while, as we saw in the previous Section, partial difference equations defined on three lattice points can be C-integrable, they cannot be S-integrable.
To do so we carry out the partial continuous limit of (1.5). Eq. (1.5) is symmetric under the exchange of \(n\) and \(m\), so in full generality we can take the continuous limit with \(h_{m}\to 0\) and \(m\to\infty\) so that \(t=mh_{m}\) remains finite. For convenience we set \(h_{m}=\epsilon\) and \(u_{n,m+1}=u_{n}(t+h_{m})=u_{n}(t+\epsilon)\). Assuming that \(u_{n}(t)\) is an entire function of \(t\), we can write
\[u_{n}(t+\epsilon)=u_{n}(t)+\epsilon\dot{u}_{n}(t)+\frac{1}{2}\epsilon^{2}\ddot {u}_{n}(t)+\mathcal{O}(\epsilon^{3}). \tag{3.4}\]
So (1.5) becomes:
\[\mathcal{E}_{n,m}(u_{n,m},u_{n+1,m},u_{n,m+1})=\mathcal{E}_{n}(\epsilon,u_{n}( t),u_{n+1}(t),u_{n}(t+\epsilon)). \tag{3.5}\]
Expanding the last result in (3.5) in \(\epsilon\), assuming that \(\mathcal{E}_{n}(\epsilon,u_{n}(t),u_{n+1}(t),u_{n}(t+\epsilon))\) is an entire function, we have
\[\mathcal{E}_{n}(\epsilon,u_{n}(t),u_{n+1}(t),u_{n}(t+\epsilon))= \mathcal{E}_{n}^{(0)}(u_{n}(t),u_{n+1}(t))+\] \[\epsilon\,\mathcal{E}_{n}^{(1)}(u_{n}(t),u_{n+1}(t),\dot{u}_{n} (t))+\mathcal{O}(\epsilon^{2}). \tag{3.6}\]
By a proper choice of the dependence of (1.5) on \(u_{n,m}\), \(u_{n+1,m}\) and \(u_{n,m+1}\) we can make \(\mathcal{E}_{n}^{(0)}=0\), and then its semi-continuous limit becomes
\[\mathcal{E}_{n}^{(1)}(u_{n}(t),u_{n+1}(t),\dot{u}_{n}(t))=0. \tag{3.7}\]
Eq (3.7) is not in the form of Yamilov's Theorem and cannot be reduced to it by any lattice re-parametrization as it depends just on two points.
As an example let us consider the semi-continuous limit of (2.8) when the complex parameters \(a,\ldots,h\) are taken to be entire functions of \(\epsilon\), the lattice spacing in the \(m\) direction which we can assume to be a constant along the lattice. We can expand the parameters of (2.8) in powers of \(\epsilon\) and we have
\[a=a^{(0)}+\epsilon a^{(1)}+\epsilon^{2}a^{(2)}+\cdots\]
and similar expressions for the other parameters in the equation. The request that the zero order in \(\epsilon\) of (2.8) be zero implies
\[a^{(0)}=-b^{(0)},\;c^{(0)}=e^{(0)}=g^{(0)}=h^{(0)}=0,\;d^{(0)}=-f^{(0)}, \tag{3.8}\]
and at first order in \(\epsilon\) we get
\[(b^{(0)} +f^{(0)}u_{1})\dot{u}_{0}+(a^{(1)}+b^{(1)})u_{0}+c^{(1)}u_{1}+e^{ (1)}u_{0}^{2}+(d^{(1)}+f^{(1)})u_{0}u_{1}\] \[+g^{(1)}u_{0}^{2}u_{1}+h^{(1)}=0. \tag{3.9}\]
Naturally (3.9) is of the form (3.7) and, by a proper choice of the parameters (3.8), we can always make \({\cal E}_{n}^{(0)}=0\), as required in the general case. In the notation of Yamilov's Theorem, (3.9) has \(N=1\) and \(M=0\) and consequently \(N\geq 0\), but the condition (3.3) is not satisfied. Thus (3.9) cannot be S-integrable as it does not satisfy the necessary S-integrability conditions implied by Theorem 3.1. Since (3.9) is not S-integrable for any choice of the parameters, neither is the multilinear partial difference equation (2.8).
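The expansion leading from (2.8) and (3.8) to (3.9) can also be checked symbolically. The following sympy sketch (not part of the paper) substitutes \(u_{0,1}=u_{0}+\epsilon\dot{u}_{0}+\mathcal{O}(\epsilon^{2})\) together with the parameter expansions constrained by (3.8), and prints the \(\epsilon^{0}\) and \(\epsilon^{1}\) coefficients, which reproduce (3.9).

```python
# Symbolic check of the semi-continuous limit of the multilinear equation (2.8).
import sympy as sp

eps, u0, u1, udot = sp.symbols('epsilon u_0 u_1 udot_0')
a1, b0, b1, c1, d1, e1, f0, f1, g1, h1 = sp.symbols('a1 b0 b1 c1 d1 e1 f0 f1 g1 h1')

# parameter expansions with the zero-order constraints (3.8) already imposed
a, b, c = -b0 + eps * a1, b0 + eps * b1, eps * c1
d, e, f = -f0 + eps * d1, eps * e1, f0 + eps * f1
g, h = eps * g1, eps * h1

u00, u10, u01 = u0, u1, u0 + eps * udot          # u_{0,1} = u0 + eps*du0/dt + O(eps^2)
lhs = (a * u00 + b * u01 + c * u10 + d * u00 * u10 + e * u00 * u01
       + f * u10 * u01 + g * u00 * u10 * u01 + h)

expansion = sp.expand(lhs)
print("order eps^0:", sp.simplify(expansion.coeff(eps, 0)))          # -> 0, as required
print("order eps^1:", sp.collect(sp.expand(expansion.coeff(eps, 1)), udot))  # -> (3.9)
```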
## 4 Conclusions
The application of Yamilov's Theorem to three-point partial difference equations yields the result that no S-integrable differential-difference equation can be constructed by taking the limit of this class of equations.
We have just discussed the limits in the direct approach, taking the continuous limit in one of the two indices of the equation. We have also considered skew-limits (through a combination of both indices \(n,m\)), since, as it is known [8, 11], in some cases one can obtain differential-difference equations satisfying Yamilov's Theorem using this approach. However, this is not possible in the three-points case.
We plan to extend the results presented here to the case of four-points partial difference equations on the lattice plane. In this case we know there are many S-integrable and C-integrable results. Since the classification of C-integrable equations in the multilinear case is not complete, it would be interesting to prove that, at least in this multilinear case, there is a privileged shape of the equation which might contain S-integrable equations.
### Acknowledgements
MAR acknowledges the support of Universidad Complutense de Madrid (Spain) under grant G/6400100/3000.
_* Prof. Decio Levi passed away at the time we were working on this article. I will always miss him as my colleague and dearest friend._
|
2306.01698 | Universality Conjectures for Activated Random Walk | Activated Random Walk is a particle system displaying Self-Organized
Criticality, in that the dynamics spontaneously drive the system to a critical
state. How universal is this critical state? We state many interlocking
conjectures aimed at different aspects of this question: scaling limits,
microscopic limits, temporal and spatial mixing, incompressibility, and
hyperuniformity. | Lionel Levine, Vittoria Silvestri | 2023-06-02T17:18:16Z | http://arxiv.org/abs/2306.01698v2 | # Universality Conjectures for Activated Random Walk
###### Abstract.
Activated Random Walk is a particle system displaying Self-Organized Criticality, in that the dynamics spontaneously drive the system to a critical state. How universal is this critical state? We state many interlocking conjectures aimed at different aspects of this question: scaling limits, microscopic limits, temporal and spatial mixing, incompressibility, and hyperuniformity.
Key words and phrases: abelian network, activated random walk, hyperuniformity, incompressibility, microscopic limit, quadrature inequality, scaling limit, spatial mixing, stationary distribution, temporal mixing
### In search of a universal model of SOC
The most intensively studied model of SOC is called the Abelian Sandpile. In this model, the pile of sand is a collection of indistinguishable particles on the vertices of a fixed graph, for example the \(d\)-dimensional cubic lattice \(\mathbb{Z}^{d}\). When a vertex has at least as many particles as the number of neighbors in the graph (\(2d\), in the case of \(\mathbb{Z}^{d}\)), it _topples_ by sending one particle to each neighboring vertex. As a result, some of those neighboring vertices may now have enough particles to topple, enabling some of their neighbors in turn to topple, and so on: an avalanche. Dhar [9] discovered a beautiful algebraic structure underlying this model.
_Abelian networks_[3] form a larger class of SOC models. Among these, the Stochastic Sandpile [10], the Oslo model [17], and Activated Random Walk [19, 37] (but not the original Abelian Sandpile!) seem to have some "universality" in the sense that when the system size is large, its behavior does not depend much on details like the initial condition, the boundary conditions, or the underlying graph. However, the meaning of "universality" is rarely spelled out. The purpose of this survey is to state precisely several senses in which one of these models, Activated Random Walk, seems to be "universal".
Activated Random Walk (ARW) is a particle system with two species, active particles (a) and sleeping particles (s) that become active if an active particle encounters them (a+s \(\to\) 2a). Active particles perform random walk at rate 1. When an active particle is alone, it falls asleep (a \(\to\) s) at rate \(\lambda\). A sleeping particle stays asleep until an active particle steps to its location. The parameter \(\lambda>0\) is called the _sleep rate_. We denote this dynamics by ARW(\(\mathbb{Z}^{d},\lambda\)).
To draw out the analogy between ARW and sandpiles: The sleeping particles play the role of sand grains, the movement of the active particles plays the role of toppling, and the awakening of sleeping particles by active particles can trigger an avalanche in which many particles wake up. The density of particles in the system plays the role of the slope of the sandpile. Just as a pile of high slope can easily be destabilized by adding a single sand grain, an ARW configuration with a high density of sleeping particles can easily be destabilized by adding a single active particle.
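The dynamics can be simulated directly. The sketch below (a minimal illustration, not the authors' code) uses the standard reformulation in which a selected active particle either attempts to fall asleep, with probability \(\lambda/(1+\lambda)\), or jumps to a uniformly random neighbor, with probability \(1/(1+\lambda)\); a sleep attempt succeeds only when the particle is alone at its site, and by the Abelian property the law of the final stable configuration does not depend on the order of these moves. Started from \(n\) active particles at the origin of \(\mathbb{Z}^{2}\), it produces the point-source aggregates discussed in Section 2 (Figure 1).

```python
# Minimal ARW stabilization sketch on Z^2 (sleep-or-jump reformulation; illustrative only).
import random

def stabilize_point_source(n, sleep_rate, seed=0):
    rng = random.Random(seed)
    active = {(0, 0): n}            # site -> number of active particles
    asleep = set()                  # sites holding one sleeping particle
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while active:
        x = next(iter(active))      # any unstable site; the order is irrelevant (abelianness)
        if rng.random() < sleep_rate / (1.0 + sleep_rate):
            if active[x] == 1:      # a lone active particle falls asleep
                del active[x]
                asleep.add(x)
            # with two or more particles present the sleep attempt has no effect
        else:
            dx, dy = rng.choice(moves)
            y = (x[0] + dx, x[1] + dy)
            active[x] -= 1
            if active[x] == 0:
                del active[x]
            if y in asleep:         # an arriving walker wakes the sleeping particle
                asleep.remove(y)
                active[y] = active.get(y, 0) + 1
            active[y] = active.get(y, 0) + 1
    return asleep

aggregate = stabilize_point_source(n=1000, sleep_rate=0.5)
print(len(aggregate), "sleeping particles; max L1 radius:",
      max(abs(a) + abs(b) for a, b in aggregate))
```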
### Plan of the paper
We discuss ARW dynamics in six settings, differing in the initial condition, underlying graph, or boundary conditions. Our conjectures focus on the approach to criticality, and on shared properties of the corresponding critical states. Sections 2 and 3 consider finite particle configurations in infinite volume. Our conjectures touch on the existence of scaling limits (Conjectures 1, 2, 6), microscopic limits (Conjectures 3, 5, 7), and extra symmetry acquired in the limit.
In Section 4 we discuss infinite (stationary ergodic) particle configurations on \(\mathbb{Z}^{d}\). We conjecture existence of a microscopic limit as the threshold density is approached from below (Conjecture 9).
In Sections 5, 6, 7 we consider three different Markov chains on ARW stable configurations on a finite graph. The main themes here are temporal mixing (the system quickly forgets its initial condition: Conjectures 19,24), spatial mixing
(the boundary condition does not affect observables in the bulk: Question 14, Conjecture 15), and a slow-to-fast phase transition (Question 22, Conjecture 25).
Sections 8 and 9 discuss statistical properties of these ARW systems: hyperuniformity (Conjectures 26, 27, Question 28) and site correlations (Tables 1,2,3).
Several conjectures on shared properties of critical or stationary states in the different settings are offered throughout the article (see Conjectures 12,17,21, Question 14 and Proposition 18).
We conclude in Section 10 by contrasting the conjectured behavior of ARW with what is known about the Abelian Sandpile model.
## 2. Point Source
### Spherical limit shape
Consider \(\operatorname{ARW}(\mathbb{Z}^{d},\lambda)\) with initial configuration \(n\delta_{0}\), consisting of \(n\) active particles at the origin and all other sites empty. After a dynamical phase in which each particle performs random walk, and may fall asleep and be awakened many times, activity will die out when all particles fall asleep at distinct sites. We refer to the final configuration of \(n\) sleeping particles as the _ARW aggregate_ (Figure 1).
**Conjecture 1**.: (Aggregate density \(\zeta_{\text{a}}\)) _Let \(A_{n}\) denote the random set of sites visited by at least one walker during the dynamical phase of ARW started from \(n\) particles at the origin in \(\mathbb{Z}^{d}\)._
_There exists a positive constant \(\zeta_{\text{a}}=\zeta_{\text{a}}(\mathbb{Z}^{d},\lambda)\) such that for any \(\epsilon>0\), with probability tending to \(1\) as \(n\to\infty\), the random set \(A_{n}\) contains all sites of \(\mathbb{Z}^{d}\) that belong to the origin-centered Euclidean ball of volume \((1-\epsilon)n/\zeta_{\text{a}}\); and \(A_{n}\) is contained in the origin-centered Euclidean ball of volume \((1+\epsilon)n/\zeta_{\text{a}}\)._
Figure 1. ARW aggregates formed by stabilizing a point source of \(10000\) active particles at the origin in the square lattice \(\mathbb{Z}^{2}\), at three different sleep rates \(\lambda\). Each pixel represents a site of \(\mathbb{Z}^{2}\): sites with a sleeping particle are colored blue, and empty sites are colored white. Particles spread farther when the sleep rate is lower, so the aggregate density \(\zeta_{\text{a}}\) is an increasing function of \(\lambda\).
A weak form of Conjecture 1 in dimension 1 is proved in [33]. As the sleep rate \(\lambda\uparrow\infty\), Activated Random Walk degenerates to Internal DLA, whose limit shape is proved to be a Euclidean ball [26]. The main barrier to applying Internal DLA methods is proving that sleeping particles are spread uniformly, which is the topic of our next conjecture.
### Macroscopic structure of the aggregate
An _ARW configuration_ in \(\mathbb{Z}^{d}\) is a map
\[\eta:\mathbb{Z}^{d}\to\mathbb{N}\cup\{\mathtt{s}\}\]
where \(\eta(x)=\mathtt{s}\) indicates that there is a sleeping particle at \(x\in\mathbb{Z}^{d}\), and \(\eta(x)=k\) indicates that there are \(k\) active particles at \(x\). We write
\[\mathtt{S}:(\mathbb{N}\cup\{\mathtt{s}\})^{\mathbb{Z}^{d}}\to\{0,\mathtt{s} \}^{\mathbb{Z}^{d}}\]
for the operation of _stabilizing_ an ARW configuration \(\eta\): running ARW dynamics until all particles fall asleep. 1
Footnote 1: \(\mathtt{S}(\eta)\) is always defined if \(\eta\) has finitely many particles, but it may be undefined in general. The situation of infinite \(\eta\) is discussed in Section 4.
Consider \(\mathtt{S}(n\delta_{0})\), the ARW aggregate formed by stabilizing \(n\) particles at the origin in \(\mathbb{Z}^{d}\). We will rescale the aggregate and take a limit as \(n\to\infty\). For \(x\in\mathbb{R}^{d}\) let
\[a_{n}(x)=1_{\left\{\mathtt{S}(n\delta_{0})(\lfloor n^{1/d}x\rfloor)=\mathtt{ s}\right\}}.\]
Write \(f_{n}\stackrel{{*}}{{\to}}f\) for weak-\(*\) convergence: \(\int f_{n}\phi\,\mathrm{d}x\to\int f\phi\,\mathrm{d}x\) for all bounded continuous test functions \(\phi\) on \(\mathbb{R}^{d}\), where \(\,\mathrm{d}x\) is Lebesgue measure on \(\mathbb{R}^{d}\).
**Conjecture 2**.: (Uniformity of the aggregate) _The rescaled ARW aggregates \(a_{n}\) satisfy_
\[a_{n}\stackrel{{*}}{{\to}}\zeta_{a}\mathbf{1}_{B}\]
_with probability one, where \(B\) is the origin-centered ball of volume \(1/\zeta_{a}\) in \(\mathbb{R}^{d}\)._
In other words, in the weak-\(*\) scaling limit, the random locations of the sleeping particles in the aggregate blur out to a constant density \(\zeta_{a}\) everywhere in the ball.
### Microscopic structure of the aggregate
The next conjecture zooms in to the fine scale random structure of the aggregate near the origin (Figure 2).
Write \(\alpha_{n}\) for the law of the aggregate \(\mathscr{S}(n\delta_{0})\). This is a probability measure on \(\{0,\mathtt{s}\}^{\mathbb{Z}^{d}}\). We examine its marginals 2 on a finite subset of \(\mathbb{Z}^{d}\), as \(n\to\infty\).
Footnote 2: For a probability measure \(\mu\) on \(\{0,\mathtt{s}\}^{\mathbb{Z}^{d}}\) and a finite set \(V\subseteq\mathbb{Z}^{d}\), we write \(\mu|_{V}\) for the marginal distribution on \(\{0,\mathtt{s}\}^{V}\), that is \(\mu|_{V}(\xi):=\mu(\{\eta\in\{0,\mathtt{s}\}^{\mathbb{Z}^{d}}:\eta(v)=\xi(v)\ \forall v\in V\})\), for \(\xi\in\{0,\mathtt{s}\}^{V}\).
**Conjecture 3**.: (Microscopic limit of the aggregate) _For all finite \(V\subset\mathbb{Z}^{d}\) and all \(\xi\in\{0,\mathtt{s}\}^{V}\), the sequence \(\alpha_{n}|_{V}(\xi)\) converges as \(n\to\infty\)._
This conjecture would imply, by Kolmogorov's extension theorem, the existence of the infinite-volume limit
\[\alpha:=\lim_{V\uparrow\mathbb{Z}^{d}}\lim_{n\to\infty}\alpha_{n}|_{V},\]
which is a probability measure on the set of infinite stable configurations \(\{0,\mathsf{s}\}^{\mathbb{Z}^{d}}\). The outer limit is over an exhaustion of \(\mathbb{Z}^{d}\), that is, a sequence of finite sets \(V_{1}\subset V_{2}\subset\cdots\) such that \(\bigcup_{n\geq 1}V_{n}=\mathbb{Z}^{d}\). To spell the limit out: For any finite \(V\subset\mathbb{Z}^{d}\) and any configuration \(\xi\in\{0,\mathsf{s}\}^{V}\),
\[\alpha_{n}|_{V}(\xi)\to\alpha|_{V}(\xi).\]
Note the order of limits: we are restricted to a fixed window \(V\) as the size of the aggregate \(n\to\infty\). Even though \(\alpha\) is supported on configurations with an infinite number of particles, a sample from \(\alpha\) is best imagined as a tiny piece of an even larger aggregate!
**Conjecture 4**.: _The limit \(\alpha\) is invariant with respect to translations of \(\mathbb{Z}^{d}\)._
**Conjecture 5**.: _The limit \(\alpha\) is supported on configurations of density \(\zeta_{\mathsf{a}}\)._
Figure 2. An ARW aggregate of 100000 particles in \(\mathbb{Z}^{2}\) at sleep rate \(\lambda=0.25\), with a zoom-in of the microscopic structure deep inside.
## 3. Multiple sources
Let \(A\subset\mathbb{R}^{d}\) be a bounded open set satisfying \(\int_{\bar{A}\setminus A}\,\mathrm{d}x=0\) where \(\,\mathrm{d}x\) denotes \(d\)-dimensional Lebesgue measure. For \(\epsilon>0\), let \(\mathcal{S}(1_{A}\cap\epsilon\mathbb{Z}^{d})\) be the configuration of sleepers that results from starting one active particle at each point of \(A\cap\epsilon\mathbb{Z}^{d}\) and running activated random walk on \(\epsilon\mathbb{Z}^{d}\) with sleep rate \(\lambda\). The following conjectured scaling limit for ARW is inspired by Theorem 1.2 of [31], which describes the scaling limit of internal DLA in \(\mathbb{Z}^{d}\).
**Conjecture 6**.: (Quadrature inequality) _As \(\epsilon\to 0\),_
\[\mathcal{S}(1_{A}\cap\epsilon\mathbb{Z}^{d})\stackrel{{*}}{{ \rightarrow}}\zeta_{\mathrm{a}}1_{A^{*}}\]
_where \(A^{*}\) is the unique (up to measure zero) open subset of \(\mathbb{R}^{d}\) satisfying_
\[\int_{A}u\,\mathrm{d}x\geq\zeta_{\mathrm{a}}\int_{A^{*}}u\,\mathrm{d}x \tag{1}\]
_for all integrable superharmonic functions \(u\) on \(A^{*}\)._
The intuition behind this conjecture is that the uniform density \(1\) on \(A\) spreads out to uniform density \(\zeta_{\mathrm{a}}<1\) on the larger set \(A^{*}\). If \(u\) is a superharmonic function on \(A^{*}\), then the sum of the values of \(u\) at all particle locations is approximately a supermartingale, leading to (1) by optional stopping. In the case of multiple point sources, \(A^{*}\) is a smash sum of Euclidean balls [31, Theorem 1.4].
The next conjecture examines the microstructure of the aggregate near \(\mathbf{0}\).
**Conjecture 7**.: (Microstructure looks the same everywhere) _Assume \(\mathbf{0}\in A^{*}\). For any finite \(V\subset\mathbb{Z}^{d}\), the law of \(\mathcal{S}(1_{A}\cap\epsilon\mathbb{Z}^{d})|_{eV}\) has a limit as \(\epsilon\to 0\), in the sense that for any configuration \(\eta\in\{0,\mathtt{s}\}^{V}\)_
\[\mathbb{P}\big{(}\mathcal{S}(1_{A}\cap\epsilon\mathbb{Z}^{d})(\epsilon v)= \eta(v)\text{ for all }v\in V\big{)}\to\alpha|_{V}(\eta)\]
_The limiting probability measure \(\alpha\) on \(\{0,\mathtt{s}\}^{\mathbb{Z}^{d}}\) is the same as in Conjecture 3. In particular, \(\alpha\) does not depend on \(A\)._
So far we have examined initial conditions with a finite number of particles only. The next section examines infinite configurations.
## 4. Stationary Ergodic
For \(\mathrm{ARW}(\mathbb{Z}^{d},\lambda)\), start with a stationary ergodic configuration \(\eta:\mathbb{Z}^{d}\to\mathbb{N}\), where all particles are initially active. Running ARW dynamics, will all particles fall asleep? If each site of \(\mathbb{Z}^{d}\) is visited only finitely often, then we say that \(\eta\)_stabilizes_. Rolla, Sidoravicius, and Zindy proved the remarkable fact that stabilizing depends only on the mean number of particles per site
\[\zeta:=\mathbb{E}(\eta(\mathbf{0})).\]
**Theorem 8**.: (Universality of threshold density \(\zeta_{\mathrm{c}}\), [38]) _There exists a constant \(\zeta_{\mathrm{c}}=\zeta_{\mathrm{c}}(\mathbb{Z}^{d},\lambda)\) such that if \(\zeta<\zeta_{\mathrm{c}}\) then \(\eta\) stabilizes with probability \(1\), and if \(\zeta>\zeta_{\mathrm{c}}\) then with probability \(1\), \(\eta\) does not stabilize._
### Approaching the threshold from below
Theorem 8 ensures that the stabilization \(\mathcal{S}(\eta)\) is always defined if \(\zeta<\zeta_{\mathrm{c}}\). What happens to the microstructure of \(\mathcal{S}(\eta)\) as \(\zeta\uparrow\zeta_{\mathrm{c}}\)? Start with a stationary ergodic configuration \(\eta_{0}:\mathbb{Z}^{d}\to\mathbb{N}\cup\{\mathtt{s}\}\) with mean \(\zeta_{0}<\zeta_{\mathrm{c}}\), and sprinkle some extra active particles: Letting \((\xi_{t}(x))_{x\in\mathbb{Z}^{d}}\) be independent Poisson random variables with mean \(t<\zeta_{\mathrm{c}}-\zeta_{0}\), the configuration \(\eta_{0}+\xi_{t}\) stabilizes with probability \(1\).
**Conjecture 9**.: (Universal limit of subcritical measures) _Fix \(\lambda>0\) and let \(\mu_{t}\) be the law of the ARW stabilization of \(\eta_{0}+\xi_{t}\) with sleep rate \(\lambda\). There exists a limiting measure_
\[\mu:=\lim_{t\uparrow\zeta_{\mathrm{c}}-\zeta_{0}}\mu_{t}\]
_supported on configurations \(\eta\in\{0,\mathtt{s}\}^{\mathbb{Z}^{d}}\) of density \(\zeta_{\mathrm{c}}\). Moreover, \(\mu\) depends only on \(\lambda\) and not on the initial configuration \(\eta_{0}\)._
## 5. The wired Markov chain
Fix a finite set \(V\subset\mathbb{Z}^{d}\), and consider the particle system \(\operatorname{ARW}(V,\lambda)\) in which particles evolve as in ARW with sleep rate \(\lambda\), with the additional rule that when a particle exits \(V\) it is killed (i.e. removed from the system). Fix \(v\in V\). The ARW _wired Markov chain_\((w_{k})_{k\geq 0}\) on the state space \(\{0,\mathtt{s}\}^{V}\) has the update rule: add one active particle at \(v\) and stabilize, i.e.
\[w_{k+1}=\mathcal{S}_{V}(w_{k}+\delta_{v}),\]
where \(\mathcal{S}_{V}\) denotes ARW stabilization with killing of any particles that exit \(V\).
### Stationary distribution
The stationary distribution of the Markov chain \((w_{k})_{k\geq 0}\) does not depend on the choice of the site \(v\) where particles are added, as for different \(v\) the Markov transition operators commute! The next result gives an efficient way to sample exactly from the stationary distribution of this chain.
Start with the configuration \(\mathbf{1}_{V}\), consisting of one active particle on each site of \(V\), and let the particles perform \(\operatorname{ARW}(V,\lambda)\) until no active particles remain. Some particles exit the system, and the remaining particles fall asleep in \(V\). Denote by \(\mathcal{S}_{V}(\mathbf{1}_{V})\) the resulting random configuration of sleepers.
**Proposition 10** (Exact sampling, [28]).: _The law of \(\mathcal{S}_{V}(\mathbf{1}_{V})\) is the unique stationary distribution of the ARW wired Markov chain on \(V\) with sleep rate \(\lambda\)._
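Proposition 10 makes the wired stationary distribution easy to sample in practice. The following Python sketch (not from [28]; a minimal, unoptimized illustration) stabilizes the all-active configuration \(\mathbf{1}_{V}\) on a small box with killing at the boundary, under the usual conventions that a sleep attempt succeeds only when the active particle is alone on its site and that an arriving active particle wakes a sleeper. The box size and sleep rate below are arbitrary choices.

```python
import random

def stabilize_killed(L, lam, active=None, sleeping=None, rng=random):
    """Stabilize ARW on the box V = {0,...,L-1}^2 with sleep rate lam,
    killing any particle that steps outside V.  Returns the set of sites
    occupied by sleeping particles once no active particles remain."""
    if active is None:
        # default initial condition: one active particle on every site (the 1_V start)
        active = [(x, y) for x in range(L) for y in range(L)]
    active = list(active)
    sleeping = set() if sleeping is None else set(sleeping)
    count = {}                                    # total number of particles per site
    for s in list(active) + list(sleeping):
        count[s] = count.get(s, 0) + 1
    p_sleep = lam / (1.0 + lam)                   # probability that the next event is a sleep attempt
    while active:
        i = rng.randrange(len(active))            # pick a uniformly random active particle
        x, y = active[i]
        if rng.random() < p_sleep:
            if count[(x, y)] == 1:                # sleep attempts succeed only if the particle is alone
                sleeping.add((x, y))
                active[i] = active[-1]; active.pop()
            continue
        # otherwise the particle jumps to a uniformly random nearest neighbour
        active[i] = active[-1]; active.pop()
        count[(x, y)] -= 1
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        nx, ny = x + dx, y + dy
        if not (0 <= nx < L and 0 <= ny < L):
            continue                              # the particle exits V and is killed
        count[(nx, ny)] = count.get((nx, ny), 0) + 1
        active.append((nx, ny))
        if (nx, ny) in sleeping:                  # an arriving active particle wakes the sleeper
            sleeping.discard((nx, ny))
            active.append((nx, ny))
    return sleeping

if __name__ == "__main__":
    L, lam = 20, 2.0
    sample = stabilize_killed(L, lam)             # one exact sample of S_V(1_V)
    print("stationary density estimate:", len(sample) / L ** 2)
```

Calling `stabilize_killed` repeatedly gives independent samples of \(\mathcal{S}_{V}(\mathbf{1}_{V})\), whose law is the wired stationary distribution by Proposition 10; this is one way to produce pictures like Figure 4.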
For any given set \(V\subset\mathbb{Z}^{d}\) let \(\partial V\) denote its boundary and \(\#V\) denote its cardinality. Write \(|\mathcal{S}_{V}(\mathbf{1}_{V})|\) for the total number of (sleeping) particles in \(\mathcal{S}_{V}(\mathbf{1}_{V})\).
**Conjecture 11**.: (Stationary density \(\zeta_{\mathrm{s}}\)) _There exists a constant \(\zeta_{\mathrm{s}}=\zeta_{\mathrm{s}}(\mathbb{Z}^{d},\lambda)\) such that for any exhaustion \(V_{1}\subset V_{2}\subset\dots\subset\mathbb{Z}^{d}\) satisfying \(\#(\partial V_{n})/\#V_{n}\to 0\) as \(n\to\infty\),_
\[\lim_{n\to\infty}\frac{|\mathcal{S}_{V_{n}}(\mathbf{1}_{V_{n}})|}{\#V_{n}}= \zeta_{\mathrm{s}}\]
_in probability._
Figure 4. Stationary configurations for the ARW wired chain on a \(100\times 100\) box, at three different sleep rates \(\lambda\). The stationary density \(\zeta_{s}\) is an increasing function of \(\lambda\).
**Conjecture 12**.: _The critical densities from Sections 2, 4 and 5 coincide:_
\[\zeta_{\mathrm{a}}=\zeta_{c}=\zeta_{\mathrm{s}}.\]
### Infinite volume limit
Let \(\pi_{V}\) denote the stationary distribution of the ARW wired Markov chain, as defined above. For a subset \(W\subset V\), write \(\pi_{V}|_{W}\) for the restriction of \(\pi_{V}\) to \(W\).
**Conjecture 13**.: _For any fixed finite set \(W\subset\mathbb{Z}^{d}\), the measures \(\pi_{V}|_{W}\) have a limit as \(V\uparrow\mathbb{Z}^{d}\), and this limit does not depend on the exhaustion of \(\mathbb{Z}^{d}\)._
This conjecture would imply, by Kolmogorov's extension theorem, the existence of a limiting probability measure
\[\pi=\lim_{W\uparrow\mathbb{Z}^{d}}\lim_{V\uparrow\mathbb{Z}^{d}}\pi_{V}|_{W} \tag{2}\]
on the space of infinite stable configurations \(w:\mathbb{Z}^{d}\to\{0,\mathtt{s}\}\). We can then ask how this limit relates to the measures \(\alpha\) and \(\mu\) from Sections 2 and 4 above.
**Question 14**.: _Is \(\pi=\mu=\alpha\)?_
Can the wired boundary condition be felt deep inside \(V\)? We conjecture that as \(V\uparrow\mathbb{Z}^{d}\), the particle density deep inside \(V\) coincides with the overall density \(\zeta_{s}\).
**Conjecture 15**.: _For \(w\sim\pi\) we have \(\pi\{w(\mathbf{0})=\mathtt{s}\}=\zeta_{s}\)._
### The hockey stick conjecture
Write \((w_{k})_{k\geq 0}\) for the ARW wired chain on the box \(V:=[1,L]^{d}\subset\mathbb{Z}^{d}\) with initial state \(w_{0}=0\) (all sites are empty) and with _uniform driving_: instead of adding particles at a fixed vertex, we add them at a sequence of independent vertices \(v_{1},v_{2},\ldots\) with the uniform distribution on \(V\):
\[w_{k+1}=\mathcal{S}_{V}(w_{k}+\delta_{v_{k+1}}),\qquad k\geq 0.\]
When does the wired chain begin to lose a macroscopic number of particles at the boundary? A theorem of Rolla and Tournier partially answers this question. Define
\[\zeta_{w}:=\inf\Big{\{}t>0\,:\,\limsup_{L}\frac{\mathbb{E}(|w_{tL^{d}}|)}{L^{ d}}<t\Big{\}}\]
where, as usual, \(|w_{k}|\) denotes the number of particles in \(w_{k}\).
**Theorem 16**.: _[_36_, Proposition 3]__\(\zeta_{w}\geq\zeta_{c}\)._
We conjecture \(\zeta_{w}=\zeta_{\mathrm{c}}\), and that the stabilized density has the following simple piecewise linear form.
**Conjecture 17** (Hockey stick).: _The wired chain on \([1,L]^{d}\) with uniform driving satisfies_
\[\frac{|w_{tL^{d}}|}{L^{d}}\to\begin{cases}t,&t\leq\zeta_{\mathrm{c}}\\ \zeta_{\mathrm{c}},&t\geq\zeta_{\mathrm{c}}\end{cases}\]
_in probability as \(L\to\infty\), where \(\zeta_{\mathrm{c}}\) is the threshold density of Theorem 8._
The name for this conjecture comes from the graph of the piecewise linear limit, which has the shape of a hockey stick (Figure 5).
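A quick numerical illustration of the conjectured hockey stick (in the spirit of Figure 5, but in \(d=1\) to keep the sketch short and self-contained) is to drive the wired chain uniformly and record the retained density after each addition. The event loop below mirrors the earlier stabilization sketch, specialized to the interval \(\{0,\dots,L-1\}\); all parameters are illustrative.

```python
import random

def stabilize_1d(active, sleeping, L, lam, rng=random):
    """ARW on the interval {0,...,L-1} with killing outside; returns the sleeping sites."""
    active, sleeping = list(active), set(sleeping)
    count = [0] * L
    for x in active:
        count[x] += 1
    for x in sleeping:
        count[x] += 1
    p_sleep = lam / (1.0 + lam)
    while active:
        i = rng.randrange(len(active))
        x = active[i]
        if rng.random() < p_sleep:
            if count[x] == 1:                     # fall asleep only when alone on the site
                sleeping.add(x)
                active[i] = active[-1]; active.pop()
            continue
        active[i] = active[-1]; active.pop()
        count[x] -= 1
        nx = x + rng.choice((-1, 1))
        if not (0 <= nx < L):
            continue                              # killed at the boundary
        count[nx] += 1
        active.append(nx)
        if nx in sleeping:                        # wake the sleeper at the target site
            sleeping.discard(nx)
            active.append(nx)
    return sleeping

# drive the wired chain uniformly and record the retained density after each addition
L, lam = 100, 2.0
sleeping, densities = set(), []
for _ in range(int(1.2 * L)):
    sleeping = stabilize_1d([random.randrange(L)], sleeping, L, lam)
    densities.append(len(sleeping) / L)
for frac in (0.25, 0.5, 0.75, 1.0, 1.2):
    print(f"t = {frac:4.2f}: retained density = {densities[int(frac * L) - 1]:.3f}")
```

If Conjecture 17 is correct, the retained density should track \(t\) up to the threshold and then flatten out.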
**Proposition 18**.: _If Conjecture 17 holds, then \(\zeta_{w}=\zeta_{\mathrm{c}}\)._
Proof.: By Theorem 16, it suffices to prove that \(\zeta_{w}\leq\zeta_{\mathrm{c}}\). We argue by contradiction: suppose that \(\zeta_{\mathrm{c}}<\zeta_{w}\). Then there is a \(\zeta\in(\zeta_{\mathrm{c}},\zeta_{w})\) such that both
\[\limsup_{L}\frac{\mathbb{E}(|w_{\zeta L^{d}}|)}{L^{d}}\geq\zeta \tag{3}\]
since \(\zeta<\zeta_{w}\), and for any \(\epsilon>0\)
\[\mathbb{P}\bigg{(}\bigg{|}\frac{|w_{\zeta L^{d}}|}{L^{d}}-\zeta_{\mathrm{c}} \bigg{|}\geq\epsilon\bigg{)}\to 0\qquad\text{ as }L\to\infty, \tag{4}\]
since \(\zeta>\zeta_{\mathrm{c}}\) and Conjecture 17 holds true. But by (3) there exists a diverging subsequence \((L_{k})_{k\geq 1}\) such that
\[\lim_{k\to\infty}\frac{\mathbb{E}(|w_{\zeta L_{k}^{d}}|)}{L_{k}^{d}}\geq\zeta,\]
and since each term in this sequence is at most \(\zeta\) by construction, it must be that the above limit actually equals \(\zeta\). It follows that for any \(\epsilon>0\)
\[\lim_{k\to\infty}\mathbb{P}\bigg{(}\bigg{|}\frac{|w_{\zeta L_{k}^{d}}|}{L_{k} ^{d}}-\zeta\bigg{|}\geq\epsilon\bigg{)}=\lim_{k\to\infty}\mathbb{P}\bigg{(} \zeta-\frac{|w_{\zeta L_{k}^{d}}|}{L_{k}^{d}}\geq\epsilon\bigg{)}\leq\lim_{k \to\infty}\frac{1}{\epsilon}\bigg{(}\zeta-\frac{\mathbb{E}(|w_{\zeta L_{k}^{d} }|)}{L_{k}^{d}}\bigg{)}=0.\]
Figure 5. The hockey stick: As particles are added, the density of the ARW wired chain increases to \(\zeta_{\mathrm{c}}\) and then flatlines. Here \(V\subset\mathbb{Z}^{2}\) is a box of side length \(L=256\), the sleep rate is \(\lambda=2\), and \(\zeta_{\mathrm{c}}\approx 0.813\). “Global density” is the total number of particles divided by \(L^{2}\). “Bulk density” is the number of particles in a central window of side length \(L/2\), divided by \((L/2)^{2}\).
This contradicts (4) by uniqueness of the limit in probability.
### Fast mixing
Consider the ARW wired chain in a discrete Euclidean ball \(V=\{x\in\mathbb{Z}^{d}\,:\,\sum x_{i}^{2}<L^{2}\}\) with uniform driving. We highlight the following conjecture from [28] to the effect that this chain mixes immediately after reaching the stationary density \(\zeta_{\text{s}}\).
**Conjecture 19**.: (Cutoff, [28]) _The ARW wired chain has cutoff in total variation at the time_
\[t_{mix}=\zeta_{\text{s}}\#V.\]
In [28] it is shown that \(t_{mix}\leq(1+o(1))\#V\). The proof uses a coupling between Activated Random Walk and Internal DLA. Bristiel and Salez [7] show that the relaxation time is much smaller: \(O(L^{d-1})\) in dimensions \(d\neq 2\) and \(O(L\log L)\) in dimension \(2\). They also prove separation cutoff at time \(\#V\).
### Incompressibility
A recurring challenge in proving several of the above conjectures is to show that "dense clumps" are unlikely. We conjecture that clumps denser than the mean in the infinite-volume stationary state \(w\sim\pi\) have exponentially small probability. Write \(|w|_{L}:=\sum_{x\in[1,L]^{d}}1_{\{w(x)=\texttt{s}\}}\) for the number of particles in the cube \([1,L]^{d}\).
**Conjecture 20**.: (Incompressibility) _For each \(\zeta>\zeta_{\text{s}}\), there is a constant \(c=c(\zeta,\lambda)>0\) such that for \(w\sim\pi\)_
\[P(|w|_{L}\geq\zeta L^{d})\leq\exp(-cL^{d}).\]
The ideas introduced in [1, 16] may be useful in proving incompressibility for \(\zeta\) sufficiently close to \(1\).
## 6. The free Markov chain
Fix a finite connected graph \(V\), an initial configuration \(\phi_{0}:V\to\{0,\texttt{s}\}\), and let
\[\phi_{k+1}=\mathcal{S}(\phi_{k}+\delta_{v_{k+1}})\]
be the configuration of sleeping particles obtained by adding one active particle at a random vertex \(v_{k+1}\), and then stabilizing by ARW dynamics in \(V\) with sleep rate \(\lambda\). The vertices \(v_{1},v_{2},\dots\) are independent with the uniform distribution on \(V\).
Unlike the ARW wired Markov chain in Section 5, particles cannot escape \(V\). So the total number of particles is deterministic: \(|\phi_{k}|=|\phi_{0}|+k\). As long as this number does not exceed \(\#V\), stabilization happens in finite time, but if the number of particles is large then it could take a long time (even exponentially long, [5])! We will define the _threshold time_ as the first time \(k\) such that \(\phi_{k}\) takes "too long" to stabilize.
Let \((\phi_{k})_{k\geq 0}\) denote the ARW free chain on \(V\) initiated from the empty configuration \(\phi_{0}=0\). For \(k\geq 0\), denote by \(U_{k}\) the total number of random walk steps needed to stabilize \(\phi_{k}+\delta_{v_{k+1}}\). For any function \(f:\mathbb{N}\to\mathbb{R}\), let
\[\tau_{f}(V)=\inf\{k\geq 0\,:\,U_{k}\geq f(\#V)\}.\]
**Conjecture 21**.: (Concentration of the threshold time) _Let \(V=\mathbb{Z}_{L}^{d}\) be the \(d\)-dimensional torus of side-length \(L\). There exists a superlinear function \(f:\mathbb{N}\to\mathbb{R}\) such that, as \(L\uparrow\infty\),_
\[\lim_{L\uparrow\infty}\frac{\tau_{f}(\mathbb{Z}_{L}^{d})}{L^{d}}=\zeta_{\rm c}\]
_in probability, where \(\zeta_{\rm c}\) is the threshold density of Section 4._
A stronger formulation would posit a sharp transition from linear to exponential time:
**Question 22**.: _Is it true that_
\[U_{tL^{d}}=\begin{cases}O(L^{d}),&t<\zeta_{c}\\ \exp(\Omega(L^{d})),&t>\zeta_{c}?\end{cases}\]
## 7. The Wake Markov Chain
Fix a finite connected graph \(V\) and an initial configuration \(\varphi_{0}:V\to\{0,\mathtt{s}\}\) with \(|\varphi_{0}|\leq\#V\). Let \(\mathcal{W}\) denote the operator that acts on stable particle configurations on \(V\) by waking all particles up. The ARW wake Markov chain, supported on stable particle configurations on \(V\), is defined by
\[\varphi_{k+1}=\mathcal{S}(\mathcal{W}(\varphi_{k})).\]
So in one time step of the wake chain, we wake all particles up and then stabilize. Note that stabilization is always possible, though it may take a long time, since \(|\varphi_{k}|=|\varphi_{0}|\leq\#V\) for all \(k\geq 0\).
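For small systems the wake chain is easy to simulate directly. The sketch below implements it on the cycle \(\mathbb{Z}_{L}\) (the \(d=1\) torus), using the same event-loop conventions as the earlier stabilization sketches but with periodic boundary and no killing; it also records the left-half particle count, in the spirit of the Figure 9 experiment. Density, sleep rate and chain length are arbitrary illustrative choices.

```python
import random

def stabilize_cycle(active, sleeping, L, lam, rng=random):
    """ARW on the cycle Z_L (no killing); terminates since the particle number is at most L."""
    active, sleeping = list(active), set(sleeping)
    count = [0] * L
    for x in active:
        count[x] += 1
    for x in sleeping:
        count[x] += 1
    p_sleep = lam / (1.0 + lam)
    while active:
        i = rng.randrange(len(active))
        x = active[i]
        if rng.random() < p_sleep:
            if count[x] == 1:
                sleeping.add(x)
                active[i] = active[-1]; active.pop()
            continue
        active[i] = active[-1]; active.pop()
        count[x] -= 1
        nx = (x + rng.choice((-1, 1))) % L        # periodic boundary
        count[nx] += 1
        active.append(nx)
        if nx in sleeping:
            sleeping.discard(nx)
            active.append(nx)
    return sleeping

def wake_chain_step(state, L, lam):
    """One step of the wake chain: wake every particle up, then stabilize."""
    return stabilize_cycle(list(state), set(), L, lam)

L, lam, zeta = 100, 2.0, 0.5
state = stabilize_cycle([random.randrange(L) for _ in range(int(zeta * L))], set(), L, lam)
left_counts = []
for _ in range(200):                              # run the wake chain for 200 steps
    state = wake_chain_step(state, L, lam)
    left_counts.append(sum(1 for x in state if x < L // 2))
mean = sum(left_counts) / len(left_counts)
var = sum((c - mean) ** 2 for c in left_counts) / len(left_counts)
print(f"particles (conserved): {len(state)}, left-half count: mean {mean:.1f}, variance {var:.1f}")
```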
### Stationary measure
Take \(V=\mathbb{Z}_{L}^{d}\) and let \(\nu_{L,\zeta}\) denote the stationary measure of the ARW wake chain on \(V\). Denote further by \(\tilde{\nu}_{L,\zeta}\) the law of the ARW free chain \((\phi_{k})_{k\geq 0}\) at time \(k=\zeta L^{d}\). Note that sleeper configurations drawn according to \(\nu_{L,\zeta}\) and \(\tilde{\nu}_{L,\zeta}\) have the same density \(\zeta\) (in fact, the same number of particles). It is natural to conjecture that in the supercritical regime these measures are close.
**Conjecture 23**.: _If \(\zeta>\zeta_{c}\) then \(d_{\rm TV}\bigl{(}\nu_{L,\zeta},\tilde{\nu}_{L,\zeta}\bigr{)}=o(1)\) as \(L\to\infty\), where \(d_{\rm TV}\) denotes the total variation distance._
This would follow from the following, more general conjecture.
**Conjecture 24**.: (Dense stabilized configurations are hard to distinguish) _Fix the dimension \(d\) and sleep rate \(\lambda\), and let \(\zeta_{c}\) be the threshold density of Theorem 8. For each \(\zeta>\zeta_{c}\) and \(\epsilon>0\) there is an \(L_{0}\) such that for all \(L\geq L_{0}\) and any two configurations \(\eta,\tilde{\eta}\) of active particles on the discrete torus \(\mathbb{Z}_{L}^{d}\) with \(|\eta|=|\tilde{\eta}|\geq\zeta L^{d}\), their stabilizations \(\mathcal{S}(\eta),\mathcal{S}(\tilde{\eta})\) satisfy_
\[d\,_{\rm TV}(\mathcal{S}(\eta),\mathcal{S}(\tilde{\eta}))<\epsilon.\]
The underlying mechanism here is that the system takes a long time to stabilize. In particular, it is known that for small enough sleep rate, stabilization takes exponentially many steps in \(L\) with high probability: This was proved in dimension 1 by Basu, Ganguly, Hoffman and Richey [5], and recently in all dimensions by
Forien and Gaudilliere [16, Theorem 3]. Their result plus a coupling argument proves Conjecture 24 for \(\lambda\) small enough: One can couple the trajectories of any two particles in the ARW systems starting from \(\eta\) and \(\tilde{\eta}\) so that they will meet prior to stabilization with high probability. Provided all these couplings are successful, the processes stabilize to the same configuration \(\mathcal{S}(\eta)=\mathcal{S}(\tilde{\eta})\).
### Mixing time
How long does it take for the ARW wake chain to reach stationarity? We conjecture a transition from slow to instantaneous mixing at the threshold density \(\zeta_{c}\).
**Conjecture 25**.: _Let \(\eta\) be any stable configuration on the \(d\)-dimensional torus \(\mathbb{Z}_{L}^{d}\) on \(L^{d}\) vertices, and denote by \(\zeta=|\eta|/L^{d}\) its particle density. Then the total variation mixing time of the ARW wake chain starting from \(\eta\) is \(1\) if \(\zeta>\zeta_{c}\), while it is \(\Omega(L^{2})\) if \(\zeta<\zeta_{c}\), where \(\zeta_{c}\) is the threshold density of Theorem 8._
The first part of this conjecture, fast mixing at high density, would follow directly from Conjecture 24.
## 8. Hyperuniformity
Experiments suggest that the stationary states for the Activated Random Walk Markov chains introduced in Sections 5, 6 and 7 above are _hyperuniform_.
### The wired chain
For a random configuration \(\eta\in\{0,\mathtt{s}\}^{\mathbb{Z}^{d}}\) with \(\eta\sim\pi\), write \(|\eta|_{L}\) for the total number of particles in the cube \([1,L]^{d}\).
**Conjecture 26**.: _Under \(\pi\), the variance of \(|\eta|_{L}\) is \(O(L^{\alpha})\) as \(L\to\infty\), for some \(\alpha<d\)._
Figure 6. Site covariance of the ARW Wake Markov Chain on the torus \(\mathbb{Z}_{L}\times\mathbb{Z}_{L}\) with \(L=2501\), sleep rate \(1\), and subcritical density \(0.3\) (much less than \(\zeta_{c}\approx 0.68\)). Left to right: covariances after \(k=1,2,10\) steps of the wake chain from a uniform random initial condition with \(\left\lfloor 0.3L^{2}\right\rfloor\) particles. Each non-central site is shaded according to the covariance between the events that a sleeping particle is located at that site and at the central site. Initial positive correlations with nearby sites become negative after a few time steps.
For comparison, observe that if the particles are placed in the box \([1,L]^{d}\) in an i.i.d. fashion, the variance is of order \(L^{d}\). Thus hyperuniformity implies a kind of rigid repulsion among particles: in order to make the variance of the number of particles in the box \([1,L]^{d}\) grow sublinearly with its volume \(L^{d}\), the particle counts \(\eta(v)\) for \(v\in[1,L]^{d}\) must have significant negative correlations [18].
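Conjecture 26 can be probed numerically by measuring the variance of the particle count in boxes of increasing size. The sketch below is a minimal estimator of that variance on the torus, pooling all window positions and samples; it is demonstrated on i.i.d. Bernoulli configurations (where the variance grows like the window volume), and one would instead feed in stabilized ARW configurations to look for sublinear growth.

```python
import numpy as np

def box_count_variance(samples, ells):
    """Variance of the particle count in an ell x ell window on the torus, for each ell.

    `samples` has shape (n_samples, L, L) with 0/1 entries; all translated window
    positions and all samples are pooled into one variance estimate per ell."""
    samples = np.asarray(samples, dtype=float)
    F = np.fft.fft2(samples, axes=(1, 2))
    out = []
    for ell in ells:
        kernel = np.zeros(samples.shape[1:])
        kernel[:ell, :ell] = 1.0                  # indicator of the ell x ell window
        counts = np.fft.ifft2(F * np.conj(np.fft.fft2(kernel)), axes=(1, 2)).real
        out.append(counts.var())
    return np.array(out)

# demo on i.i.d. Bernoulli(0.5): the variance grows like ell^2 (no hyperuniformity)
rng = np.random.default_rng(0)
demo = (rng.random((50, 128, 128)) < 0.5).astype(float)
ells = [4, 8, 16, 32]
print(dict(zip(ells, box_count_variance(demo, ells).round(1))))
```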
Burdzy has proved hyperuniformity in a related particle system called the Meteor Model [8]. The challenge in adapting his proof to ARW lies in replacing the i.i.d. driving of the meteors with the correlated driving that results from active particles waking sleeping particles.
### The free chain
For the free chain the number of particles increases by one at each time step, and we expect hyperuniformity to manifest starting at the threshold density \(\zeta_{c}\). To state a hyperuniformity conjecture for the free chain, we will count particles in a box \(B\subset\mathbb{Z}_{L}^{d}\). Write \((\phi_{k})_{k\geq 0}\) for the ARW free chain on the torus \(\mathbb{Z}_{L}^{d}\), and write \(|\phi_{k}|_{B}\) for the total number of particles in \(B\) at time \(k\).
Figure 7. The onset of hyperuniformity in the ARW wired chain on a square of side length \(L=50\) at sleep rate \(\lambda=2\). The variance \(\mathbb{E}|\eta_{t}|^{2}-(\mathbb{E}|\eta_{t}|)^{2}\) of the total number of particles in the system increases and then levels off after time \(t=\zeta_{s}L^{2}\). The variance of the number of particles in the bulk (central square of side length \(L/2\)) peaks and then drops steeply as \(t/L^{2}\) approaches the stationary density \(\zeta_{s}\approx 0.81\).
**Conjecture 27**.: (Onset of hyperuniformity in the free chain) _There exists \(\epsilon>0\) such that for any box \(B=[0,\ell_{1}-1]\times\cdots\times[0,\ell_{d}-1]\subset\mathbb{Z}_{L}^{d}\) we have_
\[\operatorname{Var}(|\phi_{\zeta L^{d}}|_{B})=\begin{cases}\Theta(\ell_{1}\dots \ell_{d}),&\zeta<\zeta_{c}\\ O((\ell_{1}\cdots\ell_{d})^{1-\epsilon}),&\zeta\geq\zeta_{c}.\end{cases}\]
_The implied constants depend only on \(d,\lambda,\zeta\)._
### The wake chain
Write \(\nu_{L,\zeta}\) for the stationary distribution of the ARW wake chain of \(\left\lfloor\zeta L^{d}\right\rfloor\) particles on the discrete torus \(\mathbb{Z}_{L}^{d}\). For a stationary configuration \(\eta\sim\nu_{L,\zeta}\), we can once again count the number of particles in a box \(B=\prod_{i=1}^{d}[0,\ell_{i}-1]\) and ask whether the variance is linear or sublinear in the size of \(B\).
**Question 28**.: _Is there \(\epsilon>0\) such that the stationary state \(\eta\) of the ARW wake chain satisfies_
\[\operatorname{Var}_{\eta\sim\nu_{L,\zeta}}(|\eta|_{B})=\begin{cases}\Theta( \ell_{1}\dots\ell_{d}),&\zeta<\zeta_{c}\\ O((\ell_{1}\cdots\ell_{d})^{1-\epsilon}),&\zeta\geq\zeta_{c}?\end{cases}\]
Figure 8. The onset of hyperuniformity in the ARW free chain on the discrete torus \(\mathbb{Z}_{L}\times\mathbb{Z}_{L}\) with \(L=50\) and sleep rate \(\lambda=2\). The initial configuration of \(k\) active particles at independent uniformly distributed sites stabilizes to a final configuration of \(k\) sleeping particles on the torus. The variance of the number of sleeping particles in the left half \(\mathbb{Z}_{L/2}\times\mathbb{Z}_{L}\) initially increases with \(k\), then peaks around \(k=0.66L^{2}\), and bottoms out around \(\zeta_{c}L^{2}\) where \(\zeta_{c}\approx 0.81\).
Figure 9 tests the case \(d=1\) at density \(\zeta=1/2\) and two different sleep rates: \(\zeta_{c}(0.2)>1/2\) and \(\zeta_{c}(0.15)\approx 1/2\).
## 9. Site correlations
To address Question 14 (Is \(\pi=\mu=\alpha\)?) we performed experiments comparing the site correlations in the ARW free chain, ARW wired chain, and ARW point source aggregate. For the free chain on the torus \(\mathbb{Z}_{L}\times\mathbb{Z}_{L}\) we are able to average with respect to translation and reflection symmetries of the torus to increase the precision of the numerical estimates. For the wired chain and point source, only reflection symmetries are available, so precision is lower.
### Free site correlations
For small \(x,y\) we computed the empirical correlation coefficient
\[\frac{\mathbb{E}(1_{f}(0,0)1_{f}(x,y))-\zeta^{2}}{\zeta-\zeta^{2}}\]
where \(1_{f}(x,y)\) is the indicator of the event that site \((x,y)\) has a sleeping particle in the stabilization of \(\zeta L^{2}\) particles started at independent uniform random sites on the torus \(\mathbb{Z}_{L}\times\mathbb{Z}_{L}\) with side length \(L=63\) at sleep rate \(\lambda=2\) and density
Figure 9. The ARW wake chain on the cycle \(\mathbb{Z}_{N}\) starting with \(N/2\) particles at uniformly random sites and run for \(100N\) time steps. The variance of the number of particles on the left half of the cycle grows linearly with \(N\) at the subcritical sleep rate \(\lambda=0.2\), but sublinearly with \(N\) at the critical sleep rate \(\lambda=0.15\). (Here \(\zeta=1/2\) is less than \(\zeta_{c}(0.2)\) and approximately equal to \(\zeta_{c}(0.15)\).)
\(\zeta=0.81\approx\zeta_{c}\). We averaged over 80000 independent samples, and over translation and reflection symmetries of the torus. The results are shown in Table 1.
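The translation-averaged correlation table can be assembled efficiently with FFTs. The snippet below is a minimal sketch of that computation; it is fed i.i.d. Bernoulli configurations as a stand-in for actual stabilized ARW samples, for which the off-origin entries should be close to zero.

```python
import numpy as np

def site_correlations(samples):
    """Translation-averaged site correlation coefficients on the L x L torus.

    `samples` has shape (n_samples, L, L) with entries 0/1 (1 = sleeping particle).
    Entry (x, y) of the output estimates (E[1(0,0) 1(x,y)] - zeta^2) / (zeta - zeta^2)."""
    samples = np.asarray(samples, dtype=float)
    n, L, _ = samples.shape
    zeta = samples.mean()
    f = np.fft.fft2(samples, axes=(1, 2))
    # circular autocorrelation via FFT, then average over translations (divide by L^2)
    acf = np.fft.ifft2(f * np.conj(f), axes=(1, 2)).real / (L * L)
    second_moment = acf.mean(axis=0)              # average over the sample ensemble
    return (second_moment - zeta ** 2) / (zeta - zeta ** 2)

# demo on i.i.d. Bernoulli(0.81) configurations: off-origin correlations should be ~0
rng = np.random.default_rng(0)
demo = (rng.random((500, 63, 63)) < 0.81).astype(float)
print(site_correlations(demo)[:3, :3].round(3))
```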
### Wired site correlations
For small \(x,y\) we computed the empirical correlation coefficient
\[\frac{\mathbb{E}(1_{w}(0,0)1_{w}(x,y))-\zeta^{2}}{\zeta-\zeta^{2}}\]
where \(1_{w}(x,y)\) is the indicator of the event that site \((x,y)\) has a sleeping particle in the stationary state \(\mathcal{S}(1_{V})\) of the ARW wired chain on the square \(V=[-31,31]^{2}\subset\mathbb{Z}^{2}\) at sleep rate \(\lambda=2\), and \(\zeta=0.81\approx\zeta_{s}\). We averaged over \(5\cdot 10^{7}\) independent samples, and over the \(D_{8}\) symmetry of the square lattice. The results are shown in Table 2.
### Point source site correlations
For small \(x,y\) we computed the empirical correlation coefficient
\[\frac{\mathbb{E}(1_{a}(0,0)1_{a}(x,y))-\zeta^{2}}{\zeta-\zeta^{2}}\]
where \(1_{a}(x,y)\) is the indicator of the event that site \((x,y)\) has a sleeping particle in the stabilization of \(n=3215\) particles started at \((0,0)\), and \(\zeta=0.81\approx\zeta_{a}\). This value of \(n\) was chosen to make the total number of particles match the free chain experiment described above. We averaged over \(10^{5}\) independent samples, and over the \(D_{8}\) symmetry of the square lattice. The results are shown in Table 3.
Comparing Tables 1 and 2, the spatial decay of correlations is faster in the wired chain than in the free chain. Is this an artifact of the small system size, or are the limiting measures \(\mu\) and \(\pi\) different? Comparing Tables 2 and 3, the short range
\begin{table}
\begin{tabular}{c|r r r r r r} \(y\backslash x\) & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline
0 & 1.0000 & -0.0238 & -0.0101 & -0.0061 & -0.0045 & -0.0037 \\
1 & & -0.0139 & -0.0082 & -0.0056 & -0.0044 & -0.0037 \\
2 & & & -0.0063 & -0.0048 & -0.0040 & -0.0035 \\
3 & & & & -0.0042 & -0.0037 & -0.0034 \\
4 & & & & & -0.0034 & -0.0032 \\
5 & & & & & & -0.0031 \\ \end{tabular}
\end{table}
Table 1. Short-range correlations for the ARW free Markov chain on the discrete torus \(\mathbb{Z}_{63}\times\mathbb{Z}_{63}\).
\begin{table}
\begin{tabular}{c|r r r r} \(y\backslash x\) & 0 & 1 & 2 & 3 \\ \hline
0 & 1.000 & -0.021 & -0.008 & -0.003 \\
1 & & -0.012 & -0.006 & -0.003 \\
2 & & & -0.004 & -0.002 \\
3 & & & & -0.001 \\ \end{tabular}
\end{table}
Table 2. Short-range correlations for the ARW wired Markov chain on a square of side length 63 in \(\mathbb{Z}^{2}\).
correlations are consistent with the measures \(\pi\) and \(\alpha\) being equal, but the low precision prevents us from conjecturing this confidently.
### Wake chain site correlations
The ARW wake chain has a family of stationary distributions, one for each density. We wondered whether the site correlations depend on the density. The answer appears to be yes, as shown in Figure 10. For three different densities \(\zeta\in\{0.2,0.6,0.7\}\) we computed the empirical correlation coefficient
\[\frac{\mathbb{E}(1_{z}(0,0)1_{z}(x,y))-\zeta^{2}}{\zeta-\zeta^{2}}\]
where \(1_{z}(x,y)\) is the indicator of the event that site \((x,y)\) has a sleeping particle in the stationary distribution of the ARW wake chain at density \(\zeta\) on the torus \(\mathbb{Z}_{15}\times\mathbb{Z}_{15}\) with sleep rate 1. After a burn-in period to reach stationarity, we averaged over 100000 subsequent time steps of the ARW wake chain and over translation symmetries of the torus. Short-range correlations are negative at all densities, and nothing notable seems to happen at the threshold density \(\zeta_{c}\approx 0.68\). The strongest nearest-neighbor site correlation occurs at a density somewhat less than \(\zeta_{c}\).
Figure 10. Short-range correlations of the ARW Wake Markov Chain at three different densities \(\zeta=0.2,0.6,0.7\). Each non-central site is shaded according to the correlation coefficient between the events that a sleeping particle is located at that site and at the central site.
\begin{table}
\begin{tabular}{c|r r r} \(y\backslash x\) & 0 & 1 & 2 \\ \hline
0 & 1.000 & -0.021 & -0.007 \\
1 & & -0.010 & -0.006 \\
2 & & & -0.000 \\ \end{tabular}
\end{table}
Table 3. Short-range correlations for the ARW point source aggregate of 3215 particles in \(\mathbb{Z}^{2}\).
## 10. Contrasts with the Abelian sandpile model
We close by comparing Activated Random Walk to the Abelian Sandpile, and in particular highlighting which results, among the ones we believe to hold for ARW, are known to fail for the sandpile model.
### Point Source
Pegden and Smart [35] proved existence of a limit shape for the point source Abelian Sandpile in \(\mathbb{Z}^{d}\). The scaling limit of the Abelian sandpile on \(\mathbb{Z}^{2}\) obeys a PDE that is not rotationally symmetric [29, 30], so its limit shape from a point source is unlikely to be a Euclidean disk (although it has not been formally proved not to be a disk!). One symptom of the failure of universality in the Abelian Sandpile is the existence of "dense clumps" in the point-source sandpile. These are macroscopic regions whose density is higher than the average density of the whole pile. By contrast, we believe Activated Random Walkers are incompressible (Conjecture 20).
### Stationary Ergodic
The Abelian Sandpile in \(\mathbb{Z}^{d}\) for \(d\geq 2\) has an interval of threshold densities: any density between \(d\) and \(2d-1\) can be threshold, depending on the law of the initial configuration [14, 34, 12, 15]. By contrast, Activated Random Walk at a given sleep rate has a single threshold density (Theorem 8).
Conjecture 9 fails for the Abelian Sandpile, due to slow mixing: the sandpile stabilization of \(\eta_{0}+\xi_{t}\) retains a memory of its initial state \(\eta_{0}\) even as \(t\uparrow\zeta_{\mathrm{c}}-\zeta_{0}\)[27]. Terms like "the self-organized critical state" result in a lot of confusion in the physics literature on the Abelian Sandpile because there are many such states! One of them, the limit of the uniform recurrent state, is amenable to exact calculations [40, 25, 32, 24], but slow driving from a subcritical state will usually produce a critical state with different properties (e.g. different density).
### Wired Markov Chain
Fast mixing of ARW stands in contrast to the slow mixing of the Abelian sandpile, where \(t_{mix}=\Theta(L^{2}\log L)\) for the wired chain on \((\mathbb{Z}/L\mathbb{Z})^{2}\)[20] and on \([1,L]^{2}\)[21]. This logarithmic factor is responsible for the discrepancy between the stationary density \(\zeta_{s}=2.125000\) and the threshold density \(\zeta_{c}=2.125288\) observed in [11].
To approach Conjectures 11-13 and Question 14 it may be useful to find a combinatorial description of the ARW stationary distribution. An important tool available for the Abelian sandpile, which has no counterpart yet in the ARW setting, is a bijection between recurrent states and spanning trees. This bijection is useful because of the well-developed theory of infinite-volume limits like (2) for trees [6]. The bijection from sandpiles to trees plays a starring role in Athreya and Jarai's proof that the uniform recurrent sandpile on a finite set \(V\subset\mathbb{Z}^{d}\) has an infinite-volume limit [2], and in Jarai and Redig's study of infinite-volume sandpile dynamics [23]. Hutchcroft used the bijection with spanning trees to prove universality results for high-dimensional sandpiles [22].
### Free Markov Chain
The Hockey Stick Conjecture 17 is believed to be false for the Abelian Sandpile due to its slow mixing. There is, however, a weaker conjectured relationship between the sandpile free and wired chains: the threshold
time of the free chain coincides with the first time when a macroscopic number of particles exit the wired chain [13].
## Acknowledgments
We thank Ahmed Bou-Rabee, Hannah Cairns, Deepak Dhar, Shirshendu Ganguly, Chris Hoffman, Feng Liang, SS Manna, Pradeep Mohanty, Leonardo Rolla, Vladas Sidoravicius, and Lorenzo Taggi for many inspiring conversations. Thanks to Chris Hoffman for pointing out that Conjecture 11 requires a condition on the boundary of \(V_{n}\), and that Conjecture 17 requires a condition on the driving. This project was partly supported by the Funds for joint research Cornell-Sapienza. LL was partly supported by the NSF grant DMS-1105960 and IAS Von-Neumann Fellowship. We thank Cornell University, Sapienza University, IAS and ICTS-TIFR for their hospitality.
|
2301.12125 | Coherent Curvature Radiation Spectrum by Dynamically Fluctuating Bunches
in Magnetospheres | Coherent curvature radiation by charged bunches has been discussed as the
radiation mechanism for radio pulsars and fast radio bursts. Important issues
for this radiation mechanism include how the bunches form and disperse in the
magnetosphere of a pulsar or magnetar. More likely, bunches form and disperse
continuously and it remains unclear what the spectral features are for these
fluctuating bunches. In this work, we consider that the bunches in a
magnetosphere have a formation rate of $\lambda_B$, a lifetime of $\tau_B$, and
a typical Lorentz factor of $\gamma$, and analyze the spectral features of
coherent curvature radiation by these fluctuating bunches. We find that the
emission spectrum by a single fluctuating bunch is suppressed by a factor of
$\sim(\lambda_B\tau_B)^2$ compared with that of a single persistent bunch, and
there is a quasi-white noise in a wider band in the frequency domain. The
high-frequency cutoff of the spectrum is at $\sim\max(\omega_{\rm
peak},2\gamma^2/\tau_B)$, where $\omega_{\rm peak}$ is the peak frequency of
curvature radiation. If the observed spectrum is not white-noise-like, the
condition of $2\gamma^2\lambda_B\gtrsim \min(\omega_{\rm
peak},2\gamma^2/\tau_B)$ would be required. Besides, the radiation by multiple
fluctuating bunches along a field line is the incoherent summation of the
radiation by single bunches if the bunch separation is longer than the
wavelength. Conversely, a coherent summation should be involved. We also
discuss the effects of bunch structures and the mechanism of bunch formation
and dispersion. | Yuan-Pei Yang, Bing Zhang | 2023-01-28T08:44:01Z | http://arxiv.org/abs/2301.12125v2 | # Coherent Curvature Radiation Spectrum by Dynamically Fluctuating Bunches in Magnetospheres
###### Abstract
Coherent curvature radiation by charged bunches has been discussed as the radiation mechanism for radio pulsars and fast radio bursts. Important issues for this radiation mechanism include how the bunches form and disperse in the magnetosphere of a pulsar or magnetar. More likely, bunches form and disperse continuously and it remains unclear what the spectral features are for these fluctuating bunches. In this work, we consider that the bunches in a magnetosphere have a formation rate of \(\lambda_{B}\), a lifetime of \(\tau_{B}\), and a typical Lorentz factor of \(\gamma\), and analyze the spectral features of coherent curvature radiation by these fluctuating bunches. We find that the emission spectrum by a single fluctuating bunch is suppressed by a factor of \(\sim(\lambda_{B}\tau_{B})^{2}\) compared with that of a single persistent bunch, and there is a quasi-white noise in a wider band in the frequency domain. The high-frequency cutoff of the spectrum is at \(\sim\max(\omega_{c},2\gamma^{2}/\tau_{B})\), where \(\omega_{c}\) is the typical frequency of curvature radiation. If the observed spectrum is not white-noise-like, the condition of \(2\gamma^{2}\lambda_{B}\gtrsim\min(\omega_{c},2\gamma^{2}/\tau_{B})\) would be required. On the other hand, due to the random fluctuation of bunches, the radiation by multiple fluctuating bunches along a field line is the incoherent summation of the radiation by single bunches, and the spectral shape is the same as that of a single bunch. We further discuss the effects of bunch structures and some possible mechanisms of bunch formation and dispersion.
keywords: radiation mechanisms: non-thermal - radio continuum: general - (transients:) fast radio bursts - (stars:) pulsars: general
## 1 Introduction
The brightness temperatures of both radio pulsars and fast radio bursts (FRBs) are extremely high and are much greater than any plausible thermal temperature of the emitting electrons (e.g., Melrose, 2017; Petroff et al., 2019; Cordes and Chatterjee, 2019; Zhang, 2020, 2022; Xiao et al., 2021; Lyubarsky, 2021; Bailes, 2022). This suggests that the radiation mechanism of radio pulsars and FRBs must be coherent. For incoherent waves with random phases, the amplitude square of their superposition is approximately the sum of the amplitude squares of each wave. Thus, the observed emission power would be the simple summation of the emission power of individual charged particles, as proposed in most astrophysical scenarios. For coherent waves with certain phase differences, the amplitude square of their superposition could be significantly enhanced or reduced due to the wave coherent superposition process. In particular, "coherently enhanced" waves usually require that the phase differences of superposing waves must be much less than half the wavelength of the waves. In the literature on radiation mechanisms, "coherent" is mainly defined as "coherently enhanced".
Some coherent emission mechanisms have been invoked to interpret the emissions of radio pulsars and FRBs (e.g., Melrose, 2017; Zhang, 2022): coherent radiation by charged bunches (i.e., antenna mechanism), maser by hydrodynamic instabilities or kinetic instabilities, etc. In this paper, we mainly focus on coherent curvature radiation by charged bunches that has been proposed as one of the popular ideas to explain the emission of pulsars (Sturrock, 1971; Ginzburg and Zhelezniakov, 1975; Ruderman and Sutherland, 1975; Buschauer and Benford, 1976; Benford and Buschauer, 1977; Melikidze et al., 2000; Gil et al., 2004; Basu et al., 2022) and FRBs (Katz, 2014, 2018; Kumar et al., 2017; Lu and Kumar, 2018; Yang and Zhang, 2018; Kumar and Bošnjak, 2020; Lu et al., 2020; Yang et al., 2020; Cooper and Wijers, 2021; Wang et al., 2022; Tong and Wang, 2022; Liu et al., 2022; Qu et al., 2023). Due to the two-stream instabilities in the magnetosphere of a neutron star, charged bunches might form and emit electromagnetic radiation coherently (Ruderman, 1971; Benford and Buschauer, 1977; Cheng and Ruderman, 1980; Usov, 1987; Kumar et al., 2022). However, as pointed out by some authors (e.g., Melrose, 2017; Lyubarsky, 2021), these models involving charged bunches have some important issues: (1) The charged bunches might be short-lived; (2) The radiation might be strongly suppressed by the magnetosphere plasma. For the latter issue, Gil et al. (2004) and Lyubarsky (2021) pointed out that electromagnetic waves with frequencies below the plasma frequency could propagate in the highly magnetized plasma in the
magnetosphere, but the radiation power would be significantly suppressed. However, Qu et al. (2023) recently found that the plasma suppression effect could be ignored in the case of FRBs because of the existence of a parallel electric field in the FRB emission region, as is required to power the bright FRB emission. The former issue leads to a more fundamental question: Do the charged bunches have to be long-lived in order to explain the observed features of radio pulsars and FRBs? In other words, how do the formation and dispersion of bunches affect the observed radiation?
In this work, we will analyze the spectral features of the coherent curvature radiation by dynamically fluctuating bunches in the magnetosphere of a neutron star. We consider that the bunches in the magnetosphere form with an average rate of \(\lambda_{B}\) and have an average lifetime of \(\tau_{B}\), and discuss how \(\lambda_{B}\) and \(\tau_{B}\) affect the radiation spectral feature. The paper is organized as follows. In Section 2, we discuss the brightness temperature of the curvature radiation in a magnetosphere using a more physical treatment. In Section 3, we analyze the spectral features of coherent curvature radiation by fluctuating bunches, including the features by a single persistent bunch with different structures (Section 3.1), a single fluctuating bunch (Section 3.2), and multiple fluctuating bunches (Section 3.3). In Section 4, we discuss the formation and dispersion mechanisms of bunches in the magnetosphere and calculate \(\lambda_{B}\) and \(\tau_{B}\) in various physical scenarios. The results are summarized and discussed in Section 5. The convention \(Q_{x}=Q/10^{x}\) is adopted in cgs units unless otherwise specified.
## 2 Brightness Temperature of Coherent Curvature Radiation in a Magnetosphere: A Physical Treatment
Before discussing the spectral features of coherent curvature radiation by charged bunches, we first point out that the physical brightness temperature in such a scenario is different from the standard definition. The flux-intensity relation is generally given by \(F_{\nu}=\pi I_{\nu}\left(l_{e}/d\right)^{2}\), where \(l_{e}\) is emphasized to be _the emission region scale perpendicular to the line of sight_, and \(d\) is the distance between source and observer. Since the brightness temperature, \(T_{B}\), is defined by the Rayleigh-Jeans law as \(I_{\nu}=2\nu^{2}k_{B}T_{B}/c^{2}\), it can be written as
\[T_{B}\simeq\frac{c^{2}}{2\pi k_{B}\nu^{2}}\left(\frac{d}{l_{e}}\right)^{2}F_{ \nu}, \tag{1}\]
where \(\nu\) is the frequency of the electromagnetic wave. If the emission region satisfies the following conditions: (1) The emission region is non-relativistic; (2) Its scale along the line of sight is the same order of magnitude as its transverse scale; (3) The radiation is isotropic at any point of the emission region; then \(l_{e}\) could be estimated by the observed duration \(\Delta t\) of a transient, i.e., \(l_{e}\sim c\Delta t\), and the classical formula for the brightness temperature is obtained,
\[T_{B}\simeq\frac{1}{2\pi k_{B}}\left(\frac{d}{\nu\Delta t}\right)^{2}F_{\nu}\simeq 1.1\times 10^{35}\ {\rm K}\ d_{\rm Gpc}^{2}\nu_{9}^{-2}\Delta t_{-3}^{-2}F_{\nu,{\rm J}}, \tag{2}\]
where \(d_{\rm Gpc}=d/(1\ {\rm Gpc})\) and \(F_{\nu,{\rm J}}=F_{\nu}/(1\ {\rm Jy})\). Once the distance \(d\) is obtained, the brightness temperature could be directly estimated (e.g., Xiao & Dai, 2022; Luo et al., 2023; Zhu-Ge et al., 2023).
However, if the emission region is within a magnetosphere and charged particles are relativistic, as envisaged in many theoretical models of radio pulsars and FRBs (e.g., Sturrock, 1971; Ruderman & Sutherland, 1975; Kumar et al., 2017; Yang & Zhang, 2018; Lu et al., 2020), the above conditions (2) and (3) would not be satisfied. We consider that the charged particles/bunches are relativistically moving with a Lorentz factor \(\gamma\) along the curved field lines with a curvature radius \(\rho\).
\[\theta_{e}(\omega)\sim\frac{1}{\gamma}\left(\frac{\omega_{c}}{\omega}\right)^ {1/3}, \tag{3}\]
which involves the field line direction. Thus, the transverse lengthscale \(l_{e}\) of the emission region at the distance \(r\) from the neutron star center should be estimated by
\[l_{e}\sim r\theta_{e}(\omega)\sim\frac{3}{4}\rho\theta\,\theta_{e}(\omega)\sim\frac{3}{4}\theta\left(\frac{c\rho^{2}}{\omega}\right)^{1/3}\simeq 2.7\times 10^{5}\ {\rm cm}\ \rho_{8}^{2/3}\nu_{9}^{-1/3}\theta, \tag{4}\]
where \(\rho\simeq 4r/3\theta\) is the curvature radius at the position \((r,\theta)\), and \(\theta\) is the poloidal angle between the emission region and the magnetic axis. We can see that for the above typical parameters, the transverse lengthscale \(l_{e}\) is much smaller than that estimated by \(c\Delta t\sim 3\times 10^{7}\ {\rm cm}\ \Delta t_{-3}\). As a result, a more physical brightness temperature for the curvature radiation can be estimated as
\[T_{B} =\frac{32\pi}{9k_{B}}\left(\frac{d}{\theta}\right)^{2}\left( \frac{c}{\rho\omega}\right)^{4/3}F_{\nu}\] \[=1.3\times 10^{39}\ {\rm K}\ d_{\rm Gpc}^{2}\theta^{-2}\rho_{8}^{-4/3} \nu_{9}^{-4/3}F_{\nu,{\rm J}}. \tag{5}\]
One can see that for curvature radiation in a magnetosphere, the brightness temperature should depend on the emission region parameters \((\rho,\theta)\). For typical parameters, the brightness temperature is much higher than the value estimated by the classical formula Eq.(2). Meanwhile, it is worth noting that such a brightness temperature is independent of the burst duration \(\Delta t\). The reason is that the burst duration does not directly reflect the transverse size of the emission region.
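As a unit check of Eq.(2) and Eq.(5), the two brightness-temperature estimates can be evaluated for the fiducial parameters quoted above. The short script below (a minimal sketch in cgs units) does this; the value \(\theta=1\) rad is only a normalization, since \(T_{B}\propto\theta^{-2}\).

```python
import math

# physical constants (cgs)
c = 2.998e10        # speed of light [cm s^-1]
k_B = 1.381e-16     # Boltzmann constant [erg K^-1]
Gpc = 3.086e27      # 1 Gpc [cm]
Jy = 1.0e-23        # 1 Jy [erg s^-1 cm^-2 Hz^-1]

# fiducial parameters used in the scalings above
d = 1.0 * Gpc       # source distance
nu = 1.0e9          # observing frequency [Hz]
dt = 1.0e-3         # burst duration [s]
F_nu = 1.0 * Jy     # flux density
rho = 1.0e8         # field-line curvature radius [cm]
theta = 1.0         # poloidal angle of the emission region [rad]
omega = 2.0 * math.pi * nu

# Eq. (2): classical estimate with l_e ~ c * dt
T_B_classical = d ** 2 * F_nu / (2.0 * math.pi * k_B * nu ** 2 * dt ** 2)

# Eq. (4): transverse size of the curvature-radiation emission region
l_e = 0.75 * theta * (c * rho ** 2 / omega) ** (1.0 / 3.0)

# Eq. (5): brightness temperature for the curvature-radiation emission region
T_B_curvature = (32.0 * math.pi / (9.0 * k_B)) * (d / theta) ** 2 \
                * (c / (rho * omega)) ** (4.0 / 3.0) * F_nu

print(f"l_e ~ {l_e:.2e} cm")
print(f"T_B (Eq. 2) ~ {T_B_classical:.2e} K")
print(f"T_B (Eq. 5) ~ {T_B_curvature:.2e} K")
```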
## 3 Coherent Curvature Radiation by Fluctuating Bunches
Charged bunches are thought to be formed by certain plasma instabilities in the magnetosphere, e.g., two-stream instability (Ruderman & Sutherland, 1975; Benford & Buschauer, 1977; Cheng & Ruderman, 1980; Usov, 1987), and the bunch formation rate \(\lambda_{B}\) could be estimated by the growth rate of the plasma instability. Since the bunch is charged, over-dense, and composed of particles with different velocities, it would be dispersed by plasma instabilities, electrostatic repulsion, velocity dispersion, radiation cooling, etc. In this section, we mainly calculate the spectrum of the coherent curvature radiation by fluctuating bunches and discuss how the spectral feature is affected by the bunch formation and dispersion.
### Radiation by a single persistent bunch with different structures
First, we briefly summarize the spectral properties of a single persistent bunch. We consider that the bunch has the velocity \(v\) with
Lorentz factor \(\gamma=(1-v^{2}/c^{2})^{-1/2}\) and moves along the magnetic field line with a curvature radius \(\rho\), as shown in the panel (a) of Figure 1.
If the bunch lifetime is long enough, \(\tau_{B}>\rho/\gamma c\), which is comparable to the time of a persistent bunch sliding along a curved magnetic field line, the observer will see the radiation with the emission cone of angular width \(\sim 1/\gamma\) around the observer direction, and the typical angular frequency of the emission wave is
\[\omega_{c}=\frac{1}{\tau_{c}}\sim\left[\frac{\rho}{\gamma c}\left(1-\frac{v}{c} \right)\right]^{-1}\simeq\frac{2\gamma^{3}c}{\rho}, \tag{6}\]
where \(\tau_{c}\) is the typical pulse duration of the classical curvature radiation for a single point source, and the factor of \((1-v/c)\) is due to the propagation time-delay effect.
We consider that the classical curvature radiation is in the form of a finite pulse \(E(t)\), and \(E(t)\) vanishes sufficiently rapidly for \(t\rightarrow\pm\infty\). For convenience, we define \(A(t)\equiv(c/4\pi)^{1/2}[RE(t)]_{\rm ret}\), where \(R\) is the distance between the observer and the bunch at the retarded time, and the bracket \([...]_{\rm ret}\) is evaluated at the retarded time. The Fourier transform of \(A(t)\) is defined as
\[A(\omega) =\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}A(t)e^{i\omega t}dt, \tag{7}\] \[A(t) =\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}A(\omega)e^{-i\omega t }d\omega. \tag{8}\]
Here we adopt the same definitions of the Fourier transform as in Jackson (1998), and the corresponding properties of the Fourier transform will be used accordingly in the following discussion. The directional emission spectrum (defined as _the radiation energy per unit solid angle per unit angular frequency_) is (Jackson, 1998)
\[P_{A}(\omega)\equiv\frac{dW}{d\Omega d\omega}=2|A(\omega)|^{2}=\frac{c}{4\pi ^{2}}\left|\int_{-\infty}^{\infty}[RE(t)]_{\rm ret}e^{i\omega t}dt\right|^{2}. \tag{9}\]
For the curvature radiation by a single persistent bunch, the properties of \(P_{A}(\omega)\) mainly depend on the spatial structure of the charged bunch (Yang & Zhang, 2018; Yang et al., 2020). Here, we briefly summarize the following three scenarios with different structures, as shown in Figure 2:
1. A point-source bunch (see the panel (a) of Figure 2): If the bunch length is much smaller than a half wavelength, the point-source approximation is reasonable. The emission spectrum of the curvature radiation is Jackson (1998); Yang & Zhang (2018)
\[P_{A}(\omega)\propto\omega^{2/3}e^{-\omega/\omega_{c}}. \tag{10}\]
In particular, the spectral index of \(2/3\) is due to the angular spectrum involved in curvature radiation, which is different from the scenario of synchrotron radiation that is usually described by total spectrum with a spectral index of \(1/3\), except the case with a narrow pitch-angle distribution Yang & Zhang (2018). The emission spectrum is shown by the black curve in Figure 3.
2. A one-dimensional bunch with a finite length \(l\) (see the panel (b) of Figure 2): The corresponding emission spectrum of curvature radiation is Yang & Zhang (2018)
\[P_{A}(\omega)\propto\mathrm{sinc}^{2}\left(\frac{\omega}{\omega_{l}}\right) \omega^{2/3}e^{-\omega/\omega_{c}}\quad\mathrm{with}\,\,\omega_{l}\simeq 2c/l, \tag{11}\]
where \(\mathrm{sinc}(x)\equiv\sin x/x\). Here the charge distribution of the bunch is assumed to be uniform. If \(\omega\gg\omega_{l}\), one has \(\mathrm{sinc}^{2}\left(\omega/\omega_{l}\right)\sim\omega^{-2}\) on average, because the sine factor in the sinc function oscillates rapidly with unit amplitude, leading to a softer spectrum compared with that of a point source. The emission spectrum is shown by the red curve in Figure 3. In particular, when \(\omega_{l}<\omega_{c}\), the peak radiation specific power would be suppressed by a factor of \(\sim(\omega_{l}/\omega_{c})^{2/3}\), leading to the total radiation energy being suppressed by a factor
\[\eta_{l}\simeq\frac{\omega_{l}P_{A}(\omega_{l})}{\omega_{c}P_{A}(\omega_{c})}\simeq\left(\frac{\omega_{l}}{\omega_{c}}\right)^{5/3}\simeq 0.9\ l_{1}^{-5/3}\nu_{c,9}^{-5/3}, \tag{12}\]
compared with that of a point source given by Eq.(10). This formula can be used to estimate how the bunch length suppresses the total radiation power.
3. A bunch-cavity pair or similar system formed by plasma background fluctuation (see panel (c) and panel (d) of Figure 2). First, we consider that a charged bunch forms in the plasma background and
Figure 1: Schematic configurations of a bunch moving along a magnetic field line and emitting curvature radiation. The black line denotes the magnetic field line, the orange ellipse denotes the bunch, the dashed ellipse denotes the bunch disappearing due to dispersion, and the yellow region denotes the radiation by the bunch due to relativistic motion. Panel (a): the bunch is persistent. Due to the relativistic motion of the bunch, the length scale of the path along the line of sight (LOS) is \(\rho/\gamma\), where \(\rho\) is the curvature radius of the field line, and \(\gamma\) is the bunch Lorentz factor. (b) the bunch is fluctuating due to rapid formation and dispersion when the building plasma moves along the field line.
Figure 2: Charge density distributions for different bunch structures. Panel (a): A single point-source bunch, with the charge density described by a delta function \(\delta\left(x\right)\); Panel (b): A one-dimensional bunch with lengthscale \(l\), with a uniform charge density; Panel (c): A bunch-cavity pair formed in a plasma background, with the separation between the bunch and the cavity being \(d\). Panel (d): A bunch-cavity system satisfying the structures of a soliton, with the separations between the bunch and the cavities being \(d\).
has a charge density larger than the background, then a corresponding cavity with a charge density smaller than the background would form near the bunch, as shown in panel (c) of Figure 2. For simplicity, we treat the bunch-cavity pair as a two-point source with a separation \(d\). Thus, the charge density distribution of the bunch-cavity pair system could be described by \(\rho_{\rm bc}(x)=q\delta(x)-q\delta(x-d)+\rho_{0}\), where \(x\) denotes the pair position, \(\pm q\) in the first two terms correspond to the charges of the bunch and the cavity, respectively, and \(\rho_{0}\) is the charge density of the plasma background. Since a persistent current (i.e., plasma background) cannot generate electromagnetic waves (Yang & Zhang, 2018), only the first two terms contribute to the radiation. Therefore, the radiation of the bunch-cavity pair is consistent with that of a separated electron/positron pair discussed by Yang et al. (2020). Based on the charge density distribution, the pulse profile is given by \(A(t)=A_{0}(t)-A_{0}(t-d/c)\), where \(A_{0}(t)\) and \(-A_{0}(t-d/c)\) correspond to the pulse profiles of the bunch and the cavity, respectively. According to the time-shifting property of the Fourier transform, one has \(A(\omega)=A_{0}(\omega)-A_{0}(\omega)e^{t\omega d/c}\). Using \(P_{A}(\omega)=2|A(\omega)|^{2}\) by Eq.(9), one has (Yang et al., 2020)
\[P_{A}(\omega)\propto 2\left[1-\cos\left(\frac{\omega}{\omega_{d}}\right) \right]\omega^{2/3}e^{-\omega/\omega_{c}}\quad\text{with}\ \omega_{d}\simeq c/d. \tag{13}\]
For \(\omega\ll\omega_{d}\), one has \(1-\cos(\omega/\omega_{d})\propto\omega^{2}\), leading to \(P_{A}(\omega)\propto\omega^{8/3}\) at the low-frequency band. Thus, the radiation spectrum is much narrower and harder than that of a point source given by Eq.(10). The emission spectrum is shown by the blue curve in Figure 3. Furthermore, it can be further proved that the above formula is also available for some more complex bunch-cavity systems. For example, Melikidze et al. (2000) proposed that some plasma solitons with net charges will result from a ponderomotive Miller force. Each soliton consists of one large bunch and two small cavities (see Figure 2 in Melikidze et al. (2000) and panel (d) of Figure 2), because the excess of one charge is compensated by the lack of this charge in the nearby regions. The charge density distribution of the bunch-cavity system could be roughly described by \(\rho_{\rm bc}(x)=-q\delta(x-d)+2q\delta(x)-q\delta(x+d)+\rho_{0}\). Similar to the calculation of the bunch-cavity pair, the same result as Eq.(13) is obtained, as shown by the blue curve in Figure 3.
The above discussion assumes that the particles in the bunch have a Lorentz factor \(\gamma\). If the energy distribution of the charged particles satisfies a power-law distribution, the corresponding radiation spectrum would be characterized by a multi-segment broken power-law, and the details have been discussed in Yang & Zhang (2018).
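The three spectral shapes summarized above, Eq.(10), Eq.(11) and Eq.(13), can be compared numerically, as in Figure 3. The snippet below is a minimal sketch with arbitrary normalization; the choices \(\omega_{l}=\omega_{d}=0.3\omega_{c}\) simply mirror the values used in the figure.

```python
import numpy as np

def spectrum_point(w, w_c):
    """Eq. (10): point-source bunch (arbitrary normalization)."""
    return w ** (2.0 / 3.0) * np.exp(-w / w_c)

def spectrum_finite_length(w, w_c, w_l):
    """Eq. (11): one-dimensional bunch of length l, with w_l ~ 2c/l."""
    # np.sinc(x) = sin(pi x)/(pi x), so pass (w/w_l)/pi to obtain sin(u)/u with u = w/w_l
    return np.sinc(w / (np.pi * w_l)) ** 2 * spectrum_point(w, w_c)

def spectrum_bunch_cavity(w, w_c, w_d):
    """Eq. (13): bunch-cavity pair (or soliton), with w_d ~ c/d."""
    return 2.0 * (1.0 - np.cos(w / w_d)) * spectrum_point(w, w_c)

w_c = 1.0
w = np.logspace(-3, 1, 400) * w_c
spectra = {
    "point source": spectrum_point(w, w_c),
    "finite length": spectrum_finite_length(w, w_c, 0.3 * w_c),
    "bunch-cavity": spectrum_bunch_cavity(w, w_c, 0.3 * w_c),
}
# expected low-frequency slopes: ~2/3, ~2/3 and ~8/3, respectively
for name, P in spectra.items():
    slope = np.log(P[10] / P[0]) / np.log(w[10] / w[0])
    print(f"{name:14s} low-frequency slope ~ {slope:.2f}")
```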
### Radiation by a single fluctuating bunch
If the bunch lifetime is short, \(\tau_{B}<\rho/\gamma c\), the observed pulse duration will be shorter than \(\tau_{c}\) given by Eq.(6), leading to a higher typical frequency than that of classical curvature radiation, i.e.
\[\tilde{\omega_{c}}\sim\left[\tau_{B}\left(1-\frac{v}{c}\right) \right]^{-1}\sim\frac{2\gamma^{2}}{\tau_{B}}. \tag{14}\]
Therefore, a fluctuating bunch with a short lifetime will generate electromagnetic radiation with a higher frequency than the typical frequency of classical curvature radiation.
We consider that a bunch forms and disperses intermittently when the building particles move along a field line, and the coherent radiation pulses are generated when the bunch exists, as shown in panel (b) of Figure 1. The bunch forms with a rate of \(\lambda_{B}\) and disperses during a lifetime of \(\tau_{B}\). Due to the relativistic motion of the building particles with \(\gamma\gg 1\), the pulse rate \(\lambda_{b}\) and the pulse duration \(\tau_{b}\) should be corrected by the propagation time-delay effect, i.e.
\[\lambda_{b}=\left(1-\frac{v}{c}\right)^{-1}\lambda_{B}\simeq 2\gamma^{2}\lambda_{B}, \tag{15}\] \[\tau_{b}=\left(1-\frac{v}{c}\right)\tau_{B}\simeq\frac{\tau_{B}}{2\gamma^{2}}\sim\tilde{\omega_{c}}^{-1}, \tag{16}\]
where the factor of \((1-v/c)\) is due to the propagation time-delay effect.
During the time of \(\rho/\gamma c\) when the emission cone sweeps the observing direction, the bunch would disperse and re-form multiple times as the building plasma particles move along the magnetic field line. We consider that the radiation is in the form of \(\tilde{A}(t)\). When the bunch exists, \(\tilde{A}(t)\simeq A(t)\), where \(A(t)\) is the radiation form of the classical curvature radiation by a single persistent source, as discussed in Section 3.1; when the bunch disappears, \(\tilde{A}(t)\simeq 0\). Therefore, the form of \(\tilde{A}(t)\) can be written as
\[\tilde{A}(t)=A(t)S(t), \tag{17}\]
where \(S(t)\) is the pulse sampling function with
\[S(t) =\begin{cases}1,&\text{for bunch existing},\\ 0,&\text{for bunch disappearing},\end{cases}\] \[=\sum_{k}s(t-t_{k}) \tag{18}\]
and
\[s(t)=\begin{cases}1,&\text{for }0\leqslant t\leqslant\tau_{b},\\ 0,&\text{otherwise}.\end{cases} \tag{19}\]
Here \(t_{k}\) corresponds to the starting time of the \(k\)-th bunch generation, and \(\tau_{b}\) is the pulse duration from a bunch. The pulse sampling function \(S(t)\) is shown in Figure 4.
We assume that pulse generation satisfies a Poisson process, thus, the probability to generate \(k\) pulses during time \(t\) is
\[P_{k}(t)=\frac{(\lambda_{b}t)^{k}}{k!}e^{-\lambda_{b}t}, \tag{20}\]
where \(\lambda_{b}=2\gamma^{2}\lambda_{B}\) is the pulse rate corrected for the propagation time-delay effect. Here the times \(\{t_{k}\}\) in Eq.(18) are the arrival times of a Poisson
Figure 3: The emission spectrum of a single persistent bunch. The black, red, and blue curves correspond to the emission spectrum of a point-source bunch (panel (a) in Figure 2), a one-dimensional bunch with \(\omega_{b}=0.3\omega_{c}\) (panel (b) in Figure 2), a bunch-cavity system (a bunch-cavity pair (panel (c) in Figure 2) or a soliton (panel (d) in Figure 2)) with \(\omega_{d}=0.3\omega_{c}\), respectively. The unit of the emission spectrum is arbitrary. For easy comparison with the spectral shapes of different scenarios, the emission spectrum of the bunch-cavity system is suppressed by an arbitrary factor in this figure.
process with rate \(\lambda_{b}\), and the probability density function of \(t_{k}\) is related to the probability that the \(k\)-th point occurs in a short interval at an arbitrary time \(t\), for short \(\Delta t\),
\[p_{h_{k}}(t)\Delta t =P(t<t_{k}<t+\Delta t)\] \[=P_{k-1}(t)P_{1}(\Delta t)=P_{k-1}(t)\lambda_{b}\Delta t. \tag{21}\]
Thus, one obtains the probability density function for \(\{t_{k}\}\), i.e.
\[p_{h_{k}}(t)=\lambda_{b}P_{k-1}(t). \tag{22}\]
Before calculating the emission spectrum \(P_{\tilde{A}}(\omega)\) of \(\tilde{A}(t)\), we are first interested in its autocorrelation function \(R_{\tilde{A}}(\tau)\). Since \(S(t)\) corresponds to a random sampling process, \(R_{\tilde{A}}(\tau)\) could be described by
\[R_{\tilde{A}}(\tau) =\mathcal{E}\left[\tilde{A}(t)*\tilde{A}^{\dagger}(-t)\right]= \mathcal{E}[\left(A(t)S(t)\right)*\left(A(-t)S(-t)\right)^{\dagger}]\] \[=\mathcal{E}\left[\int A(t+\tau)S(t+\tau)A^{\dagger}(t)S^{ \dagger}(t)dt\right]\] \[=\int A(t+\tau)A^{\dagger}(t)\mathcal{E}\left[S(t+\tau)S^{ \dagger}(t)\right]dt, \tag{23}\]
where \(\tilde{A}(t)*\tilde{A}^{\dagger}(-t)\) denotes the autocorrelation function of \(\tilde{A}(t)\), and \(\mathcal{E}[X]\) denotes the expectation of the random variable \(X\) induced by the random process. The integral range is from \(-\infty\) to \(\infty\), the symbol "\(*\)" denotes the convolution operator, and the superscript "\(\dagger\)" denotes the conjugation. In the above calculation, the property of \(\int f(t)f^{\dagger}(t-\tau)dt=\int f(t+\tau)f^{\dagger}(t)dt\) is used based on the variable substitution \(t-\tau\to t\). For the Poisson sampling process, the autocorrelation function \(R_{S}(\tau)\) satisfies (Franks 1981)
\[R_{S}(\tau)=\mathcal{E}\left[S(t+\tau)S^{\dagger}(t)\right]=\sum_{k}\sum_{j}\mathcal{E}\left[s(t+\tau-t_{k})s(t-t_{j})\right]\] \[=\sum_{k=j}\int s(t+\tau-\xi)s(t-\xi)p_{h_{k}}(\xi)d\xi\] \[+\sum_{k\neq j}\int s(t+\tau-\eta)p_{h_{k}}(\eta)d\eta\int s(t-\sigma)p_{h_{j}}(\sigma)d\sigma\] \[=(\lambda_{b}q_{s})^{2}+\lambda_{b}r_{s}(\tau), \tag{24}\]
where
\[q_{s}\equiv\int s(t)dt, \tag{25}\] \[r_{s}(\tau)\equiv\int s(t+\tau)s(t)dt. \tag{26}\]
Notice that both \(S(t)\) and \(s(t)\) are real functions according to Eq.(18) and Eq.(19). Since the autocorrelation function \(R_{S}(\tau)\) is independent of \(t\) (i.e., a wide-sense-stationary process), Eq.(23) could be finally written as
\[R_{\tilde{A}}(\tau)=R_{A}(\tau)R_{S}(\tau), \tag{27}\]
where \(R_{A}(\tau)\) is the autocorrelation function of \(A(t)\). Therefore, the autocorrelation function of the product of \(A(t)\) and \(S(t)\) is the product of the autocorrelation of each one.
The emission spectrum of \(\tilde{A}(t)\) is given by Eq.(9),
\[P_{\tilde{A}}(\omega)\equiv\frac{d\tilde{W}}{d\Omega d\omega}=2|\tilde{A}( \omega)|^{2}. \tag{28}\]
According to the convolution theorem and conjugation property of Fourier transform, \(|\tilde{A}(\omega)|^{2}\) could be written as
\[|\tilde{A}(\omega)|^{2} =\tilde{A}(\omega)\tilde{A}^{\dagger}(\omega)\] \[=\frac{1}{\sqrt{2\pi}}\mathcal{F}(\tilde{A}(t)*\tilde{A}^{\dagger }(-t))=\frac{1}{\sqrt{2\pi}}\mathcal{F}(R_{\tilde{A}}(\tau)), \tag{29}\]
where \(\mathcal{F}(...)\) denotes the Fourier transform, and the factor of \(1/\sqrt{2\pi}\) arises from the definition of the Fourier transform in Eq.(7) and Eq.(8). Thus, the Fourier transform of the autocorrelation function is the power spectrum, which is the Wiener-Khinchin theorem. According to Eq.(27), Eq.(28), Eq.(29) and the convolution theorem, the emission spectrum of \(\tilde{A}(t)\) could be finally written as
\[P_{\tilde{A}}(\omega) =\frac{2}{\sqrt{2\pi}}\mathcal{F}(R_{A}(\tau)R_{S}(\tau))=2|A( \omega)|^{2}*\frac{1}{\sqrt{2\pi}}\mathcal{F}(R_{S}(\tau))\] \[=P_{A}(\omega)*P_{S}(\omega), \tag{30}\]
where \(\mathcal{F}(R_{A}(\tau)R_{S}(\tau))=(1/\sqrt{2\pi})\mathcal{F}(R_{A}(\tau))* \mathcal{F}(R_{S}(\tau))\) is used, and the emission spectrum of the pulse sampling function \(S(t)\) is defined as
\[P_{S}(\omega)\equiv\frac{1}{\sqrt{2\pi}}\mathcal{F}(R_{S}(\tau)). \tag{31}\]
Equation (30) is the most important formula in this section, and we will use it to analyze the spectral features of the coherent radiation by fluctuating bunches. In the following, we consider two mathematical models of the pulse sampling profile \(S(t)\): an impulsive sampling profile and a rectangular sampling profile.
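As a rough numerical illustration of Eq.(30), the sketch below builds a toy signal \(A(t)\), samples it with a Poisson train of short pulses, and compares the averaged power spectrum near the peak with the expected suppression factor \((\lambda_{b}\tau_{b})^{2}\); all parameter values are arbitrary choices made only for this demonstration:

```python
import numpy as np

# Rough Monte-Carlo illustration of Eq. (30): the averaged power spectrum of
# A(t)S(t) is a suppressed copy of the persistent-bunch spectrum plus a
# quasi-white component. All parameters below are arbitrary demonstration values.
rng = np.random.default_rng(0)
T, dt = 200.0, 0.01
t = np.arange(0.0, T, dt)
A = np.exp(-0.5 * ((t - T / 2) / 20.0) ** 2) * np.cos(2.0 * np.pi * t)  # toy A(t)

lam_b, tau_b = 2.0, 0.05                  # pulse rate and pulse duration
n_pulse_bins = max(int(tau_b / dt), 1)

spec = np.zeros(t.size // 2)
n_real = 400
for _ in range(n_real):
    S = np.zeros_like(t)
    starts = np.flatnonzero(rng.random(t.size) < lam_b * dt)  # Poisson pulse starts
    for i0 in starts:
        S[i0:i0 + n_pulse_bins] = 1.0
    spec += np.abs(np.fft.rfft(A * S)[: t.size // 2]) ** 2
spec /= n_real

ref = np.abs(np.fft.rfft(A)[: t.size // 2]) ** 2   # spectrum of the persistent signal
print("peak suppression ~", spec.max() / ref.max(),
      "; expected ~ (lam_b*tau_b)^2 =", (lam_b * tau_b) ** 2)
```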
#### 3.2.1 Impulsive sampling profile
If the pulse duration \(\tau_{b}\) is much shorter than \(\lambda_{b}^{-1}\), i.e., \(\tau_{b}\lambda_{b}\ll 1\), the pulse profile \(s(t)\) in Eq.(19) can be well described using the delta function \(\delta(t)\), \(s(t)\simeq\tau_{b}\delta(t)\) for \(\tau_{b}\to 0\). According to Eq.(25) and Eq.(26), one has \(r_{s}\simeq\tau_{b}^{2}\delta(\tau)\) and \(q_{s}\simeq\tau_{b}\). Using Eq.(18) and Eq.(24), the autocorrelation function \(R_{S}(\tau)\) of \(S(t)\) is
\[R_{S}(\tau)=(\lambda_{b}\tau_{b})^{2}+\lambda_{b}\tau_{b}^{2}\delta(\tau). \tag{32}\]
According to Eq.(24) and Eq.(31), the emission spectrum of \(S(t)\) is
\[P_{S}(\omega)=(\lambda_{b}\tau_{b})^{2}\delta(\omega)+\frac{\lambda_{b}\tau_{b}^{ 2}}{2\pi}. \tag{33}\]
Therefore, based on Eq.(30), the emission spectrum of \(\tilde{A}(t)\) is
\[P_{\tilde{A}}(\omega)=P_{A}(\omega)*P_{S}(\omega)=(\lambda_{b}\tau_{b})^{2}P_{A}( \omega)+\frac{\lambda_{b}\tau_{b}^{2}}{2\pi}P_{A,\text{tot}}, \tag{34}\]
where \(P_{A,\text{tot}}\) is the total radiation energy of \(A(t)\),
\[P_{A,\text{tot}}=\int_{-\infty}^{\infty}P_{A}(\omega)d\omega. \tag{35}\]
Figure 4: The pulse sampling function \(S(t)\) given by Eq.(18). \(\{t_{k}\}\) is the starting time of the \(k\)-th pulse. Each pulse has a duration of \(\tau_{b}\).
Compared with that of a persistent bunch, \(P_{A}(\omega)\), the emission spectrum of a fluctuating bunch is suppressed by a factor of \(\sim(\lambda_{b}\tau_{b})^{2}=(\lambda_{B}\tau_{B})^{2}\). For the scenario of an impulsive sampling profile, \(\lambda_{b}\tau_{b}\ll 1\) has been implicitly assumed. Meanwhile, the emission spectrum of a single fluctuating bunch is the sum of the (suppressed) emission spectrum of a persistent bunch and a white noise that is independent of frequency \(\omega\). The "signal-to-noise ratio" at the peak frequency \(\omega_{c}\) in the frequency domain is given by
\[\frac{S}{N}\bigg{|}_{\text{peak}}\simeq\frac{(\lambda_{b}\tau_{b})^{2}P_{A}( \omega_{c})}{(\lambda_{b}\tau_{b}^{2}/2\pi)P_{A,\text{tot}}}=\frac{2\pi\lambda _{b}}{\omega_{c}}, \tag{36}\]
where \(P_{A,\text{tot}}\sim\omega_{c}P_{A}(\omega_{c})\) is adopted. The typical frequency bandwidth for classical curvature radiation is approximately \(\omega_{c}\sim\gamma^{3}c/\rho\). We can see that \((S/N)_{\text{peak}}\) is independent of \(\tau_{b}\), and the larger the pulse rate \(\lambda_{b}\), the larger \((S/N)_{\text{peak}}\).
If one observes a non-white-noise signal in the frequency domain, i.e., \((S/N)_{\text{peak}}\gg 1\), \(\lambda_{b}\gg\omega_{c}\) is required. For a GHz signal with \(\omega_{c}/2\pi\sim 10^{9}\ {\rm s^{-1}}\), the pulse rate should be \(\lambda_{b}\gg 10^{9}\) s\({}^{-1}\), leading to a bunch formation rate \(\lambda_{B}\simeq\lambda_{b}/2\gamma^{2}\gtrsim 10^{3}\) s\({}^{-1}\)\(\gamma_{3}^{-2}\). In particular, for an FRB with a typical duration of a few milliseconds, at least one bunch is produced during \(\Delta t\sim 1\) ms, leading to \(\lambda_{B}\gtrsim 1/\Delta t\sim 10^{3}\) s\({}^{-1}\) and \(\lambda_{b}\simeq 2\gamma^{2}\lambda_{B}\gtrsim 10^{9}\) s\({}^{-1}\)\(\gamma_{3}^{2}\sim\omega_{c}\), hence \((S/N)_{\text{peak}}\gtrsim 1\). Thus, the white-noise signal might not be significant for an FRB, if the FRB is produced by bunches with a Lorentz factor \(\gamma\gtrsim 10^{3}\). Notice that the above conclusion implicitly assumes that \(\tau_{b}\lambda_{b}\ll 1\) for the impulsive sampling profile.
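The back-of-the-envelope estimate above can be reproduced with the following short sketch, assuming the fiducial values \(\Delta t\sim 1\) ms, \(\gamma\sim 10^{3}\), and an observing frequency of 1 GHz:

```python
import numpy as np

# Back-of-the-envelope check of the FRB estimate above, assuming a burst duration
# of 1 ms, a bunch Lorentz factor of 1e3, and an observing frequency of 1 GHz.
dt_burst, gamma, nu_obs = 1e-3, 1e3, 1e9
omega_c = 2 * np.pi * nu_obs

lam_B = 1.0 / dt_burst                    # at least one bunch per burst duration
lam_b = 2 * gamma**2 * lam_B              # pulse rate after the time-delay correction
sn_peak = 2 * np.pi * lam_b / omega_c     # Eq. (36)
print(f"lambda_B ~ {lam_B:.1e} s^-1, lambda_b ~ {lam_b:.1e} s^-1, (S/N)_peak ~ {sn_peak:.1f}")
```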
#### 3.2.2 Rectangular sampling profile
Next, we generally consider that the function \(s(t)\) given by Eq.(19) could be well described by a rectangular profile with width \(\tau_{b}\), i.e.
\[s(t)=\text{rect}\left(\frac{t}{\tau_{b}}\right)\equiv\begin{cases}1,&\text{ for }\left|\frac{t}{\tau_{b}}\right|<\frac{1}{2},\\ 0,&\text{for }\left|\frac{t}{\tau_{b}}\right|>\frac{1}{2},\end{cases} \tag{37}\]
where \(\text{rect}(x)\) is the rectangular function. The autocorrelation function \(R_{S}(\tau)\) is
\[R_{S}(\tau)=(\lambda_{b}\tau_{b})^{2}+\lambda_{b}\tau_{b}\Lambda\left(\frac{ \tau}{\tau_{b}}\right), \tag{38}\]
where \(\Lambda(x)\) is the triangular function
\[\Lambda(x)\equiv\begin{cases}1-|x|,&\text{for }|x|\leqslant 1,\\ 0,&\text{otherwise}.\end{cases} \tag{39}\]
According to Eq.(24) and Eq.(31), the emission spectrum of \(S(t)\) is
\[P_{S}(\omega)=(\lambda_{b}\tau_{b})^{2}\delta(\omega)+\frac{\lambda_{b}\tau_{ b}^{2}}{2\pi}\text{sinc}^{2}\left(\frac{\tau_{b}\omega}{2}\right). \tag{40}\]
Using Eq.(30), the emission spectrum of \(\tilde{A}(t)\) is
\[P_{\tilde{A}}(\omega)=(\lambda_{b}\tau_{b})^{2}P_{A}(\omega)+\frac{\lambda_{b }\tau_{b}^{2}}{2\pi}P_{A}(\omega)*\text{sinc}^{2}\left(\frac{\tau_{b}\omega}{ 2}\right). \tag{41}\]
Compared with that of a single persistent bunch, the emission spectrum of a fluctuating bunch is suppressed by a factor of \(\sim(\lambda_{b}\tau_{b})^{2}=(\lambda_{B}\tau_{B})^{2}\). The signal-to-noise ratio at the peak frequency in the frequency domain is given by
\[\frac{S}{N}\bigg{|}_{\text{peak}}=\frac{2\pi\lambda_{b}P_{A}(\omega_{c})}{P_{ A}(\omega)*\text{sinc}^{2}(\tau_{b}\omega/2)}. \tag{42}\]
According to the property of the convolution of two pulse profiles, we have the following conclusions: (1) If \(\tau_{b}>\omega_{c}^{-1}\), one has \((S/N)_{\text{peak}}\sim\lambda_{b}\tau_{b}=\lambda_{B}\tau_{B}\), and the cutoff frequency of the whole spectrum is at \(\sim\omega_{c}\), see the top panel of Figure 5. (2) If \(\tau_{b}<\omega_{c}^{-1}\), one has \((S/N)_{\text{peak}}\sim\lambda_{b}/\omega_{c}\), and there is a high-frequency cutoff in the white noise at \(\tau_{b}^{-1}\sim\tilde{\omega}_{c}\), see the bottom panel of Figure 5. In particular, when \(\tau_{b}\to 0\), one has \(\text{sinc}^{2}(\tau_{b}\omega/2)\sim 1\), so the above results become the case of an impulsive sampling profile as discussed in Section 3.2.1. In summary, for both cases, the cutoff frequency is at
\[\omega_{\text{cut}}\sim\text{max}(\omega_{c},\tau_{b}^{-1}), \tag{43}\]
and the signal-to-noise ratio at the peak frequency in the frequency domain is
\[\frac{S}{N}\bigg{|}_{\text{peak}}\sim\frac{\lambda_{b}}{\text{min}(\omega_{c}, \tau_{b}^{-1})}. \tag{44}\]
### Radiation by multiple fluctuating bunches
Next, we discuss the radiation by multiple fluctuating bunches along a field line, that is, there is a bunch train (with more than one bunch) along the field line. We consider that the radiation by the first fluctuating bunch in the bunch train is \(\tilde{A}(t)\), then the radiation by multiple
Figure 5: The emission spectrum of a single fluctuating bunch. The black, red, and blue curves correspond to the suppressed emission spectrum \((\lambda_{b}\tau_{b})^{2}P_{A}(\omega_{c})\), the quasi-white noise \((\lambda_{b}\tau_{b}^{2}/2\pi)P_{A}(\omega)*\text{sinc}^{2}\left(\tau_{b} \omega/2\right)\), and the total emission spectrum, respectively. The top panel is the case with \(\lambda_{b}=0.1\,\omega_{c}\) and \(\tau_{b}=10\omega_{c}^{-1}\), and the bottom panel is the case with \(\lambda_{b}=0.1\,\omega_{c}\) and \(\tau_{b}=0.1\,\omega_{c}^{-1}\). Here we take \(P_{A}(\omega)\) as the emission spectrum of a single point source given by Eq.(10). The unit is arbitrary.
fluctuating bunches could be described as
\[\hat{A}(t)=\sum_{j}^{N}\tilde{A}(t-t_{j}), \tag{45}\]
where \(t_{j}\) is the arrival time of the pulse generated by the \(j\)-th bunch, and \(N\) is the total number of bunches along a field line. Since the generation process of a bunch has been considered to be a Poisson process, the radiation pulses from multiple fluctuating bunches also satisfy the Poisson distribution, i.e., \(\{t_{j}\}\) satisfies \(p_{t_{j}}(t)=\lambda_{B}P_{j-1}(t)\). Notice that here the rate of the Poisson process is the bunch formation rate \(\lambda_{B}\), rather than the observed pulse rate \(\lambda_{b}\), which is corrected for the propagation time-delay effect.
According to the time shifting property of the Fourier transform, \(\mathcal{F}[\tilde{A}(t-t_{j})]=e^{i\omega t_{j}}\tilde{A}(\omega)\), the Fourier transform of \(\hat{A}(t)\) is
\[\hat{A}(\omega)=\mathcal{F}[\hat{A}(t)]=\tilde{A}(\omega)\sum_{j}^{N}e^{i \omega t_{j}}. \tag{46}\]
Therefore, the emission spectrum of multiple fluctuating bunches is
\[P_{\hat{A}}(\omega) =2|\hat{A}(\omega)|^{2}=2|\tilde{A}(\omega)|^{2}\left|\sum_{j}^{N}e ^{i\omega t_{j}}\right|^{2}\] \[=P_{\tilde{A}}(\omega)\left(N+\sum_{j\neq k}e^{i\omega(t_{j}-t_{k} )}\right)=NP_{\tilde{A}}(\omega), \tag{47}\]
where \(\sum_{j\neq k}e^{i\omega(t_{j}-t_{k})}\simeq 0\), because \(t_{j}\) and \(t_{k}\) are randomly distributed for the Poisson process. In conclusion, the radiation by multiple fluctuating bunches is the incoherent sum of that of each single fluctuating bunch, and the spectral shape of \(P_{\hat{A}}(\omega)\) is the same as that of \(P_{\tilde{A}}(\omega)\).
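The incoherent-sum step (i.e., that the cross terms in Eq.(47) average to zero for Poisson-distributed arrival times) can be verified numerically in a few lines; the rate, frequency, and \(N\) below are arbitrary test values:

```python
import numpy as np

# Minimal numerical check of the incoherent-sum step in Eq. (47): for Poisson-
# distributed arrival times the cross terms average out and |sum_j e^{i w t_j}|^2
# tends to N. The rate, frequency, and N are arbitrary test values.
rng = np.random.default_rng(1)
N, omega, lam = 200, 2 * np.pi * 1e9, 1e3
vals = []
for _ in range(2000):
    t_j = np.cumsum(rng.exponential(1.0 / lam, size=N))   # Poisson arrival times
    vals.append(np.abs(np.exp(1j * omega * t_j).sum()) ** 2)
print("mean |sum_j e^{i w t_j}|^2 =", round(float(np.mean(vals)), 1), " vs N =", N)
```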
## 4 Formation and dispersion of fluctuating bunches
In this section, we will discuss the formation and dispersion mechanisms of bunches in a pulsar or magnetar magnetosphere and constrain \(\lambda_{B}\) and \(\tau_{B}\) in various physical scenarios.
### Bunch formation by two-stream instability
The most popular theory for the bunching mechanism in radio pulsars is that charged bunches are generated in the magnetosphere due to the two-stream instability developed by the interaction between two plasma components with different Lorentz factors (Ruderman & Sutherland, 1975; Benford & Buschauer, 1977; Cheng & Ruderman, 1980; Usov, 1987). A similar mechanism has also been proposed as a possible mechanism for FRBs (Kumar & Bosnjak, 2020; Kumar et al., 2022). For an effective two-stream instability that is responsible for the observed radio emission, the typical timescale for the development of the two-stream instability, \(\tau_{\lambda}=\lambda_{B}^{-1}\) in the pulsar frame should be shorter than the dynamical timescale of the plasma stream \(\tau_{0}=r/c\) at the distance \(r\) from a neutron star, i.e.
\[\tau_{\lambda}=\lambda_{B}^{-1}<\tau_{0}=\frac{r}{c}=3\times 10^{-5}\ \mathrm{s}\ r_{6}. \tag{48}\]
We consider two plasma components with relative motion in the pulsar frame, which are denoted by "1" and "2", respectively. Their typical Lorentz factors are \(\gamma_{1}\) and \(\gamma_{2}\) with \(\gamma_{1}>\gamma_{2}\), and their number densities are \(n_{1}\) and \(n_{2}\) with \(n_{1}/\hat{\gamma}n_{2}\ll 1\), where \(\hat{\gamma}\simeq(1/2)(\gamma_{1}/\gamma_{2}+\gamma_{2}/\gamma_{1})\simeq(1/2 )(\gamma_{1}/\gamma_{2})\) is the relative Lorentz factor. Then the one-dimensional electrostatic dispersion relation is given by (Benford & Buschauer, 1977; Usov & Usov, 1988),
\[1-\frac{\omega_{p,2}^{2}}{\omega^{2}}-\frac{\omega_{p,1}^{2}}{\hat{\gamma}^{3 }(\omega-k\hat{v})^{2}}=0, \tag{49}\]
where \(\omega_{p,j}=(4\pi e^{2}n_{j}/m_{e})^{1/2}\) is the plasma frequency of the component \(j\), \(\hat{v}\) is the relative velocity between the two plasma components with \(\hat{\gamma}=(1-\hat{v}^{2}/c^{2})^{-1/2}\), and \(k\) is the wavevector of the electrostatic wave. According to this dispersion relation, the typical timescale for the development of a two-stream instability in the pulsar frame is (Benford & Buschauer, 1977; Usov, 1987)
\[\tau_{\lambda}\sim[\mathrm{Im}(\omega_{\mathrm{max}})]^{-1}\sim\left(\frac{n_{ 2}}{n_{1}}\right)^{1/3}\gamma_{1}\gamma_{2}^{1/2}\omega_{p,2}^{-1}, \tag{50}\]
where \(\mathrm{Im}(\omega_{\mathrm{max}})\) corresponds to the fastest growth rate obtained by dispersion relation.
First, we discuss the two-stream instability caused by the relative motion between an ultrarelativistic beam plasma and a relativistic cascade pair plasma. Generally, the plasma that flows out from the pulsar can be divided into two components: (1) An ultrarelativistic primary beam (denoted by \(u\)) that is directly accelerated by the charge-starved regions named "gaps". It has a typical Lorentz factor \(\gamma_{u}\) and a density \(n_{u}\); (2) A relativistic electron-positron plasma (denoted by \(\pm\)) that is produced by the pair cascade process. It has a typical Lorentz factor \(\gamma_{\pm}\ll\gamma_{u}\) and its density is
\[n_{\pm}\simeq\left(\frac{\gamma_{u}}{2\gamma_{\pm}}\right)n_{u}. \tag{51}\]
The number density of the primary ultrarelativistic beam should generally follow the number density of net charges in the magnetosphere. There are two scenarios for the magnetosphere of a neutron star: (1) If the magnetosphere is non-twisting, the number density of net charges is the Goldreich-Julian density (Goldreich & Julian, 1969),
\[n_{\mathrm{GI}}\equiv\frac{\Omega B(r)}{2\pi ec}, \tag{52}\]
where \(B(r)=B_{p}(r/R_{n})^{-3}\) is the strength of a dipole field at the distance \(r\), \(B_{p}\) is the surface magnetic field, \(R_{n}\) is the neutron star radius, and \(\Omega\) is the neutron star angular velocity. (2) If the magnetosphere is twisted during an activity (e.g. a magnetar activity), the number density of net charges is
\[n_{\mathrm{twist}}\equiv\frac{1}{4\pi e}\nabla\times\mathbf{B}\sim\frac{B(r)}{4\pi er}\sin^{2}\theta\Delta\phi, \tag{53}\]
where \(\theta\) is the poloidal angle and \(\Delta\phi\) is the twisting angle of the field. Generally, for a certain neutron star, one usually has
\[n_{u}\sim\mathrm{max}(n_{\mathrm{GI}},n_{\mathrm{twist}}). \tag{54}\]
According to Eq.(50), the typical timescale for the development of the two-stream instability in the pulsar frame is
\[\tau_{\lambda}\sim\left(\frac{n_{\pm}}{n_{u}}\right)^{1/3}\gamma_{u}\gamma_{ \pm}^{1/2}\omega_{p}^{-1}, \tag{55}\]
where \(\omega_{p}=(4\pi e^{2}n_{\pm}/m_{e})^{1/2}\) is the pair plasma frequency. For the typical parameters of a pulsar with a non-twisting magnetosphere, the two-stream instability for such a stationary environment may not have enough time to develop, i.e., \(\tau_{\lambda}>\tau_{0}\)(Usov, 1987). Here, we are mainly interested in the scenario of the twisted magnetosphere of a magnetar, in which case, the number density of charged particles in
the magnetosphere is large enough, leading to a larger growth rate for the two-stream instability.
We consider that a magnetar has the surface magnetic field \(B_{p}\sim 10^{14}\) G, rotation period \(P\sim 0.1\) s and twisting angle \(\Delta\phi\sim 0.1\). We take a polar angle \(\sin^{2}\theta\sim 0.1\), and the two components of the outflowing plasmas have Lorentz factors of \(\gamma_{u}\sim 10^{5}\) and \(\gamma_{\pm}\sim 100\), respectively. Because \(n_{\rm twisting}\gg n_{\rm GJ}\), one has \(n_{u}=n_{\rm twisting}\sim 1.6\times 10^{14}\ {\rm cm}^{-3}r_{6}^{-4}\) and \(n_{\pm}=(\gamma_{u}/2\gamma_{\pm})n_{u}\sim 8.3\times 10^{16}\ {\rm cm}^{-3}r_{6}^{-4}\). Thus, the timescale for the development of a two-stream instability is \(\tau_{\lambda}\sim 4.9\times 10^{-7}\ {\rm s}\ r_{6}^{2}\). According to the condition of \(\tau_{\lambda}<\tau_{0}\), the instability would develop within the distance of \(r\sim(10^{7}-10^{8})\) cm. Since \(\omega>\omega_{p}/\sqrt{\gamma_{\pm}}\) at \(r\sim(10^{7}-10^{8})\) cm, the plasma environment would be transparent for GHz wave2.
Footnote 2: Notice that in the work, the transparency condition of the electromagnetic waves does not involve the non-linear plasma effect of strong waves, which would cause a much lower plasma oscillation frequency but a larger scattering opacity (Yang and Zhang, 2020; Beloborodov, 2022; Qu et al., 2022).
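The order-of-magnitude estimates in the preceding paragraph can be reproduced with the short sketch below, using the fiducial magnetar parameters assumed in the text (Gaussian cgs units):

```python
import numpy as np

# Numerical check of the order-of-magnitude estimates above, using the fiducial
# magnetar parameters assumed in the text (Gaussian cgs units).
e, m_e = 4.803e-10, 9.109e-28
B_p, R_ns, r = 1e14, 1e6, 1e6            # surface field (G), stellar radius and emission radius (cm)
dphi, sin2theta = 0.1, 0.1
gamma_u, gamma_pm = 1e5, 1e2

B_r = B_p * (r / R_ns) ** -3
n_u = B_r / (4 * np.pi * e * r) * sin2theta * dphi               # Eq. (53), twisted magnetosphere
n_pm = (gamma_u / (2 * gamma_pm)) * n_u                           # Eq. (51)
omega_p = np.sqrt(4 * np.pi * e**2 * n_pm / m_e)
tau_lam = (n_pm / n_u) ** (1.0 / 3.0) * gamma_u * np.sqrt(gamma_pm) / omega_p   # Eq. (55)
print(f"n_u ~ {n_u:.1e} cm^-3, n_pm ~ {n_pm:.1e} cm^-3, tau_lambda ~ {tau_lam:.1e} s")
```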
The transparency condition \(\omega>\omega_{p}/\sqrt{\gamma_{\rm min}}\) can give a general constraint on the instability timescale,
\[\tau_{\lambda}>\gamma_{u}^{4/3}\gamma_{\pm}^{-1/3}\omega^{-1}. \tag{56}\]
For example, we consider that \(\gamma_{u}\sim 10^{5}\), \(\gamma_{\pm}\sim 100\), and \(\omega/2\pi\sim 10^{9}\ {\rm s}^{-1}\), then the timescale of the two-stream instability is constrained to be \(\tau_{\lambda}>1.6\times 10^{-4}\) s. Using the condition of \(\tau_{\lambda}<\tau_{0}=r/c\), the bunch formation region should be at \(r>(10^{6}-10^{7})\) cm. One also has \(\lambda_{B}<6.3\times 10^{3}\ {\rm s}^{-1}\) and \(\lambda_{b}\lesssim 2\gamma_{\pm}^{2}\lambda_{B}\simeq 1.2\times 10^{8}\ {\rm s}^{-1}\). Since \((S/N)_{\rm peak}=2\pi\lambda_{b}/\omega_{c}\ll 1\) for GHz waves with \(\omega_{c}/2\pi\sim 10^{9}\ {\rm s}^{-1}\), the corresponding spectrum would appear as white noise in the frequency domain.
Another possibility for the development of a two-stream instability is due to the non-stationarity of the plasma stream (Usov, 1987; Ursov and Usov, 1988). In this case, the pair plasma that flows out from the pulsar is inhomogeneous and can gather into separate clouds along the field lines. We consider that the Lorentz factors of the electron/positrons in the pair plasma have a wide range distribution from \(\gamma_{\rm min}\) up to \(\gamma_{\rm max}\), so the pair clouds disperse as they flow out from the pulsar. The energy distribution of the plasma particles satisfies
\[n(\gamma)=n_{\gamma}\gamma^{-p},\qquad\gamma_{\rm min}<\gamma<\gamma_{\rm max}. \tag{57}\]
length \(l\), width \(b\), and Lorentz factor \(\gamma\). Due to the magnetosphere rotation, the stationary charge density distribution is the Goldreich-Julian density (Goldreich & Julian, 1969), and only the fluctuating density with \(\delta\rho=\rho-\rho_{\rm GJ}>0\) would be affected by the electrostatic repulsion, where \(\rho_{\rm GJ}=\Omega B/2\pi c\). The bunch expands along the field line due to the binding of the magnetic field. In the bunch comoving frame corrected by the magnetosphere rotation, a charged particle in the bunch end is acted by the electrostatic force
\[m_{e}\frac{l^{\prime}}{\tau_{\rm rep}^{\prime 2}}\sim\frac{\delta Qe}{l^{\prime 2}}\sim e\delta\rho^{\prime}\frac{b^{2}}{l^{\prime}}, \tag{63}\]
where \(\delta Q=Q-Q_{\rm GJ}\) is the bunch total charge deducting the Goldreich-Julian charge contribution, the quantities labeled by a prime are in the bunch comoving frame, \(\tau_{\rm rep}^{\prime}\) is the typical repulsion timescale in the bunch comoving frame, \(l^{\prime}/\tau_{\rm rep}^{\prime}\) is the typical acceleration of the particle in Newton's second law. Therefore, the typical repulsion timescale in the pulsar frame is
\[\tau_{\rm rep}\sim\gamma\tau_{\rm rep}^{\prime}\sim\gamma^{2}\left(\frac{ \gamma m_{e}}{e\delta\rho}\right)^{1/2}\frac{l}{b}, \tag{64}\]
where the relativistic transformations of \(\delta\rho^{\prime}\sim\delta\rho/\gamma\) and \(l^{\prime}\sim\gamma l\) are used. If we further assume that the bunch is approximately isotropic in the comoving frame, \(l^{\prime}\sim b\), one would have \(l/b\sim 1/\gamma\), leading to
\[\tau_{\rm rep}\sim\gamma\left(\frac{\gamma m_{e}}{e\delta\rho}\right)^{1/2} \simeq 6.3\times 10^{-8}\ {\rm s}\ \gamma_{2}^{3/2}\delta n_{12}^{-1/2}, \tag{65}\]
where \(\delta n=\delta\rho/e\). According to the transparency condition \(\omega>\omega_{p}/\sqrt{\gamma}\) with \(\omega_{p}=(4\pi e\kappa\rho/m_{e})^{1/2}\) and \(\delta\rho\sim\rho\), where \(\kappa\) is the pair multiplicity, the repulsion timescale could be constrained by
\[\tau_{\rm rep}>\frac{(4\pi\kappa)^{1/2}\gamma}{\omega}. \tag{66}\]
For \(\omega/2\pi\sim 10^{9}\ {\rm s^{-1}}\), \(\gamma\sim 100\) and \(\kappa\sim 10^{4}\), one has \(\tau_{\rm rep}>5.6\times 10^{-6}\ {\rm s}\).
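A quick numerical check of Eq.(65) and Eq.(66) at the fiducial values quoted above is sketched below (Gaussian cgs units):

```python
import numpy as np

# Check of Eqs. (65)-(66) at the fiducial values above (gamma = 1e2,
# delta_n = 1e12 cm^-3, kappa = 1e4, nu = 1 GHz; Gaussian cgs units).
e, m_e = 4.803e-10, 9.109e-28
gamma, delta_n, kappa, nu = 1e2, 1e12, 1e4, 1e9
omega = 2 * np.pi * nu

tau_rep = gamma * np.sqrt(gamma * m_e / (e * (e * delta_n)))     # Eq. (65), delta_rho = e*delta_n
tau_rep_min = np.sqrt(4 * np.pi * kappa) * gamma / omega          # Eq. (66)
print(f"tau_rep ~ {tau_rep:.1e} s, transparency lower bound ~ {tau_rep_min:.1e} s")
```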
Another possibility is that the bunches are fast cooling by the radiation reaction. Due to the large coherent radiation power, the bunches likely cool rapidly within a very short timescale. For simplicity, we consider a bunch as a point source with a net charge number \(N\), the radiation power of coherent curvature radiation is
\[P_{c}=N^{2}\frac{2e^{2}c\gamma^{4}}{3\rho^{2}}. \tag{67}\]
Thus, the isotropic equivalent luminosity is given by
\[L_{\rm iso}\sim\gamma^{4}N_{\rm patch}P_{c}\sim N_{\rm patch}N^{2}\frac{2e^{ 2}c\gamma^{8}}{3\rho^{2}}, \tag{68}\]
where \(N_{\rm patch}\) is the number of coherent patches, the factor of \(\gamma^{4}\) is attributed to the radiation beaming effect and the retarded time corrected by the relativistic propagation effect (Kumar et al., 2017). If the radiation energy is mainly from the kinetic energy of the charged particles in the bunch, the cooling time is
\[\tau_{\rm cool} \sim \frac{N\gamma m_{e}c^{2}}{P_{c}}\sim\frac{3m_{e}c^{2}\rho}{2e^{2 }\omega_{c}N}\sim\frac{3^{1/2}m_{e}c^{5/2}\gamma^{4}N_{\rm patch}^{1/2}}{2^{1/ 2}e\omega_{c}L_{\rm iso}^{1/2}} \tag{69}\] \[\simeq 5.8\times 10^{-14}\ {\rm s}\ \gamma_{2}^{4}N_{\rm patch}^{1/2}\nu_{c,9}^{-1}L_{\rm iso,40}^{-1/2},\]
where \(\omega_{c}\sim\gamma^{3}c/\rho\) is used. Thus, the cooling time of a charged bunch could be very short, ranging from radio pulsars with \(L_{\rm iso}\sim 10^{30}\ {\rm erg\ s^{-1}}\) to FRBs with \(L_{\rm iso}\sim 10^{40}\ {\rm erg\ s^{-1}}\). However, this discussion assumes that the radiation is mainly from the particle kinetic energy \(N\gamma m_{e}c^{2}\). For pulsars and especially for FRBs, the radio emission is usually believed to be generated at the charge-starved region where there exists an electric field parallel to the local magnetic field with an energy density \(U_{E}\sim E^{2}/8\pi\gg\gamma m_{e}c^{2}n_{e}\), where \(n_{e}\) is the electron number density (Ruderman & Sutherland, 1975; Kumar et al., 2017, 2022). The parallel electric field accelerates the charged bunches and cancels the dispersion due to radiative cooling, leading to a much longer cooling time.
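The cooling-time estimate can be checked numerically as follows, using the fiducial values \(\gamma\sim 10^{2}\), \(\nu_{c}\sim 1\) GHz, \(N_{\rm patch}\sim 1\), and \(L_{\rm iso}\sim 10^{40}\ {\rm erg\ s^{-1}}\) (Gaussian cgs units):

```python
import numpy as np

# Check of the cooling-time estimate in Eq. (69) at the fiducial values
# gamma = 1e2, nu_c = 1 GHz, N_patch = 1, L_iso = 1e40 erg/s (Gaussian cgs units).
e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10
gamma, nu_c, N_patch, L_iso = 1e2, 1e9, 1.0, 1e40
omega_c = 2 * np.pi * nu_c
tau_cool = np.sqrt(1.5) * m_e * c**2.5 * gamma**4 * np.sqrt(N_patch) / (e * omega_c * np.sqrt(L_iso))
print(f"tau_cool ~ {tau_cool:.1e} s")   # ~6e-14 s for these values
```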
## 5 Conclusions and Discussions
Although coherent curvature radiation by charged bunches has been proposed to explain the coherent emissions of radio pulsars and FRBs, this mechanism still encounters some issues, including how the charged bunches form and disperse and what the radiation features are for the case of fluctuating bunches in the emission region. In this work, we consider that the bunches in a neutron star magnetosphere form with an average rate of \(\lambda_{B}\) and have an average lifetime of \(\tau_{B}\). We mainly analyze the spectral features of coherent curvature radiation by dynamically fluctuating bunches and discuss some possible physical mechanisms for the formation and dispersion of the charged bunches in the magnetosphere of a neutron star. The following conclusions are drawn:
1. We first point out that the classical formula of calculating the brightness temperature of FRB emission, i.e. Eq.(2), that involves the transient duration \(\Delta t\) is not applicable to the scenario of the magnetospheric curvature radiation, because \(\Delta t\) does not directly reflect the transverse size \(l_{e}\) of the emission region for curvature radiation. Considering that the charged bunches move along the field line with a Lorentz factor \(\gamma\) at a distance \(r\) from the neutron star center, the transverse size of the emission region is estimated as \(l_{e}\sim r/\gamma\) for \(\omega\sim\omega_{c}\), leading to \(l_{e}\ll c\Delta t\). Therefore, for the typical parameters of the magnetosphere, the brightness temperature should be much larger than that given by Eq.(2).
2. The classical theory of curvature radiation implicitly assumes that the bunch lifetime satisfies \(\tau_{B}>\rho/\gamma c\), where \(\rho\) is the curvature radius and \(\gamma\) is the bunch Lorentz factor. In this case, both the typical frequency and the cutoff frequency are \(\omega_{c}\sim\gamma^{3}c/\rho\), and the spectral feature depends on the spatial structure of the bunch. For example, we consider that the bunch has a lengthscale of \(l\). Compared with the radiation spectrum of a single point-charge persistent bunch, the radiation spectrum of an extended bunch is corrected by a factor of \(\text{sinc}^{2}(\omega/\omega_{l})\) with \(\omega_{l}\sim 2c/l\), and the radiation power is suppressed by a factor of \((\omega_{l}/\omega_{c})^{5/3}\). In particular, since the excess of one charge is usually compensated by the lack of this charge in the nearby regions, a bunch-cavity system might form in the magnetosphere. Such a system has an emission spectrum much narrower than that of a single persistent bunch, with \(P_{A}(\omega)\propto\omega^{8/3}\) in the low-frequency band.
3. If the bunch lifetime is short enough, \(\tau_{B}<\rho/\gamma c\), the cutoff frequency would become \(\tilde{\omega}\sim\gamma^{2}/\tau_{B}\gtrsim\omega_{c}\). Thus, a short-lived bunch will radiate electromagnetic waves with a higher frequency compared with that of classical curvature radiation. Considering that bunches form and disperse intermittently as the plasma particles that build them move along a magnetic field line, the emission spectrum of such a fluctuating bunch is a convolution between the emission spectrum of a single persistent bunch \(P_{A}(\omega)\) and that of the pulse sampling function \(P_{S}(\omega)\), i.e., \(P_{\tilde{A}}(\omega)=P_{A}(\omega)*P_{S}(\omega)\), where \(P_{S}(\omega)\) is the emission spectrum of the pulse sampling function \(S(t)\) that is described by Eq. (18) and Figure 4.
4. According to the above point, we obtained the emission spectrum of a single fluctuating bunch, \(P_{\tilde{A}}(\omega)\). We find that compared with that of a single persistent bunch, \(P_{\tilde{A}}(\omega)\) is suppressed by a factor of \((\lambda_{b}\tau_{b})^{2}\), where \(\lambda_{b}\simeq 2\gamma^{2}\lambda_{B}\) and \(\tau_{b}\simeq\tau_{B}/2\gamma^{2}\) are the pulse rate and duration, respectively, \(\lambda_{B}\) and \(\tau_{B}\) are the bunch formation rate and lifetime, respectively, and the factor of \(2\gamma^{2}\) arises from the propagation time-delay effect. Meanwhile, there is a quasi-white noise in the wider band. We define \((S/N)_{\rm peak}\) as the "signal-to-noise ratio" at the peak frequency \(\omega_{c}\) in the frequency domain, and \((S/N)_{\rm peak}\gtrsim 1\) means that the spectrum is non-white-noise. If \(\tau_{b}<\omega_{c}^{-1}\), where \(\omega_{c}\sim\gamma^{3}c/\rho\) is the typical frequency of the classical curvature radiation, one has \((S/N)_{\rm peak}\sim\lambda_{b}/\omega_{c}\), and there is a high-frequency cutoff in the white noise at \(\tilde{\omega}_{c}\sim\tau_{b}^{-1}\). If \(\tau_{b}>\omega_{c}^{-1}\), one has \((S/N)_{\rm peak}\sim\lambda_{b}\tau_{b}=\lambda_{B}\tau_{B}\), and the cutoff frequency of the whole spectrum is at \(\sim\omega_{c}\).
5. If there are multiple fluctuating bunches along a field line, the emission spectrum of multiple fluctuating bunches is the incoherent sum of that of each single fluctuating bunch, because the separations between adjacent bunches are randomly distributed. In this scenario, the spectral shape is the same as that of a single fluctuating bunch, and the total radiation power is incoherently enhanced.
6. We briefly discussed some mechanisms for bunch formation and dispersion. Since the radiation power of a fluctuating bunch is suppressed by a factor \((\lambda_{B}\tau_{B})^{2}\), if the coherent radiation power is not to be significantly suppressed by bunch fluctuation, the condition \((\lambda_{B}\tau_{B})^{2}\sim 1\) should be satisfied. Generally, for a plasma instability, the decay rate is usually of the order of the growth rate, leading to \((\lambda_{B}\tau_{B})^{2}\sim 1\). However, some extreme mechanisms, e.g., velocity dispersion, electrostatic repulsion, and radiation cooling, may cause \(\tau_{B}\ll\lambda_{B}^{-1}\), which would suppress the radiation power. On the other hand, if the observed spectrum is non-white-noise, the condition of \((S/N)_{\rm peak}\gtrsim 1\) requires \(\lambda_{b}\gtrsim\min(\omega_{c},\tau_{b}^{-1})\), leading to \(\lambda_{B}\gtrsim\omega_{c}/2\gamma^{2}\simeq 3\times 10^{5}\ {\rm s^{-1}}\ \nu_{c,9}\,\gamma_{2}^{-2}\) for \(\omega_{c}<\tau_{b}^{-1}\).
At last, we also notice that the theory of the spectral analysis for fluctuating bunches not only applies to curvature radiation but also to coherent inverse Compton scattering (ICS) by charged bunches (Zhang 2022b). This mechanism generally has a much higher radiation power than curvature radiation, so that a lower degree of coherence is needed to interpret FRBs. Similar to the scenario of curvature radiation, a white-noise component in the emission spectrum would also appear due to bunch fluctuations. Compared with coherent curvature radiation, the major difference is that the emission spectrum of coherent ICS mainly depends on the properties of the incident electromagnetic waves which should be involved in the discussion of the emission of a single persistent bunch. On the other hand, since the radiation direction of coherent ICS is also along the magnetic field line due to the relativistic motion of the bunches, its emission spectrum might be modulated by the typical frequency \(\omega_{c}\). A detailed analysis of this mechanism will be performed in the future.
## Acknowledgements
We thank Qiao-Chu Li for the constructive discussion about the signal theory. Y-PY's work is supported by the National Natural Science Foundation of China grant No.12003028, the National Key Research and Development Program of China (2022SKA0130101), and the China Manned Space Project (CMS-CSST-2021-B11).
## Data Availability
This theoretical study did not generate any new data.
|
2308.11127 | How Expressive are Graph Neural Networks in Recommendation? | Graph Neural Networks (GNNs) have demonstrated superior performance on
various graph learning tasks, including recommendation, where they leverage
user-item collaborative filtering signals in graphs. However, theoretical
formulations of their capability are scarce, despite their empirical
effectiveness in state-of-the-art recommender models. Recently, research has
explored the expressiveness of GNNs in general, demonstrating that message
passing GNNs are at most as powerful as the Weisfeiler-Lehman test, and that
GNNs combined with random node initialization are universal. Nevertheless, the
concept of "expressiveness" for GNNs remains vaguely defined. Most existing
works adopt the graph isomorphism test as the metric of expressiveness, but
this graph-level task may not effectively assess a model's ability in
recommendation, where the objective is to distinguish nodes of different
closeness. In this paper, we provide a comprehensive theoretical analysis of
the expressiveness of GNNs in recommendation, considering three levels of
expressiveness metrics: graph isomorphism (graph-level), node automorphism
(node-level), and topological closeness (link-level). We propose the
topological closeness metric to evaluate GNNs' ability to capture the
structural distance between nodes, which aligns closely with the objective of
recommendation. To validate the effectiveness of this new metric in evaluating
recommendation performance, we introduce a learning-less GNN algorithm that is
optimal on the new metric and can be optimal on the node-level metric with
suitable modification. We conduct extensive experiments comparing the proposed
algorithm against various types of state-of-the-art GNN models to explore the
explainability of the new metric in the recommendation task. For
reproducibility, implementation codes are available at
https://github.com/HKUDS/GTE. | Xuheng Cai, Lianghao Xia, Xubin Ren, Chao Huang | 2023-08-22T02:17:34Z | http://arxiv.org/abs/2308.11127v3 | # How Expressive are Graph Neural Networks in Recommendation?
###### Abstract.
Graph Neural Networks (GNNs) have demonstrated superior performance in various graph learning tasks, including recommendation, where they explore user-item collaborative filtering signals within graphs. However, despite their empirical effectiveness in state-of-the-art recommender models, theoretical formulations of their capability are scarce. Recently, researchers have explored the expressiveness of GNNs, demonstrating that message passing GNNs are at most as powerful as the Weisfeiler-Lehman test, and that GNNs combined with random node initialization are universal. Nevertheless, the concept of "expressiveness" for GNNs remains vaguely defined. Most existing works adopt the graph isomorphism test as the metric of expressiveness, but this graph-level task may not effectively assess a model's ability in recommendation, where the objective is to distinguish nodes of different closeness. In this paper, we provide a comprehensive theoretical analysis of the expressiveness of GNNs in recommendation, considering three levels of expressiveness metrics: graph isomorphism (graph-level), node automorphism (node-level), and topological closeness (link-level). We propose the topological closeness metric to evaluate GNNs' ability to capture the structural distance between nodes, which closely aligns with the recommendation objective. To validate the effectiveness of this new metric in evaluating recommendation performance, we introduce a learning-less GNN algorithm that is optimal on the new metric and can be optimal on the node-level metric with suitable modification. We conduct extensive experiments comparing the proposed algorithm against various types of state-of-the-art GNN models to explore the effectiveness of the new metric in the recommendation task. For the sake of reproducibility, implementation codes are available at [https://github.com/HKUDS/GTE](https://github.com/HKUDS/GTE).
Graph Neural Networks, Recommender Systems

Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
+
Footnote †: ccs: Information systems Recommender systems.
which primarily focuses on node similarity rather than graph-level properties. In their study on link prediction with subgraph sketching, Chamberlain et al. (Chamberlain et al., 2017) emphasize the importance of distinguishing automorphic nodes (symmetric nodes in the same orbit induced by the graph automorphism group) for link prediction, highlighting that the message passing mechanism of GNNs with equivalent power to the WL test lacks this ability. Although node automorphism is more relevant to link prediction and recommendation than graph isomorphism, it still does not fully align with the objective of recommendation, as it only requires distinguishing different nodes without considering their relative proximity. Secondly, existing works primarily focus on general graphs, while recommendation systems typically involve bipartite user-item interaction graphs, allowing for stronger conclusions regarding the expressiveness of GNNs (as we will demonstrate in Section 3). Figure 0(a) provides an overview of the current progress in formulating the expressiveness of GNNs, depicted by the light orange and green boxes.
In this paper, we propose a comprehensive theoretical framework for analyzing the expressiveness of GNNs in recommendation, encompassing three levels of expressiveness metrics: i) Graph-level: The ability to distinguish isomorphic graphs, while less directly relevant to recommendation tasks, is included in our framework to ensure coherence and consistency with previous works (Beng et al., 2017; Chen et al., 2018; Chen et al., 2018). ii) Node-level: The ability to distinguish automorphic nodes, as mentioned in Chamberlain et al. (Chamberlain et al., 2017), is particularly relevant to recommendation systems as it assesses the model's capability to identify different users and items. At this level, we investigate the impact of equipping message passing GNNs with distinct (yet non-random) initial embeddings. Our research demonstrates that GNNs with distinct initial embeddings can successfully differentiate some of the automorphic nodes, although not all of them. Notably, we establish that when the graph is constrained to a bipartite structure, GNNs with distinct initial embeddings are capable of distinguishing all automorphic nodes. iii) Link-level: The ability to discriminate nodes of different topological closeness to a given node. We define topological closeness in Section 4 and propose it as a new metric of expressiveness that directly aligns with the recommendation objective. Our theoretical analysis shows that the popular paradigm adopted in most GNN-based recommender systems (message passing GNN with random initial embeddings, using the inner product between embeddings as the prediction score) cannot fully discriminate nodes based on topological closeness. The relations between the three levels of metrics to different graph tasks are illustrated in Figure 0(b).
It is worth noting that no single expressiveness metric can fully explain recommendation, as user preferences involve much more complicated factors than what can be encoded in a user-item interaction graph. Even if a metric directly aligns with the recommendation objective, achieving optimality on that metric does not guarantee flawless recommendation performance. Therefore, in Section 5, we analyze the effectiveness of topological closeness, the newly proposed metric, in explaining recommendation performance. Specifically, we propose a lightweight Graph Topology Encoder (GTE) that adopts the message passing framework, but does not have learnable parameters. The learning-less characteristic of GTE makes it much more efficient than learning-based GNN recommends. We prove that GTE is optimal on the new topological closeness metric and can achieve optimality on the node automorphism metric and the expressive power equivalent to the WL test on the graph isomorphism metric with suitable modification of the mapping function. Since GTE is optimal in discriminating nodes by topological closeness, we conduct various experiments with GTE and state-of-the-art GNN and GCL models to evaluate the effectiveness of the new metric in the recommendation task. The theories we prove in this paper and their relations to previous works are presented in Figure 0(a) (highlighted blue boxes).
In summary, our contributions are highlighted as follows:
* We perform a comprehensive theoretical analysis on the expressiveness of GNNs in recommendation under a three-level framework designed specifically for the recommendation task.
* We introduce a new link-level metric of GNN expressiveness, topological closeness, that directly aligns with the recommendation objective and is more suitable for evaluating recommendation expressiveness.
* We propose a learning-less GNN algorithm GTE that is optimal on the link-level metric, whose learning-less feature enables it to be much more efficient than learning-based GNN recommenders.
* We conduct extensive experiments on six real-world datasets of different sparsity levels with GTE and various baselines to explore the effectiveness of topological closeness in recommendation.
## 2. Preliminaries and Related Work
### GNNs for Recommendation
The ability to extract multi-hop collaborative signals by aggregating neighborhood representation makes graph neural networks a prominent direction of research in recommender systems (Kang et al., 2017; Wang et al., 2018). Most GNN-based recommender models adopt the message passing
Figure 1. Theoretical framework of GNN expressiveness.
type of GNNs, or more specifically, the graph convolutional networks (GCN) (GCN and GCN, 2017), as the backbone, such as NGCF (Zhou et al., 2018) and PinSage (2018). GCCF (GCN and GCN, 2017) incorporates the residual structure into this paradigm. LightGCN (GCN and GCN, 2017) further adapts to the recommendation task by removing the non-linear embedding transformation. There are also attempts to enhance the GCN framework with supplementary tasks, such as masked node embedding reconstruction in (Zhou et al., 2018).
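To make the message passing paradigm described above concrete, the following minimal sketch illustrates LightGCN-style propagation (symmetric normalization, no feature transformation or non-linearity, and a mean over layer outputs); the variable names and toy shapes are ours and do not correspond to any particular implementation:

```python
import numpy as np

# A minimal sketch of LightGCN-style propagation as described above: symmetric
# normalization, no feature transformation, and averaging of the layer outputs.
# R, the embeddings, and all names are illustrative assumptions.
def lightgcn_propagate(R, user_emb, item_emb, n_layers=2):
    n_u, n_i = R.shape
    A = np.block([[np.zeros((n_u, n_u)), R], [R.T, np.zeros((n_i, n_i))]])
    d = A.sum(axis=1)
    d[d == 0] = 1.0
    A_hat = A / np.sqrt(np.outer(d, d))          # D^{-1/2} A D^{-1/2}
    E = np.vstack([user_emb, item_emb])
    layer_outputs = [E]
    for _ in range(n_layers):
        E = A_hat @ E                             # pure neighborhood aggregation
        layer_outputs.append(E)
    E_final = np.mean(layer_outputs, axis=0)      # combine layers by averaging
    return E_final[:n_u], E_final[n_u:]

R = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])  # 2 users x 3 items toy interactions
u_emb, i_emb = lightgcn_propagate(R, np.random.rand(2, 4), np.random.rand(3, 4))
print(u_emb.shape, i_emb.shape)                   # (2, 4) (3, 4)
```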
Recently, the self-supervised learning (SSL) paradigm has been incorporated into GNN-based recommenders to address the data sparsity issue (Zhou et al., 2018; PinSage and GCN, 2018; Zhou et al., 2018; GCN and GCN, 2017). The graph contrastive learning (GCL) is one of the most prominent lines of research that leverages the self-supervision signals produced by aligning embeddings learned from different views. Various GCL approaches have been proposed to produce augmented views, such as random feature dropout (Zhou et al., 2018), hypergraph global learning (Zhou et al., 2018), adaptive dropping based on node centrality (Zhou et al., 2018), embeddings noisy perturbation (Zhou et al., 2018), and SVD-reconstruction (Beng et al., 2018). Despite their promising performance on real data, these models are often validated by experiments, with little theoretical formulation and guarantee on their capability.
### The Graph-Level Expressive Power of GNNs
In recent years, there has been significant research investigating the expressiveness of GNNs using graph-level metrics, such as the graph isomorphism test. In particular, Xu et al. (2018) demonstrate that the expressive power of message passing GNNs is at most equivalent to the Weisfeiler-Lehman (WL) test, a popular graph isomorphism algorithm. Additionally, they establish that this equivalence can be achieved by ensuring that both the aggregation function and the readout function are injective. In our analysis, we adopt and adapt their conclusions, introducing relevant concepts and notations that will be utilized throughout our study.
**Theorem 1** (from Xu et al. (2018)).: _Let \(h_{v}^{(k)}\) be the feature of node \(v\) in the \(k\)-th iteration, and \(\mathcal{N}(v)\) be the set of neighboring nodes of \(v\). With sufficient layers, a GNN can map any two graphs \(G_{1}\) and \(G_{2}\) that the WL-test decides as non-isomorphic to two different multisets of node features \(\{h_{v}^{(k)}\}_{1}\) and \(\{h_{v}^{(k)}\}_{2}\), if the aggregation function \(h_{v}^{(k)}=g((h_{u}^{(k-1)}:u\in\mathcal{N}(v)\cup\{v\}))\) is injective. Here, a multiset is defined as a set that allows multiple instances from its elements._
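To illustrate the injective-aggregation idea behind Theorem 1, the sketch below performs Weisfeiler-Lehman-style label refinement by hashing the multiset of each node's own and neighboring labels; it is only an illustration in which Python's built-in hash stands in for an injective mapping, not code from the cited works:

```python
from collections import Counter

# A minimal sketch of WL-style refinement: each node's new label is a hash of the
# multiset of its own and its neighbors' current labels. Two graphs whose final
# label multisets differ are certified non-isomorphic by the test.
def wl_refine(adj, labels, rounds=3):
    """adj: {node: set of neighbors}; labels: {node: hashable initial label}."""
    for _ in range(rounds):
        labels = {
            v: hash((labels[v], tuple(sorted(Counter(labels[u] for u in adj[v]).items()))))
            for v in adj
        }
    return Counter(labels.values())      # graph-level multiset of node labels

# Toy usage: a 3-node path vs. a triangle, both with identical initial labels.
path = {0: {1}, 1: {0, 2}, 2: {1}}
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
init = {v: 0 for v in range(3)}
print(wl_refine(path, init) != wl_refine(triangle, init))   # True: distinguished
```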
It is noteworthy that the aforementioned theorem solely focuses on the message passing mechanism of GNNs and assumes that nodes possess identical constant initial features, similar to the WL test. However, recent studies by Abboud et al (2018) and Sato et al (2018) highlight that the capacity of GNNs can be significantly enhanced when nodes are equipped with randomly initialized features. Remarkably, random node initialization empowers GNNs with universality, enabling them to approximate any function on a graph and potentially solve the graph isomorphism problem (Beng et al., 2018). It is important to emphasize that this conclusion does not come with a complexity guarantee since the computational complexity of the graph isomorphism problem remains unknown. Random initialization represents a stronger assumption and presents greater challenges for analysis. In our paper, we specifically investigate the impact of a weaker assumption, distinct initialization, on the expressiveness of GNNs.
However, when evaluating the expressiveness of GNNs in recommendation systems, it is important to use metrics that align with the specific task of recommendation itself. Graph isomorphism tests and graph-level metrics, such as graph classification, may not directly capture the ability of GNNs to distinguish between different item nodes and accurately discriminate their closeness to a user node. In the context of link prediction, Chamberlain et al. (2018) suggest that the performance of message passing GNNs is hindered by their inability to differentiate automorphic nodes. Automorphic nodes refer to symmetric nodes within the same orbit induced by the graph automorphism group (e.g., nodes \(a\) and \(b\), \(c\) and \(d\), \(e\) and \(f\)). The issue arises because automorphic nodes are assigned the same final features during the Weisfeiler-Lehman (WL) test, leading to indistinguishable effects in message passing GNNs. However, it is worth noting that this limitation assumes that all nodes have identical initial features. As a node-level metric, the ability to distinguish automorphic nodes becomes more relevant as it reflects the model's capability to differentiate between users and items. Therefore, it serves as a more appropriate metric for assessing GNN expressiveness in recommendation systems. Subsequent sections of our paper will demonstrate how GNNs, with distinct initial embeddings, can effectively address the issue of distinguishing automorphic nodes in user-item bipartite graphs.
## 3. Expressing Node-Level Distinguishability of GNNs
This section explores how distinct initial embeddings (DIE) enhance GNNs' ability to distinguish automorphic nodes. It's important to note that distinct initialization is a more relaxed assumption than random initialization in (Beng et al., 2018; GCN and GCN, 2017), as it accepts any set of predefined unique initial embeddings, including the simplest one-hot encoding by node ID. We classify the automorphic nodes into three types:
* **Type I**: a pair of automorphic nodes \(u\) and \(v\) such that \(\mathcal{N}(u)=\mathcal{N}(v)\) (i.e., they share the same set of neighboring nodes). For example, \(c\) and \(d\) in Figure 2.
* **Type II**: a pair of automorphic nodes \(u\) and \(v\) such that \(u\in\mathcal{N}(v)\), \(v\in\mathcal{N}(u)\), and \(\mathcal{N}(u)-\{v\}=\mathcal{N}(v)-\{u\}\) (i.e., they are neighbors to each other, and share the same set of other neighboring nodes). For example, \(a\) and \(b\) in Figure 2.
* **Type III**: a pair of automorphic nodes \(u\) and \(v\) such that \(\mathcal{N}(u)-\{v\}\neq\mathcal{N}(v)-\{u\}\) (i.e., no matter if they are neighbors to each other, their neighborhoods differ by at least one pair of automorphic nodes \(w\) and \(w^{\prime}\), such that \(w\in\mathcal{N}(u)\) but \(w\notin\mathcal{N}(v)\), and \(w^{\prime}\in\mathcal{N}(v)\) but \(w^{\prime}\notin\mathcal{N}(u)\)). For example, \(e\) and \(f\) in Figure 2.
Figure 2. Illustrated examples of three types of automorphic nodes. (\(a\) and \(b\) belong to Type II, \(c\) and \(d\) belong to Type I, \(e\) and \(f\) belong to Type III.)
### DIE Does Not Fully Solve Node Automorphism on General Graphs
In this section, we show that GNNs with distinct initial embeddings can only distinguish two of the three types of automorphic nodes.
**Theorem 2**.: _Assuming that the aggregation function \(g\) of the graph neural networks is injective, and every node receives a distinct initial embedding, for a pair of automorphic nodes \(u\) and \(v\), in every iteration, they will be assigned different embeddings if and only if one of the two conditions is satisfied: (i) The GNN implements residual connections, and \(u\) and \(v\) belong to Type I or Type III. (ii) The GNN does not implement residual connections, and \(u\) and \(v\) belong to Type II or Type III. In other words, regardless of whether the GNN implements residual connections or not, it can only distinguish two out of the three types of automorphic nodes._
**Proof**. We first prove case (i), i.e., with residual connections:
We prove that for a pair of automorphic nodes \(u\) and \(v\) of Type I or III, they will be assigned different embeddings in any iteration \(k\). This is obviously true when \(k=0\), because the nodes are initialized with distinct embeddings. If it holds for iteration \(k=i-1\), then in iteration \(k=i\), the embeddings for \(u\) and \(v\) are:
\[h_{u}^{(i)}=g(\{h_{u}^{(i-1)}\}\cup\{h_{w}^{(i-1)}:w\in\mathcal{N}(u)\}) \tag{1}\]
\[h_{v}^{(i)}=g(\{h_{v}^{(i-1)}\}\cup\{h_{w^{\prime}}^{(i-1)}:w^{\prime}\in \mathcal{N}(v)\}) \tag{2}\]
If \(u\) and \(v\) are of Type I, then \(\{h_{u}^{(i-1)}\}\cup\{h_{w}^{(i-1)}:w\in\mathcal{N}(u)\}\) and \(\{h_{v}^{(i-1)}\}\cup\{h_{w^{\prime}}^{(i-1)}:w^{\prime}\in\mathcal{N}(v)\}\) are two distinct multisets, one exclusively containing \(h_{u}^{(i-1)}\) and the other exclusively containing \(h_{v}^{(i-1)}\), and \(h_{u}^{(i-1)}\neq h_{v}^{(i-1)}\). If \(u\) and \(v\) are of Type III, the above-mentioned two multisets are also distinct, because there exists at least one pair of automorphic nodes \(w\) and \(w^{\prime}\) (distinct from \(u\), \(v\)) such that \(w\in\mathcal{N}(u)\) but \(w\notin\mathcal{N}(v)\), and \(w^{\prime}\in\mathcal{N}(v)\) but \(w^{\prime}\notin\mathcal{N}(u)\). Thus, \(h_{w}^{(i-1)}\) and \(h_{w^{\prime}}^{(i-1)}\) appear exclusively in one of the two multisets, respectively. Moreover, \(w\) and \(w^{\prime}\) must be Type III automorphic nodes, because their neighborhoods differ by nodes other than each other (namely \(u\) and \(v\)). By the induction assumption, \(h_{w}^{(i-1)}\neq h_{w^{\prime}}^{(i-1)}\). In summary, the input multisets of \(u\) and \(v\) to \(g\) must be different if they are Type I or III. Then, since \(g\) is injective, \(h_{u}^{(i)}\neq h_{v}^{(i)}\). By induction, this holds for all iterations. If \(u\) and \(v\) are of Type II, the two input multisets are identical, and the above argument does not apply.
Next, we prove case (ii), i.e., without residual connections:
Similarly, we can demonstrate that for a pair of automorphic nodes \(u\) and \(v\) of Type II or III, they will be assigned different embeddings in any iteration \(k\). This is evident when \(k=0\) because the nodes are initially initialized with distinct embeddings. Assuming it holds true for iteration \(k=i-1\), we can show that when \(k=i\), the embeddings for nodes \(u\) and \(v\) are as follows:
\[h_{u}^{(i)}=g(\{h_{w}^{(i-1)}:w\in\mathcal{N}(u)\}) \tag{3}\]
\[h_{v}^{(i)}=g(\{h_{w^{\prime}}^{(i-1)}:w^{\prime}\in\mathcal{N}(v)\}) \tag{4}\]
If nodes \(u\) and \(v\) are of Type II, we can observe that the multisets \(\{h_{w}^{(i-1)}:w\in\mathcal{N}(u)\}\) and \(\{h_{w^{\prime}}^{(i-1)}:w^{\prime}\in\mathcal{N}(v)\}\) are two distinct multisets. Specifically, one exclusively contains the embedding \(h_{v}^{(i-1)}\), while the other exclusively contains the embedding \(h_{u}^{(i-1)}\), and by the induction assumption \(h_{u}^{(i-1)}\neq h_{v}^{(i-1)}\). If nodes \(u\) and \(v\) are of Type III, the same reasoning as in the Type III case of part (i) shows that the two multisets are again distinct. Thus, the input multisets of nodes \(u\) and \(v\) to the aggregation function \(g\) must be different if they are of Type II or III. Then, since the aggregation function \(g\) is injective, we can conclude that \(h_{u}^{(i)}\neq h_{v}^{(i)}\). By induction, we can assert that this holds for all iterations.
However, if \(u\) and \(v\) are of Type I, the inputs to the two multisets are the same, and the above result does not hold.
This theorem suggests that on a general graph, using distinct embedding initialization can only provide partial differentiation between automorphic nodes for GNNs. However, it is important to note that user-item interaction graphs in recommendation systems are bipartite graphs, where a user node cannot be connected to another user node, and an item node cannot be connected to another item node. This bipartite structure allows us to leverage this inherent feature and demonstrate that with distinct initial embeddings, GNNs can fully address the node automorphism problem on user-item bipartite graphs.
### DIE Solves Node Automorphism on Bipartite Graphs
We start by proving that Type II automorphic nodes cannot exist in a connected bipartite graph with more than two nodes.
**Lemma 3**.: _In a connected bipartite graph with more than 2 nodes, if a pair of nodes \(u\) and \(v\) are automorphic, they cannot be Type II automorphic nodes._
**Proof**. If \(u\) and \(v\) are both user nodes or both item nodes, there cannot be an edge between them, so the condition for Type II, \(u\in\mathcal{N}(v)\) and \(v\in\mathcal{N}(u)\), cannot hold. If one of \(u\), \(v\) is a user node and another one is an item node, and they are neighbors to each other, then since all neighbors of a user node must be item nodes, and all neighbors of an item node must be user nodes, \(\mathcal{N}(u)-\{v\}\) cannot be equal to \(\mathcal{N}(v)-\{u\}\), unless \(\mathcal{N}(u)-\{v\}=\emptyset\) and \(\mathcal{N}(v)-\{u\}=\emptyset\). If \(\mathcal{N}(u)-\{v\}=\emptyset\) and \(\mathcal{N}(v)-\{u\}=\emptyset\), then \(u\) and \(v\) must form a connected component that is isolated from the rest of the graph. However, since the graph is a connected bipartite graph with more than 2 nodes, the above scenario cannot occur.
With Lemma 3 and Theorem 2, we prove the following theorem.
**Theorem 4**.: _Assume that the graph is a connected bipartite graph with more than 2 nodes. Assume that the aggregation function \(g\) of the GNN is injective, and residual connections are implemented. Assume that every node receives a distinct initial embedding. For a pair of automorphic nodes \(u\) and \(v\), in every iteration, they will be assigned different embeddings._
**Proof**. By Lemma 3, \(u\) and \(v\) must be either Type I or III automorphic nodes. By Theorem 2, as long as the GNN implements residual connections, \(u\) and \(v\) will be assigned different embeddings in every iteration due to the injectivity of the aggregation function.
This conclusion indicates that a GNN is capable of distinguishing automorphic nodes on a connected user-item bipartite graph with the following three design choices: i) distinct initial embeddings; ii) residual connections; iii) an injective aggregation function. These design choices provide a useful guideline for developing powerful GNN-based recommender systems that aim to achieve optimal expressiveness on the node-level metric.
## 4. Encoding Node Topological Closeness with GNNs
While the capability of node automorphism provides a better metric for recommendation expressiveness, it is not directly aligned with the intended task. This is because it only focuses on distinguishing between different nodes, without considering their structural closeness to one another. In recommendation systems, models should not only differentiate between two items but also determine which one is closer to the target user's profile. Therefore, it is important to focus on a link-level metric that captures the closeness between nodes in the graph's structural space.
The traditional metric for evaluating the closeness between two nodes is the geodesic distance, which is defined as the length of the shortest path between the two nodes (Beng et al., 2017; Chen et al., 2018). However, this metric may not accurately reflect the true similarity between nodes as encoded in the graph structure. This is especially true in tasks where clusters and community structures are important, such as recommendation systems. For instance, in Figure 3(b), nodes \(v_{1}\) and \(v_{2}\) have the same geodesic distance to node \(u\). However, it is evident that \(v_{2}\) should be considered "closer" to \(u\) than \(v_{1}\) because they reside together in a denser cluster.
In light of this limitation, we propose a new link-level metric _topological closeness_ that evaluates the closeness between two nodes by considering not only the length of the shortest path but also the number of possible paths between them.
### Topological Closeness
Here, we define the k-hop topological closeness between two nodes \(u\) and \(v\), denoted as \(k\)-\(TC(u,v)\), as follows.
**Definition 5** (k-Hop Topological Closeness).: _Given two nodes \(u\) and \(v\) in an undirected graph, the \(k\)-hop topological closeness between \(u\) and \(v\) in this graph is defined as:_
\[k\text{-}TC(u,v)=\left|\mathcal{P}^{k}_{u,v}\right|\]
_where \(\mathcal{P}^{k}_{u,v}\) is the set of all possible paths of length \(k\) between \(u\) and \(v\). Note that the paths here allow repeated vertices, repeated edges, and self-loops. We further define the 0-hop topological closeness between a node \(u\) and itself as 0-\(TC(u,u)=1\)._
The capability of topological closeness (TC) to intuitively capture both the distance and clustering information between two nodes is illustrated in Figure 3. In Figure 3(a), both \(v_{1}\) and \(v_{2}\) have only one simple path\({}^{2}\) to \(u\), representing the minimum clustering structure they can have when connected to \(u\). Additionally, \(v_{1}\) has a shorter distance to \(u\) compared to \(v_{2}\). Consequently, \(v_{1}\) exhibits a higher TC value to \(u\) than \(v_{2}\). This occurs because TC takes into account self-loops and repeated vertices, and a shorter simple path allows more possible paths of length \(k\) with self-loops and repeated vertices. In Figure 3(b), both \(v_{1}\) and \(v_{2}\) are equidistant from \(u\). However, \(v_{2}\) resides within a denser cluster alongside \(u\). Thus, \(v_{2}\) demonstrates a higher TC value to \(u\) compared to \(v_{1}\), due to the larger number of paths connecting \(v_{2}\) and \(u\) than those connecting \(v_{1}\) and \(u\).
Footnote 2: A simple path is defined as a path in which all the internal vertices are distinct.
It is important to note that there are multiple methods available for combining clustering density information and the shortest distance between two nodes. For instance, one approach involves calculating a weighted sum of all simple paths connecting the two nodes, where the weight is determined by the inverse of the path length. In our case, we define topological closeness in this manner because this definition aligns naturally with the structure of GNNs and can be efficiently computed using a message-passing GNN. The subsequent two subsections will provide further clarity on this concept and its relevance within our framework.
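To make the definition concrete, the short sketch below counts \(k\)-hop topological closeness directly from the adjacency matrix. It assumes the reading of Definition 5 used here (and made concrete by the GTE construction below): because repeated vertices, repeated edges, and self-loops are allowed, \(k\)-\(TC(u,v)\) equals the number of length-\(k\) walks between \(u\) and \(v\) once a self-loop is attached to every node, i.e. the \((u,v)\) entry of \((A+I)^{k}\). The function and variable names are illustrative only.

```python
import numpy as np

def k_hop_topological_closeness(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the matrix of k-TC values for every pair of nodes.

    With repeated vertices/edges and self-loops allowed (Definition 5),
    k-TC(u, v) is the number of length-k walks between u and v in the graph
    with a self-loop attached to every node, i.e. the (u, v) entry of (A+I)^k.
    """
    a_plus_i = adj + np.eye(adj.shape[0], dtype=adj.dtype)
    return np.linalg.matrix_power(a_plus_i, k)

# Toy example: nodes 1 and 2 are both one hop away from node 0, but node 2
# also forms a triangle with nodes 0 and 3, so it has a larger 3-TC to node 0
# than the pendant node 1 (10 vs. 6 here).
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 1],
                [1, 0, 1, 0]])
print(k_hop_topological_closeness(adj, 3)[0])  # [12  6 10 10]
```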
### Popular GNN Recommender Paradigm Does Not Fully Capture Topological Closeness
In recent years, most GNN-based recommender systems (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Wang et al., 2018; Wang et al., 2018) adopt the following paradigm. Initially, each user \(u\) and item \(v\) are assigned random initial embedding vectors \(\mathbf{e}^{(0)}_{u}\in\mathbb{R}^{d}\) and \(\mathbf{e}^{(0)}_{v}\in\mathbb{R}^{d}\) respectively, where \(d\) represents the dimensionality of the embeddings. In the \(k\)-th GNN layer, the embeddings are updated using the following rule, which incorporates residual connections:
\[\mathbf{e}^{(k)}_{u} =\sigma\Big(\sum_{v\in\mathcal{N}(u)}\phi(\mathbf{e}^{(k-1)}_{v})\Big)+\mathbf{e}^{(k-1)}_{u} \tag{5}\] \[\mathbf{e}^{(k)}_{v} =\sigma\Big(\sum_{u\in\mathcal{N}(v)}\phi(\mathbf{e}^{(k-1)}_{u})\Big)+\mathbf{e}^{(k-1)}_{v} \tag{6}\]
where \(\sigma\) denotes an activation function and \(\phi\) generally represents element-wise multiplication with the coefficients of the normalized adjacency matrix. The final embeddings of the nodes can then be computed as follows:
\[\mathbf{e}_{u}=\theta(\{\mathbf{e}^{(k)}_{u}\}_{k=0}^{L}),\quad\mathbf{e}_{v}= \theta(\{\mathbf{e}^{(k)}_{v}\}_{k=0}^{L}) \tag{7}\]
where \(L\) represents the number of GNN layers, and \(\theta\) denotes an aggregation function applied to the embeddings from all layers, typically implemented as a sum or mean operation. Finally, the predicted score of a user \(u\)'s preference for an item \(v\) is computed by taking the inner product of their final embeddings:

\[\hat{y}_{u,v}=\mathbf{e}_{u}\cdot\mathbf{e}_{v} \tag{8}\]

Figure 3. Illustrations of Topological Closeness with k=3.
This section provides a proof demonstrating that the message passing GNN and inner product prediction mechanism employed in this paradigm are unable to completely discriminate between two item nodes based solely on their topological closeness to a user node. To further substantiate this claim, we conduct experiments on real-world datasets, which are presented in Section 5.3.
For the sake of simplicity in calculations and clearer presentation, we consider a simplified version of the aforementioned paradigm. Specifically, we assume that \(\sigma\) and \(\phi\) are identity functions. Hence, the embedding updating rule is simplified as:
\[\mathbf{e}_{u}^{(k)}=\sum_{v\in\mathcal{N}(u)}\mathbf{e}_{v}^{(k-1)}+\mathbf{e}_{u}^{(k-1)},\qquad\mathbf{e}_{v}^{(k)}=\sum_{u\in\mathcal{N}(v)}\mathbf{e}_{u}^{(k-1)}+\mathbf{e}_{v}^{(k-1)}\]
Note that we only study the model structure of message passing and prediction in the above analysis, without considering the learning process. It is possible that the models could fit the task better with learnable embeddings, which are widely adopted in GNN recommenders, but a theoretical analysis involving learning would require complicated case-by-case treatment that we cannot include here.
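To make the simplified paradigm concrete, the following minimal NumPy sketch implements the update rule above with \(\sigma\) and \(\phi\) taken as identities, sum aggregation over layers, and inner-product scoring. Random vectors stand in for the learned embeddings of an actual recommender, and the function name and defaults are illustrative only.

```python
import numpy as np

def simplified_gnn_scores(adj_ui: np.ndarray, d: int = 32, num_layers: int = 3,
                          seed: int = 0) -> np.ndarray:
    """Message passing with residual connections, identity sigma/phi,
    sum aggregation over layers, and inner-product prediction.

    adj_ui: (num_users, num_items) binary user-item interaction matrix.
    Returns the (num_users, num_items) matrix of predicted scores y_hat[u, v].
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = adj_ui.shape
    e_u = rng.normal(size=(n_users, d))    # stand-ins for learnable embeddings
    e_v = rng.normal(size=(n_items, d))
    agg_u, agg_v = e_u.copy(), e_v.copy()  # layer-0 contribution
    for _ in range(num_layers):
        # simultaneous update; both sides read the previous layer's embeddings
        e_u, e_v = adj_ui @ e_v + e_u, adj_ui.T @ e_u + e_v
        agg_u += e_u
        agg_v += e_v
    return agg_u @ agg_v.T
```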
### Graph Topology Encoder (GTE)
Indeed, it is evident that no single metric can fully capture a model's capability in recommendation, even if the metric aligns with the recommendation objective. User behavior is considerably more intricate than what is encompassed within the user-item graph's interaction history. As a result, it is crucial to investigate the level of interpretability of the proposed metric in the context of recommendation. To address this concern, we introduce a learning-less GNN algorithm termed GTE. GTE is proven to be optimal in terms of the link-level metric of topological closeness.
At the beginning, each item \(v_{i}\) is assigned a one-hot initial feature \(\mathbf{h}_{v_{i}}^{(0)}\in\mathbb{R}^{I}\), where \(I\) is the total number of items. For \(j\in[0,I)\), the \(j\)-th entry of the initial feature is \(\mathbf{h}_{v_{i}}^{(0)}\left[j\right]=\begin{cases}1&i=j\\ 0&i\neq j\end{cases}.\)
Each user \(u_{i}\) is assigned an initial feature vector \(\mathbf{h}_{u_{i}}^{(0)}\in\mathbb{R}^{I}\), where all entries are initialized to \(0\). It is important to note that, unlike traditional GNN-based recommenders where the embeddings are learnable, the features in GTE are fixed. In fact, the entire algorithm does not involve any learning process. In the \(k\)-th GNN layer, the features are propagated on the graph using a simple rule:
\[\mathbf{h}_{u}^{(k)}=\phi\left(\sum_{v\in\mathcal{N}(u)}\mathbf{h}_{v}^{(k-1)}+\mathbf{h}_{u}^{(k-1)}\right),\qquad\mathbf{h}_{v}^{(k)}=\phi\left(\sum_{u\in\mathcal{N}(v)}\mathbf{h}_{u}^{(k-1)}+\mathbf{h}_{v}^{(k-1)}\right)\]
where \(\phi\) is a mapping function, and in our case we simply use the identity function as \(\phi\). The message propagation is performed \(L\) times, where \(L\) is the number of GNN layers. Finally, for user \(u_{i}\), the predicted preference score for item \(v_{j}\) is \(\hat{y}_{i,j}=\mathbf{h}_{u_{i}}^{(L)}\left[j\right]\). Note that unlike most GNN-based recommenders that use inner product for prediction, GTE directly uses a specific dimension in the feature space to represent the score for an item.
The procedure of GTE is formally presented in Algorithm 1. GTE exploits the graph collaborative filtering signals via a fixed GNN structure without any learnable components, which makes the execution of the algorithm fast and reliable. We prove that GTE is optimal on the proposed topological closeness metric by showing that for a user node \(u_{i}\) and two item nodes \(v_{j}\) and \(v_{k}\), if \(L\)-\(TC(u_{i},v_{j})>L\)-\(TC(u_{i},v_{k})\), it is certain that \(\hat{y}_{i,j}>\hat{y}_{i,k}\).
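As a companion to Algorithm 1, the following is a minimal NumPy sketch of the propagation and scoring rule described above, with \(\phi\) taken as the identity. It uses dense one-hot item features for clarity; a practical implementation would use sparse matrices. The function name is illustrative only.

```python
import numpy as np

def gte_scores(adj_ui: np.ndarray, num_layers: int = 3) -> np.ndarray:
    """Learning-less GTE propagation on a user-item interaction matrix.

    adj_ui: (num_users, num_items) binary interaction matrix.
    Returns scores[i, j] = h_{u_i}^{(L)}[j], the predicted preference of
    user u_i for item v_j.
    """
    n_users, n_items = adj_ui.shape
    h_u = np.zeros((n_users, n_items))   # user features start at zero
    h_v = np.eye(n_items)                # item v_j starts from the one-hot e_j
    for _ in range(num_layers):
        # identity phi, residual connection, simultaneous update
        h_u, h_v = adj_ui @ h_v + h_u, adj_ui.T @ h_u + h_v
    return h_u
```

Consistent with Lemma 7 below, the score returned for user \(u_{i}\) and item \(v_{j}\) equals \(L\)-\(TC(u_{i},v_{j})\).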
**Lemma 7**.: _In the \(k\)-th layer of GTE, the \(j\)-th dimension of the feature of a node \(w\) (it could be user or item node) can be written as follows, where \(v_{j}\) is the \(j\)-th item node:_
\[\mathbf{h}_{w}^{(k)}\left[j\right]=k\text{-}TC(w,v_{j})\]
**Proof**. This is true when \(k=0\), because if \(w\neq v_{j}\), \(\mathbf{h}_{w}^{(0)}\left[j\right]=0\), and \(0\)-\(TC(w,v_{j})=0\); if \(w=v_{j}\), \(\mathbf{h}_{w}^{(0)}\left[j\right]=\mathbf{h}_{v_{j}}^{(0)}\left[j\right]=1\), and \(0\)-\(TC(w,v_{j})=0\)-\(TC(v_{j},v_{j})=1\).
If it holds for \(k=l-1\), then in layer \(k=l\), the \(j\)-th entry of the feature of \(w\) is:
\[\mathbf{h}_{w}^{(l)}\left[j\right] =\sum_{x\in\mathcal{N}(w)}\mathbf{h}_{x}^{(l-1)}\left[j\right]+\mathbf{h}_{w}^{(l-1)}\left[j\right] \tag{18}\] \[=\sum_{x\in\mathcal{N}(w)\cup\{w\}}(l-1)\text{-}TC(x,v_{j}) \tag{19}\] \[=\sum_{x\in\mathcal{N}(w)\cup\{w\}}\left|\mathcal{P}^{l-1}_{x,v_{j}}\right| \tag{20}\] \[=\sum_{x\in\mathcal{N}(w)\cup\{w\}}\left|\mathcal{P}^{l}_{w,v_{j},x}\right| \tag{21}\] \[=\left|\mathcal{P}^{l}_{w,v_{j}}\right|=l\text{-}TC(w,v_{j}) \tag{22}\]

Here \(\mathcal{P}^{l}_{w,v_{j},x}\) denotes the set of length-\(l\) paths from \(w\) to \(v_{j}\) whose first step goes to \(x\) (a self-loop when \(x=w\)): prepending this step to a length-\((l-1)\) path from \(x\) to \(v_{j}\) gives a bijection between \(\mathcal{P}^{l-1}_{x,v_{j}}\) and \(\mathcal{P}^{l}_{w,v_{j},x}\), and these sets partition \(\mathcal{P}^{l}_{w,v_{j}}\) over \(x\in\mathcal{N}(w)\cup\{w\}\).
By induction, the expression holds for every iteration k.
**Theorem 8**.: _In GTE, after \(L\) layers, for a user node \(u_{i}\) and two item nodes \(v_{j}\) and \(v_{k}\), if \(L\)-\(TC(u_{i},v_{j})>L\)-\(TC(u_{i},v_{k})\), then \(\hat{y}_{i,j}>\hat{y}_{i,k}\)._
**Proof**. We have shown in Lemma 7 that \(\hat{y}_{i,j}=\mathbf{h}_{u_{i}}^{(L)}\left[j\right]=L\)-\(TC(u_{i},v_{j})\), and \(\hat{y}_{i,k}=\mathbf{h}_{u_{i}}^{(L)}\left[k\right]=L\)-\(TC(u_{i},v_{k})\). Thus, if it holds that \(L\)-\(TC(u_{i},v_{j})>L\)-\(TC(u_{i},v_{k})\), then \(\hat{y}_{i,j}>\hat{y}_{i,k}\).
The above analysis proves that GTE is optimal on the link-level metric topological closeness, and thus its performance on recommendation datasets can be used to evaluate the effectiveness of the proposed metric.
### The Relations between GTE and the Graph and Node-Level Expressiveness Metrics
As shown in Theorem 1 in Section 2.2, Xu et al. (2019) proved that a GNN can have WL-equivalent power of expressiveness on the graph isomorphism test if the aggregation function \(h_{v}^{(k)}=g(\{h_{u}^{(k-1)}:u\in\mathcal{N}(v)\cup\{v\}\})\) is injective. They further proved that the sum aggregator allows injective functions over multisets. We present an adapted version of the latter conclusion below.
**Theorem 9** (from Xu et al. (2019)).: _Assume the feature space \(\mathcal{X}\) is countable. If the initial features are one-hot encodings, then there exists some function \(\phi\) such that the aggregator \(h_{v}^{(k)}=g(\{h_{u}^{(k-1)}:u\in\mathcal{N}(v)\cup\{v\}\})=\phi\left(h_{v}^{(k-1)}+\sum_{u\in\mathcal{N}(v)}h_{u}^{(k-1)}\right)\) is injective._
In Section 3.2, we demonstrate the effectiveness of a GNN in distinguishing various automorphic nodes on a connected user-item bipartite graph. This is achieved by proving Theorem 4, which shows that when residual connections are implemented, the aggregation function \(h_{v}^{(k)}=g(\{h_{u}^{(k-1)}:u\in\mathcal{N}(v)\cup\{v\}\})\) is injective, and the initial embeddings are distinct, every pair of automorphic nodes is assigned different embeddings.
In GTE, residual connections are employed. Consequently, based on the aforementioned conclusions, if the aggregation function is injective, GTE will exhibit equivalent expressiveness to the Weisfeiler-Lehman (WL) algorithm on the graph-level metric and achieve optimal expressiveness on the node-level metric. It is worth noting that in our implementation, the aggregation function of GTE is not injective, as we simply utilize the identity function as \(\phi\). However, according to Theorem 9, it is possible to select \(\phi\) carefully, thereby making the aggregation function injective and enhancing the theoretical capability of GTE on both graph and node-level metrics. It is important to acknowledge that while a suitable choice of \(\phi\) can render the aggregation function injective, it may also compromise the optimality of GTE on the topological closeness metric. Addressing this issue will be a topic for future research endeavors.
## 5. Experiments
To evaluate the explainability of topological closeness in recommendation, we perform experiments on real-world datasets to compare GTE, which is optimal on the metric, against various baselines.
### Experimental Settings
#### 5.1.1. **Datasets.**
**Yelp**: a widely used dataset containing users' ratings on venues from the Yelp platform. **Douban**: consisting of book reviews and ratings collected by Douban. **Tmall**: an e-commerce dataset recording online purchase history of users on Tmall. **Gowalla**: collected by Gowalla with users' check-in records in different locations. **Amazon-beauty**: a dataset containing Amazon users' reviews and ratings of beauty products. **Sparser-Tmall**: a sparser version of the _Tmall_ dataset obtained by sampling users and items with fewer interactions. The statistics are summarized in Table 2.
#### 5.1.2. **Evaluation Protocols and Metrics.**
The all-rank protocol is commonly adopted to mitigate sampling bias (Gowalla et al., 2018; Gowalla et al., 2019). For the evaluation metrics, we adopt the widely used Recall@N and Normalized Discounted Cumulative Gain (NDCG)@N, where N = {20, 40} [3; 15; 26; 32]. The p-values are calculated with a t-test.
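As a reference for how these metrics are typically computed, the sketch below evaluates Recall@N and NDCG@N for a single user under the all-rank protocol. Exact conventions (tie-breaking, IDCG normalization) vary slightly across papers, so this is one common formulation rather than necessarily the one used here; the function name is illustrative.

```python
import numpy as np

def recall_ndcg_at_n(ranked_items, relevant, n):
    """Recall@N and NDCG@N for one user.

    ranked_items: item ids sorted by predicted score (descending), with the
                  user's training interactions already removed (all-rank).
    relevant:     set of held-out test items for this user.
    """
    hits = np.array([item in relevant for item in ranked_items[:n]], dtype=float)
    recall = hits.sum() / max(len(relevant), 1)
    dcg = (hits / np.log2(np.arange(2, hits.size + 2))).sum()
    ideal = min(len(relevant), n)
    idcg = (1.0 / np.log2(np.arange(2, ideal + 2))).sum()
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return recall, ndcg
```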
#### 5.1.3. **Baseline Models.**
We adopt the following 8 baselines. We tune the hyperparameters of all the baselines with the ranges suggested in the original papers, except that the size of learnable embeddings is fixed as 32 for all the baselines to ensure fairness.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline Dataset & Density & Metric & BiasMF & NGCF & GCCF & LightGCN & SimGRACE & SAIL & HCCF & LightGCL & GTE & _p-val_ & _impr._ \\ \hline \multicolumn{12}{l}{_Denser datasets_} \\ \hline \multirow{3}{*}{Yelp} & & R@20 & 0.0190 & 0.0294 & 0.0462 & 0.0482 & 0.0603 & 0.0471 & 0.0626 & **0.0793** & 0.0647 & 3e\({}^{-13}\) & -18\% \\ & & N@20 & 0.0161 & 0.0243 & 0.0398 & 0.0409 & 0.0435 & 0.0405 & 0.0527 & **0.0668** & 0.0556 & 3e\({}^{-14}\) & -16\% \\ & & R@40 & 0.0371 & 0.0522 & 0.0760 & 0.0803 & 0.0989 & 0.0733 & 0.1040 & **0.1292** & 0.0556 & 2e\({}^{-14}\) & -18\% \\ & & N@40 & 0.0227 & 0.0330 & 0.0508 & 0.0527 & 0.0656 & 0.0516 & 0.081 & **0.0852** & 0.0704 & 4e\({}^{-15}\) & -17\% \\ \hline \multirow{3}{*}{Douban} & & R@20 & 0.0926 & 0.0999 & 0.0986 & 0.1006 & 0.0827 & 0.1211 & 0.1083 & 0.1216 & **0.1269** & 3e\({}^{-7}\) & 4\% \\ & & N@20 & 0.0687 & 0.0739 & 0.0730 & 0.0759 & 0.0603 & 0.0910 & 0.0828 & 0.0927 & **0.1029** & 3e\({}^{-9}\) & 11\% \\ & & R@40 & 0.1424 & 0.1505 & 0.1482 & 0.1530 & 0.1251 & 0.1778 & 0.1593 & 0.1708 & **0.1777** & 3e\({}^{-6}\) & 4\% \\ & & N@40 & 0.0845 & 0.0897 & 0.0887 & 0.0923 & 0.0735 & 0.1090 & 0.0988 & 0.1077 & **0.1182** & 3e\({}^{-9}\) & 10\% \\ \hline \multirow{3}{*}{Tmall} & & R@20 & 0.0103 & 0.0180 & 0.0209 & 0.0225 & 0.0222 & 0.0254 & 0.0314 & 0.0528 & **0.0578** & 1e\({}^{-11}\) & 9\% \\ & & N@20 & 0.0072 & 0.0123 & 0.0141 & 0.0154 & 0.0152 & 0.0177 & 0.0213 & **0.0361** & 0.0290 & 1e\({}^{-13}\) & -20\% \\ & & R@40 & 0.0170 & 0.0310 & 0.0356 & 0.0378 & 0.0367 & 0.0424 & 0.0519 & **0.0852** & 0.0752 & 5e\({}^{-11}\) & -12\% \\ & & N@40 & 0.0095 & 0.0168 & 0.0196 & 0.0208 & 0.0203 & 0.0236 & 0.0284 & **0.0473** & 0.0326 & 5e\({}^{-15}\) & -31\% \\ \hline \multicolumn{12}{l}{_Sparser datasets_} \\ \hline \multirow{3}{*}{Amazon-beauty} & & R@20 & 0.0607 & 0.0734 & 0.0782 & 0.0797 & 0.0539 & 0.0834 & 0.0813 & 0.0896 & **0.0976** & 1e\({}^{-8}\) & 9\% \\ & & N@20 & 0.0249 & 0.0290 & 0.0315 & 0.0326 & 0.0212 & 0.0334 & 0.0339 & 0.0369 & **0.0440** & 1e\({}^{-11}\) & 19\% \\ & & R@40 & 0.0898 & 0.1078 & 0.1155 & 0.1161 & 0.0836 & 0.1196 & 0.1178 & 0.1286 & **0.1322** & 1e\({}^{-4}\) & 3\% \\ & & N@40 & 0.0308 & 0.0360 & 0.0391 & 0.0400 & 0.0272 & 0.0408 & 0.0413 & 0.0447 & **0.0511** & 1e\({}^{-11}\) & 14\% \\ \hline \multirow{3}{*}{Gowalla} & & R@20 & 0.0196 & 0.0552 & 0.0951 & 0.0985 & 0.0869 & 0.0999 & 0.1070 & 0.1578 & **0.1706** & 2e\({}^{-10}\) & 8\% \\ & & N@20 & 0.0105 & 0.0298 & 0.0535 & 0.0593 & 0.0528 & 0.0602 & 0.0644 & 0.0935 & **0.1001** & 8e\({}^{-11}\) & 7\% \\ & & R@40 & 0.0346 & 0.0810 & 0.1392 & 0.1431 & 0.1276 & 0.1472 & 0.1535 & 0.2245 & **0.2400** & 5e\({}^{-11}\) & 7\% \\ & & N@40 & 0.0145 & 0.0367 & 0.0684 & 0.0710 & 0.0637 & 0.0725 & 0.0767 & 0.1108 & **0.1181** & 1e\({}^{-10}\) & 7\% \\ \hline \multirow{3}{*}{Sparser-Tmall} & & R@20 & 0.0328 & 0.0395 & 0.0543 & 0.0542 & 0.0174 & 0.0521 & 0.0501 & 0.0518 & **0.0588** & 1e\({}^{-10}\) & 14\% \\ & & N@20 & 0.0169 & 0.0196 & 0.0290 & 0.0288 & 0.0084 & 0.0282 & 0.0270 & 0.0300 & **0.0368** & 2e\({}^{-9}\) & 23\% \\ \cline{1-1} & & R@40 & 0.0439 & 0.0552 & 0.0717 & 0.0708 & 0.0274 & 0.0685 & 0.0655 & 0.0653 &
* **BiasMF**(Krishna et al., 2017). This model employs matrix factorization to learn latent embeddings for users and items.
* **NGCF**(Krishna et al., 2017). It aggregates feature embeddings with high-order connection information over the user-item interaction graph.
* **GCCF**(Chen et al., 2017). It adopts an improved GNN model that incorporates a residual structure and eliminates non-linear transformations.
* **LightGCN**(Krishna et al., 2017). It implements an effective GNN structure without embedding transformations and non-linear projections.
* **SimGRACE**(Krishna et al., 2017). It creates an augmented view by introducing random perturbations to GNN parameters.
* **SAIL**(Wang et al., 2017). It adopts a self-augmented approach that maximizes the alignment between learned features and initial features.
* **HCCF**(Wang et al., 2017). It contrasts global information encoded with a hypergraph against local signals propagated with GNN.
* **LightGCL**(Chen et al., 2017). It guides the contrastive view augmentation with singular value decomposition and reconstruction.
### Performance Comparison and Analysis
The performance results are presented in Table 1. The number of GNN layers for GTE is set as 3 for all datasets. As shown in the table, GTE can achieve comparable performance to the SOTA GCL models. The mechanism of GTE revolves around effectively discriminating different items based on their topological closeness to a user. Therefore, these results indicate that the proposed topological closeness metric accurately reflects a model's proficiency in performing the recommendation task.
Furthermore, GTE exhibits superior performance on sparse data. Specifically, while GTE consistently outperforms other baselines, it occasionally performs worse than LightGCL on the three denser datasets. However, on the three sparser datasets, GTE consistently outperforms LightGCL. This discrepancy can be attributed to the reliance of learning-based methods on the availability of supervision signals. In scenarios where there is an abundance of user-item collaborative signals, learning-based methods can more effectively capture complex patterns in the data compared to learning-less or heuristic algorithms like GTE. However, in cases where the interaction history is limited, which often occurs in recommendation systems, the performance of learning-based methods deteriorates more rapidly than that of fixed algorithms.
By capturing the topological closeness between nodes with a learning-less structure, GTE achieves satisfactory performance rapidly and reliably. Since GTE is not subject to any random factors, the p-values in Table 1 are of negligible magnitudes. Table 3 presents an intuitive comparison of the running time of GTE and the efficient GCL baseline LightGCL, where GTE is run on CPU only, and LightGCL is run with an NVIDIA GeForce RTX 3090 GPU.
### Topological Closeness Aligns with Real Data
To demonstrate the discriminative ability of the topological closeness metric in real datasets, we randomly sample 400 pairs of positive and negative items and calculate the difference between their topological closeness to the target users. The results are plotted in Figure 4. It is clear from the plot that the majority of the differences are larger than 0, with only negligible exceptions. This indicates that the positive items have a higher topological closeness to the user than the negative items.
### Popular GNN Recommenders Perform Poorly on Topological Closeness
Section 4.2 provides a mathematical demonstration showing that popular GNN recommender models are unable to fully capture the topological closeness between nodes. To corroborate this empirically, we sort the items based on the prediction scores generated by GTE and the baseline models. We then calculate the Kendall rank correlation coefficient (\(\tau\)), a widely used indicator for measuring the ordinal difference between two rankings (Krishna et al., 2017), between these rankings. Since GTE is optimized for topological closeness, its ranking represents the ideal prediction based on this metric. As depicted in Figure 5, the \(\tau\) values for the baselines cluster around 0.05 to 0.25. This indicates that the predictions made by the baselines exhibit positive correlations with the ideal ranking, but are significantly misaligned with it.
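For a single user, the ordinal difference reported in Figure 5 can be computed as sketched below, using GTE's scores as the ideal ranking induced by topological closeness. The SciPy call is standard; the function name is illustrative.

```python
import numpy as np
from scipy.stats import kendalltau

def ordinal_difference(baseline_scores: np.ndarray, ideal_scores: np.ndarray) -> float:
    """Kendall's tau between the item ordering induced by a baseline's scores
    and the ideal ordering induced by topological closeness (GTE's scores)."""
    tau, _ = kendalltau(baseline_scores, ideal_scores)
    return tau
```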
## 6. Conclusion
In this paper, we provide a comprehensive analysis of GNNs' expressiveness in recommendation, using a three-level theoretical framework consisting of graph/node/link-level metrics. We prove that with distinct initial embeddings, injective aggregation function and residual connections, GNN-based recommenders can achieve optimality on the node-level metric on bipartite graphs. We introduce the topological closeness metric that aligns with recommendation, and propose the learning-less GTE algorithm provably optimal on this link-level metric. By comparing GTE with baseline models, we show the effectiveness of the proposed metric. The presented theoretical framework aims to inspire the design of powerful GNN-based recommender systems. While achieving optimality on all three metrics is challenging, the framework serves as a guiding direction for selecting well-justified model structures.
Figure 4. Difference between the topological closeness of users to their positive items and negative items.
Table 3. Efficiency comparison in terms of total running time between GTE (w/o GPU) and LightGCL (w. GPU), in minutes.

|          | Yelp   | Douban | Tmall | Gowalla | Amazon | s-Tmall |
|----------|--------|--------|-------|---------|--------|---------|
| LightGCL | 216.12 | 38.54  | 65.03 | 284.12  | 21.06  | 30.24   |
| GTE      | 1.85   | 0.64   | 0.84  | 5.11    | 0.26   | 2.10    |
Figure 5. The ordinal difference (measured in Kendall’s \(\tau\) coefficient) between the predictions made by the baselines and the ideal predictions based on topological closeness. |
2301.05459 | EWF : simulating exact paths of the Wright--Fisher diffusion | The Wright--Fisher diffusion is important in population genetics in modelling
the evolution of allele frequencies over time subject to the influence of
biological phenomena such as selection, mutation, and genetic drift. Simulating
paths of the process is challenging due to the form of the transition density.
We present EWF, a robust and efficient sampler which returns exact draws for
the diffusion and diffusion bridge processes, accounting for general models of
selection including those with frequency-dependence. Given a configuration of
selection, mutation, and endpoints, EWF returns draws at the requested sampling
times from the law of the corresponding Wright--Fisher process. Output was
validated by comparison to approximations of the transition density via the
Kolmogorov--Smirnov test and QQ plots. All software is available at
https://github.com/JaroSant/EWF | Jaromir Sant, Paul A. Jenkins, Jere Koskela, Dario Spanò | 2023-01-13T10:00:20Z | http://arxiv.org/abs/2301.05459v1 | # EWF : simulating exact paths of the Wright-Fisher diffusion
###### Abstract
The Wright-Fisher diffusion is important in population genetics in modelling the evolution of allele frequencies over time subject to the influence of biological phenomena such as selection, mutation, and genetic drift. Simulating paths of the process is challenging due to the form of the transition density. We present EWF, a robust and efficient sampler which returns exact draws for the diffusion and diffusion bridge processes, accounting for general models of selection including those with frequency-dependence. Given a configuration of selection, mutation, and endpoints, EWF returns draws at the requested sampling times from the law of the corresponding Wright-Fisher process. Output was validated by comparison to approximations of the transition density via the Kolmogorov-Smirnov test and QQ plots. All software is available at [https://github.com/JaroSant/EWF](https://github.com/JaroSant/EWF)
## 1 Introduction
The Wright-Fisher diffusion is a central model for the temporal fluctuation of allele frequencies in a large population evolving under random mating and in the presence of mutation and selection. Despite its importance, it remains difficult to work with from a computational perspective, both in the absence of selection (where the transition density admits an infinite series expansion) and in the non-neutral case (where the corresponding infinite series expansion has intractable terms). Additionally, in a diallelic model the diffusion lives on the bounded interval \([0,1]\) and thus even simple approximate sampling techniques such as the Euler-Maruyama scheme require sophisticated modifications to respect its boundary behaviour (Dangerfield _et al._, 2012). Existing approaches in the literature have tackled this by resorting to a combination of discretisation and numerical approximation, e.g. solving the Kolmogorov backwards equation numerically
(Bollback _et al._, 2008; Malaspinas _et al._, 2012), approximating through more tractable processes (Mathieson and McVean, 2013), truncating a spectral expansion of the transition density (Steinrucken _et al._, 2016), and using Riemann sum approximations (Schraiber _et al._, 2016), all of which induce a bias which is hard to quantify.
In some cases, _exact_ sampling routines making use of rejection sampling are available. This class of techniques has been extended to certain variants of the Wright-Fisher diffusion: Jenkins and Spano (2017) showed that neutral Wright-Fisher diffusion paths and bridges can be simulated exactly via simulation techniques tailored for infinite series, and that neutral paths are the natural proposal mechanism for simulating non-neutral paths by rejection. Their work assumes that the mutation parameters are strictly positive and the endpoints for both the diffusion and diffusion bridge lie in the interior of \([0,1]\). The case of diffusion bridges that start and end at 0 was tackled by Griffiths _et al._ (2018), but several other combinations of startpoint, endpoint, and parameters remain unaddressed. Moreover, no simulation package implementing all of the cases of interest exists.
We present EWF, a C++ package producing exact draws from both neutral and non-neutral Wright-Fisher diffusions. The method properly accounts for all types of boundary (entrance, reflecting, and absorbing), incorporates a wide class of selection models, and allows for arbitrary endpoints, substantially extending previous work by Jenkins and Spano (2017); Griffiths _et al._ (2018). These new theoretical details can be found in the accompanying supplement. Additionally, EWF preserves accuracy over long times, in contrast to Euler-Maruyama type schemes where errors accumulate over the simulated path.
## 2 Models
Consider the two-allele non-neutral Wright-Fisher diffusion \((X_{t})_{t\geq 0}\) with mutation parameter \(\boldsymbol{\theta}=(\theta_{1},\theta_{2})\), which is given by the solution to the following stochastic differential equation
\[dX_{t} =\frac{1}{2}\left[\sigma X_{t}(1-X_{t})\eta(X_{t})-\theta_{2}X_{t }+\theta_{1}(1-X_{t})\right]dt\] \[\quad+\sqrt{X_{t}(1-X_{t})}dW_{t} \tag{1}\]
for \(t\geq 0\) with \(X_{0}\in[0,1]\), and \(\eta(x)=\sum_{i=0}^{n}a_{i}x^{i}\) for \(n\) finite (e.g. for genic selection \(\eta(x)=1\), and for diploid selection \(\eta(x)=h+x(1-2h)\) with \(h\) the dominance parameter). When the mutation parameter \(\boldsymbol{\theta}\) has positive entries, the corresponding neutral (i.e. \(\sigma=0\)) transition density can
be decomposed into a mixture distribution
\[p^{(\theta_{1},\theta_{2})}(x,y;t)=\sum_{m=0}^{\infty}q_{m}^{\theta}(t)\sum_{l=0} ^{m}\text{Bin}_{m,x}(l)\text{Beta}_{\theta_{1}+l,\theta_{2}+m-l}(y),\]
where \((q_{m}^{\theta}(t))_{m\in\mathbb{N}}\) is a distribution on the integers and \(\theta:=\theta_{1}+\theta_{2}\). This allows for exact simulation (Jenkins and Spano, 2017, Section 2). EWF extends this approach to the \(\theta_{1}=0\) and/or \(\theta_{2}=0\) cases, when the diffusion is absorbed on hitting 0 and/or 1 in finite time almost surely.
It is often of interest to consider the evolution of a de novo mutation which appears at time \(t_{0}\) and is observed in the population at a sampling time \(t>t_{0}\). If \(\mathbf{\theta}=\mathbf{0}\), one needs to condition the diffusion on non-absorption to recover a non-degenerate transition density. The resulting density can be found in Section 1 in the Supplementary Information (together with the respective details), as well as the corresponding transition densities for the cases when \(\mathbf{\theta}=(0,\theta)\) or \(\mathbf{\theta}=(\theta,0)\).
The transition density for a diffusion _bridge_ can be similarly derived (see Section 2 in the Supplementary Information), whilst in the presence of selection (i.e. \(\sigma\neq 0\) in (1)), draws from the corresponding non-neutral process can be returned by simulating neutral paths as candidates in an appropriate rejection scheme (Jenkins and Spano, 2017, Section 5).
## 3 Methods
The expression for \(p^{(\theta_{1},\theta_{2})}(x,y;t)\) tells us that draws from the transition density can be achieved by the following:
1. Draw \(M\sim\{q_{m}^{\theta}(t)\}_{m\in\mathbb{N}}\)
2. Conditional on \(M=m\), draw \(L\sim\text{Bin}(m,x)\)
3. Conditional on \(M=m,L=l\), draw \(Y\sim\text{Beta}(\theta_{1}+l,\theta_{2}+m-l)\)
Steps 2 and 3 are simple. Step 1 is more involved since each \(q_{m}^{\theta}(t)\) is an infinite series (see Supplementary information Section 3 where we have extended the procedure to generate samples when \(\mathbf{\theta}=\mathbf{0}\) or \(\mathbf{\theta}=(0,\theta)\)).
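For illustration, the sketch below implements the three steps for strictly positive mutation parameters, replacing the exact evaluation of \(q_{m}^{\theta}(t)\) with a plain truncation of its alternating series. The truncation (and the cut-offs `k_max` and `m_max`, which are illustrative choices, not part of EWF) is numerically reliable only for moderate \(t\); EWF itself avoids this via exact series techniques and the small-\(t\) approximation described below.

```python
import numpy as np
from math import lgamma, log

def q_m_truncated(theta: float, t: float, m: int, k_max: int = 200) -> float:
    """Truncated series for q_m^theta(t), the probability that m ancestral
    lineages survive to time t. Plain truncation is an approximation; EWF
    evaluates this quantity exactly."""
    total = 0.0
    for k in range(m, m + k_max):
        if k == 0:  # the m = k = 0 term reduces to exp(0) = 1
            total += 1.0
            continue
        log_mag = (log(theta + 2 * k - 1) - lgamma(k + 1) - lgamma(k - m + 1)
                   + lgamma(theta + m + k - 1) - lgamma(theta + m)
                   - 0.5 * k * (k + theta - 1) * t)
        total += (-1) ** (k - m) * np.exp(log_mag)
    return max(total, 0.0)

def sample_wf_transition(x: float, t: float, theta1: float, theta2: float,
                         m_max: int = 200, seed: int = 0) -> float:
    """Draw Y from p^(theta1,theta2)(x, . ; t) via the mixture representation:
    M ~ {q_m}, L | M ~ Binomial(M, x), Y | M, L ~ Beta(theta1 + L, theta2 + M - L).
    Requires theta1, theta2 > 0."""
    rng = np.random.default_rng(seed)
    theta = theta1 + theta2
    weights = np.array([q_m_truncated(theta, t, m) for m in range(m_max)])
    weights /= weights.sum()   # renormalise after truncating the support of M
    m = rng.choice(m_max, p=weights)
    l = rng.binomial(m, x)
    return rng.beta(theta1 + l, theta2 + m - l)
```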
If the time increment \(t\) is small, approximations are necessary due to numerical instabilities in computing \(q_{m}^{\theta}(t)\). EWF employs a Gaussian approximation of \(q_{m}^{\theta}(t)\) for small \(t\)(Griffiths, 1984, Theorem 4) (\(t\leq 0.08\) by default), with similar approximations used for bridges whenever subsequent time increments fall below some threshold. For full details see Section 5 in the Supplementary Information.
The implementation was tested extensively and validated through a combination of QQ plots and the Kolmogorov-Smirnov test (see Supplementary Information Section 7). An example is shown in Fig. 1.
## 4 Discussion
EWF provides a robust, efficient, and exact sampling routine to target a wide family of Wright-Fisher diffusions featuring a broad class of selective regimes, any mutation parameters, and any start/end points. The implementation can be used as a stand-alone package, or incorporated into simulation-based inference pipelines from time series allele frequency data. This is particularly useful in view of the recent increase in availability of such data (Wutke _et al._, 2016; Fages _et al._, 2019).
Figure 1: Illustration of 30 candidate trajectories for the horse coat color data found in Ludwig _et al._ (2009) simulated using EWF (note that the observed frequencies (black crosses) are assumed to be exact observations of the underlying diffusion). Simulations used the inferred selection coefficient \(s=0.0007\) with a consensus effective population size \(N_{e}=10,000\)(Ludwig _et al._, 2009; Malaspinas _et al._, 2012; Schraiber _et al._, 2016), giving \(\sigma=2N_{e}s=14\). We used \(\mathbf{\theta}=\mathbf{0}\) and a generation time of 5 years.
## Funding
This work has been supported by the EPSRC and the Alan Turing Institute under grants EP/R044732/1, EP/V049208/1, EP/N510129/1.
## 1 Transition densities for neutral Wright-Fisher diffusions
Consider a Wright-Fisher diffusion started from some arbitrary initial point \(x\in[0,1]\) with one of the mutation parameters set to \(0\), say \(\boldsymbol{\theta}=(0,\theta)\). Under such a setup, the diffusion survives up to a time \(T_{0}:=\inf\{t\geq 0:X_{t}=0\}\), when it hits \(0\) and remains there. In this section we derive the transition density both when the hitting time \(T_{0}\) is allowed to occur at any time, and when the sampling time \(t\) is conditioned on \(\{t<T_{0}\}\). The latter case is slightly harder to tackle because it is necessary to incorporate this conditioning.
Similar arguments apply for the case when mutation is absent (i.e. \(\boldsymbol{\theta}=\boldsymbol{0}\)), and we further point out that the case \(\boldsymbol{\theta}=(\theta,0)\) follows immediately from the case \(\boldsymbol{\theta}=(0,\theta)\) by considering the symmetric mapping \(x\mapsto 1-x\) and observing that the resulting process is once again a Wright-Fisher diffusion with mutation parameter \(\boldsymbol{\theta}^{\prime}=(\theta_{2},\theta_{1})\) and selection parameter \(\sigma^{\prime}=-\sigma\).
### Neutral diffusion with strictly positive mutation
We begin by considering \(\theta_{1},\theta_{2}>0\) such that both \(0\) and \(1\) are non-absorbing boundaries. In this case the transition density can be expressed (Griffiths (1979); Tavare (1984)) as
\[p^{(\theta_{1},\theta_{2})}(x,y;t)=\sum_{m=0}^{\infty}q_{m}^{\theta}(t)\sum_ {l=0}^{m}\mathcal{B}_{m,x}(l)\mathcal{D}_{\theta_{1}+l,\theta_{2}+m-l}(y), \tag{2}\]
where \(\theta=|\boldsymbol{\theta}|=\theta_{1}+\theta_{2}\), \(\mathcal{B}_{m,x}(\cdot)\) denotes the binomial probability mass function with parameters \(m\) and \(x\), \(\mathcal{D}_{\theta_{1}+l,\theta_{2}+m-l}(\cdot)\) denotes the beta probability density function with parameters \(\theta_{1}+l\) and \(\theta_{2}+m-l\), and
\[q_{m}^{\theta}(t):=\sum_{k=m}^{\infty}(-1)^{k-m}\frac{\theta+2k-1}{k!(k-m)!} \frac{\Gamma(\theta+m+k-1)}{\Gamma(\theta+m)}e^{\frac{-k(k+\theta-1)t}{2}},\]
with \(\Gamma(\cdot)\) denoting the gamma function. We point out that \(\{q_{m}^{\theta}(t)\}_{m\in\mathbb{N}}\) corresponds to the transition probabilities of the number of lineages in Kingman's coalescent (which is the moment dual to the Wright-Fisher diffusion), such that \(q_{m}^{\theta}(t)\) is the probability that \(m\) lineages survive up to time \(t\) when one starts with an infinite number of lineages at time \(0\). For more details, we refer the interested reader to Griffiths (1979); Tavare (1984). The inclusion of the mutation parameters on the LHS of (2) makes explicit the dependence of the transition density on these quantities; however, in an effort to lighten the notation, we shall suppress it henceforth and simply write \(p(x,y;t)\) for the transition density of the diffusion, with the specific mutation regime under consideration specified exogenously.
### Neutral diffusion with one sided mutation
For \(\boldsymbol{\theta}=(0,\theta)\), the diffusion is absorbed upon hitting \(0\) and the transition density can be expressed as
\[p(x,y;t)=\sum_{m=0}^{\infty}q_{m}^{\theta}(t)\left[\sum_{l=1}^{m}\mathcal{B}_ {m,x}(l)\mathcal{D}_{l,\theta+m-l}(y)+(1-x)^{m}\delta_{0}(y)\right], \tag{3}\]
where \(\delta_{0}(y)\) denotes a point mass at \(0\) and represents the case when the diffusion is absorbed at \(0\). In cases like this we reinterpret 'density' appropriately, with respect to a dominating measure containing both a Lebesgue component and an atom at each of \(0\) and \(1\).
If we condition on the event \(\{t<T_{0}\}\), standard conditional probability gives us that the transition density of the diffusion _conditioned_ on non-absorption until time \(t\) is given by
\[\tilde{p}(x,y;t)=\frac{p(x,y;t)}{\mathbb{P}_{x}\left[T_{0}>t\right]},\]
for \(y\in(0,1]\), where we use the notation \(\tilde{p}(\cdot,\cdot;\cdot)\) to make explicit the fact that this is the transition density of the _conditioned_ diffusion process. Additionally, we have that
\[\mathbb{P}_{x}\left[T_{0}>t\right] =\int_{(0,1]}p(x,u;t)du\] \[=\sum_{m=1}^{\infty}q_{m}^{\theta}(t)\sum_{l=1}^{m}\mathcal{B}_{m,x}(l), \tag{4}\]
and we note that the contributions from \(m=0\) above are missing as the corresponding beta
density collapses to a point mass at \(0\). Thus for \(x,y\in(0,1]\) we have
\[\tilde{p}(x,y;t)=\sum_{m=1}^{\infty}\frac{q_{m}^{\theta}(t)\sum_{l=1}^{m}\mathcal{B}_{m,x}(l)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)(1-(1-x)^{d})}\mathcal{D}_{l,\theta+m-l}(y). \tag{5}\]
For small \(x\), we have the following leading order expansion in \(x\)
\[p(x,y;t)=x\sum_{m=1}^{\infty}q_{m}^{\theta}(t)m(\theta+m-1)(1-y)^{\theta+m-2}+ O(x^{2}), \tag{6}\]
and note further (4) is also of leading order \(x\) for \(x\) small. Thus upon taking the limit \(x\to 0\) in (5) we get that
\[\tilde{p}(0,y;t)=\sum_{m=1}^{\infty}\frac{mq_{m}^{\theta}(t)}{\sum_{d=1}^{ \infty}dq_{d}^{\theta}(t)}\mathcal{D}_{1,\theta+m-1}(y). \tag{7}\]
Putting all of the above together we get that the conditioned diffusion has transition density given by
\[\tilde{p}(x,y;t)=\begin{cases}\sum_{m=1}^{\infty}\frac{mq_{m}^{ \theta}(t)}{\sum_{d=1}^{\infty}dq_{d}^{\theta}(t)}\mathcal{D}_{1,\theta+m-1}( y)&x=0,\\ \\ \sum_{m=1}^{\infty}\frac{q_{m}^{\theta}(t)\sum_{l=1}^{m}\mathcal{B}_{m,x}(l)} {\sum_{d=1}^{\infty}q_{d}^{\theta}(t)(1-(1-x)^{d})}\mathcal{D}_{l,\theta+m-l}( y)&x\in(0,1].\end{cases} \tag{8}\]
We point out that as the diffusion is conditioned on avoiding \(0\), there will always be at least one surviving lineage in the moment-dual Kingman coalescent, and thus the index for \(m\) starts at \(1\).
### Diffusion without mutation
If \(\mathbf{\theta}=\mathbf{0}\), then the diffusion is absorbed upon hitting either boundary, and the corresponding transition density is given by
\[p(x,y;t)=\sum_{m=2}^{\infty}q_{m}^{\theta}(t)\left[\sum_{l=1}^{m-1}\mathcal{B }_{m,x}(l)\mathcal{D}_{l,m-l}(y)+(1-x)^{m}\delta_{0}(y)+x^{m}\delta_{1}(y) \right], \tag{9}\]
Conditioning the diffusion on remaining inside the interior of \([0,1]\), and again employing a leading order analysis of the resulting numerator and denominator allows us to conclude that
the transition density in this case is given by
\[\tilde{p}(x,y;t)=\begin{cases}\sum_{m=2}^{\infty}\frac{mq_{m}^{0}(t) }{\sum_{d=2}^{\infty}dq_{d}^{0}(t)}\mathcal{D}_{1,m-1}(y)&x=0,\\ \\ \sum_{m=2}^{\infty}\frac{mq_{m}^{0}(t)}{\sum_{d=2}^{\infty}dq_{d}^{0}(t)} \mathcal{D}_{m-1,1}(y)&x=1,\\ \\ \sum_{m=2}^{\infty}\frac{q_{m}^{0}(t)\sum_{l=1}^{m-1}\mathcal{B}_{ m,x}(l)}{\sum_{d=2}^{\infty}q_{d}^{0}(t)(1-x^{d}-(1-x)^{d})}\mathcal{D}_{l,m-l}(y)&x \in(0,1).\end{cases} \tag{10}\]
Note that as \(\boldsymbol{\theta}=\boldsymbol{0}\) and we are conditioning on non-absorption, the indices \(m\) and \(d\) are now forced to start from 2. This follows from the fact that the derivations performed above assume the starting point \(x\) to be within \((0,1)\) and subsequently send \(x\) to the corresponding boundary from within the interior of \((0,1)\), which corresponds to starting the diffusion arbitrarily close to the boundary. Thus at all times there is a fraction \(x\) of the population having one type, with the other fraction \(1-x\) having the other, neither of which can be lost by mutation.
## 2 Transition densities for neutral Wright-Fisher diffusion bridges
We now derive the density of a point \(y\in[0,1]\) sampled at time \(s\in(0,t)\) from the law of a Wright-Fisher diffusion bridge started at \(x\) at time \(0\) and ending at \(z\) at time \(t\). In addition to considering each mutation regime separately, we further split our considerations based on the values the start and end points \(x\) and \(z\) assume. As in the diffusion case, we derive the relevant expressions in the case \(\boldsymbol{\theta}=(0,\theta)\), as the other cases (\(\boldsymbol{\theta}=(0,0)\) or \(\boldsymbol{\theta}=(\theta,0)\)) follow using similar arguments. We further consider both cases when (i) the bridge is allowed to be absorbed at any time point within the time interval \((0,t)\), and (ii) the bridge is conditionally non-absorbing: \(X_{s}\in(0,1)\) for all \(s\in(0,t)\). We make use of the following short-hand notation for the different possible end-point combinations.
\begin{tabular}{c|c|c|c} & \(x=0\) & \(x=1\) & \(x\in(0,1)\) \\ \hline \(z=0\) & A1 & B1 & C1 \\ \(z\in(0,1)\) & A2 & B2 & C2 \\ \(z=1\) & A3 & B3 & C3 \\ \end{tabular} We further introduce a letter at the front of each of the above to differentiate between the cases \(\boldsymbol{\theta}=\boldsymbol{0}\) ('Z' for zero), \(\boldsymbol{\theta}=(0,\theta)\) ('O' for one sided), and \(\boldsymbol{\theta}\) with strictly positive entries ('P' for strictly positive).
Before proceeding with deriving the transition densities for all the above outlined cases, observe that the transition density for a Wright-Fisher diffusion bridge started from \(x\in[0,1]\) at time \(0\), ending at \(z\in[0,1]\) at time \(t\) and sampled at time \(s\) can be factorised as follows for \(y\in[0,1]\):
\[p^{x,z;t}(y;s)=\frac{p(x,y;s)p(y,z;t-s)}{p(x,z;t)}, \tag{11}\]
where again, for simplicity the dependence of (11) on the mutation parameters is omitted from the notation.
### Neutral diffusion bridge with one sided mutation \(\mathbf{\theta}=(0,\theta)\)
We start by noting that if the diffusion bridge is allowed to be absorbed at \(0\) at any time within the interval \((0,t)\), then the only cases of interest are when the left endpoint \(x\in(0,1]\), for otherwise the bridge stays at \(0\). Additionally if \(z\in(0,1]\), the bridge could not have been absorbed within the time interval \((0,t)\), and is therefore equivalent to conditioning it on non-absorption (which shall be tackled shortly). Thus we take \(x\in(0,1)\) and \(z=0\), substitute (3) into (11), and re-group terms to get that
\[p^{x,z;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^ {\theta}(t-s)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)(1-x)^{d}}\Bigg{[}\sum_{l=1 }^{m}\mathcal{B}_{m,x}(l)\frac{B(l,\theta+m-l+k)}{B(l,\theta+m-l)}\mathcal{D}_ {l,\theta+m-l+k}(y)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+(1-x )^{m}\delta_{0}(y)\Bigg{]}. \tag{12}\]
where \(B(\cdot,\cdot)\) is the beta function.
To derive the transition density when \(z\in(0,1]\), we first point out that conditioning a diffusion (or conditioning a diffusion bridge) on non-absorption is a special case of taking an \(h\)-transform for said process (see for instance Fitzsimmons _et al._ (1993); Griffiths _et al._ (2018)). Furthermore, diffusion bridges are invariant under \(h\)-transforms (see equation (10) in Griffiths _et al._ (2018)), and thus the distribution of a diffusion bridge conditioned on non-absorption is the same as that of the corresponding unconditioned process. We therefore need not differentiate between the transition density of the conditioned or unconditioned diffusion bridge, and simply use \(p^{x,z;t}(y;s)\) throughout.
Expanding (11) for \(x,z\in(0,1]\) gives
\[p^{x,z;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^{ \theta}(t-s)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)\sum_{f=1}^{d}\mathcal{B}_{d, x}(f)\mathcal{D}_{f,\theta+d-f}(z)}\\ \times\sum_{l,j=1}^{m,k}\binom{k}{j}\frac{B(l+j,\theta+m-l+k-j)}{ B(l,\theta+m-l)}\mathcal{D}_{j,\theta+k-j}(z)\mathcal{D}_{l+j,\theta+m-l+k-j}(y). \tag{13}\]
When \(x=0\), we make use of (6) in both the numerator and denominator above, and subsequently take the limit as \(x\to 0\), to arrive at
\[p^{0,z;t}(y;s) =\lim_{x\to 0}\left(\frac{x\sum_{m=1}^{\infty}q_{m}^{\theta}(s)m(\theta+m-1)(1-y)^{\theta+m-2}+O(x^{2})}{x\sum_{d=1}^{\infty}q_{d}^{\theta}(t)d(\theta+d-1)(1-z)^{\theta+d-2}+O(x^{2})}\right)p(y,z;t-s)\] \[=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^{\theta}(t-s)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)d(d+\theta-1)(1-z)^{\theta+d-2}}\] \[\quad\times m\sum_{j=1}^{k}\binom{k}{j}\frac{B(j+1,\theta+m-1+k-j)}{B(1,\theta+m-1)}\mathcal{D}_{j,\theta+k-j}(z)\mathcal{D}_{j+1,\theta+m-1+k-j}(y). \tag{14}\]
The above expression can further be used to derive the expression when \(z=0\) by taking leading order terms in \(z\) and taking the limit \(z\to 0\), giving
\[p^{0,0;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k} ^{\theta}(t-s)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)d(d+\theta-1)}\frac{m(m+ \theta-1)k(k+\theta-1)}{(m+k+\theta-1)(m+k+\theta-2)}\mathcal{D}_{2,\theta+m-1 +k-1}(y). \tag{15}\]
As previously mentioned, the case \(\boldsymbol{\theta}=(\theta,0)\) follows from the above by considering the symmetric map \(x\mapsto 1-x\).
### Neutral diffusion bridge with no mutation
We can replicate all of the above arguments for when \(\boldsymbol{\theta}=\boldsymbol{0}\) to get that if \(x\in(0,1)\) and \(z=0\), for \(y\in[0,1)\) we have
\[p^{x,0;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{0}(s)q_{k}^{0}( t-s)}{\sum_{d=1}^{\infty}q_{d}^{0}(t)(1-x)^{d}}\bigg{[}\sum_{l=1}^{m-1}\mathcal{B}_{m,x}(l)\frac{B(l,m-l+k)}{B(l,m-l)}\mathcal{D}_{l,m-l+k}(y)\\ +(1-x)^{m}\delta_{0}(y)\bigg{]} \tag{16}\]
whilst if \(x\in(0,1)\) and \(z=1\), we get for \(y\in(0,1]\)
\[p^{x,1;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{0}(s)q_{k}^{0}(t-s)}{\sum_{d=1} ^{\infty}q_{d}^{0}(t)x^{d}}\left[\sum_{l=1}^{m-1}\mathcal{B}_{m,x}(l)\frac{B(l+ k,m-l)}{B(l,m-l)}\mathcal{D}_{l+k,m-l}(y)+x^{m}\delta_{1}(y)\right] \tag{17}\]
Note that if \(z=0\), then we cannot have \(y=1\) and similarly if \(z=1\), \(y\) cannot be equal to \(0\).
Computing the transition densities conditioned on non-absorption can be done as in the one-sided mutation case illustrated above, by following the same arguments.
The resulting expressions for the conditioned diffusion bridges under all three mutation regimes can be found below (recall the notation in Table 2).
### Bridge diffusion transition density when \(\boldsymbol{\theta=0}\)
\[p^{x,z;t}(y;s)=\sum_{m,k=2}^{\infty}\frac{q_{m}^{0}(s)q_{k}^{0}(t-s)}{\sum_{d=2} ^{\infty}q_{d}^{0}(t)d(d-1)}\frac{m(m-1)k(k-1)}{(m+k-1)(m+k-2)}\mathcal{D}_{2,m+ k-2}(y) ZA1\]
\[p^{x,z;t}(y;s)=\sum_{m,k=2}^{\infty}\frac{q_{m}^{0}(s)q_{k}^{0}(t-s)}{\sum_{d=2 }^{\infty}q_{d}^{0}(t)d\mathcal{D}_{1,d-1}(z)}m\sum_{j=1}^{k-1}{k\choose j} \frac{B(j+1,m-1+k-j)}{B(1,m-1)}\mathcal{D}_{j,k-j}(z)\] \[\times\mathcal{D}_{j+1,m-1+k-j}(y) ZA2\]
\[p^{x,z;t}(y;s)=\sum_{m,k=2}^{\infty}\frac{q_{m}^{0}(s)q_{k}^{0}(t-s)}{2q_{2}^{ 0}(t)}m(m-1)k(k-1)B(k,m)\mathcal{D}_{k,m}(y) ZA3\]
\[p^{x,z;t}(y;s)=\sum_{m,k=2}^{\infty}\frac{q_{m}^{0}(s)q_{k}^{0}(t-s)}{2q_{2}^{ 0}(t)}m(m-1)k(k-1)B(m,k)\mathcal{D}_{m,k}(y) ZB1\]
\[p^{x,z;t}(y;s)=\sum_{m,k=2}^{\infty}\frac{q_{m}^{0}(s)q_{k}^{0}(t-s)}{\sum_{d=2}^{\infty}q_{d}^{0}(t)d\mathcal{D}_{d-1,1}(z)}m\sum_{j=1}^{k-1}{k\choose j}\frac{B(m-1+j,k-j+1)}{B(m-1,1)}\mathcal{D}_{j,k-j}(z)\] \[\times\mathcal{D}_{m-1+j,1+k-j}(y) ZB2\]
\[p^{x,z;t}(y;s)=\sum_{m,k=2}^{\infty}\frac{q_{m}^{0}(s)q_{k}^{0}(t-s)}{\sum_{d =2}^{\infty}q_{d}^{0}(t)d(d-1)}\frac{m(m-1)k(k-1)}{(m+k-1)(m+k-2)}\mathcal{D}_ {m+k-2,2}(y) ZB3\]
\[p^{x,z;t}(y;s)=\sum_{m,k=2}^{\infty}\frac{q_{m}^{0}(s)q_{k}^{0}(t-s)}{\sum_{d =2}^{\infty}q_{d}^{0}(t)(d-1)\mathcal{B}_{d,x}(1)}\sum_{l=1}^{m-1}\mathcal{B} _{m,x}(l)k(k-1)\frac{B(l+1,m-l+k-1)}{B(l,m-l)}\]
\[\times\mathcal{D}_{l+1,m-l+k-1}(y) ZC1\]
\[p^{x,z;t}(y;s)=\sum_{m,k=2}^{\infty}\frac{q_{m}^{0}(s)q_{k}^{0}(t-s)}{\tilde{p}(x,z;t)}\sum_{l,j=1}^{m-1,k-1}\mathcal{B}_{m,x}(l){k\choose j}\frac{B(l+j,m-l+k-j)}{B(l,m-l)}\mathcal{D}_{j,k-j}(z)\]
\[\times\mathcal{D}_{l+j,m-l+k-j}(y) ZC2\]
\[p^{x,z;t}(y;s)=\sum_{m,k=2}^{\infty}\frac{q_{m}^{0}(s)q_{k}^{0}(t-s)}{\sum_{d =2}^{\infty}q_{d}^{0}(t)(d-1)\mathcal{B}_{d,x}(d-1)}\sum_{l=1}^{m-1}\mathcal{B }_{m,x}(l)k(k-1)\frac{B(l+k-1,m-l+1)}{B(l,m-l)}\]
\[\times\mathcal{D}_{l+k-1,m-l+1}(y) ZC3\]
### Bridge diffusion transition density when \(\boldsymbol{\theta}=(0,\theta)\)
\[p^{x,z;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^{ \theta}(t-s)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)d(d+\theta-1)}\frac{m(m+ \theta-1)k(k+\theta-1)}{(m+k+\theta-1)(m+k+\theta-2)}\mathcal{D}_{2,\theta+m+k -2}(y) OA1\]
\[p^{x,z;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^ {\theta}(t-s)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)d\mathcal{D}_{1,\theta+d-1 }(z)}m\sum_{j=1}^{k}\binom{k}{j}\frac{B(j+1,\theta+m-1+k-j)}{B(1,\theta+m-1)}\] \[\times\mathcal{D}_{j,\theta+k-j}(z)\mathcal{D}_{j+1,\theta+m-1+k- j}(y) OA2\]
\[p^{x,z;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^ {\theta}(t-s)}{\theta q_{1}}m(m+\theta-1)\frac{B(k+1,\theta+m-1)}{B(k,\theta) }\mathcal{D}_{k+1,\theta+m-1}(y) OA3\]
\[p^{x,z;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^ {\theta}(t-s)}{\theta q_{1}}k(k+\theta-1)\frac{B(m+1,\theta+k-1)}{B(m,\theta) }\mathcal{D}_{m+1,\theta+k-1}(y) OB1\]
\[p^{x,z;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^{\theta}(t-s)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)\mathcal{D}_{d,\theta}(z)}\sum_{j=1}^{k}\binom{k}{j}\frac{B(m+j,\theta+k-j)}{B(m,\theta)}\mathcal{D}_{j,\theta+k-j}(z)\] \[\times\mathcal{D}_{m+j,\theta+k-j}(y) OB2\]
\[p^{x,z;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k }^{\theta}(t-s)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)\frac{1}{B(d,\theta)}} \frac{B(m+k,\theta)}{B(m,\theta)B(k,\theta)}\mathcal{D}_{m+k,\theta}(y) OB3\]
\[p^{x,z;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k }^{\theta}(t-s)k(k+\theta-1)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)(d+\theta-1 )\mathcal{B}_{d,x}(1)}\sum_{l=1}^{m}\mathcal{B}_{m,x}(l)\frac{B(l+1,\theta+k- 1+m-l)}{B(l,\theta+m-l)}\] \[\times\mathcal{D}_{l+1,\theta+k-1+m-l}(y) OC1\]
\[p^{x,z;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^{\theta}(t-s)}{\tilde{p}(x,z;t)}\sum_{l,j=1}^{m,k}\mathcal{B}_{m,x}(l)\binom{k}{j}\frac{B(l+j,\theta+m-l+k-j)}{B(l,\theta+m-l)}\mathcal{D}_{j,\theta+k-j}(z)\] \[\times\mathcal{D}_{l+j,\theta+m-l+k-j}(y) OC2\]
\[p^{x,z;t}(y;s)=\sum_{m,k=1}^{\infty}\frac{q_{m}^{\theta}(s)q_{k }^{\theta}(t-s)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)\frac{x^{d}}{B(d,\theta)} }\sum_{l=1}^{m}\mathcal{B}_{m,x}(l)\frac{B(l+k,\theta+m-l)}{B(l,\theta+m-l)B( k,\theta)}\mathcal{D}_{l+k,\theta+m-l}(y) OC3\]
### Bridge diffusion transition density when \(\boldsymbol{\theta}=(\theta_{1},\theta_{2})\)
\[p^{x,z;t}(y;s)=\sum_{m,k=0}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^{ \theta}(t-s)}{\sum_{d=0}^{\infty}q_{d}^{\theta}\frac{1}{B(\theta_{1},\theta_{2 }+d)}}\frac{B(\theta_{1},\theta_{2}+m+k)}{B(\theta_{1},\theta_{2}+m)B(\theta_{ 1},\theta_{2}+k)}\mathcal{D}_{\theta_{1},\theta_{2}+m+k}(y) PA1\]
\[p^{x,z;t}(y;s)=\sum_{m,k=0}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^{ \theta}(t-s)}{\sum_{d=0}^{\infty}q_{d}^{\theta}\frac{1}{B(\theta_{1},\theta_{2 }+d)}}\sum_{j=0}^{k}\binom{k}{j}\frac{B(\theta_{1}+j,\theta_{2}+m+k-j)}{B( \theta_{1},\theta_{2}+m)}\mathcal{D}_{\theta_{1}+j,\theta_{2}+k-j}(z)\] \[\times\mathcal{D}_{\theta_{1}+j,\theta_{2}+m+k-j}(y) PA2\]
\[p^{x,z;t}(y;s)=\sum_{m,k=0}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^{ \theta}(t-s)}{q_{0}^{\theta}\frac{1}{B(\theta_{1},\theta_{2})}}\frac{B(\theta _{1}+k,\theta_{2}+m)}{B(\theta_{1},\theta_{2}+m)B(\theta_{1}+k,\theta_{2})} \mathcal{D}_{\theta_{1}+k,\theta_{2}+m}(y) PA3\]
\[p^{x,z;t}(y;s)=\sum_{m,k=0}^{\infty}\frac{q_{m}^{\theta}(s)q_{k}^{ \theta}(t-s)}{q_{0}^{\theta}\frac{1}{B(\theta_{1},\theta_{2})}}\frac{B(\theta _{1}+m,\theta_{2}+k)}{B(\theta_{1}+m,\theta_{2})B(\theta_{1},\theta_{2}+k)} \mathcal{D}_{\theta_{1}+m,\theta_{2}+k}(y) PB1\]
\[p^{x,z;t}(y;s)=\sum_{m,k=0}^{\infty}\frac{q_{m}^{\theta}(s)q_{k} ^{\theta}(t-s)}{\sum_{d=0}^{\infty}q_{d}^{\theta}\frac{1}{B(\theta_{1}+d, \theta_{2})}}\sum_{j=0}^{k}\binom{k}{j}\frac{B(\theta_{1}+m+j,\theta_{2}+k-j)} {B(\theta_{1}+m,\theta_{2})}\mathcal{D}_{\theta_{1}+j,\theta_{2}+k-j}(z)\] \[\times\mathcal{D}_{\theta_{1}+m+j,\theta_{2}+k-j}(y) PB2\]
\[p^{x,z;t}(y;s)=\sum_{m,k=0}^{\infty}\frac{q_{m}^{\theta}(s)q_{k} ^{\theta}(t-s)}{\sum_{d=0}^{\infty}q_{d}^{\theta}\frac{1}{B(\theta_{1}+d, \theta_{2})}}\frac{B(\theta_{1}+m+k,\theta_{2})}{B(\theta_{1}+m,\theta_{2})B( \theta_{1}+k,\theta_{2})}\mathcal{D}_{\theta_{1}+m+k,\theta_{2}}(y) PB3\]
\[p^{x,z;t}(y;s)=\sum_{m,k=0}^{\infty}\frac{q_{m}^{\theta}(s)q_{k} ^{\theta}(t-s)}{\sum_{d=0}^{\infty}q_{d}^{\theta}(t)\frac{(1-x)^{d}}{B(\theta _{1},\theta_{2}+d)}}\sum_{l=0}^{m}\mathcal{B}_{m,x}(l)\frac{B(\theta_{1}+l, \theta_{2}+m-l+k)}{B(\theta_{1},\theta_{2}+k)B(\theta_{1}+l,\theta_{2}+m-l)}\] \[\times\mathcal{D}_{\theta_{1}+l,\theta_{2}+m-l+k}(y) PC1\]
\[p^{x,z;t}(y;s)=\sum_{m,k=0}^{\infty}\frac{q_{m}^{\theta}(s)q_{k} ^{\theta}(t-s)}{\tilde{p}(x,z;t)}\sum_{l,j=0}^{m,k}\mathcal{B}_{m,x}(l)\binom{ k}{j}\frac{B(\theta_{1}+l+j,\theta_{2}+m-l+k-j)}{B(\theta_{1}+l,\theta_{2}+m-l)}\] \[\times\mathcal{D}_{\theta_{1}+j,\theta_{2}+k-j}(z)\mathcal{D}_{ \theta_{1}+l+j,\theta_{2}+m-l+k-j}(y) PC2\]
\[p^{x,z;t}(y;s)=\sum_{m,k=0}^{\infty}\frac{q_{m}^{\theta}(s)q_{k} ^{\theta}(t-s)}{\sum_{d=0}^{\infty}q_{d}^{\theta}(t)\frac{x^{d}}{B(\theta_{1}+d,\theta_{2})}}\sum_{l=0}^{m}\mathcal{B}_{m,x}(l)\frac{B(\theta_{1}+l+k,\theta_ {2}+m-l)}{B(\theta_{1}+k,\theta_{2})B(\theta_{1}+l,\theta_{2}+m-l)}\] \[\times\mathcal{D}_{\theta_{1}+l+k,\theta_{2}+m-l}(y) PC3\]
## 3 Sampling schemes
We now detail how to obtain sample draws from the above transition densities for both the diffusion and diffusion bridge case.
### Sampling from the law of the diffusion
Note that we need only consider the cases \(\mathbf{\theta}=(0,\theta)\) and \(\mathbf{\theta}=\mathbf{0}\), as the case \(\mathbf{\theta}=(\theta_{1},\theta_{2})\) is already covered in Jenkins and Spano (2017). Furthermore, the transition densities (3), (8), (9) and (10) for \(x\in[0,1)\) are similar, allowing for near identical sampling schemes. To this end, we restrict our attention to the case when \(\mathbf{\theta}=(0,\theta)\), starting with a sampling scheme for (3).
In this case, Algorithm 1 in Jenkins and Spano (2017) can be easily adapted to sample from (3):
1. Sample \(M\sim\{q_{m}^{\theta}(t)\}_{m\in\mathbb{N}}\),
2. Conditionally on \(M=m\), sample \(L\sim\text{Bin}(m,x)\),
3. If \(L=0\) return \(0\), else draw \(Y\sim\text{Beta}(l,\theta+m-l)\).
The only modification to Algorithm 1 in Jenkins and Spano (2017) is the sampling procedure in step 3, where the outcome \(L=0\) encodes the event when the diffusion is absorbed before the sampling time. A similar strategy allows for draws from (9), where additionally if \(L=m\), then in step 3 we return \(Y=1\).
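For illustration, a minimal Python sketch of this scheme is given below. It approximates the probabilities \(q_{m}^{\theta}(t)\) by naively truncating and renormalising their alternating series (whose terms are written out explicitly in Section 5), whereas EWF evaluates them exactly via the alternating series trick; the truncation levels `m_max` and `k_extra` are purely illustrative choices and are only sensible for moderate values of \(t\).

```python
# Minimal sketch of the three-step sampler for (3): mutation parameter (0, theta) with
# theta > 0, absorption at 0 allowed. The q_m^theta(t) are approximated by truncating
# their alternating series and renormalising; this is an illustrative shortcut, not the
# exact EWF evaluation, and it degrades for small t (see Section 6).
import math
import numpy as np

rng = np.random.default_rng(1)

def q_truncated(t, theta, m_max=60, k_extra=80):
    """Approximate (q_m^theta(t))_{m=1..m_max}; entry 0 is kept at zero so indices equal m."""
    q = np.zeros(m_max + 1)
    for m in range(1, m_max + 1):
        s = 0.0
        for k in range(m, m + k_extra):
            log_term = (math.log(theta + 2 * k - 1)
                        + math.lgamma(theta + k + m - 1) - math.lgamma(theta + m)
                        - math.lgamma(m + 1) - math.lgamma(k - m + 1)
                        - 0.5 * k * (theta + k - 1) * t)
            s += (-1) ** (k - m) * math.exp(log_term)
        q[m] = max(s, 0.0)
    return q / q.sum()          # renormalise the truncation

def sample_wf_one_sided(x, t, theta):
    q = q_truncated(t, theta)
    m = rng.choice(len(q), p=q)           # step 1: M ~ {q_m^theta(t)}
    l = rng.binomial(m, x)                # step 2: L | M = m ~ Bin(m, x)
    if l == 0:
        return 0.0                        # step 3: absorbed at 0 before time t
    return rng.beta(l, theta + m - l)     #         otherwise Y ~ Beta(l, theta + m - l)

draws = [sample_wf_one_sided(x=0.3, t=0.5, theta=1.0) for _ in range(5)]
```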
For the case when the diffusion is conditioned on non-absorption, both expressions on the RHS of (8) are mixtures of beta distributions, with the weights forming a probability mass function (pmf) on \(\mathbb{N}\). When the starting point \(x\) is set to \(0\), one can return a draw \(Y\) from the law of the corresponding diffusion process sampled at time \(t\), by
1. Drawing \(M\sim\left\{\frac{mq_{m}^{0}(t)}{\sum_{d=1}^{\infty}dq_{d}^{0}(t)}\right\}_{m \in\mathbb{N}}\)
2. Conditionally on \(M=m\), drawing \(Y\sim\text{Beta}(1,m-1)\)
Step 2 is straightforward, whilst for step 1 the 'alternating series trick' can be employed--this technique requires access to a pair of monotonic sequences of upper and lower bounds for terms in the numerator and denominator, both converging to their exact values. This is immediate for the numerator (Proposition 1 in Jenkins and Spano (2017)), whilst for the denominator we modify slightly the arguments present in Proposition 3 in Jenkins and Spano (2017) (see Section
5 for further details).
A similar sampling scheme can be used for drawing samples from the law of the diffusion started from \(x\in(0,1]\), where once again appropriate monotonic upper and lower bounds can be constructed for both numerator and denominator (see Section 5 for more details).
The above can be replicated and suitably tweaked to return samples from (10), where an additional scheme is needed to deal with the case \(x=1\).
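To illustrate the 'alternating series trick' itself, the following sketch samples an index from a pmf that is only accessible through monotonically converging lower and upper bounds, which is exactly the situation faced above. The bound functions in the demo are artificial placeholders; in EWF they would be partial sums of the series for the \(q_{m}^{\theta}(t)\) and for the relevant denominators.

```python
# Sketch of the alternating series trick: return the smallest index m whose cumulative
# probability exceeds a uniform draw u, using only bounds  lower(m, n) <= p_m <= upper(m, n)
# that converge monotonically to p_m as the refinement level n increases.
import numpy as np

def sample_index(lower, upper, u, n_max=200):
    m = 0
    while True:
        for n in range(1, n_max + 1):                      # refine all bounds jointly
            lo_cdf = sum(lower(i, n) for i in range(m + 1))
            up_cdf = sum(upper(i, n) for i in range(m + 1))
            if lo_cdf >= u:                                # certified: cdf(m) >= u,
                return m                                   # and cdf(m-1) < u was certified earlier
            if up_cdf < u:                                 # certified: cdf(m) < u
                break
        else:
            raise RuntimeError("bounds did not converge within n_max refinements")
        m += 1

# Placeholder bounds for p_m = 2^{-(m+1)}, purely to demonstrate the mechanism:
lower = lambda m, n: 2.0 ** -(m + 1) * (1.0 - 2.0 ** -n)
upper = lambda m, n: 2.0 ** -(m + 1) * (1.0 + 2.0 ** -n)
rng = np.random.default_rng(2)
samples = [sample_index(lower, upper, rng.random()) for _ in range(5)]
```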
### Sampling from the law of the diffusion bridge
Once again we start by considering the case when the bridge is allowed to be absorbed at the boundary within the time interval \((0,t)\), and the mutation parameter is given by \(\boldsymbol{\theta}=(0,\theta)\).
To sample from (12), we follow an approach similar to that illustrated above for the diffusion case. Recall that we need only focus on the case when \(z=0\), for otherwise the bridge cannot have been absorbed during the time interval \((0,t)\) and thus is equivalent to conditioning on non-absorption (for which an appropriate sampling scheme will be provided shortly). Observe that the RHS of (12) can be viewed as a mixture of beta distributions, with the mixture weights
\[p_{m,k,l}:=\begin{cases}\frac{q_{m}^{\theta}(s)q_{k}^{\theta}(t-s)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)(1-x)^{d}}\mathcal{B}_{m,x}(l)\frac{B(l,\theta+m-l+k)}{B(l,\theta+m-l)}&m,k\in\mathbb{N},l\in\{1,\ldots,m\}\\ \\ \frac{q_{m}^{\theta}(s)q_{k}^{\theta}(t-s)}{\sum_{d=1}^{\infty}q_{d}^{\theta}(t)(1-x)^{d}}(1-x)^{m}&m,k\in\mathbb{N},l=0\\ \\ 0&\text{otherwise}\end{cases}\]
defining a pmf on a subspace of \(\mathbb{N}^{3}\) (for more details, refer to (Jenkins and Spano, 2017, Section 3)). Individual monotonic upper and lower bounds can be constructed for \(\{q_{m}^{\theta}(s)\}_{m\in\mathbb{N}}\), \(\{q_{k}^{\theta}(t-s)\}_{k\in\mathbb{N}}\) and \(\sum_{d=1}^{\infty}q_{d}^{\theta}(t)(1-x)^{d}\) (see Section 5 for full details with regards to this last quantity), and subsequently these can be put together to obtain monotonic upper and lower bounds on the \(\{p_{m,k,l}\}_{m,k,l\in\mathbb{N}}\). Thus the alternating series trick lends itself to return a draw \((M,K,L)\sim\{p_{m,k,l}\}_{m,k,l,\in\mathbb{N}}\), and we use this to draw the relevant sample diffusion bridge point:
1. Sample \((M,K,L)\sim\{p_{m,k,l}\}_{m,k,l,\in\mathbb{N}}\)
2. If \(L=0\), return \(Y=0\), else return \(Y\sim\text{Beta}(l,\theta+m-l+k)\).
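A structural Python sketch of this bridge sampler is given below. The truncated weight array follows the definition of \(p_{m,k,l}\) above (the common denominator cancels once the truncated weights are renormalised), and the final beta component \(\text{Beta}(l,\theta+m-l+k)\) matches the mixture form in (12) and (16). The arrays `q_s` and `q_tms` stand for truncations of \(\{q_{m}^{\theta}(s)\}\) and \(\{q_{k}^{\theta}(t-s)\}\); the Poisson pmfs used in the demo are placeholders only, not the actual coalescent probabilities.

```python
# Structural sketch of the bridge sampler: build truncated weights p_{m,k,l}, draw (M, K, L),
# return 0 (absorption) if L = 0, and otherwise a draw from the matching beta component.
import numpy as np
from scipy import special, stats

rng = np.random.default_rng(11)

def sample_bridge_to_zero(x, theta, q_s, q_tms):
    M, K = len(q_s), len(q_tms)                        # m = 1..M, k = 1..K after truncation
    w = np.zeros((M + 1, K + 1, M + 1))                # w[m, k, l]
    for m in range(1, M + 1):
        for k in range(1, K + 1):
            qq = q_s[m - 1] * q_tms[k - 1]
            w[m, k, 0] = qq * (1 - x) ** m             # l = 0: already absorbed at 0
            for l in range(1, m + 1):
                w[m, k, l] = (qq * stats.binom.pmf(l, m, x)
                              * special.beta(l, theta + m - l + k)
                              / special.beta(l, theta + m - l))
    w /= w.sum()                                       # the common denominator cancels here
    m, k, l = np.unravel_index(rng.choice(w.size, p=w.ravel()), w.shape)
    if l == 0:
        return 0.0
    return rng.beta(l, theta + m - l + k)              # beta component D_{l, theta+m-l+k}

# Placeholder truncated probabilities, for demonstration only:
q_s = stats.poisson.pmf(np.arange(1, 26), 5.0); q_s /= q_s.sum()
q_tms = stats.poisson.pmf(np.arange(1, 26), 5.0); q_tms /= q_tms.sum()
y = sample_bridge_to_zero(x=0.3, theta=1.0, q_s=q_s, q_tms=q_tms)
```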
A similar scheme can be derived for the case \(\mathbf{\theta}=(\theta,0)\) by symmetric arguments, whereas for \(\mathbf{\theta}=\mathbf{0}\) the above can be replicated with the only significant difference being that if \(L=m\), then the routine returns \(Y=1\) in step 2.
We now turn to the case when the diffusion bridge is conditioned on _not_ being absorbed within the time interval \((0,t)\). Corollary 2 in Griffiths _et al._ (2018) gives us that Wright-Fisher diffusion bridges with mutation parameters either \(\mathbf{\theta}=\mathbf{0}\) or \(\mathbf{\theta}=(0,\theta)\) are equal (in distribution) to Wright-Fisher bridges with mutation parameters \(\mathbf{\theta}=(2,2)\) or \(\mathbf{\theta}=(2,\theta)\) respectively. Thus from now on we shall focus our attention solely on the case when \(\theta_{1},\theta_{2}>0\).
The strategy will be very close to the one developed above and based on the method found in (Jenkins and Spano, 2017, Section 3). As in the unconditioned bridge case, the diffusion bridge densities (PA1)-(PC3) can be viewed as mixtures of beta distributions, where the mixture weights now define a pmf on subspaces of \(\mathbb{N}^{4}\) and whose exact form depends on the particular density being considered.
As diffusion bridges are invariant under time reversal, a diffusion bridge that goes from \(x\) to \(y\) in time \(s\) and then proceeds to terminate at \(z\) at time \(t\) has the same law as a diffusion bridge that starts at \(z\), proceeds to \(y\) at time \(t-s\) and ends at \(x\) at time \(t\). This, coupled with symmetric arguments allows us to sample from the various transition densities (PA1)-(PC3) using just four different schemes, which we group as follows:
1. Start and endpoints are the same (i.e. equations (PA1) and (PB3)).
2. Start and endpoints are opposite boundary points (i.e. equations (PA3) and (PB1)).
3. \(z\) is in the interior of \([0,1]\), and the starting point is at one of the boundary points (i.e. equations (PA2), (PB2), (PC1) and (PC3) -- note that for (PC1) and (PC3) we make use of time reversal).
4. Start and endpoints are both inside the interior of \([0,1]\) (i.e. equation (PC2)).
Using the above groupings, it remains to show that the resulting four different transition densities consist of mixture weights \(\{p_{m,k,l,j}\}_{m,k,l,j\in\mathbb{N}}\) for which one can obtain monotonic sequences of upper and lower bounds. Again constructing these quantities for the numerator is straightforward, whereas the denominator is tackled in Section 5 (by suitably modifying Proposition 4 from Jenkins and Spano (2017)).
We point out that for both the diffusion and diffusion bridge case, numerical instabilities present when computing contributions to the infinite series representation of the probabilities \(\{q_{m}^{\theta}(t)\}_{m\in\mathbb{N}}\) for small time increments prompt the use of approximations for these quantities. For more details, please refer to Section 6.
## 4 Simulation of non-neutral paths
As observed in (Jenkins and Spano, 2017, Section 5), the neutral Wright-Fisher process can be used as a proposal distribution in an appropriate rejection sampler to return exact draws from a non-neutral process. We give a brief overview for completeness.
Denote by \(\mathbb{W}\mathbb{F}_{\sigma,\boldsymbol{\theta}}^{x_{0}}\) the law induced by the solution \(X^{T}:=(X_{t})_{t=0}^{T}\) to the SDE given by equation (1) in the main paper on the space of continuous functions mapping \([0,T]\) into \([0,1]\) for some fixed time \(T\), and by \(\mathbb{W}\mathbb{F}_{0,\boldsymbol{\theta}}^{x_{0}}\) the corresponding neutral law. The Radon-Nikodym derivative between these two laws is given by
\[\frac{d\mathbb{W}\mathbb{F}_{\sigma,\boldsymbol{\theta}}^{x_{0}}}{d\mathbb{W}\mathbb{F}_{0,\boldsymbol{\theta}}^{x_{0}}}(X^{T})\propto\exp\left\{\tilde{A}(X_{T})-\tilde{A}^{+}\right\}\exp\left\{-\int_{0}^{T}\left(\varphi(X_{s})-\varphi^{-}\right)ds\right\} \tag{18}\]
where \(\tilde{A}(x):=(\sigma/2)\int_{0}^{x}\eta(z)dz\) with \(\tilde{A}(x)\leq\tilde{A}^{+}\) for any \(x\in[0,1]\), and
\[\varphi(x):=\frac{\sigma}{4}\left[\left(-\theta_{2}x+\theta_{1}(1-x)\right) \eta(x)+x(1-x)\left(\frac{\sigma}{2}\eta^{2}(x)+\eta^{\prime}(x)\right)\right]. \tag{19}\]
Observe that \(\varphi(x)\) is a polynomial in \(x\) (in view of \(\eta(x)\) being a polynomial), and thus we can always find \(\varphi^{-}\) and \(\varphi^{+}\) such that \(\varphi^{-}\leq\varphi(x)\leq\varphi^{+}\) on [0,1], and similarly for \(\tilde{A}(x)\leq\tilde{A}^{+}\). The first term on the RHS of (18) can be viewed as a simple \(e^{\tilde{A}(X_{T})-\tilde{A}^{+}}\)-coin flip, whilst the second term is precisely the probability that all points in a unit rate Poisson point process \(\Phi=\{(t_{i},\omega_{i})\}_{i\in\mathbb{N}}\) on \([0,T]\times[0,\infty)\) lie in the epigraph of the map \(t\mapsto\varphi(X_{t})-\varphi^{-}\). Furthermore, because \(\varphi(x)\leq\varphi^{+}\), we can thin \(\Phi\) to a Poisson point process on \([0,T]\times[0,\varphi^{+}-\varphi^{-}]\) and hence simulate an event with probability given by the RHS of (18).
This allows for exact paths from the non-neutral Wright-Fisher process to be returned by first simulating the appropriate Poisson point process, subsequently generating draws from the neutral Wright-Fisher process at the time-stamps returned by the Poisson point process, checking whether the generated points all lie in the appropriate region, and finally running a simple \(e^{\tilde{A}(X_{T})-\tilde{A}^{+}}\)-coin flip.
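The following sketch shows this retrospective rejection step in Python. All numerical inputs in the demo (genic selection with \(\eta(x)=1\), \(\sigma=1\), \(\boldsymbol{\theta}=(1,1)\), and the hand-computed bounds \(\tilde{A}^{+}=1/2\), \(\varphi^{-}=-1/4\), \(\varphi^{+}=1/4\) obtained from (19)) are our own illustrative choices, and the crude Euler skeleton stands in for the exact neutral sampler that EWF would use, so the sketch only illustrates the structure of the algorithm, not an exact draw.

```python
# Sketch of the rejection sampler of Section 4: propose a neutral skeleton at the Poisson
# point process times, accept if every mark lies above phi(X_t) - phi_minus and if the
# additional exp(A~(X_T) - A~+) coin flip succeeds.
import numpy as np

rng = np.random.default_rng(7)

def euler_neutral_skeleton(times, x0, theta=(1.0, 1.0), dt=1e-3):
    # Crude Euler-Maruyama placeholder for the neutral WF path; EWF would use the exact sampler.
    grid = np.arange(0.0, times[-1] + dt, dt)
    x = np.empty_like(grid); x[0] = x0
    for i in range(1, len(grid)):
        drift = 0.5 * (theta[0] * (1 - x[i - 1]) - theta[1] * x[i - 1])
        diff = np.sqrt(max(x[i - 1] * (1 - x[i - 1]), 0.0))
        x[i] = np.clip(x[i - 1] + drift * dt + diff * np.sqrt(dt) * rng.normal(), 0.0, 1.0)
    return np.interp(times, grid, x)

def non_neutral_endpoint(x0, T, A_tilde, A_plus, phi, phi_minus, phi_plus,
                         neutral_skeleton, max_tries=10_000):
    for _ in range(max_tries):
        n_pts = rng.poisson((phi_plus - phi_minus) * T)      # PPP on [0,T] x [0, phi+ - phi-]
        ts = np.sort(rng.uniform(0.0, T, size=n_pts))
        ws = rng.uniform(0.0, phi_plus - phi_minus, size=n_pts)
        path = neutral_skeleton(np.append(ts, T), x0)        # neutral draws at ts and at T
        skel, x_T = path[:-1], path[-1]
        if np.all(ws >= phi(skel) - phi_minus) and rng.random() < np.exp(A_tilde(x_T) - A_plus):
            return x_T                                       # accepted draw at time T
    raise RuntimeError("no acceptance within max_tries")

sigma = 1.0
draw = non_neutral_endpoint(
    x0=0.3, T=1.0,
    A_tilde=lambda x: sigma * x / 2.0, A_plus=sigma / 2.0,
    phi=lambda x: 0.25 * (1.0 - 1.5 * x - 0.5 * x ** 2),     # phi for eta = 1, theta = (1, 1)
    phi_minus=-0.25, phi_plus=0.25,
    neutral_skeleton=euler_neutral_skeleton,
)
```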
In order to calculate \(\tilde{A}^{+}\), \(\varphi^{-}\) and \(\varphi^{+}\), a Polynomial class (with associated root finding algorithm implementing the Jenkins-Traub algorithm, developed by Bill Hallahan1) was used. Whilst the implementation of this routine should work for polynomials of any degree, only polynomials \(\eta(x)\) of degree at most 25 were allowed to ensure that the code returns reliable output within a reasonable amount of time.
Footnote 1: [https://www.codeproject.com/Articles/674149/A-Real-Polynomial-Class-with-Root-Finder](https://www.codeproject.com/Articles/674149/A-Real-Polynomial-Class-with-Root-Finder)
## 5 Monotonic upper and lower bounds for the new denominators
In this section we show that the denominators in the transition densities for both the diffusion (equations (8) and (10)) and the diffusion bridge (equations (12), (16) and (17), as well as equations (PA1) through to (PC3)) allow for monotonic sequences of upper and lower bounds. By comparing (8) and (10), as well as (12), (16) and (17), it becomes clear that we can consider solely the denominator \(\sum_{d=2}^{\infty}q_{d}^{0}(t)(1-x^{d}-(1-x)^{d})\) as the proofs for the other quantities follow using near identical arguments.
We further emphasise once more (as done in Section 3) that for the bridge case we need only consider the cases (PA1), (PA2), (PA3), and (PC2) in order to be able to simulate draws from any Wright-Fisher diffusion bridge process. Additionally, observe that the denominator for (PA3) is given by \(q_{0}^{\theta}(t)\), for which monotonic upper and lower bounds are immediate, whereas (PC2) is precisely the case covered by Proposition 3 in Jenkins and Spano (2017). It therefore remains to find monotonically converging sequences of upper and lower bounds for each of:
\[\sum_{d=0}^{\infty}q_{d}^{\theta}(t)d, \tag{20}\]
\[\sum_{d=0}^{\infty}q_{d}^{\theta}(t)(1-x^{d}-(1-x)^{d}), \tag{21}\]
\[\sum_{d=0}^{\infty}q_{d}^{\theta}(t)\frac{1}{B(\theta_{1}+d,\theta_{2})}, \tag{22}\]
\[\sum_{d=0}^{\infty}q_{d}^{0}(t)\frac{z^{\theta_{1}+d-1}(1-z)^{\theta_{2}-1}}{B(\theta_{1}+d,\theta_{2})}. \tag{23}\]
Further, by equation (5) in Griffiths _et al._ (2018), (20) admits the required monotonic bounds through analytic expressions for the falling factorial moments of the ancestral process (see Theorem 5 in Griffiths _et al._ (2018) and the preceding paragraphs for full details).
### Calculations for (21)
Dealing with (21) requires some more work; we start by observing that \((1-x^{m}-(1-x)^{m})=\sum_{l=1}^{m-1}{m\choose l}x^{l}(1-x)^{m-l}\). We can modify the arguments in Lemma 1 in Jenkins and Spano (2017) to deduce that for \(L_{m}\sim\text{Bin}(m,x)\) we have that
\[\sum_{l=1}^{m}\mathbb{P}\left[L_{m+1}=l\right]\leq(x+2)\sum_{l=1}^{m-1} \mathbb{P}\left[L_{m}=l\right]. \tag{24}\]
To see this, observe that for \(l\leq\lfloor mx\rfloor\)
\[\mathbb{P}\left[L_{m+1}=l\right]=\frac{m+1}{m+1-l}(1-x)\mathbb{P}\left[L_{m}= l\right]\leq\mathbb{P}\left[L_{m}=l\right], \tag{25}\]
where in the last inequality we used the fact that \(l\leq\lfloor mx\rfloor\leq mx\). When \(l\geq\lfloor mx\rfloor\), we have that
\[\mathbb{P}\left[L_{m+1}=l+1\right]=\frac{m+1}{l+1}x\mathbb{P}\left[L_{m}=l \right]\leq(x+1)\mathbb{P}\left[L_{m}=l\right] \tag{26}\]
by observing that when \(mx>1\), \(\frac{m+1}{l+1}\leq\frac{m+1}{mx}\leq 1+\frac{1}{x}\), whereas for \(mx\leq 1\), \(\frac{m+1}{l+1}\leq m+1\leq 1+\frac{1}{x}\). Summing together (25) and (26) (and noting the double counting happening at \(\lfloor mx\rfloor\)) gives the result. With this in hand we can apply Proposition 3 in Jenkins and Spano (2017), this time setting \(c_{k,m}:=b_{k}^{(t,\theta)}(m)\sum_{l=1}^{m-1}\mathbb{P}[L_{m}=l]\), and replacing \(K^{(x,z)}\) with \(x+2\).
### Calculations for (23)
Note first that
\[\sum_{m=1}^{\infty}q_{m}^{0}(t)\frac{z^{\theta_{1}+m-1}(1-z)^{ \theta_{2}-1}}{B(\theta_{1}+m,\theta_{2})}\] \[=\sum_{m=1}^{\infty}\left(\sum_{k=m}^{\infty}(-1)^{k-m}\frac{ \theta+2k-1}{m!(k-m)!}\frac{(\theta+k+m-2)!}{(\theta+m-1)!}e^{-\frac{k(\theta +k-1)t}{2}}\right)\frac{z^{\theta_{1}+m-1}(1-z)^{\theta_{2}-1}}{B(\theta_{1}+ m,\theta_{2})},\]
and observe that the terms inside the inner summation (excluding the factor \((-1)^{k-m}\)) correspond to the terms \(b_{k}^{(t,\theta)}(m)\) as defined in Proposition 1 in Jenkins and Spano (2017). Let \(c_{k,m}:=b_{k}^{(t,\theta)}(m)\frac{z^{\theta_{1}+m-1}(1-z)^{\theta_{2}-1}}{B (\theta_{1}+m,\theta_{2})}\), and observe that we can re-write the above as \(\sum_{i=0}^{\infty}(-1)^{i}d_{i}\)
with
\[d_{2m} =\sum_{j=0}^{m}c_{m+j,m-j}, d_{2m+1} =\sum_{j=0}^{m}c_{m+1+j,m-j}. \tag{27}\]
For \(\varepsilon\in(0,1)\) fixed, denote by
\[E^{t} :=\inf\left\{m\geq 0:2j\geq C_{m-j}^{t}\text{ for all }j=0,\ldots,m \right\}, \tag{28}\] \[D_{\varepsilon}^{t,\theta} :=\inf\left\{k\geq\left(\frac{1}{t}-\frac{\theta+1}{2}\right) \lor 0:(\theta+2k+1)e^{-\frac{(\theta+2k)t}{2}}<1-\varepsilon\right\}. \tag{29}\]
Proposition 3 in Jenkins and Spano (2017) can be restated for the case we consider here as follows.
**Proposition 5.1**.: _For all \(m>D_{\varepsilon}^{t,\theta}\lor E^{t}\vee\lfloor\frac{\theta+2}{\varepsilon( \theta_{1}+1)}-1\rfloor\),_
\[d_{2m+2}<d_{2m+1}<d_{2m}. \tag{30}\]
Proof.: The proof proceeds as in Jenkins and Spano (2017). As \(m>E^{t}\), \(2j\geq C_{m-j}^{t}\), and thus by Proposition 1 in Jenkins and Spano (2017)\(b_{m+j+1}(m-j)<b_{m+j}(m-j)\). Multiplying both sides of the inequality by \(\frac{z^{\theta_{1}+m-1}(1-z)^{\theta_{2}-1}}{B(\theta_{1}+m,\theta_{2})}\) and summing over \(j\) gives
\[d_{2m+1}=\sum_{j=0}^{m}c_{m+j+1,m-j}<\sum_{j=0}^{m}c_{m+j,m-j}=d_{2m}.\]
The above reasoning also leads to
\[\sum_{j=1}^{m}c_{m+j+2,m-j}<\sum_{j=1}^{m}c_{m+j+1,m-j},\]
which coupled with \(c_{m+1,m+1}+c_{m+2,m}<c_{m+1,m}\) (which still needs to be proved) gives the required \(d_{2m+2}<d_{2m+1}\). Now observe that
\[\frac{c_{k+1,m}}{c_{k,m}}=\frac{b_{k+1}^{(t,\theta)}(m)}{b_{k}^{(t,\theta)}(m )}=\frac{\theta+m+k-1}{k-m+1}\frac{\theta+2k+1}{\theta+2k-1}e^{-\frac{(\theta +2k)t}{2}}\leq(\theta+2k+1)e^{-\frac{(\theta+2k)t}{2}},\]
setting \(k=m+1\) and observing that \(m>D_{\varepsilon}^{t}\), we get that \(c_{m+2,m}<(1-\varepsilon)c_{m+1,m}\). Similarly
\[\frac{c_{m+1,m+1}}{c_{m+1,m}}=\frac{\theta+2m}{(m+1)(\theta+m)}z\frac{B( \theta_{1}+m,\theta_{2})}{B(\theta_{1}+m+1,\theta_{2})}\leq\frac{\theta+2}{( m+1)(\theta_{1}+1)}<\varepsilon\]
if \(m>\lfloor\frac{\theta+2}{\varepsilon(\theta_{1}+1)}-1\rfloor\). The result follows.
### Calculation for (22)
The same arguments used above apply (omitting the \(z^{\theta_{1}+d-1}(1-z)^{\theta_{2}-1}\) factor, which simplifies the proof slightly).
## 6 Approximations for small times and implementation
Whenever the simulation time increments become too small, numerical instabilities crop up when computing contributions to the quantities \(q_{m}^{\theta}(t)\). Thus (as done in Jenkins and Spano (2017)), adequate approximations are necessary which make use of the small time asymptotics of \(q_{m}^{\theta}(t)\). Theorem 4 in Griffiths (1984) gives that as \(t\to 0\), the ancestral block counting process of the coalescent is well approximated by a Gaussian random variable with mean
\[\mu=\frac{2\eta}{t},\qquad\qquad\text{ where }\eta=\begin{cases}1&\beta=0\\ \frac{\beta}{e^{\beta}-1}&\beta\neq 0\end{cases},\qquad\qquad\text{ and }\beta=\frac{(\theta-1)t}{2},\]
and variance
\[\sigma^{2}=\begin{cases}\frac{2}{3t}&\beta=0\\ \frac{2\eta}{t}(\frac{\eta+\beta}{\beta})^{2}\left(1+\frac{\eta}{\eta+\beta}- 2\eta\right)&\beta\neq 0\end{cases}\]
(note that Theorem 4 in Griffiths (1984) is missing a factor of \(\beta^{-2}\)). In light of this, whenever the time increment \(t\) falls below a specific threshold \(\varepsilon_{G}\), EWF makes use of the above Gaussian approximation, such that the probabilities \(q_{m}^{\theta}(t)\) are replaced by their (suitably rounded) Gaussian counterparts. In the current implementation of EWF, the threshold \(\varepsilon_{G}\) was set to \(0.08\) after extensive testing as it was found that such a cutoff ensured a suitable trade-off between retaining precision by employing the approximation only when necessary, and having a robust and efficient implementation.
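A direct transcription of this approximation into Python might look as follows; the rounding and the non-negativity floor are the obvious discretisation choices we make here, and for \(\beta\approx 0\) (i.e. \(\theta\approx 1\)) the code switches to the \(\beta=0\) limit to avoid numerical blow-up.

```python
# Sketch of the small-time Gaussian approximation (Griffiths 1984, Theorem 4, with the
# corrected variance): draw the number of ancestral lineages from a rounded normal.
import numpy as np

rng = np.random.default_rng(0)

def approx_num_lineages(t, theta, eps=1e-8):
    beta = 0.5 * (theta - 1.0) * t
    if abs(beta) < eps:                      # beta = 0 (theta = 1) limit
        eta, var = 1.0, 2.0 / (3.0 * t)
    else:
        eta = beta / np.expm1(beta)          # eta = beta / (e^beta - 1)
        var = (2.0 * eta / t) * ((eta + beta) / beta) ** 2 * (1.0 + eta / (eta + beta) - 2.0 * eta)
    mu = 2.0 * eta / t
    return max(int(round(rng.normal(mu, np.sqrt(max(var, 0.0))))), 0)

m_small_t = approx_num_lineages(t=0.01, theta=1.5)
```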
For the diffusion bridge case we apply similar approximations to both \(q_{m}^{\theta}(s)\) and \(q_{k}^{\theta}(t-s)\), but we also introduce an additional threshold \(\varepsilon_{D}<\varepsilon_{G}\) below which we approximate draws for the law of a diffusion _bridge_ through draws from the law of a _diffusion_. This is necessary due to the fact that the mean \(\mu\) given above for the Gaussian approximation is inversely proportional to the time increment \(t\). Thus if either of the time increments \(s\) or \(t-s\) is small, the pmf \(\{p_{m,k,l,j}\}_{m,k,l,j\in\mathbb{N}}\) spreads out very thinly over \(\mathbb{N}^{4}\) leading to a loss of precision due to the small quantities involved coupled with infeasible run times, even when the above illustrated Gaussian
approximations are used.
In such cases (i.e. \(s<\varepsilon_{D}\) or \(t-s<\varepsilon_{D}\)), EWF first simulates a draw from the corresponding Wright-Fisher diffusion started at \(x\) and sampled at time \(s\), computes the increment between the generated draw \(Y^{\prime}\) and the start point \(x\), and superimposes it onto a linear interpolation between the left and right end-points \(x\) and \(z\) to generate the required draw \(Y\). The linear interpolation employed explicitly makes use of the time increments \(s\) and \(t-s\) to account for the fact that the returned draw \(Y\) should come from a diffusion bridge starting at \(x\) and ending at \(z\), with appropriate mechanisms in place to ensure that the output remains within the interval \([0,1]\). When either \(s\in[\varepsilon_{D},\varepsilon_{G})\) or \(t-s\in[\varepsilon_{D},\varepsilon_{G})\), the above detailed (rounded) Gaussian approximations are used for the corresponding \(\{q_{i}^{\theta}\}_{i\in\mathbb{N}}\) within the appropriate time interval, whilst the standard sampling scheme is used for time increments which exceed \(\varepsilon_{G}\). A threshold of 0.008 was chosen for \(\varepsilon_{D}\) following extensive testing, such that the resulting implementation of EWF retained robustness and efficiency and refrained from using such approximations unless their absence led to infeasible run times. We mention that both thresholds can be altered if desired through the fields g1984 (for \(\varepsilon_{G}\)) and bridgebreshold (for \(\varepsilon_{D}\)) of the Options class found in the myHelpers.h header file (although we would advise against this).
## 7 Output validation
Output was validated by generating 10,000 samples for a wide variety of cases and subsequently comparing these to a truncation of the transition density by means of a Kolmogorov-Smirnov test as well as QQ-plots. We point out that we present only neutral output here, as the non-neutral output is generated using the same rejection procedure as used in Jenkins and Spano (2017).
To illustrate how the transition density was truncated, consider the case (PC2), which involves a sum over four indices, two of which are infinite. By using an iterative scheme, the mode over these four indices was found and its contribution to the density for a given point \(y\in[0,1]\) was calculated. Subsequently the denominator of (PC2) was evaluated up to machine precision, and an appropriate truncation level was chosen by multiplying together the resulting denominator, the mode's contribution to the density and a tolerance parameter. Similar truncations were employed for all the other diffusion and diffusion bridge cases.
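The comparison itself can be sketched as follows: given the generated samples and any (truncated) density on \([0,1]\), a numerical CDF yields both the Kolmogorov-Smirnov statistic and the QQ-plot coordinates. The Beta(2,5) density used in the demo is a stand-in for one of the truncated transition densities above, not a quantity from the paper.

```python
# Sketch of the validation step: numerical CDF of a truncated density on [0,1],
# Kolmogorov-Smirnov test against the samples, and QQ-plot coordinates.
import numpy as np
from scipy import stats

def validate(samples, truncated_density, n_grid=4001):
    grid = np.linspace(0.0, 1.0, n_grid)
    pdf = truncated_density(grid)
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(grid))))
    cdf /= cdf[-1]                                          # renormalise the truncation
    ks = stats.kstest(samples, lambda q: np.interp(q, grid, cdf))
    probs = (np.arange(1, len(samples) + 1) - 0.5) / len(samples)
    qq = (np.interp(probs, cdf, grid), np.sort(samples))    # theoretical vs empirical quantiles
    return ks, qq

demo = np.random.default_rng(3).beta(2.0, 5.0, size=10_000)
ks_result, qq_points = validate(demo, lambda y: stats.beta.pdf(y, 2.0, 5.0))
```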
### Diffusions conditioned on non-absorption
Samples were generated using 9 different parameter setups featuring starting points \(x\in\{0,0.5,1\}\), sampling times \(t\in\{0.01,0.05,0.5\}\), and mutation parameter \(\boldsymbol{\theta}=(0,1)\). The output is plotted below, starting with the case when \(x=0\), with the sampling time increment \(t\) increasing when going left to right across plots. All of the Kolmogorov-Smirnov tests and QQ-plots below confirm that the output is indeed coming from the correct distribution.
Figure 2: (Top row): Histograms for 10,000 samples generated from the law of a Wright–Fisher diffusion conditioned on non-absorption, started at \(x=0\) at time 0, sampled at times \(t=0.01,0.05,0.5\) respectively. The truncated transition density is plotted in red. (Bottom row): QQ-plots for the corresponding samples with the \(p\)-value returned from the Kolmogorov–Smirnov test reported above the plot.
Figure 4: (Top row): Histograms for 10,000 samples generated from the law of a Wright–Fisher diffusion conditioned on non-absorption, started at \(x=1\) at time 0, sampled at times \(t=0.01,0.05,0.5\) respectively. The truncated transition density is plotted in red. (Bottom row): QQ-plots for the corresponding samples with the \(p\)-value returned from the Kolmogorov–Smirnov test reported above the plot.
Figure 3: (Top row): Histograms for 10,000 samples generated from the law of a Wright–Fisher diffusion conditioned on non-absorption, started at \(x=0.5\) at time 0, sampled at times \(t=0.01,0.05,0.5\) respectively. The truncated transition density is plotted in red. (Bottom row): QQ-plots for the corresponding samples with the \(p\)-value returned from the Kolmogorov–Smirnov test reported above the plot.
### Unconditioned diffusions
In the case when the diffusion was allowed to be absorbed at the boundaries, simulations for the following cases were obtained: start points \(x\in\{0.25,0.5,0.75\}\), sampling times \(t\in\{0.05,0.25,0.5\}\), and mutation parameter \(\mathbf{\theta}=\mathbf{0}\). We report the probability of being absorbed at either boundary in the table below, where \(\widehat{\mathbb{P}}\) denotes the empirical estimate for this quantity whereas \(\mathbb{P}\) is the theoretical value obtained by evaluating the truncation to the transition density at the boundary. All of the estimated probabilities match their theoretical counterparts, and further both the QQ-plots and Kolmogorov-Smirnov tests confirm that the generated draws are coming from the correct distribution.
Figure 5: (Top row): Histograms for 10,000 samples generated from the law of a Wright–Fisher diffusion started at \(x=0.25\) at time 0, sampled at times \(t=0.05,0.25,0.5\) respectively, with the process allowed to be absorbed at the boundaries. Note that samples equal to 0 or 1 are not included in the above histograms, but their relative frequency can be found from the empirical probabilities found in Table 1. The truncated transition density is plotted in red. (Bottom row): QQ-plots for the corresponding samples with the \(p\)-value returned from the Kolmogorov–Smirnov test reported above the plot.
Figure 6: (Top row): Histograms for 10,000 samples generated from the law of a Wright–Fisher diffusion started at \(x=0.5\) at time 0, sampled at times \(t=0.05,0.25,0.5\) respectively, with the process allowed to be absorbed at the boundaries. Note that samples equal to 0 or 1 are not included in the above histograms, but their relative frequency can be found from the empirical probabilities found in Table 2. The truncated transition density is plotted in red. (Bottom row): QQ-plots for the corresponding samples with the \(p\)-value returned from the Kolmogorov–Smirnov test reported above the plot.
### Diffusion bridges conditioned on non-absorption
To validate the diffusion bridge simulation, we chose to simulate draws from the following three diffusion bridges:
We further considered the following sampling times for each bridge:
Figure 7: (Top row): Histograms for 10,000 samples generated from the law of a Wright–Fisher diffusion started at \(x=0.75\) at time 0, sampled at times \(t=0.05,0.25,0.5\) respectively, with the process allowed to be absorbed at the boundaries. Note that samples equal to 0 or 1 are not included in the above histograms, but their relative frequency can be found from the empirical probabilities found in Table 3. The truncated transition density is plotted in red. (Bottom row): QQ-plots for the corresponding samples with the \(p\)-value returned from the Kolmogorov–Smirnov test reported above the plot.
\begin{table}
\begin{tabular}{c|c c c c} & \((t_{0},x_{0})\) & \((t_{1},x_{1})\) & \((t_{2},x_{2})\) & \((t_{3},x_{3})\) \\ \hline Bridge 1 & (0,0) & (0.05,0.1) & (0.1,0.25) & \\ \hline Bridge 2 & (0.2,0.1) & (0.3,0.3) & (0.4,0.4) & (0.5,0.5) \\ \hline Bridge 3 & (0,1) & (0.5,0.95) & & \\ \end{tabular}
\end{table}
Table 4: The left and right endpoints for the three different bridges simulated, where \((t_{0},x_{0})\) denotes the bridge’s start time \(t_{0}\) and start point \(x_{0}\), \((t_{1},x_{1})\) denotes the second observation time and point for the diffusion bridge and so on.
The output generated is plotted below, starting with bridge 1, and the sampling times \(s_{i}\) increasing from left to right. Again all the output strongly indicates that the method is returning draws from the desired target distribution.
Figure 8: (Top row): Histograms for 10,000 samples generated from the law of the Wright–Fisher diffusion bridge ‘Bridge 1’ in Table 4 above, sampled at the times given by the corresponding row in Table 5. The truncated transition density is plotted in red. (Bottom row): QQ-plots for the corresponding samples with the \(p\)-value returned from the Kolmogorov–Smirnov test reported above the plot.
\begin{table}
\begin{tabular}{c|c c c} & \(s_{1}\) & \(s_{2}\) & \(s_{3}\) \\ \hline Bridge 1 & 0.025 & 0.065 & 0.085 \\ \hline Bridge 2 & 0.25 & 0.35 & 0.45 \\ \hline Bridge 3 & 0.1 & 0.25 & 0.3 \\ \end{tabular}
\end{table}
Table 5: Sampling times for the three different diffusion bridges considered.
Figure 10: (Top row): Histograms for 10,000 samples generated from the law of the Wright–Fisher diffusion bridge ‘Bridge 3’ as given in Table 4 above, sampled at the times given by the corresponding row in Table 5. The truncated transition density is plotted in red. (Bottom row): QQ-plots for the corresponding samples with the \(p\)-value returned from the Kolmogorov–Smirnov test reported above the plot.
Figure 9: (Top row): Histograms for 10,000 samples generated from the law of the Wright–Fisher diffusion bridge ‘Bridge 2’ as given in Table 4 above, sampled at the times given by the corresponding row in Table 5. The truncated transition density is plotted in red. (Bottom row): QQ-plots for the corresponding samples with the \(p\)-value returned from the Kolmogorov–Smirnov test reported above the plot.
### Unconditioned bridges
When the diffusion bridge is allowed to be absorbed at the boundary and \(\boldsymbol{\theta}=\boldsymbol{0}\), we need only consider the cases when \(z\in\{0,1\}\). To this end we considered the following two setups:
We further considered the following sampling times:
As in the diffusion case, we report the probability of absorption at the boundary in the table below, where once more \(\widehat{\mathbb{P}}\) denotes the empirical estimate for this quantity whereas \(\mathbb{P}\) is the theoretical value obtained by evaluating the truncation to the transition density at the boundary.
\begin{table}
\begin{tabular}{l|c c} & \((t_{0},x_{0})\) & \((t_{1},x_{1})\) \\ \hline Bridge 1 & (0,0.25) & (0.3,1) \\ \hline Bridge 2 & (0,0.5) & (0.5,0) \\ \end{tabular}
\end{table}
Table 6: The left and right endpoints for the two different bridges simulated, where \((t_{0},x_{0})\) denotes the bridge’s start time \(t_{0}\) and start point \(x_{0}\), and \((t_{1},x_{1})\) denotes the end time and end point for the diffusion bridge.
\begin{table}
\begin{tabular}{l|c c} Bridge 1 & \(\widehat{\mathbb{P}}\)[Absorbed at 1] & \(\mathbb{P}\)[Absorbed at 1] \\ \hline \(s=0.05\) & 0 & 3.900485e-16 \\ \(s=0.15\) & 7e-4 & 6.752749e-4 \\ \(s=0.25\) & 0.2331 & 0.234209 \\ \end{tabular}
\end{table}
Table 8: Empirical (\(\widehat{\mathbb{P}}\)) and theoretical (\(\mathbb{P}\)) absorption probabilities for the diffusion bridge started at \(x=0.25\) and ending at \(z=1\).
The output generated is plotted below, starting with bridge 1, and the sampling time \(s\) increasing from left to right. All of the plots, tests and probabilities above confirm that we are drawing samples from the desired distribution.
Figure 11: (Top row): Histograms for 10,000 samples generated from the law of the Wright–Fisher diffusion bridge ‘Bridge 1’ (allowed to be absorbed at 1) as given in Table 6, sampled at the times given by the corresponding row in Table 7. Note that the samples equal to 1 are not included in the above plots, but their relative frequency can be found in Table 8. The truncated transition density is plotted in red. (Bottom row): QQ-plots for the corresponding samples with the \(p\)-value returned from the Kolmogorov–Smirnov test reported above the plot.
### Non-neutral diffusions and diffusion bridges
Non-neutral Wright-Fisher paths can be generated (as described in Section 4) through the use of neutral paths coupled with an appropriate Poisson point process. This technique was proposed in Jenkins and Spano (2017) and is used (without any alteration) in the current implementation of EWF to return non-neutral draws from the laws of both diffusions and diffusion bridges. Thus, although EWF does allow for non-neutral draws under a very broad class of selective regimes (and instructions on how to do this can be found in the respective configuration files), we omit the resulting output.
|
2306.00586 | Evaluating the "Learning on Graphs" Conference Experience | With machine learning conferences growing ever larger, and reviewing
processes becoming increasingly elaborate, more data-driven insights into their
workings are required. In this report, we present the results of a survey
accompanying the first "Learning on Graphs" (LoG) Conference. The survey was
directed to evaluate the submission and review process from different
perspectives, including authors, reviewers, and area chairs alike. | Bastian Rieck, Corinna Coupette | 2023-06-01T11:57:47Z | http://arxiv.org/abs/2306.00586v1 | # Evaluating the "Learning on Graphs" Conference Experience
###### Abstract
With machine learning conferences growing ever larger, and reviewing processes becoming increasingly elaborate, more data-driven insights into their workings are required. In this report, we present the results of a survey accompanying the first "Learning on Graphs" (LoG) Conference. The survey was directed to evaluate the submission and review process from different perspectives, including authors, reviewers, and area chairs alike.
## Motivation
The first "Learning on Graphs" (LoG) Conference [(10, 11, 12)] was remarkable in more ways than one: starting from scratch, the conference aims to be _the_ place for graph learning research, making use of an advisory committee that consists of international experts in the field. Moreover, at its core, LoG wants to be known for its exceptional review quality. With reviewing being an often-criticized process, marred by strong opinions that are held with high confidence, LoG implemented three measures for improving review quality: (i) using sponsors to provide high monetary rewards for the best reviewers, (ii) vetting reviewers in advance, and (iii) assigning a smaller number of papers to the reviewers than other machine learning conferences. The effectiveness of these measures can only be assessed holistically, which is why the authors of this report decided early on that a large-scale survey should accompany the conference. Such surveys are done regularly by conferences, but few, if any, appear to result in _actionable changes_ to the way conferences are run.
Against this background, the results described in this report are aimed to engage the community, make the reviewing process more transparent, and, overall, serve as a way to challenge parts of the _status quo_ of running a conference. As our communities grow, our processes, too, must adapt. We cannot run the conferences of the 21st century following procedures developed for community sizes of the 20th century.
## Related Work
Previous conferences, such as NeurIPS 2021, already rolled out surveys to assess certain aspects of the reviewing process [(1)], referencing a famous experiment at NeurIPS 2014 [(2)]. Such surveys and experiments serve to highlight inconsistencies in the decision-making process _per se_, and provide some encouragement to authors.1 However, the size of NeurIPS and other conferences poses an obstacle to implementing large-scale changes, primarily because the program committee changes every year and knowledge transfer is not guaranteed. LoG, by contrast, is positioned favorably because its research field is just emerging, being at least an order of magnitude smaller than NeurIPS. Moreover, the advisory committee guarantees a certain level of consistency in decisions. We hope that the results of our survey encourage other conferences to take a critical look at their underlying processes. To quote Lord Kelvin [3, pp. 73-74]:
> I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.
We hope that this survey begets knowledge that we may harness to improve future versions of LoG and, perchance, other venues as well.
### Results
To understand how participants experienced the LoG conference, we distributed a survey of mostly closed, Likert-scaled questions to all authors, reviewers, and area chairs registered via OpenReview from late November 2022 to mid-February 2023. In this section, we present the results of the survey.2 In particular, for each part of the survey, we visualize the results for each question, providing the absolute numbers in the main visualization as well as the percentages in the marginals (for single-choice questions), and using \(n\) to indicate the number of respondents (which might differ from the number of responses when multiple simultaneous responses were allowed).
Footnote 2: For reproducibility, we make the response data and the code generating our analyses, excluding all sensitive information, available at the following DOI: 10.5281/zenodo.7875377.
#### Sample Composition
The survey was distributed to 162 active submissions and we received \(n=183\) answers. Our breakdown of roles3 indicates that 92 out of 876 authors responded (10.5% of all authors or up to 50.2% of all active submissions), 118 out of 372 reviewers (31.7%), and, finally, 3 out of 46 area chairs (6.5%).
Footnote 3: To retain anonymity, our survey is not linked to paper IDs. We permit _all_ authors of a paper to respond to the survey.
### Questions to Authors
We find that the overwhelming majority of authors is satisfied (either moderately or extremely so) with the conference as well as the reviews that they received. When it comes to the experience of the rebuttal phase, authors tend to be slightly more neutral, but still positive overall. Interestingly, most authors rate the _standards_ of reviewers to be at least as high as those of comparable machine learning conferences--given that this was the first edition of the conference, this is an excellent outcome that vindicates the vetting process of reviewers. The fact that authors found the conference experience to be similar or better than comparable conferences is also an important signal that we consider to bode well for future editions.
### As an author, how satisfied are you with the _content_ and _quality_ of the reviews?
### As an author, how satisfied are you with the _tone_ and _style_ of the reviews?
### As an author, how high were the reviewers' standards compared to other AI/ML conferences you submitted to previously?
### Questions to Reviewers
We find that interest in the conference topic is the factor most frequently mentioned as a motivation to review for LoG, followed by the prospect of a monetary reward, and being asked to review based on one's own professional network. Moreover, reviewers are moderately satisfied with the rebuttal phase and the review experience overall, mentioning that their experience is comparable to those of more established conferences. More than 50% of the reviewers also report that their review load was slightly lower or much lower in comparison to other conferences. Given that the program committee assigned virtually all reviewers no more than 3 papers (with few exceptions for certain expert and emergency reviewers, who were assigned up to 5 papers), the feedback provides a good justification for continuing to keep the review load low.
**As a reviewer, why did you choose to review for the conference?**
* Interest in the conference topic
* Monetary rewards for best reviewers
* Review request by someone I know
* Other (please specify)
All reviewers who indicated that they were motivated by the monetary rewards offered to best reviewers also indicated that they were motivated by at least one other factor:
* \(\$\$\) + Interest in the conference topic
* \(\$\$\) + Review request by someone I know
* \(\$\$\) + Review request by someone I know
* \(\$\$\) + Interest in the conference topic
* \(\$\$\) + Review request by someone I know
* \(\$\) + Interest in the conference topic
* \(\$\$\) + Other (please specify)
Among the responses given in text form, reviewers stated that they were either recruited as an emergency reviewer (\(n=1\)) or chose LoG because of the reputation of the organizers (\(n=1\)).
**As a reviewer, how satisfied are you with the rebuttal phase?**
**As a reviewer, how satisfied are you with your review experience overall?**
**As a reviewer, how was your review experience compared to other AI/ML conferences you reviewed for previously?**
About 30% of responding reviewers indicated that they reviewed for ICLR 2023 while reviewing for LoG, with the median affirmative respondent reviewing 5-6 papers for ICLR 2023. Most affected reviewers were neutral or critical toward their double duty. This highlights the importance of conference timing when calibrating reviewer workloads.
### Did you review for ICLR 2023?
### How many papers did you review for ICLR 2023?
### Questions to Area Chairs
We refrain from illustrating area chairs' responses due to their small number (\(n=3\)).
### As an area chair, how satisfied are you with your review experience overall?
Responding area chairs were somewhat or extremely satisfied with their review experience overall.
### As an area chair, how was your review experience compared to other AI/ML conferences you reviewed for previously?
Responding area chairs judged their review experience to be much better than or about the same as for other AI/ML conferences they reviewed for previously.
### As an area chair, how was your workload compared to other AI/ML conferences you reviewed for previously?
Responding area chairs judged their workload to be slightly lower than or about the same as for other AI/ML conferences they reviewed for previously.
### General Questions
We also gave participants the option to answer questions about the current setup of the conference (one track for full papers and one track for extended abstracts) and provided options for free-form feedback. The latter received \(n=51\) responses, which we summarize below.
### How could we improve your review experience?
In general, most respondents wanted to have more time to discuss their papers with reviewers and mentioned that reviewers should be encouraged to be more _active_ during the rebuttal phase. Some commenters raised unreasonable demands by reviewers, such as irrelevant experiments and out-of-scope citations, as a prevailing issue of machine learning conferences that they did _not_ experience with LoG. A prevalent wish was also to enable rating of reviewers by authors, as well as to establish a better culture of reviewing that moves away from mere numerical scores. Paraphrasing the respondents here, there appears to be a call for more nuance in the reviewing process. Interestingly, several respondents strongly suggested the utility of enabling public comments on submissions to engage the community in the reviewing process. Finally, some commenters took the time to remark that their experience stood out in positive terms when compared to other conferences.
Concerning the different tracks, respondents commented that the separation should be explained better to authors and reviewers alike. With reviewers having similarly high standards for work that is clearly still in progress, getting an extended abstract accepted was perceived as a tough challenge for authors.
### How do you like the "Extended Abstract" track?
### Why do you not like the "Extended Abstract" track?
We received \(n=19\) text responses. The main issues raised by commenters concern a (perceived) lack of quality of extended abstracts, with some respondents citing fears of using such extended abstracts as a way to perform "idea registration" rather than in-depth analyses. Moreover, respondents also stated that reviewing such submissions is more complex since the standards for acceptance would have to be adjusted accordingly.
### Why do you like the "Extended Abstract" track?
We received \(n=28\) text responses. Almost every comment highlights the possibility of submitting early or preliminary work and getting quick feedback from the community. Some respondents also consider this track advantageous for presenting non-traditional work, such as critique papers or papers that focus on highlighting negative results.
### Anything else you would like to tell us?
We received \(n=24\) responses. Many respondents expressed the wish to see more instances of LoG, as well as a move to a hybrid format. One respondent specifically requested a track for survey papers, while another raised frustrations about the OpenReview platform. Finally, one respondent provided helpful insights for further improving the review quality, in particular as the conference grows.
### Discussion
The overall responses of the community and the general interest in a second version of the conference paint a positive picture of the first instance of LoG. Analyzing the experiences in more detail, we find that LoG is a microcosm of issues that are known to plague the machine learning community at large. These issues, unsurprisingly, are predominantly concerns about aspects of peer review, including the ensuing discussion between reviewers and authors. We are excited to see that, despite LoG being a "grassroots conference" arising _from_ the community and _for_ the community, respondents often rate this conference to have provided them with the "best review experience" so far. Authors conceded that reviewer standards were even slightly higher than at comparable conferences, while also citing an overall better experience with the review process.
These positive experiences contrast with some negative experiences of authors. An analysis4 of the discussions shows that there are \(n=29\) "silent papers," i.e., papers with no in-depth discussion between authors and reviewers. While \(n=2\) of these papers were eventually accepted because of strong reviews--which, in some sense, obviated the need for a discussion--this leaves \(n=27\) papers without an exchange. Of these papers, \(n=18\) received no comments from authors, meaning that the authors did not comment on the reviews. This could indicate a misunderstanding regarding the potential utility of a rebuttal, or it could mean that authors did not think that the opinions of reviewers could be changed. Believing in the autonomy of authors, one could say that the review process worked "as designed" for these \(n=18\) cases: authors sent in their work, authors received feedback, but _chose_ not to engage further. However, this leaves \(n=9\) papers that were eventually rejected without reviewers commenting on a rebuttal provided by authors. These are clear _failures_ of the review process, since we would at the very least expect reviewers to explicitly acknowledge the rebuttal. A brief comparison to other conferences shows that the relative numbers of such papers are extremely low, indicating that overall engagement of reviewers at LoG was comparatively high. Nevertheless, we will have to improve our processes to avoid such breakdowns in communication.
### Suggestions
Given the high quality of the majority of reviews, we will continue our vetting procedure and strive to select reviewers with the utmost care. We will also retain the rating system of reviewers and area chairs, which is a cornerstone of the reviewer awards. While the effect of monetary rewards cannot be fully assessed in our current survey setup, we will nevertheless keep this as one feature of LoG for the next instance of the conference. To further focus on review quality, we will improve the monitoring of the review process, making use of the OpenReview API to find and identify "silent papers" early during the review process. We will also raise this topic with area chairs so that they can better stir and steer such conversations, ensuring that no discussion items are left unanswered.
One of the insights that we have to tackle on a much broader level involves a better tracking of reviewers. While LoG already uses reviewer ratings,5 it would be beneficial for the whole machine learning community to adopt a _reviewer reputation_ system. Such a system would increase the accountability of reviewers and also serve to highlight those that exhibit "good scientific citizenship." Beyond monetary awards for a selected set of reviewers, it would be interesting to discuss general reviewer compensation. However, instituting such a system is a policy change fraught with additional questions (as well as administrative and fiscal complications). While it is likely that a proper contract with remuneration would further improve review quality, the contract would also need to be enforced if need be. This suggests the use of impartial and trusted experts to carefully _check_ reviews of a conference (raising the follow-up problem of establishing guidelines for identifying, recruiting, remunerating, and overseeing these experts). For LoG, we will ensure that organizers perform this job during the next iteration, so that they can engage with problematic reviewers or authors early on in the reviewing process.
Footnote 5: These ratings are to be taken with a grain of salt, though, since the _outcome_ of the review phase constitutes a strong confounding variable. Authors whose papers are rejected may not be willing to concede that they received high-quality reviews.
|
2310.12664 | Is ChatGPT a Financial Expert? Evaluating Language Models on Financial
Natural Language Processing | The emergence of Large Language Models (LLMs), such as ChatGPT, has
revolutionized general natural language processing (NLP) tasks. However,
their expertise in the financial domain lacks a comprehensive evaluation. To
assess the ability of LLMs to solve financial NLP tasks, we present FinLMEval,
a framework for Financial Language Model Evaluation, comprising nine datasets
designed to evaluate the performance of language models. This study compares
the performance of encoder-only language models and the decoder-only language
models. Our findings reveal that while some decoder-only LLMs demonstrate
notable performance across most financial tasks via zero-shot prompting, they
generally lag behind the fine-tuned expert models, especially when dealing with
proprietary datasets. We hope this study provides foundation evaluations for
continuing efforts to build more advanced LLMs in the financial domain. | Yue Guo, Zian Xu, Yi Yang | 2023-10-19T11:43:15Z | http://arxiv.org/abs/2310.12664v1 | # Is ChatGPT a Financial Expert? Evaluating Language Models on Financial Natural Language Processing
###### Abstract
The emergence of Large Language Models (LLMs), such as ChatGPT, has revolutionized general natural language processing (NLP) tasks. However, their expertise in the financial domain lacks a comprehensive evaluation. To assess the ability of LLMs to solve financial NLP tasks, we present FinLMEval, a framework for Financial Language Model Evaluation, comprising nine datasets designed to evaluate the performance of language models. This study compares the performance of encoder-only language models and the decoder-only language models. Our findings reveal that while some decoder-only LLMs demonstrate notable performance across most financial tasks via zero-shot prompting, they generally lag behind the fine-tuned expert models, especially when dealing with proprietary datasets. We hope this study provides foundation evaluations for continuing efforts to build more advanced LLMs in the financial domain.
## 1 Introduction
Recent progress in natural language processing (NLP) demonstrates that large language models (LLMs), like ChatGPT, achieve impressive results on various general domain NLP tasks. Those LLMs are generally trained by first conducting self-supervised training on the unlabeled text [1, 13, 14] and then conducting instruction tuning [15, 16] or reinforcement learning from human feedback (RLHF) [17] to let them perform tasks following human instructions.
Financial NLP, in contrast, demands specialized knowledge and specific reasoning skills to tackle tasks within the financial domain. However, for general language models like ChatGPT, their self-supervised training is performed on the text from various domains, and the reinforcement learning feedback they receive is generated by non-expert workers. Therefore, how much essential knowledge and skills are acquired during the learning process remains uncertain. As a result, a comprehensive investigation is necessary to assess its performance on financial NLP tasks.
To fill this research gap, we are motivated to evaluate language models on financial tasks comprehensively. To do so, we propose a framework for Financial Language Model Evaluation (FinLMEval). We collected nine datasets on financial tasks, five of which come from public datasets that have been evaluated before. However, for those public datasets, it is possible that their test sets were leaked during the training process or provided by model users as online feedback. To mitigate this issue, we also use four proprietary datasets on different financial tasks for evaluation: financial sentiment classification (FinSent), environmental, social, and corporate governance classification (ESG), forward-looking statements classification (FLS), and question-answering classification (QA).
In the evaluation benchmark, we evaluate the encoder-only language models with supervised fine-tuning, with representatives of BERT [13], RoBERTa [12], FinBERT [14] and FLANG [20]. We then compare the encoder-only models with the decoder-only models, with representatives of ChatGPT [17], GPT-4 [15], PIXIU [18], LLAMA2-7B [16] and Bloomberg-GPT [15] by zero-shot prompting. Besides, we evaluate the efficacy of in-context learning of ChatGPT with different in-context sample selection strategies.
Experiment results show that (1) the fine-tuned task-specific encoder-only model generally performs better than decoder-only models on the financial tasks, even if decoder-only models have much larger model size and have gone through more pre-training and instruction tuning or RLHF; (2) when the supervised data is insufficient, the
zero-shot decoder-only models have more advantages than fine-tuned encoder-only models; (3) the performance gap between fine-tuned encoder-only models and zero-shot decoder-only models is more significant on private datasets than the publicly available datasets; (4) in-context learning is only effective under certain circumstances.
To summarize, we propose an evaluation framework for financial language models. Compared to previous benchmarks in the financial domain like FLUE (Shah et al., 2022), our evaluation includes four new datasets and involves more advanced LLMs like ChatGPT. We show that even the most advanced LLMs still fall behind the fine-tuned expert models. We hope this study contributes to the continuing efforts to build more advanced LLMs in the financial domain.
## 2 Related Works
The utilization of language models in financial NLP is a thriving research area. While some general domain language models, like BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), GPT (Brown et al., 2020; OpenAI, 2023) and LLAMA (Touvron et al., 2023, 2023) have been applied to financial NLP tasks, financial domain models like FinBERT (Araci, 2019; Yang et al., 2020; Huang et al., 2023), FLANG (Shah et al., 2022), PIXIU (Xie et al., 2023), InvestLM (Yang et al., 2023) and BloombergGPT (Wu et al., 2023) are specifically designed to contain domain expertise and generally perform better in financial tasks. Recent work such as FLUE (Shah et al., 2022) has been introduced to benchmark those language models in the finance domain. However, the capability of more advanced LLMs, like ChatGPT and GPT-4, has not been benchmarked, especially on proprietary datasets. In this work, in addition to the public tasks used in FLUE, we newly include four proprietary tasks in FinLMEval and conduct comprehensive evaluations for those financial language models.
## 3 Methods
We compare two types of models in FinLMEval: the Transformers encoder-only models that require fine-tuning on the labeled dataset, and decoder-only models that are prompted with zero-shot or few-shot in-context instructions. Figure 1 provides an outline of evaluation methods of FinLMEval.
### Encoder-only Models
Our experiments explore the performance of various notable encoder-only models: BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), FinBERT (Yang et al., 2020) and FLANG (Shah et al., 2022). BERT and RoBERTa are pre-trained on general domain corpora, while FinBERT and FLANG are pre-trained on a substantial financial domain corpus. We fine-tune the language models on specific tasks. Following the fine-tuning process, inference can be performed on the fine-tuned models for specific applications.
Figure 1: The framework of financial language model evaluation (FinLMEval).
### Decoder-only Models
We also evaluate the performance of various popular decoder-only language models: ChatGPT (Ouyang et al., 2022), GPT-4 (OpenAI, 2023), PIXIU (Xie et al., 2023), LLAMA2-7B (Touvron et al., 2023) and Bloomberg-GPT (Wu et al., 2023). ChatGPT and GPT-4, developed by OpenAI, are two advanced LLMs that showcase exceptional language understanding and generation abilities. The models are pre-trained on a wide array of textual data and reinforced by human feedback. PIXIU is a financial LLM based on fine-tuning LLAMA (Touvron et al., 2023) with instruction data. LLAMA2 is a popular open-sourced LLM pre-trained on extensive online data, and BloombergGPT is an LLM for finance trained on a wide range of financial data. As the model size of the evaluated decoder-only models is extremely large, they usually do not require fine-tuning the whole model on downstream tasks. Instead, the decoder-only models provide answers via zero-shot and few-shot in-context prompting.
We conduct zero-shot prompting for all decoder-only models. We manually write the prompts for every task. An example of prompts for the sentiment classification task is provided in Figure 1, and the manual prompts for other tasks are provided in Appendix A. Furthermore, to evaluate whether few-shot in-context learning can improve the model performance, we also conduct in-context learning experiments on ChatGPT. We use two strategies to select the in-context examples for few-shot in-context learning: random and similar. The former strategy refers to random selection, and the latter selects the most similar sentence regarding the query sentence. All in-context examples are selected from the training set, and one example is provided from each label class.
## 4 Datasets
Our evaluation relies on nine datasets designed to evaluate the financial expertise of the models from diverse perspectives. Table 1 overviews the number of training and testing samples and the source information for each dataset. Below, we provide an introduction to each of the nine datasets.
**FinSent** is a newly collected sentiment classification dataset containing 10,000 manually annotated sentences from analyst reports of S&P 500 firms.
**FPB Sentiment Classification**(Malo et al., 2014) is a classic sentiment dataset of sentences from financial news. The dataset consists of 4840 sentences divided by the agreement rate of 5-8 annotators. We use the subset of 75% agreement.
**FiQA SA** (FiQA) is an aspect-based financial sentiment analysis dataset. Following the "Sentences for QA-M" method in (Sun et al., 2019), for each (sentence, target, aspect) pair, we transform the sentence into the form "what do you think of the {aspect} of {target}? {sentence}" for classification.
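As a small illustration of this input construction, the helper below applies the quoted template; the example sentence, target, and aspect are made up for demonstration and are not taken from the dataset.

```python
# Sketch of the "Sentences for QA-M"-style input construction used for FiQA SA:
# each (sentence, target, aspect) pair becomes a single auxiliary-question sentence.
def build_fiqa_input(sentence: str, target: str, aspect: str) -> str:
    return f"what do you think of the {aspect} of {target}? {sentence}"

# Illustrative values only.
print(build_fiqa_input(
    sentence="Shares of $AAPL jumped after the earnings call.",
    target="$AAPL",
    aspect="stock",
))
```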
**ESG** evaluates an organization's considerations on environmental, social, and corporate governance. We collected 2,000 manually annotated sentences from firms' ESG reports and annual reports.
**FLS**, the forward-looking statements, are beliefs and opinions about a firm's future events or results. FLS dataset, aiming to classify whether a sentence contains forward-looking statements, contains 3,500 manually annotated sentences from the Management Discussion and Analysis section of annual reports of Russell 3000 firms.
**QA** contains question-answering pairs extracted from earnings conference call transcripts. The goal of the dataset is to identify whether the answer is valid to the question.
| Dataset | # train | # test | Source | Description |
| --- | --- | --- | --- | --- |
| FinSent | 8996 | 1000 | - | Financial sentiment classification dataset from analyst reports. |
| FPB | 2453 | 1000 | (Malo et al., 2014) | Sentiment classification dataset from financial news. |
| FiQA SA | 973 | 200 | (FiQA) | Aspect-based financial sentiment analysis. |
| ESG | 3000 | 1000 | - | Environmental, social, and corporate governance classification dataset. |
| FLS | 2600 | 1000 | - | Forward-looking statements classification dataset from corporate reports. |
| QA | 868 | 200 | - | Classification on the validity of question-answering pairs. |
| Headlines | 9570 | 1000 | (Sinha and Khandait, 2020) | Multiple tasks classification dataset from news headlines. |
| NER | 14041 | 1000 | (Alvarado et al., 2015) | Named entity recognition on financial agreements. |
| FOMC | 1831 | 450 | (Shah et al., 2023) | Hawkish-dovish monetary policy classification from FOMC documents. |

Table 1: The summarization of nine datasets in FinLMEval. FPB, FiQA SA, Headlines, NER and FOMC are from public datasets, and FinSent, ESG, FLS and QA are newly collected and not released before.

**Headlines** (Sinha and Khandait, 2020) is a dataset for the commodity market that analyzes news headlines across multiple dimensions. The tasks include the classifications of Price Direction Up (PDU), Price Direction Constant (PDC), Price Direction Down (PDD), Asset Comparison (AC), Past Information (PI), Future Information (FI), and Price Sentiment (PS).
**NER** (Alvarado et al., 2015) is a named entity recognition dataset of financial agreements.
**FOMC** (Shah et al., 2023) aims to classify the stance of FOMC documents as either a tightening or an easing of monetary policy.
Among the datasets, FinSent, ESG, FLS, and QA are newly collected proprietary datasets.
## 5 Experiments
This section introduces the experiment setups and reports the evaluation results.
### Model Setups
**Encoder-only models setups.** We use the BERT (base, uncased), RoBERTa (base), FinBERT (pre-train), and FLANG-BERT models from Huggingface1, and the model fine-tuning is implemented via Trainer 2. For all tasks, we fix the learning rate as \(2\times 10^{-5}\), the weight decay as 0.01, and the batch size as 48. We randomly select 10% of the examples from the training set as the validation set for model selection and fine-tune the model for three epochs. Other hyperparameters remain the defaults in Trainer.
Footnote 1: [https://huggingface.co/](https://huggingface.co/)
Footnote 2: [https://huggingface.co/docs/transformers/main_classes/trainer](https://huggingface.co/docs/transformers/main_classes/trainer)
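A minimal sketch of this fine-tuning recipe is given below, using the stated hyperparameters (learning rate 2e-5, weight decay 0.01, batch size 48, three epochs, 10% of the training set held out for model selection). The toy sentences, label ids, and column names are placeholders, not the paper's data.

```python
# Minimal sketch of the encoder-only fine-tuning setup; toy data and label ids are placeholders.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

model_name = "bert-base-uncased"   # swap in roberta-base, a FinBERT checkpoint, or FLANG-BERT
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Placeholder sentences/labels standing in for a task's training split.
data = Dataset.from_dict({
    "sentence": ["Profits beat expectations.", "The firm issued a profit warning.",
                 "The meeting is scheduled for May.", "Revenue grew 12% year over year.",
                 "Margins collapsed in Q3.", "The board met on Tuesday."],
    "label": [0, 1, 2, 0, 1, 2],
})
split = data.train_test_split(test_size=0.1, seed=42)   # 10% held out for model selection

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

train_ds = split["train"].map(tokenize, batched=True)
val_ds = split["test"].map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-5,
    weight_decay=0.01,
    per_device_train_batch_size=48,
    num_train_epochs=3,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)
Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=val_ds).train()
```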
**Decoder-only models setups.** In the zero-shot setting, for ChatGPT and GPT-4, we use the “gpt-3.5-turbo” and “gpt-4” model APIs from OpenAI, respectively. We set the temperature and top_p to 1, and leave the other hyperparameters at the OpenAI API defaults. The ChatGPT results are retrieved from the May 2023 version, and the GPT-4 results are retrieved in August 2023. For PIXIU and LLAMA2, we use the “ChanceFocus/finma-7b-nlp” and “meta-llama/Llama-2-7b” models from Huggingface. The model responses are generated greedily. All prompts we used in the zero-shot setting are shown in Appendix A. Besides, as BloombergGPT (Wu et al., 2023) is not publicly available, we directly adopt the results from the original paper.
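For concreteness, a sketch of this zero-shot setup with the 2023-era OpenAI Python client is shown below; the prompt wording and label parsing are illustrative stand-ins, not the exact prompts from Appendix A.

```python
# Sketch of zero-shot prompting for financial sentiment classification (temperature=1, top_p=1).
# The prompt text is illustrative; the actual task prompts are given in the paper's Appendix A.
import openai

openai.api_key = "YOUR_KEY"  # placeholder

def classify_sentiment_zero_shot(sentence: str) -> str:
    prompt = ("Classify the sentiment of the following sentence from a financial text "
              "as positive, negative, or neutral.\n"
              f"Sentence: {sentence}\nAnswer:")
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=1,
        top_p=1,
    )
    return resp["choices"][0]["message"]["content"].strip().lower()

print(classify_sentiment_zero_shot("The company raised its full-year revenue guidance."))
```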
For in-context learning, we conduct two strategies for in-context sample selection: random and similar. We select one example from each label with equal probability weighting for random sample selection. For similar sample selection, we get the sentence embeddings from the SentenceTransformer (Reimers and Gurevych, 2019) "all-MiniLM-L6-v2" model3 and use cosine similarity as the measure of similarity. Then, we select the sentences with the highest similarity to the query sentence as the in-context examples. The prompts for in-context learning are directly extended from the corresponding zero-shot prompts, with the template shown in Figure 1.

| Datasets | BERT | RoBERTa | FinBERT | FLANG-BERT | ChatGPT | GPT-4 | PIXIU | LLAMA2-7B | Bloomberg-GPT |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FinSent | 0.841 | **0.871** | 0.851 | 0.849 | 0.782 | 0.809 | 0.800 | 0.243 | - |
| FPB | 0.914 | 0.934 | 0.912 | 0.881 | 0.869 | 0.905 | **0.965** | 0.339 | 0.511 |
| FiQA SA | 0.750 | 0.875 | 0.805 | 0.695 | 0.898 | 0.920 | **0.930** | 0.480 | 0.751 |
| ESG | 0.931 | 0.956 | **0.958** | 0.925 | 0.477 | 0.626 | 0.509 | 0.209 | - |
| FLS | 0.875 | 0.862 | **0.882** | 0.861 | 0.652 | 0.565 | 0.275 | 0.365 | - |
| QA | **0.865** | 0.825 | 0.825 | 0.785 | 0.695 | 0.775 | 0.680 | 0.625 | - |
| Headlines-PDU | 0.937 | 0.947 | **0.956** | 0.940 | 0.889 | 0.878 | 0.842 | 0.411 | - |
| Headlines-PDC | 0.978 | 0.979 | **0.981** | 0.978 | 0.936 | 0.947 | 0.702 | 0.053 | - |
| Headlines-PDD | 0.954 | **0.961** | 0.960 | 0.956 | 0.896 | 0.900 | 0.763 | 0.382 | - |
| Headlines-PI | 0.974 | 0.964 | 0.976 | **0.977** | 0.225 | 0.105 | 0.753 | 0.966 | - |
| Headlines-AC | 0.996 | 0.993 | **0.997** | 0.995 | 0.806 | 0.838 | 0.902 | 0.346 | - |
| Headlines-FI | 0.976 | 0.964 | 0.976 | 0.974 | 0.711 | 0.780 | **0.981** | 0.048 | - |
| Headlines-PS | 0.905 | 0.918 | **0.924** | 0.906 | 0.630 | 0.811 | 0.776 | 0.546 | - |
| NER | 0.980 | **0.981** | 0.964 | 0.978 | 0.748 | 0.707 | 0.749 | 0.714 | 0.608 |
| FOMC | 0.587 | 0.611 | 0.602 | 0.602 | 0.633 | **0.729** | 0.522 | 0.349 | - |
| Average | 0.897 | **0.909** | 0.905 | 0.907 | 0.723 | 0.753 | 0.739 | 0.405 | - |

Table 2: The results of fine-tuned encoder-only models and zero-shot decoder-only models on the 9 financial datasets. The results, except the NER dataset, are measured in micro-F1 score. NER is measured in accuracy. Although some zero-shot decoder-only models can achieve considerable results in most cases, the fine-tuned encoder-only models usually perform better than decoder-only models.
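A minimal sketch of the "similar" selection strategy is shown below, assuming a small in-memory pool of labeled training sentences (the pool itself is a placeholder).

```python
# Sketch of similarity-based in-context example selection with all-MiniLM-L6-v2.
# One demonstration per label class is kept, mirroring the setup described above.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")
train_pool = [("Profits beat expectations.", "positive"),
              ("The firm issued a profit warning.", "negative"),
              ("The meeting is scheduled for May.", "neutral")]   # placeholder pool

train_emb = encoder.encode([text for text, _ in train_pool], convert_to_tensor=True)

def select_similar_examples(query: str):
    q_emb = encoder.encode(query, convert_to_tensor=True)
    sims = util.cos_sim(q_emb, train_emb)[0]
    best = {}
    for idx, (text, label) in enumerate(train_pool):
        score = sims[idx].item()
        if label not in best or score > best[label][0]:
            best[label] = (score, text)
    # Return the most similar training sentence for each label class.
    return [(label, text) for label, (_, text) in best.items()]

print(select_similar_examples("Quarterly revenue topped analyst estimates."))
```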
### Main Results
Table 2 compares the results of the fine-tuned encoder-only models and zero-shot decoder-only models in 9 financial datasets. We have the following findings:
**In 6 out of 9 datasets, fine-tuned encoder-only models can perform better than decoder-only models.** The decoder-only models, especially those that have experienced RLHF or instruction-tuning, demonstrate considerable performance on zero-shot settings on the financial NLP tasks. However, their performance generally falls behind the fine-tuned language models, implying that these large language models still have the potential to improve their financial expertise. On the other hand, **fine-tuned models are less effective when the training examples are insufficient** (FiQA SA) **or imbalanced** (FOMC).
**The performance gaps between fine-tuned models and zero-shot LLMs are larger on proprietary datasets than publicly available ones.** For example, the FinSent, FPB, and FiQA SA datasets are comparable and all about financial sentiment classification. However, zero-shot LLMs perform the worst on the proprietary dataset FinSent. The performance gaps between fine-tuned models and zero-shot LLMs are also more significant on other proprietary datasets (ESG, FLS, and QA) than the public dataset.
Table 3 compares the zero-shot and in-context few-shot learning of ChatGPT. In ChatGPT, **the zero-shot and few-shot performances are comparable in most cases**. When zero-shot prompting is ineffective, adding demonstrations can improve ChatGPT's performance by clarifying the task, as the results of ESG and Headlines-PI tasks show. Demonstrations are ineffective for easy and well-defined tasks, such as sentiment classifications and Headlines (PDU, PDC, PDD, AC, and FI), as the zero-shot prompts clearly instruct ChatGPT.
## 6 Conclusions
We present FinLMEval, an evaluation framework for financial language models. FinLMEval comprises nine datasets from the financial domain, and we conduct the evaluations on various popular language models. Our results show that fine-tuned expert encoder-only models generally perform better than the decoder-only LLMs on the financial NLP tasks, and adding in-context demonstrations barely improves the results. Our findings suggest that there remains room for improvement for more advanced LLMs in the financial NLP field. Our study provides foundation evaluations for continued progress in developing more sophisticated LLMs within the financial sector.
## 7 Limitations
This paper has several limitations to improve in future research. First, our evaluation is limited to some notable language models, while other advanced LLMs may exhibit different performances from our reported models. Also, as the LLMs keep evolving and improving over time, the future versions of the evaluated models can have different performance from the reported results. Second, FinLMEval only focuses on financial classification tasks, and the analysis of the generation ability of the LLMs still needs to be included. Future work can be done toward developing evaluation benchmarks on generation tasks in the financial domain.
| Datasets | ChatGPT (zero) | ChatGPT (ic-ran) | ChatGPT (ic-sim) |
| --- | --- | --- | --- |
| FinSent | 0.782 | 0.761 | 0.761 |
| FPB | 0.869 | 0.832 | 0.844 |
| FiQA SA | 0.898 | 0.891 | 0.891 |
| ESG | 0.477 | 0.726 | 0.800 |
| FLS | 0.652 | 0.673 | 0.636 |
| QA | 0.695 | 0.660 | 0.675 |
| Headlines-PDU | 0.889 | 0.839 | 0.765 |
| Headlines-PDC | 0.936 | 0.323 | 0.413 |
| Headlines-PDD | 0.896 | 0.816 | 0.788 |
| Headlines-PI | 0.225 | 0.768 | 0.844 |
| Headlines-AC | 0.806 | 0.576 | 0.597 |
| Headlines-FI | 0.711 | 0.606 | 0.592 |
| Headlines-PS | 0.630 | 0.690 | 0.729 |
| NER | 0.748 | 0.784 | 0.793 |
| FOMC | 0.633 | 0.672 | 0.650 |
| Average | **0.723** | 0.708 | 0.719 |

Table 3: The results of ChatGPT in zero-shot and in-context few-shot learning. Zero, ic-ran, and ic-sim represent zero-shot learning, in-context learning with random sample selection, and in-context learning with similar sample selection. The zero-shot and few-shot performances are comparable in most cases. |
2305.07961 | Leveraging Large Language Models in Conversational Recommender Systems | A Conversational Recommender System (CRS) offers increased transparency and
control to users by enabling them to engage with the system through a real-time
multi-turn dialogue. Recently, Large Language Models (LLMs) have exhibited an
unprecedented ability to converse naturally and incorporate world knowledge and
common-sense reasoning into language understanding, unlocking the potential of
this paradigm. However, effectively leveraging LLMs within a CRS introduces new
technical challenges, including properly understanding and controlling a
complex conversation and retrieving from external sources of information. These
issues are exacerbated by a large, evolving item corpus and a lack of
conversational data for training. In this paper, we provide a roadmap for
building an end-to-end large-scale CRS using LLMs. In particular, we propose
new implementations for user preference understanding, flexible dialogue
management and explainable recommendations as part of an integrated
architecture powered by LLMs. For improved personalization, we describe how an
LLM can consume interpretable natural language user profiles and use them to
modulate session-level context. To overcome conversational data limitations in
the absence of an existing production CRS, we propose techniques for building a
controllable LLM-based user simulator to generate synthetic conversations. As a
proof of concept we introduce RecLLM, a large-scale CRS for YouTube videos
built on LaMDA, and demonstrate its fluency and diverse functionality through
some illustrative example conversations. | Luke Friedman, Sameer Ahuja, David Allen, Zhenning Tan, Hakim Sidahmed, Changbo Long, Jun Xie, Gabriel Schubiner, Ajay Patel, Harsh Lara, Brian Chu, Zexi Chen, Manoj Tiwari | 2023-05-13T16:40:07Z | http://arxiv.org/abs/2305.07961v2 | # Leveraging Large Language Models in Conversational Recommender Systems
###### Abstract.
A Conversational Recommender System (CRS) offers increased transparency and control to users by enabling them to engage with the system through a real-time multi-turn dialogue. Recently, Large Language Models (LLMs) have exhibited an unprecedented ability to converse naturally and incorporate world knowledge and common-sense reasoning into language understanding, unlocking the potential of this paradigm. However, effectively leveraging LLMs within a CRS introduces new technical challenges, including properly understanding and controlling a complex conversation and retrieving from external sources of information. These issues are exacerbated by a large, evolving item corpus and a lack of conversational data for training. In this paper, we provide a roadmap for building an end-to-end large-scale CRS using LLMs. In particular, we propose new implementations for user preference understanding, flexible dialogue management and explainable recommendations as part of an integrated architecture powered by LLMs. For improved personalization, we describe how an LLM can consume interpretable natural language user profiles and use them to modulate session-level context. To overcome conversational data limitations in the absence of an existing production CRS, we propose techniques for building a controllable LLM-based user simulator to generate synthetic conversations. As a proof of concept we introduce RecLLM, a large-scale CRS for YouTube videos built on LaMDA, and demonstrate its fluency and diverse functionality through some illustrative example conversations.
While studied recently [8; 73], this approach is yet to be solved in the general large-scale recommendation setting.
In this paper we provide a roadmap for leveraging LLMs in a variety of ways to build a controllable and explainable large-scale CRS. Key contributions of the proposal are:
* A dialogue management module that reframes natural language generation, preference understanding, context tracking, and calls to a recommendation engine as a unified language modeling task performed by a single LLM.
* A general conceptual framework for performing retrieval with an LLM over a huge corpus of items. Various solutions are presented depending on efficiency requirements and what data and external APIs are available.
* A joint ranking / explanation module that uses an LLM to extract user preferences from an ongoing conversation and match them to textual artifacts synthesized from item metadata. As a byproduct of intermediate chain-of-thought reasoning [95], the LLM generates natural language justifications for each item shown to the user, increasing the transparency of the system.
* Incorporation of persistent, interpretable natural language user profiles as additional input to system LLMs, which supplements session-level context and improves the personalized experience.
* Techniques for building controllable LLM-based user simulators that can be used to generate synthetic conversations for tuning system modules.
As a proof of concept we introduce RecLLM, an LLM-based CRS for YouTube videos powered by LaMDA [86], and share some example conversations showing the fluency and diverse functionality of the system. Our goal is to make a compelling argument for the promise and viability of LLM-based conversational recommender systems and to take a first step towards realizing this vision in practice.
## 2. Problem scope
In RecLLM, we represent the CRS in a multi-modal setup comprised of two components: A slate of recommendations and an
Figure 1. Overview of key contributions from RecLLM. (1) A dialogue management module uses an LLM to converse with the user, track context and make system calls such as submitting a request to a recommendation engine all as a unified language modeling task. (2) Various solutions are presented for tractable retrieval over a large item corpus within an LLM-based CRS. (3) A ranker module uses an LLM to match preferences extracted from the context of the conversation to item metadata and generate a slate of recommendations that is displayed to the user. The LLM also jointly generates explanations for its decisions that can be surfaced to the user. (4) Interpretable natural language user profiles are consumed by system LLMs to modulate session-level context and increase personalization. (5) A controllable LLM-based user simulator can be plugged into the CRS to generate synthetic conversations for tuning system modules.
Figure 2. Screenshot of an LLM-based user simulator talking with RecLLM.
ongoing conversation between the user and the conversational agent (see Figure 2). The user outputs a natural language message on their turn, and the agent responds with a natural language message, optionally updating the slate of recommendations based on the conversation. By separating dialogue from the recommendation slate we hope to more accurately reflect how a large-scale CRS would eventually look in a production setting.
Traditionally, users have interacted with recommender systems via user interface interactions such as viewing the recommended items, or marking recommendations as good or bad via interface widgets (Zhou et al., 2017; Zhang et al., 2018). Although currently in RecLLM we exclude these types of interactions we do not intend to replace them; our eventual goal is to augment them with the more expressive channel of natural language that allows users to better express nuance about their interests.
In terms of the item corpus, RecLLM recommends from the corpus of all public YouTube videos. We make this choice due to two characteristics that increase the applicability of the system to other real-world problems: First, unlike corpora of items that occur frequently in the LLM's training data (e.g., movies and popular music), an LLM cannot feasibly be used to directly recommend YouTube videos and must interface with the corpus. Second, it is a large-scale corpus, requiring a scalable approach to recommendations. A natural consequence of building such a system from scratch is that there are no logs of users interacting with this system to jumpstart training of the model(s). Although RecLLM focuses on YouTube videos, our intention in this paper is to outline a general approach that can be easily extended to many other domains.
While evaluating with initial testers, we found that users expect a CRS that pairs slate recommendations with natural language conversation to possess a wide range of conversational capabilities, such as retaining context, handling topic shifts and referencing slate items. RecLLM focuses on leveraging techniques that can scale over a broad number of these use-cases. In Figure 3 a few of the core conversational capabilities currently supported are demonstrated via a mock conversation.
Finally, there are several problems that need to be addressed for conversational agents to become mainstream. These include safety of dialogue, debiasing, consistent personality of agents, etc. In this work we do not attempt to tackle these problems directly, rather focusing on problems that are unique to the setting of conversational recommenders.
## 3. System Overview
In this section we take a closer look at key system components of RecLLM (see Figure 1). In particular we focus on dialogue management, retrieval, ranking and explanations, and incorporation of natural language user profiles.
### Dialogue Management
Dialogue management is the central module of a CRS, acting as the interface between the user and the rest of the system. It is responsible for guiding the user through a multi-turn exploration of the recommendation corpus and generating sensible, useful, and
Figure 4. A unified LLM dialogue management module. An LLM takes as input the full session context and outputs a sequence of messages ending in a terminal output that triggers a system action, such as a response to the user.
Figure 3. RecLLM possesses many conversational capabilities such as the ability to retain context throughout a session, handle topic shifts and reference items from recommendation slates.
grounded responses at each turn. In the process it must either implicitly or explicitly perform context tracking to extract useful representations of user preferences and intents. This information can be used to inform the dialogue policy and also as the basis for outputting API calls to initiate system actions (e.g. by sending a search query to a recommendation engine backend, see Section 3.2.1). From an end-to-end point of view, given context information (dialogue history, a user profile, item summaries, etc.), the goal of the dialogue manager is to generate system actions to take, as well as an appropriate system utterance.
There are extra challenges and requirements to dialogue management in the context of conversational recommenders:
* **Control**: In contrast to open-ended dialogue, a CRS dialogue manager must actively work with the user to explore the recommendation corpus. This entails a mixed-initiative setup where the system must respond to user requests and also at times actively steer the conversation in a specific direction. For instance, preference elicitation--in which the system must figure out when and how to best query the user in order to extract maximal information about their preferences--is an entire subfield of CRS dialogue management (Friedman, 2010; Goyal et al., 2011; Goyal et al., 2011; Goyal et al., 2011).
* **Ambiguity**: Compared to task-oriented dialogue there is no clear cut measure of success for a CRS dialogue manager. Although the system should try to ensure that the conversation does not get too far off track the core recommendation task, the goal is not necessarily to minimize the number of turns that it takes the user to find an acceptable item, but rather to provide an overall satisfactory exploratory experience (see, for instance (Goyal et al., 2011)). This means that there is rarely a single objectively "correct" thing for a conversational system to say at any given time, nor an easily defined metric for whether the dialogue manager is doing a good job.
* **Grounding**: One of the main challenges of a CRS dialogue manager is to faithfully ground its responses to the user in the recommendation corpus. After returning a slate of recommendations, the system should be able to refer to the items in a relevant and factually correct way. Other sources of external information, such as long term preferences coming from a user profile, may also be injected and the dialogue manager should be able to incorporate them appropriately in the ongoing conversation.
Traditionally CRSs take a modular approach to dialogue management, where a hardcoded policy graph maps dialogue states (e.g. intent) to different system actions, such as whether to get a recommendation, ask a question, or chat casually. Natural language understanding models extract preferences and determine the dialogue states, and separate natural language generation models generate system responses. Alternatively, in some recent CRSs language models are tuned end-to-end directly to imitate dialogue collected from crowdsource workers, discarding any notion of dialogue states or internal structure.
In RecLLM we employ a single unified LLM to execute dialogue management purely in terms of language modeling. At each turn the LLM takes as input the prior conversation context along with additional information like textual representations of recommendation slates and user profiles that are potentially injected from external sources. Like the end-to-end approach mentioned above, one of the distinguishing features of this architecture is that there no longer exists a hardcoded policy graph with fixed dialogue states. Instead, on a given system turn the LLM generates a sequence of natural language outputs that encapsulate all context tracking, intermediate reasoning, natural language generation, and API calls to the rest of the system. It is hardcoded that certain string patterns in outputs from the dialogue manager trigger system actions. For instance an output "Response: _<message_" will cause _message_ to be shown as a user facing response, and "Request: _<query_" will cause _query_ to be sent to the recommendation engine backend to retrieve a slate of recommendations. Other outputs of the LLM can function as chain-of-reasoning steps, instructions to itself to follow, or dialogue state tracking inferences. Unlike the system calls, there are no ingrained rules about the functionality of these intermediate outputs, and conventions about their use must be taught to the LLM either through in-context few-shot learning or tuning.
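As a rough sketch of this convention, the parser below routes terminal outputs of the dialogue-management LLM to system actions; only the "Response:" / "Request:" prefixes come from the description above, and the example LLM output and action names are hypothetical stand-ins.

```python
# Sketch: routing terminal outputs of the unified dialogue-management LLM to system actions.
# Only the "Response:" / "Request:" prefixes come from the text; everything else is illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class SystemAction:
    kind: str      # "respond" -> show a message to the user, "recommend" -> query the engine
    payload: str

def parse_dialogue_manager_output(llm_output: str) -> List[SystemAction]:
    actions = []
    for line in llm_output.splitlines():
        line = line.strip()
        if line.startswith("Response:"):
            actions.append(SystemAction("respond", line[len("Response:"):].strip()))
        elif line.startswith("Request:"):
            actions.append(SystemAction("recommend", line[len("Request:"):].strip()))
        # Other lines are treated as intermediate reasoning / dialogue-state tracking artifacts
        # and are only kept in the session context rather than surfaced to the user.
    return actions

example_output = ("User intent: wants upbeat workout music videos\n"
                  "Request: high energy workout music playlist\n"
                  "Response: Here are some energetic workout mixes you might like!")
for action in parse_dialogue_manager_output(example_output):
    print(action)
```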
The advantage of this architecture over the modular approach is its simplicity and flexibility. In the modular approach, any new functionality such as the addition of a new user intent or dialogue state has to be engineered into the system, which is a serious impediment to scalability. The unified LLM architecture shifts the emphasis from engineering-driven to data-driven quality iteration. To fix a problem or introduce new capabilities, instead of engineering a new component a designer must now create examples that enable the LLM to learn the desired behavior. This also creates the potential for the dialogue manager to learn new policy states and useful dialogue state tracking artifacts through the generalization abilities of the LLM.
The main challenge to the unified LLM approach is how to effectively control the dialogue manager and guide it towards a reasonable dialogue policy without explicitly constraining it via hard rules. In our initial implementation we tune our unified LLM on a moderate number of manually generated examples. In this way we are able to establish some direction about the type of behavior and internal states we would like to see while still relying heavily on the ability of LLMs pretrained on dialogue data to converse naturally with only minimal supervision. Although we are able to build a functional dialogue manager this way, with only a limited amount of training examples it is difficult to teach the dialogue manager a sophisticated policy tailored to the conversational recommender domain. In Section 4.2 we discuss ideas for overcoming this limitation by tuning our dialogue manager and recommendation modules with larger amounts of synthetically generated data.
### Recommendations and Refinement
Once triggered by the dialogue management module, it is the responsibility of the recommendation module to return a slate of high quality, relevant, and diverse recommendations that will be shown to the user. This can either be an initial recommendation slate or a refinement of an earlier slate from the session based on feedback from the user. A traditional recommender system chooses items by inferring preferences of the user from some type of user profile or dense representation built from historical data, possibly taking into
account other contextual factors (e.g. the location or time of day). In a search system, the user can supplement these implicit signals with explicit intents, usually through a simple static query. A primary challenge of a CRS is that now the user can express these explicit intents over the course of a full multi-turn conversation, which the recommendation module must understand and connect to the item corpus. Many traditional recommender systems employ a two stage pipeline, first retrieving candidate items and then ranking them (Song et al., 2016; Wang et al., 2017). RecLLM follows this strategy, with the added twist that the ranker also jointly generates natural language explanations for why each item is being selected.
#### 3.2.1. Retrieval
The purpose of the retrieval phase is to take the full corpus, which for some domains such as videos or urls may contain hundreds of millions of items, and based on the context select a small number of candidate items (e.g. 100) that will be fed to a downstream ranker. A key challenge of retrieval is to make this process tractable, as it is not computationally feasible to process each item independently at inference time. In Figure 5 we illustrate a general conceptual framework for retrieval in our problem setting. An LLM processes the session context and generates a request, either implicitly through a model activation layer or explicitly through its language output interface. A recommendation engine then uses a tractable search algorithm to retrieve candidates from the item corpus. In Table 1 we give a few illustrative examples of possible retrieval algorithms that fit into this framework, which we describe in more detail below.
**Generalized Dual Encoder Model.** A popular solution to retrieval in traditional deep learning based recommenders is to use a dual encoder model consisting of two neural net towers, one to encode the context and one to encode the items (see e.g (Kumar et al., 2017) and Figure 10a). Item embeddings can be generated offline using the item tower and stored in an efficient data structure. An approximate nearest neighbor lookup can then use the generated context embedding to perform a sub-linear time retrieval of item embeddings at inference time (Wang et al., 2017). We can extend this approach for conversational recommenders by using an LLM as a context encoder that processes the full ongoing conversation between the user and system along with any other additional context information. In this case the request sent to the recommendation engine is an embedding, which can be generated by extracting and then projecting a suitable activation layer from the model.
One downside to this approach of pulling embeddings from the internals of an LLM is that it severely hampers our ability to learn a retrieval model in a sample efficient way. Dual encoder models trained from scratch require large amounts of training data to constrain the context tower embeddings to occupy the same subspace as the item tower embeddings. Sometimes it is possible to use pretrained embeddings on the item side (for instance by taking them from an existing production search or recommender system), but still the context embeddings must be tuned to align with the item embeddings to get good results. LLMs operate via a text-in / text-out interface and much of their power comes from the transfer learning afforded by knowledge gained through extensive pretraining. By leaving the level of language abstraction we are sacrificing much of this ability to generalize from a small amount of data.
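A toy sketch of this retrieval flow is shown below; the random item matrix and the encode_context stub stand in for the offline item tower and the projected LLM activation, and brute-force cosine search stands in for an approximate method such as ScaNN at production scale.

```python
# Toy sketch of generalized dual-encoder retrieval; all embeddings here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
item_embeddings = rng.normal(size=(100_000, 256)).astype(np.float32)   # built offline by the item tower
item_embeddings /= np.linalg.norm(item_embeddings, axis=1, keepdims=True)

def encode_context(conversation: str) -> np.ndarray:
    # Placeholder for a projected activation layer of the dialogue LLM.
    vec = rng.normal(size=256).astype(np.float32)
    return vec / np.linalg.norm(vec)

def retrieve_candidates(conversation: str, k: int = 100) -> np.ndarray:
    query = encode_context(conversation)
    scores = item_embeddings @ query          # cosine similarity (rows are unit-normalized)
    return np.argsort(-scores)[:k]            # indices of the top-k candidate items

print(retrieve_candidates("User: something relaxing for studying")[:10])
```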
**Direct LLM Search.** In this method the LLM directly outputs ids or titles of items to recommend as text. The tractable search algorithm is an exact or fuzzy match against items in the corpus and the recommendation engine plays no role beyond this simple matching. The LLM must learn to output these ids/titles through some combination of its pretraining and a corpus-specific fine tuning phase (see e.g (Wang et al., 2017)). Given the assumption that our system must be able to return slates from a fixed item corpus, this is the closest thing to having an LLM-based chatbot function directly as a CRS. The downside to this approach is that because only negligible work is being offloaded to the recommendation engine, the LLM must memorize information about the entire item corpus within its model parameters. For a large corpus this can be prohibitively expensive in terms of the model size and training data needed, and also makes it difficult to refresh the item corpus without retraining the LLM.
**Concept Based Search.** In this method the LLM outputs a list of concepts, which are then embedded and aggregated by the recommendation engine into a single context embedding. This is used to lookup items through approximate k-nearest neighbor search similar to the generalized dual encoder method. A technique like Concept Activation Vectors (Wang et al., 2017) can be used to perform this transformation from concepts to embeddings in the item space. The appeal of this approach is that extracting relevant concepts from a conversation is a natural task that can be taught to an LLM through in-context learning or tuning with a small number of examples. Also, because only item embeddings are needed (the concept embeddings are derived from these) if pretrained item embeddings can be borrowed from an existing source then no additional tuning of embeddings is required. However, one limitation is that lists of concepts are often a coarse representation of a conversation and similar to continuous bag-of-words methods (Wang et al., 2017) are lossy with respect to word order and other nuances of language, which can negatively affect retrieval quality.

| Approach | Request Type | Tractable Search Algorithm |
| --- | --- | --- |
| Generalized Dual Encoder Model | Internal LLM embeddings | KNN or ScaNN (Wang et al., 2017) |
| Direct LLM Search | Title or id | Fuzzy lookup |
| Concept Based Search | List of concepts | Concept Activation Vector (Wang et al., 2017) |
| Search API Lookup | Search query | Search API |

Table 1. Various possible solutions to large-scale retrieval in a CRS.

Figure 5. Overview of large-scale retrieval in an LLM-based CRS.
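A toy sketch of the concept aggregation step is shown below; concept_vector is a hypothetical stand-in for a concept activation vector (or a mean of tagged item embeddings), and the nearest-neighbor lookup mirrors the dual-encoder sketch above.

```python
# Toy sketch of concept-based search: embed each extracted concept in the item space, average,
# and reuse the nearest-neighbor lookup. concept_vector() is a hypothetical stand-in for a CAV.
import numpy as np

rng = np.random.default_rng(1)
item_embeddings = rng.normal(size=(50_000, 256)).astype(np.float32)
item_embeddings /= np.linalg.norm(item_embeddings, axis=1, keepdims=True)

def concept_vector(concept: str) -> np.ndarray:
    # Placeholder: e.g. the mean embedding of items tagged with the concept, or a learned CAV.
    vec = rng.normal(size=256).astype(np.float32)
    return vec / np.linalg.norm(vec)

def concept_based_retrieve(concepts, k: int = 100) -> np.ndarray:
    context = np.mean([concept_vector(c) for c in concepts], axis=0)
    context /= np.linalg.norm(context)
    return np.argsort(-(item_embeddings @ context))[:k]

print(concept_based_retrieve(["lo-fi", "study music", "instrumental"])[:10])
```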
**Search API Lookup.** In this method, the LLM directly outputs a search query, which gets fed into a black-box search API to retrieve items. Unlike Concept Based Search, which is generic as long as item embeddings can be trained or reused, Search API Lookup is only applicable when such a search API already exists for the domain in question. However, when available, this type of API is often backed by a sophisticated search stack and can yield higher quality results. Analogous to Concept Based Search, in Search API Lookup the LLM can be taught to output relevant search queries using a small number of examples (see Section 3.1), but the quality of retrieval is limited by the extent to which a search query can properly represent the full context of a conversation.
In Section 4.2 we build upon these methods by discussing options for tuning a retrieval model using large-scale synthetic data.
#### 3.2.2. Ranking / Explanations
After candidate items have been retrieved, a ranker decides which of them will be included in the recommendation slate and in what order. Unlike the retrieval module, the ranking module does not need to perform tractable search over a large corpus and is therefore less constrained in the types of computation that are possible. In a traditional recommender system, this usually manifests in the ranker crossing context and item features (instead of processing them in separate towers as is done in a dual encoder) and potentially using custom ranking losses during training that directly compare candidate items (Beng et al., 2015). In the case of RecLLM, we take advantage of this extra room for computation to use an LLM that reasons sequentially about how well an item matches the context and generates a rationalization for its decision as a byproduct.
Figure 6 gives a schematic for the LLM ranker. For each candidate item, the LLM jointly generates a score and a natural language explanation for the score1. These scores implicitly induce a ranking of the items. The first step is to create a text summarization of the item that fits into the context window of the LLM based on metadata associated with the item. In the case of a YouTube video recommender, this metadata consists of information such as the title, knowledge graph entities associated with the video, developer description of the video, transcript of the video, and user comments. In the future we would also expect a large multimodal model to directly process the raw video instead of relying only on textual artifacts. This item summarization can be done offline and is necessary in the case where the metadata is high volume (e.g. if we have thousands of user comments). We can view this summarization as a special case of the multi-document summarization problem (Sutskever et al., 2015); it is also related to a main challenge of the user profile module (see Section 3.3), which must summarize large amounts of prior user data into a text format that can be passed into an LLM (or alternatively augment the LLM with the ability to access this information efficiently at inference time). There also can be a similar preprocessing step for summarizing the context information, although this must be done at inference time since unlike for items we cannot enumerate all possible contexts and process them offline.
Footnote 1: There are many proposed solutions for enabling text-in / text-out LLMs to solve regression problems (i.e. output a score) (Sutskever et al., 2015); within RecLLM we use the simple approach of bucketing the range of possible scores and having the LLM output a semantically meaningful phrase (e.g. “excellent fit”) corresponding to a bucket id.
Given these item and context summaries as input, the LLM ranker then scores the item using chain-of-thought reasoning, which has been shown to improve the performance of LLMs on these types of classification / regression tasks (Sutskever et al., 2015). The intermediate chain-of-thought reasoning steps generated by the LLM function as explanations for why certain items are eventually included or left out of the recommendation slate. These explanations can be viewed internally for debugging purposes and also shown to the user, either by including them as input to the dialogue manager that produces utterances within the conversational interface or by postprocessing and including them within pop-up boxes in the visual UI where the recommendation slates are displayed.
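To make the scoring mechanism concrete, the following is a minimal sketch of how bucketed score phrases and chain-of-thought rationales could be turned into a ranking; the prompt wording, the bucket phrases, and the `call_llm` stub are illustrative assumptions rather than RecLLM's actual interface.

```python
# Illustrative sketch of an LLM ranker that maps bucketed verdict phrases to
# numeric scores. The prompt wording, bucket phrases, and `call_llm` stub are
# assumptions for illustration only.
SCORE_BUCKETS = {"excellent fit": 1.0, "good fit": 0.75, "partial fit": 0.5,
                 "weak fit": 0.25, "poor fit": 0.0}

RANKER_PROMPT = """Conversation summary:
{context_summary}

Candidate item summary:
{item_summary}

Reason step by step about how well the item matches the user's request,
then end with a final verdict chosen from: {buckets}.
"""

def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM call.
    return ("The user asked for relaxing piano music and the video is a calm "
            "piano playlist. Final verdict: excellent fit")

def rank_candidates(context_summary, item_summaries):
    scored = []
    for item_summary in item_summaries:
        prompt = RANKER_PROMPT.format(context_summary=context_summary,
                                      item_summary=item_summary,
                                      buckets=", ".join(SCORE_BUCKETS))
        rationale = call_llm(prompt)
        # The chain-of-thought text doubles as an explanation; the final
        # verdict phrase is mapped back to a numeric score.
        score = next((v for phrase, v in SCORE_BUCKETS.items()
                      if phrase in rationale.lower()), 0.0)
        scored.append((score, rationale, item_summary))
    # Higher scores first; the induced ordering is the ranking.
    return sorted(scored, key=lambda t: t[0], reverse=True)
```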
### User Profile
One of the key advantages to a CRS is the ability of the user to articulate their preferences over the course of a session, so that the system can assist them without necessarily needing any prior background information. Despite this, the personalized experience can be improved if the system has built up a profile of the user beforehand so that there is a mutual starting base to build the conversation on top of. For instance, if a user dislikes jazz music and has shared this previously, they should not have to reiterate this point every new session when searching for music videos.
In traditional deep learning based recommender systems, non-verbal interaction signals such as clicks or ratings are often used to train embedding representations of a user that can be fed into a neural net. In RecLLM we instead represent users with natural language profiles (see e.g. (Sutskever et al., 2015)), which can be consumed by an LLM. These are more transparent compared to embeddings and specific pieces of information can usually be attributed to an original source, which aids in explainability. Also, users can manually edit these natural language profiles, which gives them greater control to monitor and update their preferences. In RecLLM we build user profiles based on a user's repeated interaction with the system over multiple sessions, although it would be possible to incorporate other data sources as well.
Figure 6. A joint LLM ranking / explanation module. The conversation is used as context for the user’s preferences and the video metadata is used as context for the item. The LLM takes in summaries of the item side and context side to produce a score for the item and an explanation for the score.
An important open research question is how to structure a user profile in terms of natural language. Currently in RecLLM we represent a user by a set of salient facts we have extracted from prior sessions (e.g. "I do not like listening to jazz while in the car") similar to (Kumar et al., 2017), although many other more sophisticated schemes are possible. Another extreme possibility is to avoid any lossiness by defining a user profile degenerately as the raw conversational history of all sessions the user has had with the system in the past. In this case we would need to implement an efficient mechanism for an LLM to retrieve relevant facts from this raw history at inference time.
There are three main components to the User Profile module, which we now describe.
_Memory Extraction._ The purpose of the memory extraction component is to identify when a particular utterance contains a meaningful and enduring fact about the user that can be extracted and added to the user profile. In RecLLM, this is currently implemented by an LLM using in-context few-shot learning as part of the dialogue management module.
_Triggering and Retrieval._ The triggering and retrieval component decides at what instances during a session it is likely beneficial to query the user profile for supplementary information and to then retrieve the most relevant facts related to the current context. Currently at each turn RecLLM retrieves a single fact from the user profile by embedding the last user utterance and doing a cosine distance comparison between this embedding and precomputed embeddings of each fact in the user profile. Triggering is implemented post hoc by thresholding on this minimal cosine distance. Better performance is likely possible by using a separate LLM classifier for triggering, retrieving multiple facts from the user profile, and basing retrieval on the entire conversation context of the session as opposed to just the last utterance.
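As a rough illustration of this triggering and retrieval logic, the sketch below embeds the last user utterance, compares it against precomputed fact embeddings by cosine distance, and only surfaces a fact when the minimal distance clears a threshold; the `embed` stub and the threshold value are placeholder assumptions.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real text embedding model; returns a unit-norm vector
    # (deterministic only within a single process).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

def maybe_retrieve_fact(last_user_utterance, profile_facts, threshold=0.35):
    """Return the closest profile fact if it is similar enough, else None."""
    query = embed(last_user_utterance)
    fact_embeddings = np.stack([embed(f) for f in profile_facts])
    # Cosine distance = 1 - cosine similarity for unit-norm vectors.
    distances = 1.0 - fact_embeddings @ query
    best = int(np.argmin(distances))
    # Triggering: only surface the fact when the minimal distance is small.
    return profile_facts[best] if distances[best] < threshold else None
```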
_System Integration._ Once the user profile information is retrieved, it must be integrated into the rest of the system so that it can influence behavior such as the system's dialogue and API calls to the recommendation engine. How to properly integrate facts coming from a user profile is a difficult open question, as it is highly context dependent how they should modulate short term preferences expressed by the user in the current session. For instance, the system may know that the user is allergic to seafood, but if the user explicitly says they want to see some videos about fish recipes to pass along to a friend, it's important that the system overrides this preference from the user profile and gives the user what they are asking for. In RecLLM we use a simple strategy of injecting facts from the user profile into the text input of the dialogue manager (see Section 3.1). By doing so we allow LLMs powering the dialogue manager to make nuanced decisions about how to utilize this auxiliary information in the context of the ongoing session without having to engineer any hard rules into the system.
## 4. Simulation and Large-Scale Tuning
A major impediment to building a high-quality industrial CRS is a lack of data available for training and evaluation. Typically, large-scale recommender systems are trained on user interaction data mined from the logs of existing products; however, conversational recommenders are a nascent technology and for the most part products using this paradigm do not exist yet. An initial high quality system must be built to make such a product viable, after which a bootstrapping cycle can begin in which real data is generated from the system and then increasingly better versions of the system are trained using that data. RecLLM deals with the data sparsity problem by exploiting the transfer learning ability of large language models using in-context few-shot learning or fine-tuning on a small number of manually generated examples. However, we hypothesize that ultimately there is a ceiling to the quality that can be achieved through these approaches, given the long-tail of different scenarios that can arise within a mixed-initiative CRS. In this section we discuss the use of LLM-powered user simulators to generate realistic data at scale and techniques for tuning system components using larger amounts of data.
### User Simulation
Figure 8. An example of session based control: A single variable (a user profile) is used to condition the user simulator.
Figure 7. Overview of the architecture incorporating the User Profile module
According to the conversational recommender setup considered in this paper (see Section 2), a session consists of a sequence \(S=\{s_{1},u_{1},s_{2},u_{2},...,s_{n},u_{n}\}\), where each \(u_{i}\) is a natural language utterance by the user and each \(s_{i}\) is a combination of a natural language utterance and possibly a slate of recommendations by the CRS. Therefore, a user simulator is defined by a function \(f(S^{\prime})=U_{i}\), where \(S^{\prime}=\{s_{1},u_{1},s_{2},u_{2},...,s_{i}\}\) is a partial session and \(U_{i}\) is a distribution over possible user utterances \(u_{i}\) continuing the session. Given a fixed CRS and such a user simulator \(f\), we can generate a new sample session by having the CRS and \(f\) interact for a given number of turns (i.e. the CRS generates each \(s_{i}\) and \(f\) generates each \(u_{i}\)).
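The rollout procedure can be summarized with a short sketch; `crs_turn` and `simulate_user` are assumed LLM-backed callables standing in for the CRS and the simulator \(f\), and their names are illustrative.

```python
def generate_session(crs_turn, simulate_user, num_turns=5):
    """Roll out a synthetic session by alternating CRS and simulator turns.

    `crs_turn` maps the partial session to the next system turn s_i (an
    utterance plus possibly a recommendation slate), and `simulate_user`
    maps the partial session to the next user utterance u_i ~ f(S').
    """
    session = []
    for _ in range(num_turns):
        system_turn = crs_turn(session)      # s_i
        session.append(("system", system_turn))
        user_turn = simulate_user(session)   # u_i
        session.append(("user", user_turn))
    return session
```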
The ideal property we would like our user simulator to have when synthetically generating data for evaluation or training is **realism**: Conversations between the user simulator and CRS should be nearly indistinguishable from conversations between a representative group of real users and the CRS. Let R be a set of sessions generated by having real users interact with a particular CRS, and Q be a set of simulated sessions sampled from the CRS and a user simulator \(f\) according to the procedure outlined above. We offer three possible ways to measure the realism of \(f\):
* Have crowdsource workers attempt to distinguish between simulated sessions coming from Q and real sessions coming from R.
* Train a discriminator model (Kumar et al., 2017) on the same differentiation task.
* Let \(g(S)\rightarrow[1,k]\) be a function that classifies a session into \(k\) categories and let \(G=\{g_{i}\}\) be an ensemble of such classifiers. One way to define such an ensemble is by adapting dialogue state tracking artifacts used within the dialogue management module of a CRS (see Section 3.1). For instance, we can have a classifier that labels the user intent at a specific turn, or the topics that are covered within a session, or the primary sentiment of a session. Once defined, we can measure how close the distributions Q and R are by matching statistics according to the classifier ensemble \(G\) (a minimal sketch of this statistic-matching idea follows this list).
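A minimal sketch of the third option, under the assumptions that each classifier \(g\) maps a session to a label in \([1,k]\) and that total-variation distance is an acceptable way to compare the induced label distributions:

```python
from collections import Counter

def label_distribution(sessions, classifier, k):
    # Empirical distribution over labels 1..k induced by one classifier g.
    counts = Counter(classifier(s) for s in sessions)
    return [counts.get(label, 0) / len(sessions) for label in range(1, k + 1)]

def ensemble_distance(real_sessions, simulated_sessions, classifiers):
    """Average total-variation distance between real (R) and simulated (Q)
    sessions under each classifier g in the ensemble G (smaller = closer)."""
    total = 0.0
    for classifier, k in classifiers:  # each entry: (g, number of classes)
        p = label_distribution(real_sessions, classifier, k)
        q = label_distribution(simulated_sessions, classifier, k)
        total += 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))
    return total / len(classifiers)
```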
A necessary condition of realism is **diversity**: Simulated sessions from Q should have sufficient variation to invoke all the different functionality of a CRS users will encounter in practice when using the system. It may be that in certain situations measuring realism directly is difficult, for instance if collecting a representative set of real user sessions is infeasible. In this case we can at least attempt to measure the diversity of the user simulator, for instance by defining a notion of entropy of Q with respect to the classifier ensemble \(G\).
_Controlled Simulation._ Our starting point for building a user simulator is the observation that an unconstrained LLM built for dialogue such as LaMDA (Kumar et al., 2017) can interact with a CRS in a similar way to real users. The LLM takes as input the full history of the ongoing conversation and outputs the next user utterance, analogous to how a CRS dialogue manager can use an LLM to generate system utterances. However, we would like to exhibit greater control over the simulator to increase its realism. In controlled simulation, we condition the user simulator on additional latent (to the CRS) variables that allow us to guide its behavior in a certain direction. We explore two different variations:
* **Session-level control**: A single variable \(v\) is defined at the beginning of the session and is used to condition the user simulator throughout the session. For instance, we could define \(v\) as a user profile such as the ones discussed in Section 3.3.
* **Turn-level control**: A distinct variable \(v_{i}\) is defined at each turn of the session and is used to condition the simulator for that turn. For instance, we could define each \(v_{i}\) to be a user intent for the simulator to adopt at that turn.
In the case of an LLM user simulator, one way to execute the control is to translate the variable into text that can be included as part of the simulator's input along with the rest of the conversation. For instance, for the user profile example we could append the statement 'I am a twelve year old boy who enjoys painting and video games' to the beginning of the conversation to induce the LLM to imitate this personality. To increase realism, one possible strategy is to define session-level or turn-level variables in terms of the classifiers making up one of the ensembles \(G\) discussed above and then to sample the variables according to the empirical distribution of the collection of real user sessions R. Another possibility is to ground the conditioning in trajectories coming from real data from a related product. For instance, we could look at query sequences submitted by users in a non-conversational search application and sample turn-level variables as trajectories of topics that match these query sequences.
_Generating Synthetic Training Data._ To use a user simulator to generate data for supervised training of one of the CRS system modules, an additional property is needed: ground truth labels that the system can learn from. As a toy example, suppose we are trying to learn a sentiment classifier as part of a traditional dialogue state tracking module. For this we need to generate a set of examples \((S_{i},l_{i})\), where \(S_{i}\) is a session \(s_{1},u_{1},s_{2},u_{2},...s_{n},u_{n}\) and \(l_{i}\) is a ground truth label for the primary user sentiment within \(S_{i}\) coming from a set of possible labels \(L\), e.g. (angry, satisfied, confused,...). We can use controlled user simulation to solve this problem, by defining a session level variable \(v\) over this set of labels \(L\). First we sample a variable \(v\) from \(L\) (e.g. "angry") and then condition the simulator based on this label, for instance in a priming implementation by appending the message "You are an angry user" to the beginning of the input of the simulator. If we are able to solve this LLM control problem effectively then we can attach a label \(l_{i}=\) "angry" to the session \(S_{i}\) and trust that with high probability it will be accurate.
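A minimal sketch of this labeling-by-control idea, reusing the `generate_session` helper from the earlier simulation sketch; the priming wording and the assumed `call_user_llm` function are illustrative placeholders.

```python
import random

SENTIMENT_LABELS = ["angry", "satisfied", "confused"]

def conditioned_simulator(call_user_llm, label):
    """Session-level control: prepend a priming instruction derived from the
    sampled label (the exact wording here is an illustrative guess)."""
    def simulate_user(partial_session):
        priming = f"You are a {label} user."
        return call_user_llm(priming, partial_session)
    return simulate_user

def generate_labeled_sessions(crs_turn, call_user_llm, n=100, num_turns=5):
    examples = []
    for _ in range(n):
        label = random.choice(SENTIMENT_LABELS)                  # sample v from L
        simulate_user = conditioned_simulator(call_user_llm, label)
        session = generate_session(crs_turn, simulate_user, num_turns)
        examples.append((session, label))                        # trust v as label l_i
    return examples
```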
Figure 9. An example of turn level control: A series of variables (user intents) are used to condition the user simulator at each turn.
A more ambitious use case is generating data for training the retrieval and ranking modules discussed in Sections 3.2.1 and 3.2.2. For this we can define a session level variable \(v\) as a tuple \((x,j)\), where \(x\) is an item from the corpus and \(j\) is an integer turn index. Once we sample a \(v=(x,j)\), we condition the simulator to generate a session \(S=\{v,s_{1},u_{1},s_{2},u_{2},...,s_{j},u_{j},...\}\) such that after \(j\) turns the item \(x\) is a good match for the context \(S\) (i.e. the user would be satisfied if on turn \(s_{j+1}\) the system included \(x\) within a recommendation slate). This session can then be used as an input example for training a recommendation module, where the item \(x\) is a positive instance and other items from the corpus can be sampled as negatives. This is a far more complex conditioning problem, and a simple zero-shot priming instruction (e.g. 'Generate a session such that after \(j\) turns item \(x\) is a good match for the context') will not work. How to solve this control problem effectively, either through more sophisticated turn level priming or by tuning the user simulator LLM, is an ongoing research effort.
### Tuning System Modules
For the remainder of this section we focus on tuning LLMs within our system using large amounts of synthetically generated data. For concreteness we examine three modules discussed earlier in the paper: Retrieval (Section 3.2.1), Ranking / Explanation (Section 3.2.2), and Dialogue Management (Section 3.1).
_Retrieval._ In Section 4.1 we outlined a strategy for generating synthetic training data for tuning a recommendation module. For retrieval we assume our training examples are tuples of the form \((S^{\prime},x_{pos},\{x_{neg}\})\), where \(S^{\prime}\) is a partial session \(s_{1},u_{1},s_{2},u_{2},...s_{i},u_{i}\), \(x_{pos}\) is an item that is a good match for the context \(S^{\prime}\) (in the sense defined previously) and \(\{x_{neg}\}\) is a set of negative items generated by some negative sampling procedure. Given this data, we can tune a Generalized Dual Encoder Model (see Section 3.2.1), in which the initial context representation and item representations are each encoded by an LLM. Regardless of whether we choose to tune only the adapter layers of the two tower model or the LLM params as well, the loss is fully differentiable and normal supervised learning with gradient descent suffices.
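As an illustration, a standard in-batch softmax loss over the two towers might look as follows; the use of cosine similarity and the specific tensor shapes are assumptions rather than a prescribed recipe.

```python
import torch
import torch.nn.functional as F

def dual_encoder_loss(context_emb, pos_item_emb, neg_item_emb):
    """Softmax loss for a Generalized Dual Encoder retrieval model (a sketch).

    context_emb:  [B, D] encodings of partial sessions S'
    pos_item_emb: [B, D] encodings of the matching items x_pos
    neg_item_emb: [B, N, D] encodings of sampled negatives {x_neg}
    The encoders (e.g. LLM towers with adapter layers) are assumed to be
    differentiable, so ordinary gradient descent applies.
    """
    context = F.normalize(context_emb, dim=-1)
    pos = F.normalize(pos_item_emb, dim=-1)
    neg = F.normalize(neg_item_emb, dim=-1)
    pos_scores = (context * pos).sum(-1, keepdim=True)          # [B, 1]
    neg_scores = torch.einsum("bd,bnd->bn", context, neg)       # [B, N]
    logits = torch.cat([pos_scores, neg_scores], dim=-1)        # [B, 1+N]
    # The positive item sits at index 0 of every row.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```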
In Search API Lookup (see Section 3.2.1), an LLM processes the session history and outputs a search query, which then gets passed into a black-box search algorithm. When this architecture is used, the loss is no longer differentiable and ordinary supervised learning is not possible. Instead, we can reframe the setup as a contextual bandit problem (Beng et al., 2017), where the LLM is a policy, the labels are rewards signals, and the black box search algorithm is treated as the environment (see Figure 10b). If the LLM encoder is shared with other modules we have the choice of tuning protected parameters of the LLM that influence only this task of outputting a search query, or instead tuning shared parameters of the LLM that also influence the behavior of these other modules.
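A minimal sketch of one policy-gradient (REINFORCE-style) update for this bandit formulation, assuming the LLM policy exposes a differentiable log-probability for the query it emitted and that the reward is derived from whether the black-box search surfaced the ground-truth item; the function and argument names are illustrative.

```python
import torch

def bandit_policy_gradient_step(query_log_prob, reward, optimizer, baseline=0.0):
    """One REINFORCE-style update for the Search API Lookup setting.

    query_log_prob: differentiable log-probability (scalar tensor) of the
        search query emitted by the LLM policy for a given session context.
    reward: scalar label, e.g. 1.0 if the black-box search returned the
        ground-truth positive item within its results, else 0.0.
    The search API is part of the environment, so gradients only flow
    through the policy's log-probability.
    """
    loss = -(reward - baseline) * query_log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```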
_Ranking._ For this use case we assume our training examples are tuples of the form \((S^{\prime},Y)\), where \(S^{\prime}\) is a partial session \(s_{1},u_{1},s_{2},u_{2},...,s_{i}\) such that \(s_{i}\) contains a recommendation slate and \(Y\) is a list of relevancy scores for the items in that slate. In Section 3.2.2 we present an LLM based ranking module that jointly generates a score for each item and an explanation for that score. Using this data, we can tune the ranking LLM to predict the ground truth labels as a regression problem. Using only this relevancy data we cannot directly tune the LLM to generate better explanations, although this is still possible using bootstrapping methods that depend only on labels for the end task (in this case the scoring task) (K
public CRS datasets are based around relatively small domains such as movies and are conversation-only, i.e. recommendations are offered by the system directly within the dialogue without a notion of recommendation slates or other visual elements (see e.g. [31, 42, 49, 114]). These datasets rely on crowdsource workers to provide examples of good system utterances and recommendations that are treated as ground truth. A few other CRS datasets are adapted from non-conversational datasets involving large-scale domains such as e-commerce or Yelp reviews [45, 112, 114]. However in this case the conversations are generated either synthetically or through substitution from other sources, and are overly rigid compared to actual human dialogue. In this paper we focus on recommendations over a large-scale corpus with recommendation slates distinct from the dialogue, a setup that doesn't fit cleanly into any existing offline benchmarks. As future work we are planning to release human evaluations and a public dataset to quantitatively evaluate design alternatives within RecLLM.
_Dialogue Management._ Early CRSs did not rely on natural language but instead on simple forms of preference elicitation by the system and "critiquing" by the user [51, 7, 57]. When conversational recommenders with natural language interfaces first emerged, they were highly rule-based and limited in their ability to handle mixed-initiative interactions [3, 85]. Later, CRSs with model based language generation and understanding appeared, although they tended to still be narrowly focused on issues such as when to ask a question of the user versus showing recommendations, and what questions to ask [16, 110, 83, 112]. Other works have explored learning more flexible dialogue management modules end-to-end, usually by fine-tuning language models on dialogues collected from crowdsource workers [11, 42, 49, 67], although a recent study has indicated that more progress is needed to make these systems practically useful [38]. In some cases the end-to-end approach has been extended to jointly train a separate item recommendation module along with the dialogue [19, 45, 46, 90]. The unified LLM dialogue management architecture from Section 3.1 builds on this prior work by:
* Integrating with a recommendation module that can handle a large scale corpus.
* Learning internal natural language representations such as dialogue state tracking artifacts and self-instructions along with the final dialogue utterances.
* Incorporating natural language inputs such as user profiles and textual representations of recommendation slates from external sources.
_Recommendations / Explanations._ In [105] the authors define a conceptual framework for Retrieval Enhanced Machine Learning; our framework defined in Section 3.2.1 is similar in nature but is simplified and focused on capturing existing approaches to retrieval in the recommendation domain. An overall theme of this paper is how to properly integrate LLMs with external resources, particularly recommendation engines and user profiles, in order to build a better CRS. Some prior research [8, 61, 73, 86] explores tuning conversational systems through human demonstrations to make calls to an external search API, but not for recommendations over a large corpus. More generally, it is a fundamental research area in machine learning to augment deep learning models with external memory [29, 97, 99], and it has been demonstrated that giving LLMs the ability to retrieve from external corpora can improve performance on tasks like question answering and reduce hallucinations [4, 48, 79].
Explainability has been a longstanding concern in recommender systems [111] and a number of works have previously explored jointly generating explanations and recommendations in more traditional recommender systems [11, 12, 13, 58, 91, 113]. Recently, LLMs have been used to explain classifiers and also boost their performance [44, 62, 72]. LLMs have also been used for document ranking [41, 64, 68]; however, we are not aware of previous attempts to apply them to ranking problems in the CRS setting or over large-scale corpora where items are represented by heterogeneous metadata, as we do within RecLLM. What type of explanations a recommender system should share is a difficult question (see e.g. [26, 66]); in RecLLM we currently have the system give post hoc natural language justifications for item slates, although this still leaves open the question of how to verify their correctness.
Figure 10. Tuning Recommendation Modules: (a) Tuning a General Dual Encoder retrieval model. (b) Tuning a Search API Lookup retrieval model, framed as a contextual bandits problem. (c) Tuning a joint ranking / explanation model. The only learning signal comes from ground truth scores, but through self-consistency / bootstrapping tricks it is possible to indirectly tune the explanations as well.
_User Profile._ A number of recent works explore extracting transparent natural language user profiles in order to personalize open-ended chat bots [56, 101, 109], and recommender systems [70, 87, 2]. Our proposal from Section 3.3 is perhaps most closely related to BlenderBot [80], which also breaks the problem down into separate extraction, triggering, retrieval and generation phases.
_Simulation / Large-scale Training._ Various user simulators have been built for training and evaluating recommender systems, often to support experimentation with reinforcement learning algorithms [36, 77, 108, 115]. Recently there has also been a surge in research using LLMs to generate synthetic data for training dialogue systems and text classifiers [18, 59, 69, 103]. Particularly relevant is Unsupervised Data Generation [93], in which an LLM takes a description of a desired label and then generates an input that fits the label. This input / label pair then becomes a synthetic example that can be used for training. Controlled simulation from Section 4.1 employs a similar principle where we condition on a latent variable to generate a simulated session and then use the latent variable as a label for tuning. However, we are attempting to generate entire conversations (partially generated by a system outside the simulator's control) and more sophisticated techniques than basic few-shot prompting are likely required.
In [100, 33, 63] a pretrained language model is tuned to process documents as part of a dual encoder retrieval model, and in [32] this is extended to full conversations as in the Generalized Dual Encoder proposal from Section 4.2. When the ground truth labels do not enable a fully differentiable loss function (such as in Search API Lookup), [65, 82] show it is still effective to tune LLMs for language generation tasks using techniques derived from reinforcement learning. Other works [14, 81] also use reinforcement learning to tune LLMs for open ended or task based dialogue using reward signals inferred from the conversations (e.g. through sentiment analysis or a notion of task completion). The proposal for tuning a dialogue manager LLM in Section 4.2 is an example of Reinforcement Learning from Human Feedback [1, 27, 65], a technique that is often used for teaching LLMs to follow instructions and align better with human values.
## 6. RecLLM Prototype
We have built an initial RecLLM prototype based on the outline shared within this paper. Retrieval is currently implemented via Search API Lookup (see Section 3.2.1) using in-context few-shot learning and a public YouTube search API. LaMDA [86] is currently used as the underlying LLM powering dialogue management, recommendations and explanations, user profile integration and user simulation within the system. In Appendix A we share sample sessions from RecLLM demonstrating some of its core competencies.
## 7. Ethical Considerations
It is our belief that by leveraging large language models within CRSs we can mitigate some challenging ethical problems that have been noted in many recommender systems. RecLLM has the following desirable properties:
* A recommendation module that reasons over the attributes of items and is less reliant on learning from interaction data such as clicks that are noisy and can promote unintentional biases.
* The ability to give natural language justifications for why certain recommendations are being shown, which the user can then validate.
* Opportunity for the user to control their recommendations in a nuanced way through language.
* Transparent personalization through human interpretable and editable user profiles.
On the other hand, our proposed system relies heavily on large language models and therefore inherits all of their well-known problems centered around societal biases learned through pretraining, hallucinations, and expensive use of resources [96]. Various controls are included to constrain the LLMs to the conversational recommender task, but these are unlikely to fully wash away their inherent issues. Significant further progress needs to be made in areas like debiasing, grounding in factuality and efficient serving before we can safely deploy this type of system in a production setting.
## 8. Conclusions and Future Work
In this paper we examine the system architecture of a conversational recommender system and identify areas where large language models can unlock new capabilities, along with the technical challenges that emerge through their use. In particular we reimagine how LLMs can transform dialogue management, retrieval, ranking and user profiles to improve system quality, give the user greater control and increase transparency throughout the system. We focus on how to build a large-scale end-to-end CRS without assuming access to logs data coming from an existing product, by utilizing the generalization abilities of LLMs and generating synthetic training data using LLM-powered user simulators. As a proof of concept we introduce RecLLM and share example conversations highlighting its diverse functionality. Our hope is that this roadmap can accelerate progress towards a world where controllable and explainable CRSs allow users to explore content within a healthier recommender system ecosystem.
Some important items for future work include:
* We are planning the release of human evaluations and a public dataset based on our system to quantitatively evaluate design alternatives for RecLLM and help the community better study CRSs in the multimodal, large-scale setting.
* In this paper we assume a simplified setting where users interact with the system only through conversation. We would like to generalize our system to handle more realistic scenarios where users give feedback through other channels as well such as clicking on items or like buttons. We would also like to consider more complicated recommender system UIs containing hierarchical structures such as item shelves as opposed to just flat slates.
* We have proposed ideas for large-scale tuning of the main system modules based on synthetically generated data, but currently RecLLM relies exclusively on in-context few-shot learning or tuning on small amounts of data collected through crowdsourcing. Successfully proving out these ideas will be critical to properly handle huge item corpora and the full space of possible conversations.
* We would like to support new use cases that naturally arise in a mixed-initiative conversational recommender dialogue, such as question answering over corpus items.
## 9. Acknowledgements
We would like to thank Filip Radlinski, Karan Singhal, Abhinav Rastogi, Raghav Gupta and Yinlam Chow for useful feedback on drafts of this paper.
|
2308.10239 | From Global to Local: Multi-scale Out-of-distribution Detection | Out-of-distribution (OOD) detection aims to detect "unknown" data whose
labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD
detection that recognizes inputs as ID/OOD according to their relative
distances to the training data of ID classes. Previous approaches calculate
pairwise distances relying only on global image representations, which can be
sub-optimal as the inevitable background clutter and intra-class variation may
drive image-level representations from the same ID class far apart in a given
representation space. In this work, we overcome this challenge by proposing
Multi-scale OOD DEtection (MODE), a first framework leveraging both global
visual information and local region details of images to maximally benefit OOD
detection. Specifically, we first find that existing models pretrained by
off-the-shelf cross-entropy or contrastive losses are incompetent to capture
valuable local representations for MODE, due to the scale-discrepancy between
the ID training and OOD detection processes. To mitigate this issue and
encourage locally discriminative representations in ID training, we propose
Attention-based Local PropAgation (ALPA), a trainable objective that exploits a
cross-attention mechanism to align and highlight the local regions of the
target objects for pairwise examples. During test-time OOD detection, a
Cross-Scale Decision (CSD) function is further devised on the most
discriminative multi-scale representations to distinguish ID/OOD data more
faithfully. We demonstrate the effectiveness and flexibility of MODE on several
benchmarks -- on average, MODE outperforms the previous state-of-the-art by up
to 19.24% in FPR, 2.77% in AUROC. Code is available at
https://github.com/JimZAI/MODE-OOD. | Ji Zhang, Lianli Gao, Bingguang Hao, Hao Huang, Jingkuan Song, Hengtao Shen | 2023-08-20T11:56:25Z | http://arxiv.org/abs/2308.10239v1 | # From Global to Local: Multi-scale Out-of-distribution Detection
###### Abstract
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process. Recent progress in representation learning gives rise to distance-based OOD detection that recognizes inputs as ID/OOD according to their relative distances to the training data of ID classes. Previous approaches calculate pairwise distances relying only on global image representations, which can be sub-optimal as the inevitable background clutter and intra-class variation may drive image-level representations from the same ID class far apart in a given representation space. In this work, we overcome this challenge by proposing **Multi-scale OOD DEtection** (MODE), a first framework leveraging both global visual information and local region details of images to maximally benefit OOD detection. Specifically, we first find that existing models pretrained by off-the-shelf cross-entropy or contrastive losses are incompetent to capture valuable local representations for MODE, due to the scale-discrepancy between the ID training and OOD detection processes. To mitigate this issue and encourage locally discriminative representations in ID training, we propose **Attention-based Local PropAgation** (ALPA), a trainable objective that exploits a cross-attention mechanism to align and highlight the local regions of the target objects for pairwise examples. During test-time OOD detection, a **Cross-Scale Decision** (CSD) function is further devised on the most discriminative multi-scale representations to distinguish ID/OOD data more faithfully. We demonstrate the effectiveness and flexibility of MODE on several benchmarks -- on average, MODE outperforms the previous state-of-the-art by up to 19.24% in FPR, 2.77% in AUROC. Code is available at https://github.com/JimZAI/MODE-OOD.
Therefore, it becomes more difficult to effectively distinguish ID-OOD examples, relying only on the _single-scale_ global representations. Furthermore, overwhelming empirical evidence reveals that exploring richer visual information from _multi-scale_ representations is of great importance for understanding discriminative local regions, and semantic categories of the target objects [12, 13]. However, looking at the literature on OOD detection over the past years, the efficiency of exploiting discriminative local representations for achieving better ID-OOD separability has not received any attention so far, not to mention leveraging both global and local representations to maximally benefit OOD detection. This limitation begs the following question:
* Can we take advantage of both global visual information and local region details from images to distinguish ID/OOD examples more effectively?
In this work, we answer the above question by proposing **Multi-scale OOD DEtection** (**MODE**), a first framework that leverages multi-scale (i.e., both global and local) representations for OOD detection. Specifically, as illustrated in Fig. 2, we first find that existing models pretrained by off-the-shelf cross-entropy (CE) or contrastive learning (CL) losses are incompetent to capture valuable local representations for MODE, due to the _scale-discrepancy_ between the ID training and OOD detection procedures. To address this issue, we propose Attention-based Local PropAgation (ALPA), a trainable objective that encourages the mining of locally discriminative representations from images during ID training. As shown in Fig. 3, ALPA exploits contrastive representation learning to promote general-purpose visual information that captures richer and more flexible representations for recognizing ID/OOD data. Yet, instead of directly using global representations to maximize/minimize the agreement of pairwise examples, ALPA adopts a cross-attention mechanism to align and highlight the local regions of the target objects for each pair of examples, making the extracted local representations more discriminative. In test-time OOD detection, a Cross-Scale Decision (CSD) function is further devised for MODE, where the most discriminative multi-scale representations are explored to distinguish ID/OOD examples more faithfully, as shown in Fig. 4.
**Flexibility and Strong Performance.** The proposed MODE is orthogonal to the ID training procedure, as well as to models pretrained in different fashions. More specifically, MODE can not only take ALPA as a plugin to regularize ID training losses, but also directly leverage it to finetune existing pretrained models in an end-to-end manner. We demonstrate the effectiveness and flexibility of MODE on a broad spectrum of baseline methods applied to various network structures. Remarkably, our MODE establishes new state-of-the-art performance on several benchmarks, on average outperforming the previous best scheme KNN [9] by up to **19.24%** in FPR, and **2.77%** in AUROC (see Table I). What's more, when MODE performs test-time OOD detection based only on **5%** ID training data, it still exhibits superior performance to the strong competitor KNN (which relies on **100%** ID training examples), outperforming KNN by **6.08%** in FPR, **0.68%** in AUROC (see Table V).
**Contributions.** To sum up, our contributions are fourfold.
* We propose MODE, a first framework that takes advantage of multi-scale (i.e., both global and local) representations for OOD detection.
* During ID training, we develop ALPA, an end-to-end, plug-and-play, and cross-attention based learning objective tailored for encouraging locally discriminative representations for MODE.
* During test-time OOD detection, we devise CSD, a simple, effective and multi-scale representations based ID-OOD decision function for MODE.
* Comprehensive experimental results on several benchmark datasets demonstrate the effectiveness and flexibility of MODE. Remarkably, our MODE achieves significantly better performance than state-of-the-art methods.
## II Related Work
In this section, we briefly review previous research closely related to our work, including out-of-distribution (OOD) detection, distance-based OOD detection, representation learning for OOD detection, and part-based visual correspondence.
**Out-of-distribution Detection.** Out-of-distribution (OOD) detection, a.k.a. _outlier_ detection [14, 15], _anomaly_ detection [16, 17] or _novelty_ detection [18, 19], aims to recognize unknown inputs from the open world to prevent unpredictable risks. The vast majority of previous works are _test-time_ approaches that rely on the output softmax confidence score of a pretrained model to safeguard against OOD inputs. The insight behind this line of work is that incoming examples with lower output softmax confidence scores are more likely to be OOD [5, 3]. Effective test-time scoring functions include OpenMax [20], MSP [1], LogitNorm [21], DICE [22], Energy [3], ODIN [5], etc. In the recent work [23], a simple yet effective test-time approach named LINE is proposed. By leveraging important neurons for post-hoc OOD detection, LINE yields remarkable test-time OOD detection performance. While the results are impressive, it has been demonstrated that well-performed models can produce arbitrarily high softmax confidence for inputs far away from the training data [8]. Moreover, most of those test-time OOD methods consider the development of effective OOD decision functions alone, whereas our proposed MODE framework considers training-time representation learning and test-time OOD detection simultaneously.
Fig. 2: Performance degradation (blue bar) caused by the **scale-discrepancy** between ID training (on global representations) and OOD detection (_on local_ representations) – CIFAR-100 (ID) with ResNet-34. The model is pretrained by cross-entropy (CE) or contrastive learning (CL) loss. The results are the average on the five common OOD datasets shown in Section IV-A. For FPR (resp. AUROC), smaller (resp. higher) values indicate better performance.
**Distance-based OOD Detection.** The core concept of distance-based OOD detection is to calculate a distance metric between the input examples and the training data. Testing examples are recognised as OOD (resp. ID) data if they are relatively far away from (resp. close to) training examples of ID classes. With the recent advances in representation learning, various kinds of distance-based OOD detection algorithms have been employed. Among those methods, the Mahalanobis distance-based methods achieve remarkable performance [2, 24]. However, the success of those methods is built on a strong distributional assumption about the underlying representation space, which may not always hold in reality. To address this limitation, Sun et al. proposed KNN [9], a first study exploring the effectiveness of using a \(k\)-nearest neighbor search over the penultimate layer representations for OOD detection. In contrast to the Mahalanobis distance-based methods, KNN [9] does not impose any distributional assumptions on the underlying representation space, making it simpler, more flexible and more effective. In [25], a novel representation learning framework coined CIDER is presented to exploit hyperspherical embeddings for distance-based OOD detection. Recently, utilizing large vision-language pre-trained models like CLIP [26] for multi-modal downstream tasks has achieved remarkable success. By matching visual features with textual class prototypes in the CLIP model, an effective test-time method coined MCM is proposed for distance-based OOD detection in [27]. Despite the encouraging advantages of distance-based OOD detection, we observe that the background clutter as well as the large intra-class variation may drive the image-level representations from the same ID class far apart in a given representation space. As a result, it becomes more difficult to correctly distinguish ID/OOD examples based only on the pairwise distances calculated from global image representations. Moreover, it has been widely demonstrated that a global average pooled image representation can destroy image structures and result in the loss of a substantial amount of discriminative local representations of the target objects [28, 29]. In this work, for the first time, we exploit both global visual information and local region details from images to calculate the distance between each pair of examples for maximally benefiting distance-based OOD detection.
**Representation Learning for OOD Detection.** A number of methods have attempted to improve the compactness of intra-class examples during the ID training stage, so as to achieve better test-time OOD detection performance [11, 6, 30]. Contrastive representation learning [31, 32, 33, 34, 35, 36], which targets learning a discriminative representation space where positive samples are aligned while negative ones are dispersed, has been shown to improve OOD detection [10, 11, 37]. In particular, Tack et al. [11] proposed a scheme named Contrasting Shifted Instances (CSI) to learn a representation well-suited for novelty detection. In [10], the authors present an effective outlier detector based on unlabeled ID data along with the self-supervised representation learning technique. Recent studies [38, 39] also revealed that improving the closed-set (i.e. ID) classification accuracy is the key to further boosting OOD detection performance. Another promising line of work improves ID training by conducting training-time regularization [3, 40, 41]. Most of those regularization approaches, however, require the availability of abundant simulated OOD data, which may not be feasible in practice. Surprisingly, the obtained quantitative and qualitative results reveal that, relying only on the ID training data, our devised loss function ALPA can shape the distributions of different classes to be more compact, benefiting both OOD detection and ID classification tasks.
Fig. 3: An overview of _training-time Attention-based Local Propagation_ (ALPA) that encourages the mining of discriminative local representations for MODE. First, the feature backbone \(\Psi_{\theta}\) takes each image \(\mathbf{x}\) as input to produce the local representations (a.k.a. dense features) \(\mathbf{L}=\Psi_{\theta}(\mathbf{x})\in\mathbb{R}^{HW\times E}\). Then, three linear projection heads \(\Omega_{k}\), \(\Omega_{q}\) and \(\Omega_{v}\) transform \(\mathbf{L}\) to a lower-dimensional space to obtain the key \(\mathbf{K}=\Omega_{k}(\mathbf{L})\), the query \(\mathbf{Q}=\Omega_{q}(\mathbf{L})\) and the value \(\mathbf{V}=\Omega_{v}(\mathbf{L})\), respectively, where \(\mathbf{K},\mathbf{Q},\mathbf{V}\in\mathbb{R}^{HW\times e}\). Next, a cross-attention mechanism is applied to align the \(e\)-dimensional local representations of pairwise examples, so as to highlight the target object regions. Finally, the parameters of \(\Psi_{\theta}\) together with \(\Omega_{k}\), \(\Omega_{q}\) and \(\Omega_{v}\) are updated by maximizing (resp. minimizing) the agreement of the aligned local representations of each pair of examples from the same class (resp. different classes).
**Attention-based Local Feature Alignment.** Local feature alignment [42, 43, 44] has emerged as a powerful paradigm enabling meaningful representations by matching local features of images (or image-text pairs), and has achieved great success in a wide spectrum of tasks, such as domain adaptation [45, 46], image-text matching [47, 48], and few-shot learning [28, 49, 50]. Among those methods, the idea of utilizing cross-attention to enhance feature alignments has been extensively studied. Particularly, CDTrans [45] applies cross-attention and self-attention for source-target domain alignment to learn discriminative domain-invariant and domain-specific features simultaneously. SCAN [47] highlights the alignment of image regions and words in a sentence in cross-attention modules to learn modality-invariant features. FEAT [49] adapts the image features produced by deep convolutional neural networks (CNNs) to the target few-shot task with a set-to-set function (i.e., Transformer [51]), yielding discriminative and informative features. Different from those works that leverage task-specific supervision to encourage the interaction between local features, the devised ALPA formulates the learning objective as a contrastive loss, where the cross-attention module takes the output dense features of CNNs as input to maximize (resp. minimize) the agreement of each pair of samples from the same ID class (resp. different ID classes). In addition, the goal of most of those works is to learn a shared feature space to align features from different domains [45] (or modalities [47]), while our ALPA aims to learn a discriminative feature space where a suitable threshold or compact decision boundary can be established to distinguish ID/OOD data accurately. To the best of our knowledge, this work is the first to use the idea of attention-based local feature alignment to promote locally discriminative representations in OOD detection.
**Multi-scale Representation Learning.** Multi-scale representations are of great importance to plenty of vision tasks such as classification [52, 53], retrieval [54, 55] and detection [12, 56], significantly boosting the performance achieved on single-scale (i.e., global) representations in those fields. Unlike most works in those fields that use multi-scale representations to recognize ID categories, in this work we for the first time leverage multi-scale representations to enable better ID-OOD separability in OOD detection, which is more challenging due to the following reasons. On the one hand, relying only on the training data of ID categories, the learned multi-scale representations may not be generalizable enough to recognize parts, objects, and their surrounding context of OOD data. On the other hand, the sample space of potential OOD data can be prohibitively large, even severely overlapped with the sample space of ID categories [40, 57], making it difficult to establish a decision boundary on the extracted multi-scale representations of ID categories and OOD data at test time.
## III Methodology
In this section, we elaborate on our MODE framework. Before that, we introduce some important preliminaries.
### _Preliminaries_
When dealing with supervised multi-class classification, we typically denote \(\mathcal{X}\), \(\mathcal{Y}\) as the input, output space, respectively. Let \(P\) be a distribution over \(\mathcal{X}\times\mathcal{Y}\), and \(f:\mathcal{X}\mapsto\mathbb{R}^{|\mathcal{Y}|}\) be a neural network that takes input the examples drawn from \(P\) to output a logit vector, which is then used to predict the label of an input example. Denote \(\mathbb{D}^{\textbf{in}}=\{(\textbf{x}_{i},y_{i})\}_{i=1}^{s}\) as the marginal distribution of \(P\) for \(\mathcal{X}\), which represents the distribution of in-distribution (ID) data. During test-time OOD detection, the environment can present a distribution \(\mathbb{D}^{\textbf{out}}\) over \(\mathcal{X}\) of OOD data, whose label space \(\mathcal{Y}^{\textbf{out}}\) s.t. \(\mathcal{Y}^{\textbf{in}}\bigcap\mathcal{Y}^{\textbf{out}}=\phi\).
**Out-of-distribution Detection.** Essentially, OOD detection can be viewed as a binary classification task, where the goal is to reject the "unknown" inputs to prevent any potential risk. More specifically, to determine whether an example \(\textbf{x}\in\mathcal{X}\) belongs to \(\mathbb{D}^{\textbf{in}}\) or not (i.e. \(\mathbb{D}^{\textbf{out}}\)), the decision function can be made via a level set estimation:
\[\Gamma_{\varepsilon}(\textbf{x})=\left\{\begin{aligned} \textbf{ID}& \quad S(\textbf{x})\geq\varepsilon\\ \textbf{OOD}&\quad S(\textbf{x})<\varepsilon\end{aligned}\right., \tag{1}\]
where the input example **x** is classified as ID (resp. OOD) if its obtained score \(S(\textbf{x})\) is higher (resp. lower) than the threshold \(\varepsilon\). In practice, \(\varepsilon\) is typically selected so that a high fraction of ID data (e.g. 95%) is correctly classified.
**KNN-based OOD Detection.** Recent advances in representation learning give rise to distance-based OOD detection that represents image data in an appropriate representation space and leverages a distance function to decide whether testing examples are ID/OOD according to their relative distances to the seen examples of ID classes. In particular, Sun et al. proposed KNN [9] that established state-of-the-art performance using a \(k\)-nearest neighbor (coined \(k\)-NN in the following) search over global image representations for OOD detection.
Let \(\Psi_{\theta}\) be a feature backbone (parameterized by \(\theta\)) mapping the input **x** to a global average pooled representation \(\textbf{g}\in\mathbb{R}^{E}\). KNN-based OOD detection normalizes the global representation \(\textbf{z}=\textbf{g}/||\textbf{g}||_{2}\) for distance calculation. Before testing an example \(\tilde{\textbf{z}}\), we first obtain the representation collection of ID training data, denoted as \(\mathbb{S}=(\textbf{z}_{1},...,\textbf{z}_{s})\). During test-time OOD detection, we calculate the Euclidean distances \(||\textbf{z}_{i}-\tilde{\textbf{z}}||_{2}\) w.r.t. the representations \(\textbf{z}_{i}\in\mathbb{S}\). Denoting the ID data reordered in increasing distance as \(\mathbb{S}^{\prime}=(\textbf{z}_{(1)},...,\textbf{z}_{(s)})\), the decision function for KNN-based OOD detection takes the form of
\[\Gamma_{\varepsilon}(\tilde{\textbf{z}};k)=\left\{\begin{aligned} \textbf{ID}&\quad r_{k}(\tilde{\textbf{z}})<\varepsilon\\ \textbf{OOD}&\quad r_{k}(\tilde{\textbf{z}})\geq \varepsilon\end{aligned}\right., \tag{2}\]
where \(r_{k}(\tilde{\textbf{z}})=||\textbf{z}_{(k)}-\tilde{\textbf{z}}||_{2}\) indicates the distance to the \(k\)-th nearest neighbor. The threshold \(\varepsilon\) does not depend on OOD data, and can be selected so that a large proportion of ID data (e.g. 95%) is correctly classified in practice.
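A brute-force sketch of this decision rule is given below; the value of \(k\), the 95% quantile rule for picking \(\varepsilon\) from a held-out ID split, and the use of plain numpy instead of an approximate nearest-neighbor index are illustrative choices.

```python
import numpy as np

def knn_ood_scores(train_feats, test_feats, k=50):
    """Distance to the k-th nearest normalized ID training feature, r_k (Eq. 2).

    Larger r_k means the example is farther from the ID data and hence more
    likely OOD. A nearest-neighbor index would replace the brute-force search
    at scale; the dense distance matrix below is only viable for small sets.
    """
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    dists = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=-1)
    return np.sort(dists, axis=1)[:, k - 1]          # r_k for each test example

def ood_decision(r_k, id_holdout_r_k, tpr=0.95):
    """Pick epsilon so that ~95% of held-out ID data is accepted, then classify."""
    eps = np.quantile(id_holdout_r_k, tpr)
    return np.where(r_k < eps, "ID", "OOD")
```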
**Contrastive Representation Learning.** We take advantage of contrastive representation learning [58] to promote general-purpose visual information that captures richer and more flexible representations usable for recognizing ID/OOD data. Concretely, we first project the global representation of **x**, **g**, into a lower dimensional space with a projection head \(h\), i.e., \(h(\textbf{g})\in\mathbb{R}^{e},e\ll E\). Let \(\psi(h(\textbf{g}_{i}),h(\textbf{g}_{j}))\) be the cosine similarity of every pair of images in the projected space. We sample a batch of \(N\) pairs of images and labels from the training data of ID classes, and augment every image in the batch to obtain \(2N\) labeled data points. The loss function of
supervised contrastive representation learning can therefore be expressed as
\[\mathcal{L}_{con}=\sum_{i=1}^{2N}\frac{1}{2N_{y_{i}}-1}\sum_{j=1}^{2N} \mathbb{1}_{i\neq j}\cdot\mathbb{1}_{y_{i}=y_{j}}\cdot\ell_{ij}, \tag{3}\]
and we have
\[\ell_{ij}=-\log\frac{\exp(\psi(h(\textbf{g}_{i}),h(\textbf{g}_{j}))/\tau)}{\sum_{t=1}^{2N}\mathbb{1}_{i\neq t}\cdot\exp(\psi(h(\textbf{g}_{i}),h(\textbf{g}_{t}))/\tau)}, \tag{4}\]
where \(\mathbb{1}\) is the indicator function, \(N_{y_{i}}\) is the number of samples in the batch sharing the label \(y_{i}\), and \(\tau\) is a scalar temperature parameter. The above learning objective \(\mathcal{L}_{con}\) introduces the label information to avoid pulling augmented views from the same class apart, enabling the mining of more discriminative and robust representations.
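For reference, a compact implementation of Eqs. (3)-(4) over a batch of \(2N\) projected views might look as follows (averaged over anchors rather than summed); this is a sketch rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(proj, labels, tau=0.1):
    """Supervised contrastive loss over 2N projected views (Eqs. 3-4).

    proj:   [2N, e] projections h(g) of the augmented batch
    labels: [2N] class labels (each original image contributes two views)
    """
    z = F.normalize(proj, dim=-1)
    sim = z @ z.t() / tau                                   # cosine similarity / tau
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float("-inf"))              # exclude the i == t terms
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)   # log of Eq. (4)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # For each anchor i, average -log_prob over its 2N_{y_i}-1 positives.
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```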
### _Multi-scale OOD Detection (MODE)_
Our goal in this work is to take advantage of multi-scale (i.e., both global and local) representations from images to distinguish ID/OOD examples more effectively. Particularly, _local representations_ are the output feature maps before the final global average pooling layer of convolutional neural networks (CNNs). For an input image **x**, we denote the obtained \(HW\)\(E\)-dimensional local representations as \(\textbf{L}=\Psi_{\theta}(\textbf{x})\in\mathbb{R}^{HW\times E}\), and the global representation as \(\textbf{g}=\nu(\textbf{L})\in\mathbb{R}^{E}\), where \(\Psi_{\theta}\) denotes a feature backbone, and \(\nu:\mathbb{R}^{HW\times E}\mapsto\mathbb{R}^{E}\) is an additional average pooling layer. The multi-scale representations for **x** thus can be expressed as \(\mathbb{M}=\{\textbf{g},\textbf{L}\}\).
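A minimal sketch of extracting \(\mathbb{M}=\{\textbf{g},\textbf{L}\}\) from a standard CNN backbone is given below; the choice of ResNet-34 and the reshape convention are illustrative assumptions.

```python
import torch
import torchvision

# Minimal sketch of extracting multi-scale representations M = {g, L} from a
# CNN backbone; ResNet-34 and the reshape convention are illustrative choices.
backbone = torch.nn.Sequential(
    *list(torchvision.models.resnet34(weights=None).children())[:-2]
)

x = torch.randn(1, 3, 224, 224)                      # an input image x
feat = backbone(x)                                   # [1, E, H, W] dense feature map
B, E, H, W = feat.shape
L = feat.permute(0, 2, 3, 1).reshape(B, H * W, E)    # local representations, [B, HW, E]
g = L.mean(dim=1)                                    # global representation g = nu(L), [B, E]
```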
Intuitively, we can directly borrow existing pretrained CNNs to generate multi-scale representations for MODE. Unfortunately, due to the _scale-discrepancy_ between the ID training and OOD detection processes, models learned by off-the-shelf Cross-Entropy (CE) or Contrastive Learning (CL) losses are unable to capture discriminative local representations for recognizing OOD data, as demonstrated in Fig. 2. This observation is also consistent with abundant empirical evidence that an average pooled image representation can destroy image structures and lose a substantial amount of discriminative local representations of the target objects during training [59, 28, 60]. Once the model has been trained, this lost local information is difficult to recover. This challenge raises one important question:
Can we develop a model-agnostic approach to encourage locally discriminative representations in ID training, so as to overcome the scale-discrepancy issue and benefit MODE during testing?
**Attention-based Local Propagation (ALPA).**
dubbed ALPA-finetune. To sum up, the ALPA-enhanced ID training objective can take the form of
\[\mathcal{L}=\left\{\begin{aligned} \mathcal{L}_{alpa}&\texttt{ALPA-finetune}\\ \mathcal{L}_{base}+\lambda\mathcal{L}_{alpa}&\texttt{ALPA-train} \end{aligned}\right., \tag{9}\]
where \(\mathcal{L}_{base}\) indicates the learning objective of the existing ID training (or representation learning) procedure, and \(\lambda\) is a balance weight controlling the contribution of ALPA for ALPA-train.
**Remark 2**: **: Key Differences Between ALPA and DenseCL.** The work most closely related to ALPA is DenseCL [60], which implements contrastive representation learning by formulating a dense-feature-level contrastive loss based on different views of images. We highlight the key differences between ALPA and DenseCL as follows. On the one hand, DenseCL designs the loss function in an unsupervised learning setting, while our ALPA leverages label information to avoid pulling augmented views from the same class apart. On the other hand, DenseCL uses an identical \(1\times 1\) convolution layer as a projection head to generate lower-dimensional dense feature vectors for individual examples, while our ALPA exploits a cross-attention mechanism to highlight the object regions of pairwise examples, making the learned representations more discriminative and robust. Nevertheless, DenseCL [60] does bring some inspiration to our method.
**Remark 3**: **: Time Complexity of ALPA.** The time complexity of our ALPA is \(\mathcal{O}(\Pi^{2})\), where \(\Pi=N\times HW\times e\). Thus, we can **i)** reduce the batch size \(N\), **ii)** apply an extra average pooling step on \(\textbf{L}\in\mathbb{R}^{HW\times E}\) (to reduce \(HW\)), and **iii)** set a smaller dimensionality \(e\) in attention heads to avoid excessive computational cost in practice.
**OOD Detection with Cross-scale Decision (CSD).** In the ID training procedure, we propose ALPA, an end-to-end, plug-and-play, and cross-attention based loss function for encouraging locally discriminative representations for MODE. To maximally benefit test-time OOD detection, we develop a Cross-Scale Decision (CSD) function to distinguish ID/OOD examples more faithfully, relying on the relative distances of the most discriminative multi-scale representations.
Mathematically, let \(\Psi_{\theta^{*}}\) be the ALPA-enhanced feature backbone. We first apply \(\Psi_{\theta^{*}}\) and \(\nu()\) to produce the multi-scale representations for the training data \(\mathbb{D}^{\textbf{tr}}=\{\textbf{x}_{1},...,\textbf{x}_{s}\}\), denoted as \(\mathbb{M}^{\textbf{tr}}=\{\textbf{g}_{1},...,\textbf{g}_{s},\textbf{L}_{1}^{1},...,\textbf{L}_{1}^{HW},...,\textbf{L}_{s}^{1},...,\textbf{L}_{s}^{HW}\}=\{\textbf{m}_{1}^{\textbf{tr}},...,\textbf{m}_{(HW+1)\times s}^{\textbf{tr}}\}\), where \(\textbf{m}_{i}^{\textbf{tr}}\in\mathbb{R}^{E}\). Similarly, denote the multi-scale representations of the \(i\)-th testing example as \(\mathbb{M}_{i}^{\textbf{test}}=\{\textbf{m}_{1}^{\textbf{test}},...,\textbf{m}_{HW+1}^{\textbf{test}}\}\). For each \(\textbf{m}_{j}^{\textbf{test}}\), we search over \(\mathbb{M}^{\textbf{tr}}\) to determine its distance to the \(k\)-th nearest neighbor in a normalized representation space (as in KNN [9]), denoted as \(r_{k}(\textbf{m}_{j}^{\textbf{test}})\). The decision function of CSD for distinguishing ID/OOD data takes the form of
\[\Gamma_{\varepsilon}(\mathbb{K}_{i}^{(k)})=\left\{\begin{aligned} \textbf{ID}&\quad\textbf{min}(\mathbb{K}_{i}^{(k)})< \varepsilon\\ \textbf{OOD}&\quad\textbf{min}(\mathbb{K}_{i}^{(k)}) \geq\varepsilon\end{aligned}\right., \tag{10}\]
where \(\mathbb{K}_{i}^{(k)}=\{r_{k}(\textbf{m}_{1}^{\textbf{test}}),...,r_{k}(\textbf{m}_{HW+1}^{\textbf{test}})\}\), and \(r_{k}(\textbf{m}_{j}^{\textbf{test}})\) is the Euclidean distance between the representation \(\textbf{m}_{j}^{\textbf{test}}\in\mathbb{M}_{i}^{\textbf{test}}\) and its \(k\)-th nearest neighbor searched in \(\mathbb{M}^{\textbf{tr}}\). In like manner, the threshold \(\varepsilon\) can be selected so that a large proportion of ID data (e.g., 95%) is correctly classified.
**Remark 4**: **: Strategies for Speeding Up CSD.** In practice, to avoid excessive time cost, we follow KNN [9] to **i)** store the multi-scale representations of all examples in a key-value map, and **ii)** use the **Faiss** library [64] to speed up the \(k\)-NN search process. In concrete terms, we employ **faiss.IndexFlatL2** as the indexing scheme with Euclidean distance. Moreover, as illustrated in Fig. 5, we further reduce the number of extracted local representations for every image from \(HW\) to \(HW/4+1\), by performing a _neighbor aggregation_ step (i.e., a \(2\times 2\) average pooling step) on every four nearest local representations at different positions. Quantitative analysis for the computational cost of our designed CSD at inference is presented in Section IV-E.
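A rough sketch of this test-time search is given below. It is our own illustration rather than the released code; it assumes the multi-scale representations have already been extracted and stacked, and it relies on the fact that, to our understanding, faiss.IndexFlatL2 returns squared Euclidean distances (a monotone transform, so the min-based rule of Eq. (10) is unaffected).

```python
import numpy as np
import faiss  # nearest-neighbor search library used for speeding up CSD

def build_id_index(id_multiscale_feats):
    """id_multiscale_feats: (num_ID_entries, E) stack of global and local++ ID representations."""
    x = np.ascontiguousarray(id_multiscale_feats, dtype=np.float32)
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    index = faiss.IndexFlatL2(x.shape[1])   # exact search under (squared) Euclidean distance
    index.add(x)
    return index

def csd_score(index, test_multiscale_feats, k=50):
    """Eq. (10): minimum, over a test image's scales, of the distance to the k-th nearest ID entry."""
    q = np.ascontiguousarray(test_multiscale_feats, dtype=np.float32)
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    dists, _ = index.search(q, k)            # (num_scales, k) squared distances, ascending
    return np.sqrt(dists[:, k - 1]).min()    # min over scales of r_k; OOD if >= epsilon
```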
**Flexibility of our MODE Framework.** Our proposed MODE framework is orthogonal to the ID training procedure, as well as models pretrained with different losses. In this work, we consider two versions of our MODE according to how ALPA promotes locally discriminative representations, i.e., **MODE-T** = ALPA-train + CSD, **MODE-F** = ALPA-finetune + CSD. In practice, we can flexibly decide whether to adopt MODE-T or MODE-F depending on the current training stage of the model, i.e., MODE-T if the model has not yet started training, MODE-F if the model is already pretrained.
## IV Experiments
In this section, we extensively test our proposed MODE on regularly used OOD benchmarks, feature backbones and
Fig. 4: An overview of _test-time_ OOD detection with **Cross-scale Decision** (CSD) where the most discriminative multi-scale (i.e., both global and local) representations are explored to distinguish ID/OOD examples more faithfully.
Fig. 5: Illustrations of **(a)** Global, **(b)** Local, and **(c)** Neighbor Aggregated Local (i.e., Local++) representations. During test-time OOD detection, CSD leverages Global and Local++ representations for computational efficiency.
evaluation metrics. Specifically, we first scrutinize the effectiveness of our MODE on common benchmarks, and then move a step further to evaluate it on the large-scale ImageNet benchmark. Ablation studies and visualization results are shown at the end.
**Evaluation Metrics.** We follow the widely-employed setup in the literature and use the following evaluation metrics. FPR (a.k.a. FPR95) [65]: the false positive rate of OOD examples when the true positive rate of ID examples reaches 95%. AUROC [1]: the area under the receiver operating characteristic curve. Both FPR and AUROC measure OOD detection performance; neither requires manually tuning the threshold \(\varepsilon\) at inference, since \(\varepsilon\) is determined from the classification results on testing ID samples. To assess the ID training (or representation learning) performance, we also report ID ACC: the classification accuracy on ID examples.
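For clarity, the two detection metrics can be computed as in the following NumPy sketch (our own illustrative implementation; the scores are OOD scores such as the k-NN distance, where smaller means more ID-like).

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores, tpr=0.95):
    """FPR95: fraction of OOD examples accepted as ID at the threshold keeping `tpr` of ID examples."""
    eps = np.quantile(id_scores, tpr)        # threshold determined from ID scores only
    return np.mean(ood_scores < eps)

def auroc(id_scores, ood_scores):
    """AUROC as the probability that a random OOD example scores higher than a random ID example."""
    diff = ood_scores[:, None] - id_scores[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
```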
### _Evaluation on Common Benchmarks_
**Datasets.** Following the common benchmarks in OOD detection, we adopt CIFAR-10 and CIFAR-100 as in-distribution (ID) datasets, and they are split in the standard way for ID training. In test-time OOD detection, Textures [66], SVHN [67], Places365 [68], LSUN-C [69] and iSUN [70] are used as OOD datasets for performance evaluation. Specifically, Places365 consists of images from 365 scene categories, SVHN contains colored street-number images, and iSUN provides a large collection of natural scenes. Besides, Textures is made up of images in the wild from 47 terms, and LSUN contains millions of images from 10 scenes and 20 object categories.
**Implementation Details.** We follow the common practice to use ResNet-18 as the feature backbone for CIFAR-10, and ResNet-34 for CIFAR-100 in our experiments. We obtain the CE-trained and CL-trained models based on the open-source
implementations of SupCE and SupCon in [58]1, respectively. We update the networks using stochastic gradient descent with momentum 0.9, and the weight decay is set to 0.0001. In particular, the balance weight \(\lambda\) in Eq. 9 is set to 1.0 for MODE-T (or ALPA-train). The initial learning rate \(\eta\) is set to 0.1 for MODE-F (or ALPA-finetune). The dimensionality \(e\) in attention heads takes the value of 80. The temperature \(\tau\) in \(\mathcal{L}_{alpa}\) is 0.1. The \(k\)-NN hyperparameter \(k\) is 50. We found that the batch size \(N\) has a negligible effect on performance within a certain range; we therefore set \(N=128\) to avoid excessive computational cost. We carefully adjust the critical hyperparameters \(\lambda\), \(\eta\), and \(k\) in our ablation studies.
Footnote 1: [https://github.com/HobbitLong/SupContrast](https://github.com/HobbitLong/SupContrast)
**Experimental Results.** The experimental results are reported in Table I, where a broad spectrum of state-of-the-art OOD detection approaches are compared. Please refer to [9] for more details of those approaches. In particular, we divide those approaches into two groups, depending on whether the pretrained model is learned by cross-entropy (CE) loss or contrastive learning (CL) loss. From the reported results in the table, we highlight the following observations. **First**, OOD detection performance is substantially improved with our proposed MODE. On average, with CE-trained (resp. CL-trained) models, our methods outperform the strong competitor KNN [9] by a maximum of **19.24%** (resp. **9.69%**) in terms of FPR, and **2.77%** (resp. **1.76%**) in terms of AUROC. **Second**, for the two versions of MODE, MODE-F (i.e. ALPA-finetune + CSD) outperforms MODE-T (i.e. ALPA-train + CSD) in the vast majority of cases, suggesting that our ALPA-finetune does not suffer from catastrophic forgetting, i.e., overwriting the previously learned knowledge of the pretrained models. The OOD detection performance curves depicted in Fig. 7 further confirm this conclusion: MODE-F continually improves the OOD detection performance of the pretrained baseline models. **Third**, the performance improvement of our MODE on CE-trained methods is significantly better than that on CL-trained methods. One possible reason is that our devised ALPA, as a variant of CL loss, is able to complement vanilla CE loss to mine general-purpose visual information that captures richer and more flexible representations for recognizing ID/OOD data. More qualitative results for this problem are systematically discussed in Section IV-F and demonstrated in Fig. 10. In a nutshell, the achieved results in Table I show that our MODE framework is agnostic to ID training losses, as well as to models pretrained in different fashions.
### _Evaluation on Large-scale ImageNet Benchmark_
**Datasets.** We move a step further to demonstrate the effectiveness and flexibility of our method by evaluating it on a large-scale OOD detection task using ImageNet [71] as the ID dataset. Following the common setup in ImageNet-based OOD detection [72, 9], we evaluate on four OOD datasets, namely subsets of Textures [66], Places365 [68], iNaturalist [73] and SUN [74], whose categories do not overlap with those of ImageNet.
**Implementation Details.** We use a ResNet-50 feature backbone for evaluation on the ID dataset ImageNet. Here, instead of meticulously training the backbone from scratch on ImageNet, we directly borrow the CL-trained ResNet-50 model from the public repository of KNN [9]2 for efficiency. Note that we only test our method on the CL-trained model, since the CE-trained model is not publicly available yet. During ID training, we iteratively finetune the pretrained model using our ALPA-finetune for 300 epochs, where the batch size \(N=64\), and the initial learning rate \(\eta=0.1\) with cosine scheduling. Other hyperparameters are set the same as in Section IV-A. In addition, following KNN [9], we sample a tiny ratio (\(1\%\)) of training data from ImageNet for nearest neighbor search during test-time OOD detection for our method and LINE [23].
Footnote 2: [https://github.com/deeplearning-wisc/knn-ood](https://github.com/deeplearning-wisc/knn-ood)
**Experimental Results.** The achieved OOD detection performance for different approaches over the four OOD datasets is reported in Table II. From the table, we have the following findings. **First**, our method (MODE-F) significantly outperforms those strong competitors across the four OOD datasets, and establishes new state-of-the-art results. **Second**, it is worth noting that SSD [10] obtains inferior performance to both KNN [9] and our method. This is probably because the increased data complexity of large-scale benchmarks makes the class-conditional Gaussian assumption less viable for effective OOD detection. In contrast, both KNN and our
method are free of distributional assumptions and therefore do not suffer from this limitation. **Third**, after finetuning the pretrained model (i.e. ResNet-50), our MODE-F (more concretely, the designed CSD) randomly samples a tiny ratio (\(1\%\)) of training data for nearest neighbor search as in KNN. In this case, our method still consistently outperforms other competitors across all OOD datasets, revealing that the ALPA-enhanced multi-scale representations are more informative and transferable.
### _Evaluation on Clean OOD Benchmarks_
As revealed in [75], most widely-used OOD datasets are noisy: the test OOD data is mixed with a large proportion (up to 50% in some cases) of ID examples from ImageNet-1k. To further show the effectiveness of our proposed method, we also compare our method with the strong baseline KNN [9] on two _clean_ OOD datasets: OpenImage-O [76] and NINCO [75]. The obtained results are reported in Table III, where the experimental setup is the same as in Section IV-B. From the results in the table, our method consistently outperforms the competitor KNN on both datasets.
### _Ablation Studies_
In this section, we first conduct ablative analysis to validate the effectiveness of designed components of our MODE in Table IV. Then, we analyze the effects of **i)** multi-scale representations, **ii)** balance weight \(\lambda\), **iii)** learning rate \(\eta\), and **iv)**\(k\)-NN hyperparameter \(k\) to deeply investigate our MODE.
**Effectiveness of the Designed Components of MODE.** Here, we seek to answer the following two questions: (1) Can our training-time ALPA encourage locally discriminative representations during ID training? (2) Can our test-time CSD further boost test-time OOD detection? To this end, we conduct experiments on the ResNet-34 feature backbone and use CIFAR-100 as the ID dataset. The average results w.r.t. ID ACC, FPR and AUROC on five common OOD benchmarks are reported in Table IV. We have the following observations. **First**, from the cells of "ID ACC", it is obvious that both ALPA-train and ALPA-finetune improve the in-distribution classification performance, which indicates that the designed ALPA benefits the learning of discriminative representations from ID classes. This is in accordance with the historical evidence that local representations (i.e. dense features) inside images can provide richer and more flexible information about the target objects [28, 29, 60]. More qualitative results are presented in Fig. 11 and discussed in Section IV-F. **Second**, from the cells of "\(+\)CSD", we can see that our CSD significantly improves the test-time OOD detection results with multi-scale representations learned by both ALPA-train and ALPA
Fig. 7: OOD detection performance curves of different approaches at various ID training epochs – CIFAR-100 (ID) with ResNet-34. The reported results are the average across five common OOD datasets. For FPR (resp. AUROC), smaller (resp. higher) is better.
Fig. 6: Effect of multi-scale (i.e., global, local and local++) representations on performance – CIFAR-100 (ID) with ResNet-34. The results are the average across five common OOD datasets. For FPR (resp. AUROC), smaller (resp. higher) values are better.
finetune, suggesting the efficacy of our CSD function. **Third**, CSD does not result in significant performance gains on single-scale image representations learned by off-the-shelf cross-entropy (CE) or contrastive learning (CL) losses, which proves the necessity of addressing the scale-discrepancy between ID training and OOD detection, and also confirms the effectiveness of our ALPA for tackling this problem. **Fourth**, from the cells of "FPR" and "AUROC", it can be observed that each of ALPA-train + CSD and ALPA-finetune + CSD outperforms the baseline methods (i.e., vanilla CE/CL-trained models with KNN-based OOD detection [9]) by large margins, clearly demonstrating the effectiveness of the designed components (i.e. training-time ALPA and test-time CSD), as well as the flexibility of our proposed MODE framework.
**Effect of Multi-scale Representations.** During test-time OOD detection, our designed CSD function explores the most discriminative multi-scale (i.e., both global and local) representations to distinguish ID/OOD examples more faithfully. The extracted local representations of each input image **x** are concretely the output feature maps before the final global average pooling layer of convolutional networks (or feature backbones), denoted as \(\textbf{L}\in\mathbb{R}^{HW\times E}\). Therefore, an \(M\times M\) image can be mapped into \(HW\) local representations (or split image regions), with a corresponding region size of \(M/H\times M/W\). That means the larger the number of local representations, the smaller the size of a region. As aforementioned in Section III-B and depicted in Fig. 5, we can reduce the number of extracted local representations for every image from \(HW\) to \(HW/4+1\) (concretely, from \(4\times 4\) to 5 in our experiments), by performing a neighbor aggregation procedure on every four nearest local representations at different positions, and obtain the neighbor aggregated local representations, called local++ representations. In Fig. 6, we investigate the effect of those multi-scale (i.e., global, local and local++) representations on the OOD detection performance of our MODE framework. We have several important observations from the figure. **First**, compared with the ALPA-enhanced global representations, the ALPA-enhanced local representations are more beneficial for improving OOD detection performance, which reveals that our ALPA enables locally discriminative representations that capture richer and more flexible information for recognizing ID/OOD data. **Second**, leveraging both global and local representations from images can further boost the results. This is in accordance with our intention that exploiting multi-scale representations from images helps to maximally benefit OOD detection. **Third**, the combination of global and local++ representations achieves the best performance in most cases. One possible reason is that when an image is divided into a larger number of (i.e., \(HW\)) local representations, the size of every corresponding region becomes smaller and, as a consequence, some of those split regions fail to capture the target objects. Therefore, for higher performance and computational efficiency, in our experiments the multi-scale representations for each image specifically include one global representation and five local++ representations.
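The neighbor aggregation step that produces the local++ representations can be sketched as follows; this is an illustrative NumPy version, and the exact bookkeeping behind the \(HW/4+1\) entries reported above may differ slightly from this simplification.

```python
import numpy as np

def localpp(local_feats, H=4, W=4):
    """2x2 average pooling of the HW local representations (cf. Fig. 5(c))."""
    E = local_feats.shape[-1]
    grid = local_feats.reshape(H, W, E)
    # Aggregate every non-overlapping 2x2 neighborhood of spatial positions.
    pooled = grid.reshape(H // 2, 2, W // 2, 2, E).mean(axis=(1, 3))
    return pooled.reshape(-1, E)             # HW/4 aggregated local representations

def multiscale(global_feat, local_feats, H=4, W=4):
    """Stack the global representation with the local++ representations for CSD."""
    return np.vstack([global_feat[None, :], localpp(local_feats, H, W)])
```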
**Effect of the Balance Weight \(\lambda\) for MODE-T.** During ID training, MODE-T (i.e. ALPA-train + CSD) employs ALPA-train to encourage discriminative local representations by regularizing existing ID training loss functions with the devised \(\mathcal{L}_{alpa}\), i.e., \(\mathcal{L}=\mathcal{L}_{base}+\lambda\mathcal{L}_{alpa}\) in Eq. 9, where the hyperparameter \(\lambda\) is adopted to balance the contribution of our \(\mathcal{L}_{alpa}\). In this part, we carefully tune \(\lambda\) by setting it to the values of {0.001, 0.1, 0.5, 1.0, 2.0, 4.0}, and report the average testing results on five common OOD datasets in Fig. 8 (_left_). As can be observed, our MODE-T is not sensitive to the change of \(\lambda\) within a certain range (from 0.1 to 2.0). It's worth noting that when \(\lambda\) takes the value of 1.0 our MODE-T
Fig. 8: Effects of the balance weight \(\lambda\) for MODE-T and the learning rate \(\eta\) for MODE-F on performance – CIFAR-100 (ID) with ResNet-34. The FPR values are the average on the five common OOD datasets, smaller is better.
Fig. 10: The tSNE visualization of the vanilla CE-trained and our ALPA-enhanced image representations from the penultimate layer of the feature backbone – CIFAR-10 (ID) with ResNet-18. The OOD data LSUN is colored in **black**, and the ID data CIFAR-10 is denoted by **non-black** colors.
Fig. 9: Effect of the KNN hyperparameter \(k\) on OOD detection performance – CIFAR-100 (ID) with ResNet-34. The FPR values are the average on five common OOD datasets, smaller is better.
establishes the best OOD detection performance with both CE-trained and CL-trained baseline models. We thus set \(\lambda=1.0\) for MODE-T in our experiments.
**Effect of the Learning Rate \(\eta\) for MODE-F.** The most important hyperparameter of our MODE-F (i.e., ALPA-finetune + CSD) is the initial learning rate \(\eta\) for finetuning pretrained models learned by different losses, using the developed ALPA-finetune. To investigate the effect of \(\eta\) on the performance of MODE-F, we carefully tune \(\eta\) by setting it to different values of {0.001, 0.05, 0.1, 0.5, 1.0, 2.0}. We report the average results on five common OOD datasets in Fig. 8 (_right_). As seen, our MODE-F achieves remarkable and stable performance when \(\eta\) takes values within a certain range (from 0.05 to 0.5). In particular, when \(\eta=0.1\) our MODE-F achieves the best results with both CE-trained and CL-trained baselines; we therefore set \(\eta=0.1\) for MODE-F in our experiments.
**Effect of the \(k\)-NN Hyperparameter \(k\).** Both KNN [9] and our MODE (CSD, more concretely) need to adjust the \(k\)-NN hyperparameter \(k\). In Fig. 9, we analyze the effect of \(k\) on the OOD detection performance of our MODE. Specifically, we carefully tune \(k\) by setting it to the values of {5, 10, 30, 50, 100, 200}, and report the average results on five common OOD datasets. As can be observed from the figure, the OOD detection performance gradually improves with the increase of \(k\) before \(k\) reaches 50. This trend is also consistent with the ablation results of \(k\) in KNN [9] under the same setting. Additionally, we also observe that the OOD detection results of both MODE-T and MODE-F remain similar when \(k\) takes the values of 50, 100 and 200. Hence, in our experiments, we set \(k=50\) as in KNN [9].
### _Computational Cost_
It is important to study the computational cost of our proposed MODE for practical purposes. In this part, we quantitatively investigate the test-time OOD detection computational cost of our MODE (concretely, the cost introduced by the test-time CSD), measured by the per-image inference time (in _milliseconds_). In particular, we randomly sample \(\alpha\%\) of the training data from each class of the ID dataset (i.e., CIFAR-100 with 50,000 training examples) for \(k\)-nearest neighbor search on testing OOD data. We report the per-image inference time of MODE-F (the results of MODE-T have similar trends) at different values of \(\alpha\%\) in Table V, where we conduct the experiment on an _NVIDIA GeForce RTX 3090_. It should be noted that when \(\alpha\) takes the values of {5, 10, 50, 100}, we set the \(k\)-NN hyperparameter \(k\) to {10, 20, 30, 50} for our MODE-F, respectively. We highlight four important observations from the table. **First**, when \(\alpha=100\%\), the per-image inference time of our method is 1.51 _milliseconds_, a result that may be acceptable in many real-world (offline or online) applications. **Second**, as expected, the inference time cost of our method gradually decreases as \(\alpha\) decreases. **Third**, sharply reducing \(\alpha\) does not severely degrade the OOD detection performance of our method. **Fourth**, when spending a comparable amount of time to the state-of-the-art KNN [9], our method still outperforms KNN by a large margin (i.e., **6.08%** in FPR). All the above results suggest that our proposed MODE enjoys good practicability and scalability.
### _Visualization Analysis_
So far, we have quantitatively demonstrated the effectiveness and flexibility of our developed MODE framework for OOD detection. In this part, we present some visualization results to qualitatively investigate our MODE.
**Visualization with tSNE [77].** In Fig. 10, we present the tSNE visualization of the vanilla CE-trained and our ALPA-enhanced global representations (extracted from the penultimate layer of the feature backbone ResNet-18) of the ID dataset CIFAR-10 and the OOD data LSUN; the results of vanilla CL-trained representations have similar trends. As can be observed from the figure, compared with the vanilla CE-trained global
Fig. 11: Visualization analysis on \(k\)-nearest neighbors. For illustrative purposes, we construct a _hard_ 2-Way 10-Shot task, which consists of two _near_ categories of “_Lemon_” and “_Orange_” from ImageNet-1k [71], with 10 examples/class. _Particularly, when “Lemon” is treated as ID, “Orange” becomes OOD, and vice versa._ For each testing example, we search the \(k\)-nearest neighbors from the 20 seen examples (_left_) in the representation space using KNN [9] or MODE-F. The cyan dotted boxes indicate the image regions corresponding to the most discriminative representations recognized by our method (the results of both the two settings i) ID=“_Lemon_”, OOD=“_Orange_” and ii) ID=“_Orange_”, OOD=“_Lemon_” are shown). As seen, our method locates the target object regions in the vast majority of cases. Besides, when \(4\leq k\leq 10\), totally 6 and 2 OOD examples are wrongly detected as ID data for KNN [9] and our method, respectively.
representations, the global representations learned by either ALPA-train or ALPA-finetune exhibit better ID-OOD separability. We also see that although the baseline improves the compactness of each ID class, there is a significant overlap between these ID classes and the OOD data. Generally speaking, in combination with the quantitative ID classification and OOD detection results in Table IV, it is apparent that our designed ALPA not merely encourages locally discriminative representations during ID training, but also drives the extracted image representations of different ID/OOD classes to be more compact, benefiting both OOD detection and multi-class classification tasks.
**Visualization of \(k\)-Nearest Neighbors.** In Fig. 11, we further demonstrate the effectiveness of our MODE by qualitatively comparing its searched \(k\)-nearest neighbors with that of KNN [9] on testing examples. In this experiment, leveraging the {"_Lemon_" vs. "_Orange_"} task for visualization analysis is inspired by the fact that _hard_ OOD detection tasks composed with near ID-OOD classes/examples are the major challenge for existing machine learning systems, as revealed in [39, 78]. In this task, when we treat "_Lemon_" as ID data, "_Orange_" becomes OOD, and _vice versa_. As can be observed from the figure, our method successfully identifies the most discriminative multi-scale representations corresponding to the target objects (or object regions) in the vast majority of cases. What is noteworthy is that local representations/regions play a key role in successfully recognizing those hard examples with cluttered backgrounds. Moreover, when \(4\leq k\leq 10\), totally 6 and 2 OOD examples are wrongly detected as ID data for KNN [9] and our method, respectively. All in all, benefiting from highlighting richer and more transferable representations during ID training (by ALPA), and taking advantage of the most discriminative multi-scale representations for test-time OOD detection (by CSD), our proposed MODE shows remarkable performance on distinguishing ID/OOD data.
## V Conclusion
For the first time, this work proposes MODE to leverage multi-scale representations inside images for OOD detection. Concretely, we first observe that, due to the scale-discrepancy between the ID training and OOD detection processes, existing models pretrained by off-the-shelf cross-entropy or contrastive losses are unable to capture usable local representations for MODE. To address this issue, we propose ALPA, which enables locally discriminative representations by aligning and highlighting the local object regions of pairwise examples during ID training. During test-time OOD detection, we devise a CSD function on the most discriminative multi-scale representations to distinguish ID/OOD examples more faithfully. Our MODE framework is orthogonal to ID training losses and to models pre-trained in different fashions. Extensive experimental results demonstrate the effectiveness and flexibility of our MODE on a wide range of baseline methods applied to various network structures. We hope this work can bring new inspiration to OOD detection as well as other related fields. To facilitate future research, we have made our code publicly available at: [https://github.com/JimZAI/MODE-OOD](https://github.com/JimZAI/MODE-OOD).
|
2306.10795 | On the number of components of random polynomial lemniscates | A lemniscate of a complex polynomial $Q_n$ of degree $n$ is a sublevel set of
its modulus, i.e., of the form $\{z \in \mathbb{C}: |Q_n(z)| < t\}$ for some
$t>0.$ In general, the number of connected components of this lemniscate can
vary anywhere between 1 and $n$. In this paper, we study the expected number of
connected components for two models of random lemniscates. First, we show that
lemniscates whose defining polynomial has i.i.d. roots chosen uniformly from
$\mathbb{D}$, has on average $\mathcal{O}(\sqrt{n})$ number of connected
components. On the other hand, if the i.i.d. roots are chosen uniformly from
$\mathbb{S}^1$, we show that the expected number of connected components,
divided by n, converges to $\frac{1}{2}$. | Subhajit Ghosh | 2023-06-19T09:26:21Z | http://arxiv.org/abs/2306.10795v1 | # On the number of components of random polynomial lemniscates
###### Abstract.
A lemniscate of a complex polynomial \(Q_{n}\) of degree \(n\) is a sublevel set of its modulus, i.e., of the form \(\{z\in\mathbb{C}:|Q_{n}(z)|<t\}\) for some \(t>0\). In general, the number of connected components of this lemniscate can vary anywhere between \(1\) and \(n\). In this paper, we study the expected number of connected components for two models of random lemniscates. First, we show that lemniscates whose defining polynomial has i.i.d. roots chosen uniformly from \(\mathbb{D}\), has on average \(\mathcal{O}(\sqrt{n})\) number of connected components. On the other hand if the i.i.d. roots are chosen uniformly from \(\mathbb{S}^{1}\), we show that the expected number of connected components, divided by \(n\), converges to \(\frac{1}{2}\).
## 1. Introduction
Let \(Q_{n}(z)\) be a monic polynomial of degree \(n\) in the complex plane such that all its roots are contained within the closed unit disk \(\overline{\mathbb{D}}\). That is,
\[Q_{n}(z):=\prod_{i=1}^{n}(z-z_{i}), \tag{1}\]
where \(|z_{j}|\leq 1\), for \(1\leq j\leq n\). We denote the unit lemniscate of \(Q_{n}(z)\) by \(\Lambda(Q_{n}):=\{z\in\mathbb{C}:|Q_{n}(z)|<1\}.\) The quantity of interest is the number of connected components of \(\Lambda\). The maximum principle implies that each connected component of the lemniscate must contain a zero of the polynomial; therefore, there are at most \(n\) components. In this paper, we investigate the number of components of a _typical_ lemniscate. Numerical simulations for random polynomials with roots chosen from the uniform probability measure on the unit disk \(\mathbb{D}\), and on the circle \(\mathbb{S}^{1}\), show a giant component alongside some tiny components (see Figures 1, 2). The present paper quantifies this numerical observation.
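Such pictures are straightforward to reproduce numerically; the following Python sketch (ours, purely for illustration) counts the components of \(\Lambda(Q_{n})\) on a finite grid with scipy.ndimage.label, and, being grid-based, it may miss extremely small components.

```python
import numpy as np
from scipy import ndimage

def count_components(roots, grid_size=1500, box=1.3):
    """Approximate C(Lambda) for Q_n(z) = prod (z - z_j) by labeling {log|Q_n| < 0} on a grid."""
    x = np.linspace(-box, box, grid_size)
    zz = x[None, :] + 1j * x[:, None]
    log_mod = np.zeros(zz.shape)
    for r in roots:                                   # sum of log|z - z_j| avoids over/underflow
        log_mod += np.log(np.maximum(np.abs(zz - r), 1e-15))
    _, num = ndimage.label(log_mod < 0.0)
    return num

rng = np.random.default_rng(0)
n = 100
roots_disk = np.sqrt(rng.uniform(size=n)) * np.exp(2j * np.pi * rng.uniform(size=n))
roots_circle = np.exp(2j * np.pi * rng.uniform(size=n))
print(count_components(roots_disk), count_components(roots_circle))
```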
### Motivation and Previous Results
The study of the metric and topological properties of polynomial lemniscates serves two main purposes. Firstly, it is the simplest curve with an algebraic boundary that is relevant to many problems in mathematical physics [19, 5, 2]. Secondly, polynomial lemniscates are used as a tool for approximating and analyzing complex geometric objects due to implications of Hilbert's lemniscate Theorem and its generalizations [28, 24]. For a more detailed exposition, please refer to [20] and the corresponding references therein. Taking all these into account, in 1958, Erdos, Herzog, and Piranian in [9] studied the geometric and topological properties of polynomial lemniscates and posed a long list of open problems. One of the key motivations behind the work related to random polynomial lemniscates is to offer a probabilistic approach to the problems in [9]. Krishnapur, Lundberg, and Ramachandran recently showed that the inradius of a random lemniscate whose defining polynomial has roots chosen from a measure \(\mu\) depends on the negative set of the logarithmic potential \(U_{\mu}\). Lundberg, Epstein, and Hanin conducted a study on the lemniscate tree that encodes the nesting structure of the level sets of a random polynomial in [8]. Lundberg and Ramachandran in [22] conducted a study on the _Kac ensemble_ and found that the expected number of connected components is asymptotically \(n\). Lerario and Lundberg [21] proved that for random rational lemniscates, which are defined as the quotient of two _spherical random polynomials_, the average number of connected components is \(\mathcal{O}(n)\). Later, Kabluchko and Wigman [18] discovered the exact asymptotics. Fyodorov, Lerario,
and Lundberg studied the number of connected components of random algebraic hypersurfaces in [10]. In this article, we examine random polynomials with random roots, in contrast to random coefficients. Another stream of research on random polynomials includes studying the roots and critical points of random polynomials. In this work, we have made use of one such _pairing_ result due to Kabluchko and Seidel [17], which states that for random polynomials whose roots are sampled from an appropriate probability measure \(\nu\) supported within the unit disk, each root is associated with a critical point in close proximity. For more background, details and generalizations consult [13], [26], [16], [30], [27], [4], [23], [1], [14], [15], [25]. To find related research on meromorphic functions and Gaussian polynomials, please refer to [12], [11]. We emphasize the fact that such pairing phenomena are exclusive to random polynomials. The analogous result in the deterministic setting is Sendov's conjecture [29], which was recently proven by Tao in [31] for all polynomials of sufficiently large degree.
### Main Results
In all the theorems we have the following setting.
**Setting and notations:** Let \(\{X_{i}\}_{i=1}^{\infty}\) be a sequence of i.i.d. random variables with law \(\mu\), supported in the closed unit disk. Consider the sequence of random polynomials defined by
\[P_{n}(z):=\prod_{i=1}^{n}(z-X_{i}) \tag{2}\]
and its lemniscate
\[\Lambda_{n}:=\Lambda(P_{n})=\{z\in\mathbb{C}:|P_{n}(z)|<1\}.\]
We denote by \(C(\Lambda_{n})\) the number of connected components of the lemniscate \(\Lambda_{n}\). Throughout the paper, we denote by \(\mathbb{C}\) a positive numerical constant whose values may vary from line to line. For a set \(S\subset\mathbb{C}\), we denote by \(|S|\) the cardinality of the set \(S\). The following are the main theorems of this paper.
**Theorem 1.1**.: _Let \(\mu\) be the probability measure distributed uniformly in the unit disk \(\mathbb{D}\). Then there exist absolute constants \(C_{1},C_{2}>0\) such that for all large \(n\) we have_
\[C_{1}\sqrt{n}\leq\mathbb{E}[C(\Lambda_{n})]\leq C_{2}\sqrt{n}.\]
**Theorem 1.2**.: _Let \(\mu\) be the probability measure distributed uniformly in the unit circle \(\mathbb{S}^{1}\). Then_
\[\lim_{n\to\infty}\frac{\mathbb{E}[C(\Lambda_{n})]}{n}=\frac{1}{2}.\]
Figure 1. Lemniscates of degree n = 50, 100, 250 with zeros sampled uniformly from the open unit disk.
### Remarks
What happens if we choose \(\mu\) to be the uniform measure on \(r\mathbb{D}\) or \(r\mathbb{S}^{1}\)? Let us consider the uniform probability measure on \(r\mathbb{S}^{1}\) say \(\mu_{r}\). Then it is easy to show that the logarithmic potential is
\[U_{\mu_{r}}(z)=\begin{cases}\log|z|&\text{ if }|z|\geq r,\\ \log r&\text{ if }|z|<r.\end{cases} \tag{3}\]
**Case 1 \((r<1)\):**: In this case, the potential (3) is negative in the whole unit disk. Therefore the set \(r\mathbb{D}\) is enclosed within the lemniscate by Theorem 1.1 in [20], resulting in a single connected component with overwhelming probability.
**Case 2 \((r>1)\):**: In this case, the potential (3) is positive in the entire complex plane; therefore, with overwhelming probability, the lemniscate has \(n\) components, by Theorem 1.3 of [20].
So in some sense, \(r=1\) is the critical case in this model. A similar analysis for the uniform probability measure on \(r\mathbb{D}\) is carried out in [20], Example 1.7. See Figures 3 and 4. The above results and the results in this paper are summarized schematically in Table 1.
### Heuristics and ideas of proof
We will now provide an overview of the underlying heuristics behind our results. In the first model, which involves random polynomials with roots chosen uniformly from \(\mathbb{D}\), the potential \(U_{\mu}(z)\) is negative throughout the unit disk. By writing \(\log|P_{n}(z)|=\sum_{i=1}^{n}\log|z-X_{i}|\) as the sum of independent random variables with mean \(U_{\mu}(z)\), we employ various concentration estimates to analyze the behavior of \(|P_{n}(z)|\). Since the sum of i.i.d. random variables concentrates near its mean, which is negative, most of the region within the disk, away from the boundary, lies inside the lemniscate. It is only near the boundary, where the potential approaches zero, that isolated components are formed due to the fluctuations governed by the _Central Limit Theorem_, resulting in \(\mathcal{O}(\sqrt{n})\) many components. In the other model, i.e., random polynomials with roots chosen uniformly on the circle, the potential is zero in the whole disk. The probability
| | \(\mathbf{\mu}\) | \(\mathbf{r<1}\) | \(\mathbf{r=1}\) | \(\sqrt{\mathbf{e}}\geq\mathbf{r}>\mathbf{1}\) | \(\mathbf{r>\sqrt{e}}\) |
| --- | --- | --- | --- | --- | --- |
| \(\mathbb{E}[\mathbf{C(\Lambda_{n})}]\) | Uniform probability measure in \(r\mathbb{D}\) | \(1\) | \(\mathcal{O}(\sqrt{n})\) | \(C_{r}n\) | \(n\) |
| \(\mathbb{E}[\mathbf{C(\Lambda_{n})}]\) | Uniform probability measure in \(r\mathbb{S}^{1}\) | \(1\) | \(\frac{n}{2}\) | \(n\) | \(n\) |

Table 1. Asymptotics of Expected No. of Components for Different Values of \(r\).
Figure 2. Lemniscates of degree n = 50, 100, 250 with zeros sampled uniformly from the unit circle. A unit circle is also plotted for reference in each case.
of any point on \(\mathbb{S}^{1}\) being inside the lemniscate is close to \(\frac{1}{2}\). Therefore, if we start with \(P_{n}\) and introduce a new root \(X_{n+1}\) to build \(P_{n+1}\), then \(X_{n+1}\) will land outside \(\Lambda_{n}\) with probability approximately \(\frac{1}{2}\), forming an isolated component. Hence, on average, we get approximately \(\frac{n}{2}\) components. In both models, we establish the lower bound by estimating the number of isolated components. To determine the upper bound in the disk case, we utilize an analytical characterization of the number of components (see Lemma 2.8), which asserts that the number of components is one more than the number of critical points whose critical value is greater than or equal to \(1\). To determine the number of such critical points, we employ a _pairing_ result from [17] to associate critical points with roots having certain desired properties. The number of such roots yields the desired upper bound. However, in the other case, the pairing phenomenon does not occur. There we establish the upper bound by showing that the number of components possessing fewer than \(n^{\varepsilon}\) roots, when divided by \(n\), tends towards \(\frac{1}{2}\), for sufficiently small \(\varepsilon\).
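The first step of the circle heuristic is easy to check numerically; the short Monte Carlo sketch below (ours, for illustration only) estimates the probability that a fresh uniform root on \(\mathbb{S}^{1}\) falls outside \(\Lambda_{n}\), which should be close to \(\frac{1}{2}\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 500, 5000
outside = 0
for _ in range(trials):
    roots = np.exp(2j * np.pi * rng.uniform(size=n))    # roots of P_n, uniform on the circle
    x_new = np.exp(2j * np.pi * rng.uniform())          # candidate new root X_{n+1}
    if np.sum(np.log(np.abs(x_new - roots))) >= 0.0:    # |P_n(X_{n+1})| >= 1, i.e., outside Lambda_n
        outside += 1
print(outside / trials)   # empirically close to 1/2
```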
Figure 4. Lemniscates of degree n = 500 with zeros sampled uniformly from \(r\mathbb{D}\) for \(r=0.95,0.85\sqrt{e}\), and \(1.5\sqrt{e}\) respectively.
Figure 3. Lemniscates of degree n = 500 with zeros sampled uniformly from \(r\mathbb{S}^{1}\), for \(r=0.9\) and \(1.1\) respectively.
## 2. Preliminary Lemmas
Before delving into the proofs of the main theorems, we gather preliminary theorems and lemmas that are utilized repeatedly in both theorems.
**Theorem 2.1**.: _(Berry-Esseen) Let \(X_{1},X_{2},...\) be i.i.d. random variables with \(\mathbb{E}X_{i}=0,\mathbb{E}X_{i}^{2}=\sigma^{2},\) and \(\mathbb{E}|X_{i}|^{3}=\rho<\infty.\) If \(F_{n}(x)\) is the distribution function of \(\frac{(X_{1}+...+X_{n})}{\sigma\sqrt{n}}\) and \(\Phi(x)\) is the standard normal distribution, then_
\[|F_{n}(x)-\Phi(x)|\leq\frac{3\rho}{\sigma^{3}\sqrt{n}}. \tag{4}\]
The proof of Theorem 2.1 can be found in [7] Theorem 3.4.17.
**Theorem 2.2**.: _[Bennett's inequality] Let \(Y_{1},Y_{2},...,Y_{n}\) be independent random variables with finite variance such that \(\forall\)\(i\leq n\), \(Y_{i}\leq b\), for some \(b>0\) almost surely. Let_
\[S=\sum_{i=1}^{n}\left(Y_{i}-\mathbb{E}[Y_{i}]\right),\]
_and \(\nu=\sum_{i=1}^{n}\mathbb{E}[Y_{i}^{2}].\) Then for any \(t>0,\) we have_
\[\mathbb{P}(S>t)\leq\exp\left(-\frac{\nu}{b^{2}}h\Big{(}\frac{bt}{\nu}\Big{)} \right),\]
_where \(h(u)=(1+u)\log(1+u)-u\), for \(u>0\)._
For the proof of this concentration inequality and other similar results see [3].
**Lemma 2.3**.: _Let \(X\) be a random variable taking values in \(\overline{\mathbb{D}}\) with law \(\mu\). Assume that for all \(z\in\mathbb{D},r\leq 2\), there exist constants \(\varepsilon,M_{1},M_{2}\in(0,\infty)\) such that \(\mu\) satisfies_
\[M_{1}r^{\varepsilon}\leq\mu(B(z,r))\leq M_{2}r^{\varepsilon}. \tag{5}\]
_Fix \(p\), and define the function \(F_{p}(z):=\mathbb{E}\Big{[}\big{|}\log|z-X|\big{|}^{p}\Big{]}:\mathbb{D}\to \mathbb{R}\). Then, there exist constants \(C_{1},C_{2}\) depending on \(p,\varepsilon,M_{1},M_{2}\), such that_
\[C_{1}\leq\inf_{z\in\mathbb{D}}F_{p}(z)\leq\sup_{z\in\mathbb{D}}F_{p}(z)\leq C _{2}. \tag{6}\]
Proof.: We will utilize the layer cake representation and write
\[\mathbb{E}\left[\big{|}\log|z-X|\big{|}^{p}\right] =p\int_{0}^{\infty}t^{p-1}\mathbb{P}\left(\big{|}\log|z-X|\big{|}> t\right)dt\] \[=p\int_{0}^{2}t^{p-1}\mathbb{P}\left(\big{|}\log|z-X|\big{|}>t \right)dt+p\int_{2}^{\infty}t^{p-1}\mathbb{P}\left(\big{|}\log|z-X|\big{|}>t \right)dt.\]
In the second integral, notice that \((\log|z-X|)^{+}<2\); therefore, for \(t\geq 2\) the probability is nonzero only when \(\log|z-X|\) is negative. Taking this into account and using the upper bound in (5),
\[\mathbb{E}\left[\big{|}\log|z-X|\big{|}^{p}\right] \leq p\int_{0}^{2}t^{p-1}dt+pM_{2}\int_{2}^{\infty}t^{p-1}e^{-t \varepsilon}dt\] \[\leq p2^{p+1}\left(1+C\left(\varepsilon\right)M_{2}\right).\]
The lower bound follows similarly using the left inequality in (5) along with the layer cake representation.
**Lemma 2.4**.: _Let \(X\) be a uniform random variable on the open unit disk \(\mathbb{D}\). For \(p<2\), there exists a constant \(C_{p}\) such that_
\[\mathbb{E}\left[\frac{1}{|z-X|^{p}}\right]\leq C_{p}. \tag{7}\]
Proof.: This proof is again based on the layer cake representation.
\[\mathbb{E}\left[\frac{1}{|z-X|^{p}}\right] =\int_{0}^{\infty}\mathbb{P}\left(\frac{1}{|z-X|^{p}}>t\right)dt\] \[=\int_{0}^{\infty}\mathbb{P}\left(|z-X|<\frac{1}{t^{1/p}}\right)dt\] \[=\int_{0}^{2}\mathbb{P}\left(|z-X|<\frac{1}{t^{1/p}}\right)dt+ \int_{2}^{\infty}\mathbb{P}\left(|z-X|<\frac{1}{t^{1/p}}\right)dt\] \[\leq\int_{0}^{2}dt+\int_{2}^{\infty}t^{-2/p}dt\] \[\leq\left(2+\frac{p}{2-p}2^{\frac{p-2}{p}}\right).\qed\]
**Lemma 2.5**.: _(**Distance between the roots**) Let \(\{X_{i}\}_{i=1}^{\infty}\) be a sequence of i.i.d. random variables with law \(\mu\), supported in the closed unit disk. If there exists a real-valued function \(f\) such that_
\[\mathbb{P}\left(|z-X_{j}|>t\right)\geq 1-f(t),\]
_for all \(z\in\mathbb{D}\) and \(t\) small, then for any set \(B\subset\mathbb{D}\), we have_
\[\mathbb{P}\left(\min_{2\leq j\leq n}|X_{1}-X_{j}|>t\Big{|}X_{1}\in B\right) \geq\left(1-f(t)\right)^{n}. \tag{8}\]
Proof of Lemma 2.5.: We use the independence of the random variables after conditioning on \(X_{1}\) to write
\[\mathbb{P}\left(\min_{2\leq j\leq n}|X_{1}-X_{j}|>t\Big{|}X_{1} \in B\right)\] \[=\frac{1}{\mathbb{P}(X_{1}\in B)}\int_{\mathbb{D}}\mathbb{P}\left( \min_{2\leq j\leq n}|X_{1}-X_{j}|>t,X_{1}\in B\Big{|}X_{1}=z\right)d\mu(z)\] \[=\frac{1}{\mathbb{P}(X_{1}\in B)}\int_{B}\mathbb{P}\left(\min_{2 \leq j\leq n}|z-X_{j}|>t\right)d\mu(z)\] \[=\frac{1}{\mathbb{P}(X_{1}\in B)}\int_{B}\mathbb{P}\left(|z-X_{j} |>t\right)^{\left(n-1\right)}d\mu(z)\] \[\geq\frac{1}{\mathbb{P}(X_{1}\in B)}\int_{B}\left(1-f(t)\right)^ {\left(n-1\right)}d\mu(z)\] \[\geq\left(1-f(t)\right)^{\left(n-1\right)}.\]
**Lemma 2.6**.: _(Lower bound on first derivative) Let \(\{X_{i}\}_{i=1}^{\infty}\) be a sequence of i.i.d. random variables with law \(\mu\), supported in the closed unit disk. Assume that for every \(1\leq p\leq 3\), there exists some positive constant \(C_{p}>0\) such that \(\mathbb{E}\left[\left|\log\left|z-X_{1}\right|\right|^{p}\right]<C_{p}\) for all \(z\in\mathbb{D}\). Let \(B_{n}\subset\mathbb{D}\) be such that for some \(M\geq 0\) and \(\forall z\in B_{n}\), we have \(\mathbb{E}\left[\log\left|z-X_{1}\right|\right]\geq-\frac{M}{\sqrt{n}}\). Then for \(n\) large, there exists a constant \(\hat{C}(M)>0\), depending on \(M\), such that_
\[\mathbb{P}\left(\left|P_{n}^{{}^{\prime}}(X_{1})\right|\geq e^{\sqrt{n}} \Big{|}X_{1}\in B_{n}\right)\geq\hat{C}(M). \tag{9}\]
Proof of Lemma 2.6.: We start by taking the logarithm to write
\[\mathbb{P}\left(\left|P_{n}^{{}^{\prime}}(X_{1})\right|\geq e^{ \sqrt{n}}\Big{|}X_{1}\in B_{n}\right) =\mathbb{P}\left(\prod_{j=2}^{n}\left|X_{1}-X_{j}\right|\geq e^{ \sqrt{n}}\Big{|}X_{1}\in B_{n}\right)\] \[=\mathbb{P}\left(\sum_{j=2}^{n}\log\left|X_{1}-X_{j}\right|\geq \sqrt{n}\Big{|}X_{1}\in B_{n}\right)\] \[=\frac{\mathbb{P}\left(\sum_{j=2}^{n}\log\left|X_{1}-X_{j}\right| \geq\sqrt{n},X_{1}\in B_{n}\right)}{\mathbb{P}\left(X_{1}\in B_{n}\right)}\] \[=\frac{1}{\mathbb{P}(X_{1}\in B_{n})}\int_{B_{n}}\mathbb{P}\left( \sum_{j=2}^{n}\log\left|z-X_{j}\right|\geq\sqrt{n}\right)d\mu(z). \tag{10}\]
We estimate the probability inside the integral in (10) using Berry-Esseen theorem (2.1) to arrive at
\[\frac{1}{\mathbb{P}(X_{1}\in B_{n})}\int_{B_{n}}\mathbb{P}\left( \sum_{j=2}^{n}\left(\log\left|z-X_{j}\right|-\mathbb{E}[\log\left|z-X_{j} \right|]\right)\geq\sqrt{n}-(n-1)\mathbb{E}[\log\left|z-X_{j}\right|]\right)d \mu(z)\] \[\geq\frac{1}{\mathbb{P}(X_{1}\in B_{n})}\int_{B_{n}}\mathbb{P} \left(\frac{1}{\sqrt{n}}\sum_{j=2}^{n}\left(\log\left|z-X_{j}\right|-\mathbb{E }[\log\left|z-X_{j}\right|]\right)\geq(M+1)\right)d\mu(z)\] \[\geq\frac{1}{\mathbb{P}(X_{1}\in B_{n})}\int_{B_{n}}\left(\Phi \left(\frac{M+1}{\sigma(z)}\right)-\frac{C\rho(z)}{\sigma^{3}(z)\sqrt{n}} \right)d\mu(z), \tag{11}\]
where \(\sigma^{2}(z)=\mathbb{E}\left[\left(\log\left|z-X_{j}\right|\right)^{2}\right]\), \(\rho(z)=\mathbb{E}\left[\left|\log\left|z-X_{j}\right|\right|^{3}\right]\) and \(\Phi\) is the distribution function of standard normal. From the hypothesis, we have uniform upper and lower bounds on \(\sigma^{2}(z)\) and \(\rho(z)\) using which we can bound the integrand in (11) as
\[\Phi\left(\frac{(M+1)}{\sigma(z)}\right)-\frac{C\rho(z)}{\sigma^{3}(z)\sqrt{n} }\geq\left(C_{1}(M)-\frac{C_{2}}{\sqrt{n}}\right). \tag{12}\]
Putting the bound (12) in the estimate (11) we get the required probability (9) for some absolute constant \(\hat{C}\).
\[\mathbb{P}\left(\left|P_{n}^{{}^{\prime}}(X_{1})\right|\geq e^{\sqrt{n}}|X_{1} \in B_{n}\right)=\frac{1}{\mathbb{P}(X_{1}\in B_{n})}\int_{B_{n}}\left(C_{1}(M) -\frac{C_{2}}{\sqrt{n}}\right)d\mu(z)\geq\hat{C}(M).\qed\]
**Lemma 2.7**.: _(Bound on higher derivatives) Let \(\{X_{i}\}_{i=1}^{\infty}\) be a sequence of i.i.d. random variables with law \(\mu\), supported on the closed unit disk. If there exists a constant \(C>0\), such that \(\mathbb{E}\left[\frac{1}{\left|z-X_{1}\right|}\right]<C\) for all
\(z\in\mathbb{D}\), then for any \(\mathbb{B}\subset\mathbb{D}\)_
\[\mathbb{E}\left[\frac{1}{k!}\left|\frac{P_{n}^{(k)}(X_{1})}{P_{n}^{\prime}(X_{1} )}\right|\left|X_{1}\in\mathbb{B}\right]\leq{n-1\choose k-1}C^{k-1}. \tag{13}\]
Proof of Lemma 2.7.: We write \(P_{n}(z)\) as \(P_{n}(z)=(z-X_{1})Q_{n}(z)\), where \(Q_{n}(z):=\prod_{2}^{n}(z-X_{j}),\) then differentiation yields,
\[P_{n}^{(k)}(z)=kQ_{n}^{(k-1)}(z)+(z-X_{1})Q_{n}^{(k)}(z).\]
Putting \(z=X_{1}\) in the above equation, we get \(\frac{P_{n}^{(k)}(X_{1})}{P_{n}^{\prime}(X_{1})}=\frac{kQ_{n}^{(k-1)}(X_{1})}{Q_{n}(X_{1})}\). Since \(X_{1}\) is not a root of \(Q_{n}(z)\), \(\frac{Q_{n}^{(k-1)}(X_{1})}{Q_{n}(X_{1})}\) will have \((n-1)(n-2)...\left(n-(k-1)\right)\) many summands of the form \(\left[\frac{1}{(X_{1}-X_{2})...(X_{1}-X_{k})}\right]\). Here, we only care about the number of summands because after conditioning on \(X_{1}\), all of them will have the same expected value.
\[\mathbb{E}\left[\frac{1}{k!}\left|\frac{P_{n}^{(k)}(X_{1})}{P_{n} ^{\prime}(X_{1})}\right|\left|X_{1}\in\mathbb{B}\right] \leq\int_{\mathbb{B}}\frac{1}{k!}\mathbb{E}\left(\left|\frac{P_{n }^{(k)}(X_{1})}{P_{n}^{\prime}(X_{1})}\right|\left|X_{1}=z\right)\frac{d\mu(z )}{\mu(\mathbb{B})}\right.\] \[\leq\int_{\mathbb{B}}\frac{k(n-1)(n-2)...(n-k+1)}{k!}\mathbb{E} \left(\left|\frac{1}{(z-X_{2})...(z-X_{k})}\right|\right)\frac{d\mu(z)}{\mu( \mathbb{B})}\] \[\leq{n-1\choose k-1}C^{k-1},\]
where we got the last estimate using the hypothesis of the lemma.
We will need one last lemma from complex analysis which relates the number of components of a polynomial lemniscate to the number of critical points whose critical value is greater than or equal to \(1\).
**Lemma 2.8**.: _Let \(Q_{n}(z)\), \(\Lambda(Q_{n})\) be as in (1), and \(\{\beta_{j}\}_{j=1}^{n-1}\) be the set of critical points of \(Q_{n}\). Then,_
\[C(\Lambda)=1+\left|\{\beta_{j}:|Q_{n}(\beta_{j})|\geq 1\}\right|.\]
Proof.: Let us assume that \(C(\Lambda)=m\), i.e., the lemniscate has \(m\) components. Let \(n_{1},...,n_{m}\) be the number of zeroes in each of the components. We know that for a simple closed level curve \(\mathcal{C}\) of \(f(z)\), if \(f(z)\) is analytic up to the boundary of \(\mathcal{C}\) and has \(n\) zeroes inside \(\mathcal{C}\), then \(f^{\prime}(z)\) has \((n-1)\) zeros inside it. The proof of this result can be found in [32], Proposition \(3.55\). Hence the component containing \(n_{i}\) zeroes contains \((n_{i}-1)\) critical points. Since all these critical points are inside the lemniscate, their critical values are strictly less than \(1\). Therefore, the following algebraic manipulations yield the required equality.
\[\left|\{\beta_{j}:|Q_{n}(\beta_{j})|\geq 1\}\right| =(n-1)-\left|\{\beta_{j}:|Q_{n}(\beta_{j})|<1\}\right|\] \[=(n-1)-\sum_{i=1}^{m}(n_{i}-1)=(m-1).\qed\]
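Lemma 2.8 also gives a convenient numerical recipe for moderate degrees; the following NumPy sketch (ours, and numerically ill-conditioned for large \(n\), so purely illustrative) counts components via the critical points.

```python
import numpy as np

def components_from_critical_points(roots):
    """C(Lambda) = 1 + #{critical points beta_j with |Q_n(beta_j)| >= 1} (Lemma 2.8)."""
    coeffs = np.poly(roots)                  # coefficients of the monic polynomial with these roots
    crit = np.roots(np.polyder(coeffs))      # the n - 1 critical points
    return 1 + int(np.sum(np.abs(np.polyval(coeffs, crit)) >= 1.0))

rng = np.random.default_rng(2)
n = 40
roots = np.sqrt(rng.uniform(size=n)) * np.exp(2j * np.pi * rng.uniform(size=n))
print(components_from_critical_points(roots))
```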
## 3. Proof of theorem 1.1
Proof of theorem 1.1.: **(Lower bound)** The proof of the lower bound in both the theorems is based on an estimate of the number of isolated components. We start by defining what we mean by an isolated component of a polynomial. Let \(Q_{n}(z)\) be defined as in (1), then we say that a root \(z_{j}\) forms an _isolated component_ if there exists a ball \(\mathcal{B}\) containing \(z_{j}\) such that,
\[\left\{\begin{array}{ll}&z_{k}\notin\mathcal{B},\qquad\qquad\forall k\neq j \\ &|Q_{n}(z)|\geq 1,\qquad\forall z\in\partial\mathcal{B}.\end{array}\right. \tag{14}\]
The key observation here is that bounds on the derivatives at the root provide a sufficient condition for an isolated component. Suppose for the root \(z_{1}\) there exists some \(r>0\) such that the following holds,
\[\left\{\begin{array}{ll}&|Q_{n}^{{}^{\prime}}(z_{1})\frac{r}{2}|\geq 1\\ &\left|\frac{Q_{n}^{(k)}(z_{1})\frac{r^{k}}{k!}}{Q_{n}^{{}^{\prime}}(z_{1})\frac{r}{1!}}\right|<\frac{1}{2n^{2}},\qquad\text{for $k=2,...,n$}\\ &\min\limits_{2\leq j\leq n}|z_{1}-z_{j}|>r.\end{array}\right. \tag{15}\]
Then using Taylor series expansion of \(Q_{n}(z)\) for \(z\in\partial B(z_{1},r)\) we get,
\[|Q_{n}(z)| \geq\left|Q_{n}^{{}^{\prime}}(z_{1})r\right|-\sum_{k=2}^{n}\left| Q_{n}^{(k)}(z_{1})\frac{r^{k}}{k!}\right|\] \[\geq\left|Q_{n}^{{}^{\prime}}(z_{1})r\right|\left(1-\sum_{k=2}^{n }\frac{|Q_{n}^{(k)}(z_{1})\frac{r^{k}}{k!}|}{Q_{n}^{{}^{\prime}}(z_{1})\frac{r }{1!}}\right|\right)\] \[\geq\left|Q_{n}^{{}^{\prime}}(z_{1})r\right|\left(1-\sum_{k=2}^{ n}\frac{1}{2n^{2}}\right)\] \[\geq\left|Q_{n}^{{}^{\prime}}(z_{1})\frac{r}{2}\right|\geq 1. \tag{16}\]
This ensures that there is a connected component of the lemniscate inside the disk \(B(z_{1},r)\). We now define for each \(1\leq i\leq n,\) the event \(L_{i}=\{X_{i}\) forms an isolated component\(\}\). Then it immediately follows that
\[\mathbb{E}\left[C(\Lambda_{n})\right]\geq\mathbb{E}\left[\sum_{i=1}^{n} \mathbbm{1}_{L_{i}}\right]\geq n\mathbb{E}\left[\mathbbm{1}_{L_{i}}\right]\geq n \mathbb{P}\left(L_{1}\right).\]
Since, with high probability, the isolated roots are near the unit circle, we only consider roots lying in \(A_{n}:=\left\{z:1-\frac{1}{\sqrt{n}}<|z|<1\right\}\).
\[\mathbb{E}[C(\Lambda_{n})]\geq n\mathbb{P}\left(L_{1}|X_{1}\in A_{n}\right) \mathbb{P}(X_{1}\in A_{n})\geq\sqrt{n}\mathbb{P}\left(L_{1}|X_{1}\in A_{n} \right). \tag{17}\]
We now define the following events with \(r_{n}=\frac{1}{n^{6}}\),
\[\left\{\begin{array}{l}G_{1}:=\left\{|P_{n}^{{}^{\prime}}(X_{1})|\geq e^{n^{1/2}}\right\}\\ G_{k}:=\left\{\left|\frac{P_{n}^{(k)}(X_{1})\frac{r_{n}^{k}}{k!}}{P_{n}^{{}^{\prime}}(X_{1})\frac{r_{n}}{1!}}\right|<\frac{1}{2n^{2}}\right\},\text{ for }k=2,...,n.\\ G_{n+1}:=\left\{\min_{2\leq j\leq n}|X_{1}-X_{j}|>\frac{1}{n^{6}}\right\}.\end{array}\right. \tag{18}\]
In the setting of (15), occurrence of the events (18) implies that \(X_{1}\) forms an _isolated component_. Hence,
\[\mathbb{P}\left(L_{1}|X_{1}\in A_{n}\right)\geq\mathbb{P}\left(\cap_{j=1}^{n+ 1}G_{j}\big{|}X_{1}\in A_{n}\right) \tag{19}\]
Now we will estimate the conditional probabilities of \(G_{1},G_{2},...,G_{n+1}\) one by one. From Lemma 2.6 we have
\[\mathbb{P}\left(G_{1}|X_{1}\in A_{n}\right)\geq C_{1}. \tag{20}\]
Using the Lemma 2.7 with \(\mathbb{B}=A_{n}\) and the uniform bound of moment from Lemma 2.4, we get for \(k=2,...,n\)
\[\mathbb{E}\left(\left|\frac{P_{n}^{(k)}(X_{1})\frac{r_{n}^{k}}{k!}}{P_{n}^{{}^{\prime}}(X_{1})\frac{r_{n}}{1!}}\right|\bigg{|}X_{1}\in A_{n}\right)\leq\frac{C^{k-1}}{n^{4(k-1)}}. \tag{21}\]
Now conditional Markov inequality with (21) yields,
\[\mathbb{P}\left(G_{k}^{c}|X_{1}\in A_{n}\right)=\mathbb{P}\left(\left|\frac{P_{n}^{(k)}(X_{1})\frac{r_{n}^{k}}{k!}}{P_{n}^{{}^{\prime}}(X_{1})\frac{r_{n}}{1!}}\right|\geq\frac{1}{2n^{2}}\Big{|}X_{1}\in A_{n}\right)\leq\frac{1}{n^{2(k-1)}} \tag{22}\]
Lastly, the Lemma 2.5 with \(t=\frac{1}{n^{6}},f(x)=x^{2}\), and \(C=1\) gives,
\[\mathbb{P}\left(G_{n+1}|X_{1}\in A_{n}\right)\geq\left(1-\frac{1}{n^{12}} \right)^{n-1}\geq 1-\frac{1}{n^{10}} \tag{23}\]
Combining the estimates (20), (22), (23), we arrive at
\[\mathbb{P}\big{(}\cap_{j=1}^{n+1}G_{j}|X_{1}\in A_{n}\big{)} \geq\mathbb{P}(G_{1}|X_{1}\in A_{n})-\mathbb{P}\Big{(}G_{1}\cap \big{(}\cap_{k=2}^{n+1}G_{k}\big{)}^{c}\big{|}X_{1}\in A_{n}\Big{)}\] \[\geq C_{1}-\mathbb{P}\Big{(}\cup_{k=2}^{n+1}G_{k}^{c}\,\big{|}\,X_{1}\in A_{n}\Big{)}\] \[\geq C_{1}-\sum_{k=2}^{n+1}\mathbb{P}\Big{(}G_{k}^{c}\big{|}X_{1}\in A_{n}\Big{)}\] \[\geq C_{1}-\frac{1}{n}, \tag{24}\]
where we have used \(\mathbb{P}(A\cap B)=\mathbb{P}(A)-\mathbb{P}(A\cap B^{c})\) in the first step and the union bound in the third step. Finally, putting (24) into (17), the required lower bound is obtained.
\[\mathbb{E}[C(\Lambda_{n})]\geq\sqrt{n}\mathbb{P}\left(L_{1}|X_{1}\in A_{n} \right)\geq\sqrt{n}\mathbb{P}\big{(}\cap_{j=1}^{n+1}G_{j}|X_{1}\in A_{n}\big{)} \geq C_{1}\sqrt{n}.\]
**(Upper bound)** The proof of the upper bound uses Lemma 2.8 to relate the number of components to certain critical points. We will take an indirect route to estimate the number of such critical
points via the roots. We say a root \(z_{1}\) of the polynomial \(Q_{n}(z)\) is _good_, if there exists \(r>0\) such that,
\[\left\{\begin{array}{c}B\left(z_{1},r\right)\subset\Lambda_{n},\\ \min_{2\leq j\leq n}\lvert z_{1}-z_{j}\rvert>3r,\\ \exists\text{ a unique critical point }\xi\in B\left(z_{1},r\right).\end{array}\right. \tag{25}\]
Resembling the proof of lower bound, we first give a sufficient condition for the ball of radius \(r_{n}:=\frac{1}{n^{3/4}}\) around \(z_{1}\) to be inside the lemniscate. Assume the following holds,
\[\left\{\begin{array}{c}0<\lvert Q_{n}^{\prime}(z_{1})\rvert<e^{-\sqrt{n}},\\ \\ \left|\frac{Q_{n}^{(k)}(z_{1})\frac{r_{n}^{k}}{k!}}{Q_{n}^{\prime}(z_{1})\frac{r_{n}}{1!}}\right|<n^{2}\binom{n-1}{k-1}\left(\frac{C}{n^{3/4}}\right)^{k-1},\quad 2\leq k\leq n.\end{array}\right. \tag{26}\]
Then for \(z\in\partial B\left(z_{1},r_{n}\right)\) and \(n\) large enough, using (26) we have,
\[\lvert Q_{n}(z)\rvert \leq\left\lvert Q_{n}^{{}^{\prime}}(z_{1})\frac{r_{n}}{1!} \right\rvert+\left\lvert Q_{n}^{{}^{\prime\prime}}(z_{1})\frac{r_{n}^{2}}{2! }\right\rvert+...+\left\lvert Q_{n}^{\left(k\right)}(z_{1})\frac{r_{n}^{k}}{k! }\right\rvert+...+\left\lvert Q_{n}^{\left(n\right)}(z_{1})\frac{r_{n}^{n}}{n! }\right\rvert\] \[\leq\left\lvert Q_{n}^{{}^{\prime}}(z_{1})r_{n}\right\rvert\left( 1+\sum_{k=2}^{n}\frac{\lvert Q_{n}^{\left(k\right)}(z_{1})\frac{r_{n}^{k}}{k! }\rvert}{\lvert Q_{n}^{{}^{\prime}}(z_{1})\frac{r_{n}}{1!}\rvert}\right)\] \[\leq\left\lvert Q_{n}^{{}^{\prime}}(z_{1})r_{n}\right\rvert\left( 1+\sum_{k=2}^{n}n^{2}\binom{n-1}{k-1}\left(\frac{C}{n^{3/4}}\right)^{k-1}\right)\] \[\leq n^{2}e^{-\sqrt{n}}\left(1+\frac{C}{n^{3/4}}\right)^{n-1}\] \[\leq n^{2}e^{-\sqrt{n}}e^{Cn^{1/4}}<1.\]
The maximum principle then ensures that the disk \(B(z_{1},r)\) is inside the lemniscate. Let us now go back to the random setting and define the events \(T_{i}:=\left\{X_{i}\text{ is a \emph{good} root with }r=\frac{1}{n^{3/4}}\right\}\). The conditions in (25) immediately imply that the number of _good_ roots is less than or equal to the number of critical points with critical value less than \(1\), therefore,
\[\mathbb{E}[C(\Lambda_{n})] =n-\mathbb{E}\left[\left\{\text{Number of critical points with critical value less than }1\right\}\right]+1\] \[\leq n-\mathbb{E}\left[\sum_{1}^{n}\mathbb{1}_{T_{i}}\right]+1 \leq n\left(1-\mathbb{P}(T_{1})\right)+1.\]
By concentration estimates, we expect that most of the _good_ roots are within the annulus \(\mathbb{D}_{n}:=\left\{z:\frac{3}{n^{1/4}}<\lvert z\rvert\leq 1-\frac{1}{ \sqrt{n}}\right\}\). So we estimate
\[\mathbb{E}[C(\Lambda_{n})]\leq n\left(1-\mathbb{P}\left(T_{1}\lvert X_{1}\in \mathbb{D}_{n}\right)\mathbb{P}(X_{1}\in\mathbb{D}_{n})\right)+1. \tag{27}\]
Now let us define the events \(H_{1},...,H_{n+2}\) with \(r_{n}:=\frac{1}{n^{3/4}}\).
\[\left\{\begin{array}{l}H_{1}:=\left\{\left|P_{n}^{\prime}(X_{1})\right|<e^{-\frac{\sqrt{n}}{2}}\right\}\\ H_{k}:=\left\{\left|\frac{P_{n}^{(k)}(X_{1})\frac{r_{n}^{k}}{k!}}{P_{n}^{\prime}(X_{1})\frac{r_{n}}{1!}}\right|<n^{2}\binom{n-1}{k-1}\left(\frac{C}{n^{3/4}}\right)^{k-1}\right\},\text{ for }k=2,...,n.\\ H_{n+1}:=\left\{\min_{2\leq j\leq n}\lvert X_{1}-X_{j}\rvert>3r_{n}\right\}\\ H_{n+2}:=\left\{\exists\text{ a unique critical point }\xi\in B\left(X_{1},r_{n}\right)\right\}.\end{array}\right. \tag{28}\]
Notice that on the events (28), we have a good root. Therefore
\[\mathbb{P}\left(T_{1}\lvert X_{1}\in\mathbb{D}_{n}\right)\geq\mathbb{P}\left( \cap_{j=1}^{n+2}H_{j}\lvert X_{1}\in\mathbb{D}_{n}\right). \tag{29}\]
Next, we estimate the conditional probabilities of each of the events \(H_{1},...,H_{n+2}\). To estimate the probability of the event \(H_{1}\) we require the following lemma.
**Lemma 3.1**.: _( Upper bound on the first derivative) Let \(\{X_{i}\}_{i=1}^{\infty}\) be a sequence of i.i.d. uniform random variables in the open unit disk. Let \(\mathbb{D}_{n}:=\left\{z:\frac{3}{n^{1/4}}<\lvert z\rvert\leq 1-\frac{1}{\sqrt{n}}\right\}\). Then there exists a constant \(C>0\) such that,_
\[\mathbb{P}\left(\lvert P_{n}^{{}^{\prime}}(X_{1})\rvert\leq e^{-\frac{\sqrt{n }}{2}}\lvert X_{1}\in D_{n}\right)\geq 1-\frac{C}{\sqrt{n}}. \tag{30}\]
Proof of Lemma 3.1.: This proof adopts a methodology similar to Lemma 2.6, but with a slight variation. Instead of using a uniform bound for the integrand, we actually perform the integration to achieve the desired inequality. We have
\[\mathbb{P}\left(\lvert P_{n}^{{}^{\prime}}(X_{1})\rvert\geq e^{- \frac{\sqrt{n}}{2}}\lvert X_{1}\in\mathbb{D}_{n}\right) =\mathbb{P}\left(\prod_{j=2}^{n}\lvert X_{1}-X_{j}\rvert\geq e^{- \frac{\sqrt{n}}{2}}\lvert X_{1}\in\mathbb{D}_{n}\right)\] \[=\mathbb{P}\left(\sum_{j=2}^{n}\log\lvert X_{1}-X_{j}\rvert\geq- \frac{\sqrt{n}}{2}\lvert X_{1}\in\mathbb{D}_{n}\right)\] \[=\frac{1}{\mathbb{P}(X_{1}\in\mathbb{D}_{n})}\int_{\mathbb{D}_{n}} \mathbb{P}\left(\sum_{j=2}^{n}\log\lvert z-X_{j}\rvert\geq-\frac{\sqrt{n}}{2} \right)d\mu(z). \tag{31}\]
We use Bennett's inequality (2.2) after subtracting the mean in (31), with the uniform upper and lower bounds of \(\mathbb{E}\left[\log|z-X_{j}|^{2}\right]\) from Lemma 2.3 to obtain,
\[\frac{1}{\mathbb{P}(X_{1}\in\mathbb{D}_{n})}\int_{\mathbb{D}_{n}} \mathbb{P}\left(\sum_{j=2}^{n}\left(\log|z-X_{j}|-\mathbb{E}\left[\log|z-X_{j} |\right]\right)\geq\frac{(n-1)(1-|z|^{2})}{2}-\frac{\sqrt{n}}{2}\right)d\mu(z)\] \[\leq\frac{1}{\mathbb{P}(X_{1}\in\mathbb{D}_{n})}\int_{\mathbb{D}_ {n}}\exp\left(-C_{1}nh\left(\frac{(n-1)(1-|z|^{2})-\sqrt{n}}{2C_{2}(n-1)} \right)\right)d\mu(z)\] \[\leq\frac{1}{\pi\mathbb{P}(X_{1}\in\mathbb{D}_{n})}\int_{0}^{2 \pi}\int_{\frac{3}{n^{1/4}}}^{1-\frac{1}{\sqrt{n}}}\exp\left(-C_{1}nh\left( \frac{(n-1)(1-r^{2})-\frac{\sqrt{n}}{2}}{2C_{2}(n-1)}\right)\right)rdrd\theta\] \[\leq\frac{2}{\mathbb{P}(X_{1}\in\mathbb{D}_{n})}\int_{\frac{3}{n ^{1/4}}}^{1-\frac{1}{\sqrt{n}}}\exp\left(-C_{1}nh\left(\frac{(n-1)(1-r^{2})- \sqrt{n}}{2C_{2}(n-1)}\right)\right)rdr \tag{32}\]
To estimate the integral we do a change of variables of \((1-r^{2})=s\) in (32) and use the fact that \(C_{3}u^{2}\leq h(u)\leq C_{4}u^{2}\), for all \(u\in[0,1]\), for some constants \(C_{3},C_{4}>0\). Then (32) becomes
\[\frac{2}{\mathbb{P}(X_{1}\in\mathbb{D}_{n})}\int_{\frac{2}{\sqrt{ n}}-\frac{1}{n}}^{1-\frac{9}{\sqrt{n}}}\exp\left(-C_{1}nh\left(\frac{(n-1)s- \sqrt{n}}{2C_{2}(n-1)}\right)\right)ds\] \[\leq\frac{2}{\mathbb{P}(X_{1}\in\mathbb{D}_{n})}\int_{0}^{1}\exp \left(-C_{1}n\left(s-\frac{1}{2\sqrt{n}}\right)^{2}\right)ds\] \[\leq\frac{2}{\mathbb{P}(X_{1}\in\mathbb{D}_{n})}\int_{0}^{\infty }\exp\left(-x^{2}\right)\frac{dx}{C_{1}\sqrt{n}}\] \[\leq\frac{C}{\sqrt{n}}.\]
We finish the proof by taking the probability of the complementary event.
Using Lemma 3.1 above we deduce that,
\[\mathbb{P}\left(H_{1}\big{|}X_{1}\in\mathbb{D}_{n}\right)=\mathbb{P}\left( \left|P_{n}^{{}^{\prime}}(X_{1})\right|<e^{-\frac{\sqrt{n}}{2}}\Big{|}X_{1} \in\mathbb{D}_{n}\right)\geq 1-\frac{C_{1}}{\sqrt{n}}. \tag{33}\]
Now we estimate \(\mathbb{P}\left(H_{k}\big{|}X_{1}\in\mathbb{D}_{n}\right)\) for \(2\leq k\leq n\). By taking \(\mathbb{B}=\mathbb{D}_{n}\) in Lemma 2.7 and the uniform bound from Lemma 2.4, we arrive at
\[\mathbb{E}\left[\left|\frac{P_{n}^{(k)}(X_{1})\frac{r_{n}^{k}}{k!}}{P_{n}^{\prime}(X_{1})\frac{r_{n}}{1!}}\right|\,\Big{|}\,X_{1}\in\mathbb{D}_{n}\right]\leq\binom{n-1}{k-1}\left(\frac{C}{n^{3/4}}\right)^{k-1}. \tag{34}\]
Now the conditional Markov inequality along with (34) gives,
\[\mathbb{P}\left(H_{k}^{c}\big{|}X_{1}\in\mathbb{D}_{n}\right)=\mathbb{P}\left(\left|\frac{P_{n}^{(k)}(X_{1})\frac{r_{n}^{k}}{k!}}{P_{n}^{\prime}(X_{1})\frac{r_{n}}{1!}}\right|\geq n^{2}\binom{n-1}{k-1}\left(\frac{C}{n^{3/4}}\right)^{k-1}\Big{|}X_{1}\in\mathbb{D}_{n}\right)\leq\frac{1}{n^{2}}. \tag{35}\]
Using Lemma 2.5 with \(t=\frac{1}{n^{3/4}}\) and \(f(x)=x^{2}\) we obtain,
\[\mathbb{P}\left(H_{n+1}\big{|}X_{1}\in\mathbb{D}_{n}\right) =\mathbb{P}\left(\min_{2\leq j\leq n}\big{|}X_{1}-X_{j}\big{|}> \frac{3}{n^{3/4}}\Big{|}X_{1}\in\mathbb{D}_{n}\right)\] \[\geq\left(1-\frac{1}{n^{3/2}}\right)^{n-1}\geq 1-\frac{2}{\sqrt{n}}. \tag{36}\]
Lastly, the probability bound for the event \(H_{n+2}\) follows from the following lemma.
**Lemma 3.2**.: _( **Distance between roots and critical points )** Let \(\{X_{i}\}_{i=1}^{\infty}\) be a sequence of i.i.d. uniform random variables in the open unit disk. We define the random polynomial \(P_{n}\) as in (2). Let \(\mathbb{D}_{n}:=\{z:\frac{3}{n^{1/4}}<|z|<1-\frac{1}{n^{1/2}}\}\), and \(r_{n}=\frac{1}{n^{3/4}}.\) Then_
\[\mathbb{P}\left(\left\{\exists\text{ a unique critical point }\xi\in B\left(X_{1},r_{n}\right)\right\}\big{|}X_{1}\in\mathbb{D}_{n} \right)\geq 1-\frac{C}{\sqrt{n}}. \tag{37}\]
Proof of Lemma 3.2.: The proof can essentially be deduced from ideas in [17]. We first condition on the location of \(X_{1}\) and rewrite the probability as
\[\mathbb{P}\left(\left\{\exists\text{ a unique critical point }\xi\in B \left(X_{1},r_{n}\right)\right\}\big{|}X_{1}\in\mathbb{D}_{n}\right)\\ =\int_{\mathbb{D}_{n}}\mathbb{P}\left(\left\{\exists\text{ a unique critical point }\xi\in B\left(X_{1},r_{n}\right)\right\}\big{|}X_{1}=u\right)d\mu(u). \tag{38}\]
Fixing \(u\in\mathbb{D}_{n}\) we define the event
\[\mathcal{E}_{n}(u):=\left\{\sup_{z\in\partial B(u,r_{n})}\left|\frac{1}{n} \frac{P_{n}^{\prime}(z)}{P_{n}(z)}-f(u)\right|<|f(u)|\right\}, \tag{39}\]
where \(r_{n}=\frac{1}{n^{3/4}}\) and \(f(z):=\mathbb{E}\left[\frac{1}{z-X_{2}}\right]=\bar{z}\) is the Cauchy transform of the uniform probability measure on \(\mathbb{D}\). On the event \(\mathcal{E}_{n}(u)\), by Rouche's theorem (c.f. [6], pp.125-126) the difference between the number of zeros and poles of \(\frac{1}{n}\frac{P_{n}^{\prime}(z)}{P_{n}(z)}\) on \(B(u,r_{n})\) is the same as the difference between the number of zeros and poles of the constant function \(z\mapsto f(u)\), which is zero; since the zeros of \(\frac{P_{n}^{\prime}}{P_{n}}\) are the critical points of \(P_{n}\) and its poles are the roots of \(P_{n}\), the ball \(B(u,r_{n})\) contains equally many critical points and roots. Now we define another event \(\mathcal{F}_{n}(u):=\{|X_{2}-u|>3r_{n},...,|X_{n}-u|>3r_{n}\}\) which guarantees that there is only one root of \(P_{n}\) inside \(B(u,r_{n})\), hence only one critical point inside \(B(u,r_{n})\). Following the idea of the proof of Lemma 2.5 one can show that \(\mathbb{P}(\mathcal{F}_{n}(u))\geq 1-\frac{C}{\sqrt{n}}\), therefore,
\[\mathbb{P}\left(\left\{\exists\text{ a unique critical point }\xi\in B \left(u,r_{n}\right)\right\}\right)\geq\mathbb{P}\left(\mathcal{E}_{n}(u) \cap\mathcal{F}_{n}(u)\right)\geq\mathbb{P}\left(\mathcal{E}_{n}(u)\right)- \frac{C}{\sqrt{n}}. \tag{40}\]
Next, writing \(\frac{P_{n}^{\prime}(z)}{P_{n}(z)}\) as sum of i.i.d random variables with mean \(f(z)\) in (39) we get,
\[\mathbb{P}\left(\mathcal{E}_{n}(u)\right)=\mathbb{P}\left(\sup_{z\in\partial B(u,r_{n})}\left|\frac{1}{n(z-u)}+\frac{1}{n}\sum_{2}^{n}\frac{1}{z-X_{j}}-f(u)\right|<|f(u)|\right).\]
Let \(\tilde{z}_{n}\) be a sequence of complex numbers in \(B(u,r_{n})\) converging to \(u\). Now adding and subtracting \(\frac{1}{n}\sum_{2}^{n}\frac{1}{\tilde{z}_{n}-X_{j}}\) and \(f(\tilde{z}_{n})\) we bound the probability from below as
\[\mathbb{P}\left(\mathcal{E}_{n}(u)\right)\geq\mathbb{P}\left(\sup _{z\in\partial B(u,r_{n})}\left|\frac{1}{n(z-u)}\right|+\sup_{z,\tilde{z}_{n} \in B(u,r_{n})}\left|\frac{1}{n}\sum_{2}^{n}\left(\frac{1}{z-X_{j}}-\frac{1}{ \tilde{z}_{n}-X_{j}}\right)\right|\\ +\left|\frac{1}{n}\sum_{2}^{n}\frac{1}{\tilde{z}_{n}-X_{j}}-f( \tilde{z}_{n})\right|+\left|\overline{\tilde{z}_{n}-u}\right|<\left|f(u) \right|\right). \tag{41}\]
Notice that the maximum of the first and the last term in (41) satisfies \(\max\left\{\sup_{z\in\partial B(u,r_{n})}\left|\frac{1}{n(z-u)}\right|,|\overline{\tilde{z}_{n}-u}|\right\}\leq\frac{1}{n^{1/4}}\), whereas \(\left|f(u)\right|\geq\frac{3}{n^{1/4}}\). Therefore, by the triangle inequality we get
\[\left|f(u)\right|-\sup_{z\in\partial B(u,r_{n})}\left|\frac{1}{n(z-u)}\right|- \left|\overline{\tilde{z}_{n}-u}\right|\geq\frac{\left|f(u)\right|}{3}. \tag{42}\]
Plugging the estimate (42) in (41) we arrive at,
\[\mathbb{P}\left(\mathcal{E}_{n}(u)\right)\geq\\ \mathbb{P}\left(\sup_{z,\tilde{z}_{n}\in B(u,r_{n})}\left|\frac{1 }{n}\sum_{2}^{n}\left(\frac{1}{z-X_{j}}-\frac{1}{\tilde{z}_{n}-X_{j}}\right) \right|+\left|\frac{1}{n}\sum_{2}^{n}\frac{1}{\tilde{z}_{n}-X_{j}}-f(\tilde{z} _{n})\right|<\frac{\left|f(u)\right|}{3}\right). \tag{43}\]
Now taking complementary events and using the fact that \(\mathbb{P}(a+b>2)\leq\mathbb{P}(a>1)+\mathbb{P}(b>1)\) we obtain,
\[\mathbb{P}\left(\mathcal{E}_{n}(u)\right)\geq 1-\mathbb{P} \underbrace{\left(\sup_{z,\tilde{z}_{n}\in B(u,r_{n})}\left|\frac{1}{n}\sum_ {2}^{n}\left(\frac{1}{z-X_{j}}-\frac{1}{\tilde{z}_{n}-X_{j}}\right)\right| \geq\frac{\left|f(u)\right|}{6}\right)}_{\text{(I)}}\\ -\mathbb{P}\underbrace{\left(\left|\frac{1}{n}\sum_{2}^{n}\frac {1}{\tilde{z}_{n}-X_{j}}-f(\tilde{z}_{n})\right|\geq\frac{\left|f(u)\right|}{ 6}\right)}_{\text{(II)}}. \tag{44}\]
To estimate (I), we first simplify it using the following change of variables \(z^{\prime}=z-u,z_{n}^{{}^{\prime\prime}}=\tilde{z}_{n}-u,X_{j}^{{}^{\prime}}= X_{j}-u\) to get
\[\text{(I)}=\mathbb{P}\left(\sup_{z^{\prime},z_{n}^{{}^{\prime \prime}}\in B(0,r_{n})}\left|\frac{1}{n}\sum_{2}^{n}\left(\frac{(z^{\prime}- z_{n}^{{}^{\prime\prime}})}{(z^{\prime}-X_{j}^{{}^{\prime}})(z_{n}^{{}^{\prime \prime}}-X_{j}^{{}^{\prime}})}\right)\right|\geq\frac{\left|f(u)\right|}{6} \right)\\ \leq\mathbb{P}\left(\sup_{z^{\prime},z_{n}^{{}^{\prime\prime}}\in B (0,r_{n})}\frac{2r_{n}}{n}\left|\sum_{2}^{n}\left(\frac{1}{(z^{\prime}-X_{j}^ {{}^{\prime}})(z_{n}^{{}^{\prime\prime}}-X_{j}^{{}^{\prime}})}\right)\right| \geq\frac{\left|f(u)\right|}{6}\right). \tag{45}\]
Now using Markov inequality and Lemma 5.9 from [17] with \(r_{n}=s_{n}=\frac{1}{n^{3/4}}\) and \(a_{n}=\frac{2r_{n}}{n}\) in (45) we get,
\[\text{(I)}\leq\frac{6}{\left|f(u)\right|}\left[4na_{n}\left(-2\pi C\log(2s_{n}) +\mathcal{O}(1)\right)+4n\pi s_{n}^{2}C\right]\leq\frac{C}{\left|f(u)\right| \sqrt{n}}. \tag{46}\]
We use Lemma 5.11 from [17] with \(p=1.5,\varepsilon=\frac{|f(u)|}{6}\) and the uniform bounds on the moments from Lemma 2.4 to estimate \((\mathrm{II})\).
\[(\mathrm{II})\leq\frac{C}{|f(u)|^{3/2}\sqrt{n}}\left(\mathbb{E}\left|\frac{1}{ \tilde{z}_{n}-X_{1}}\right|^{1.5}+|f(\tilde{z}_{n})|^{1.5}\right)\leq\frac{C}{ |f(u)|^{3/2}\sqrt{n}}. \tag{47}\]
Now inserting (46) and (47) into (44), and then using (40) together with (38), we obtain,
\[\mathbb{P}\left(\left\{\exists\text{ a unique critical point }\xi\in B\left(X_{1},r_{n}\right)\right\}\,\Big{|}\,X_{1}\in\mathbb{D}_{n}\right)\] \[\geq\int_{\mathbb{D}_{n}}\left(1-\frac{C}{|f(u)|^{3/2}\sqrt{n}}-\frac{C}{|f(u)|\sqrt{n}}-\frac{C}{\sqrt{n}}\right)d\mu(u)\] \[\geq 1-\frac{C}{\sqrt{n}}-C_{1}\int_{\frac{3}{n^{1/4}}}^{1-\frac{1}{\sqrt{n}}}\left(\frac{C}{r^{3/2}\sqrt{n}}+\frac{C}{r\sqrt{n}}\right)rdr\] \[\geq 1-\frac{C}{\sqrt{n}}.\]
Applying the union bound along with the estimates (33), (35), (36), and (37) leads to
\[\mathbb{P}\left(T_{1}|X_{1}\in\mathbb{D}_{n}\right)\geq\mathbb{P}\left(\cap_{ k=1}^{n+2}H_{k}\big{|}X_{1}\in\mathbb{D}_{n}\right)\geq 1-\sum_{k=1}^{n+2} \mathbb{P}\left(H_{k}^{\ c}\big{|}X_{1}\in\mathbb{D}_{n}\right)\geq 1-\frac{C}{ \sqrt{n}}. \tag{48}\]
Feeding (48) into (27) the required upper bound is obtained.
\[\mathbb{E}[C(\Lambda_{n})] \leq n\left(1-\mathbb{P}\left(T_{1}|X_{1}\in\mathbb{D}_{n}\right) \mathbb{P}(X_{1}\in\mathbb{D}_{n})\right)+1\] \[\leq n\left(1-\left(1-\frac{C}{\sqrt{n}}\right)\left(1-\frac{2}{ \sqrt{n}}\right)\right)+1\] \[\leq C_{2}\sqrt{n}.\]
## 4. Proof of Theorem 1.2
In a recent paper [20], Krishnapur, Lundberg, and Ramachandran have shown that the polynomial lemniscate for roots chosen uniformly from the unit circle is a truly random quantity that converges in distribution to a sub-level set of a certain Gaussian function. Here, we show that the expected number of components for such lemniscates is asymptotically \(\frac{n}{2}\).
Proof of the theorem 1.2 (lower limit).: The proof of the lower bound in this case follows the same strategy as in the previous theorem. The definition of an isolated component remains unchanged, and our focus lies on determining the number of such components. However, we cannot follow the proof verbatim because in this case, \(\mathbb{E}\left[\frac{1}{|z-X_{j}|}\right]=\infty\). Therefore we condition on the following event to bypass this problem. Let us define the event \(A:=\left\{\min\limits_{2\leq j\leq n}|X_{1}-X_{j}|>\frac{1}{n^{3}}\right\}\), then by Lemma 2.5 with \(t=\frac{1}{n^{3}}\) and \(f(x)=2x\) the probability of the event \(A\) is
\[\mathbb{P}(A)=\mathbb{P}\left(\min\limits_{2\leq j\leq n}|X_{1}-X_{j}|>\frac{1 }{n^{3}}\right)\geq 1-\frac{2}{n^{2}}. \tag{49}\]
For \(1\leq i\leq n\), let us define the events \(S_{i}:=\{X_{i}\text{ forms an isolated component}\}\). Then it immediately follows that
\[\mathbb{E}\left[C(\Lambda_{n})\right]\geq\mathbb{E}\left[\sum_{i=1}^{n}1_{S_{ i}}\right]\geq n\mathbb{E}\left[1_{S_{i}}\right]\geq n\mathbb{P}\left(S_{1} \right)\geq n\mathbb{P}\left(S_{1}\cap A\right)\geq n\mathbb{P}\left(S_{1}|A \right)-\frac{2}{n}.\]
Next, we define events \(F_{1},\dots,F_{n+1}\) as follows, with \(r_{n}=\frac{1}{n^{6}}\).
\[\left\{\begin{array}{l}F_{1}:=\left\{\left|P_{n}^{\prime}(X_{1})\right|\geq e^{n^{1/2-\varepsilon}}\right\}\\ \\ F_{k}:=\left\{\left|\frac{P_{n}^{(k)}(X_{1})\frac{r_{n}^{k}}{k!}}{P_{n}^{\prime}(X_{1})\frac{r_{n}}{1!}}\right|<\frac{1}{2n^{2}}\right\},\ \text{for }k=2,...,n.\\ \\ F_{n+1}:=\left\{\min_{2\leq j\leq n}\lvert X_{1}-X_{j}\rvert>\frac{1}{n^{6}}\right\}\end{array}\right. \tag{50}\]
From the calculations of (15) and (16) it follows that on the events (50), we have an isolated component. Hence
\[\mathbb{P}\left(S_{1}\lvert A\right)\geq\mathbb{P}\left(\cap_{j=1}^{n+1}F_{j} \lvert A\right). \tag{51}\]
As before we will calculate the conditional probability of the events \(F_{j}\). Taking logarithms and using Berry-Esseen theorem (2.1) as in Lemma 2.6, along with uniform bounds on the moments gives
\[\mathbb{P}\left(F_{1}\lvert A\right)=\mathbb{P}\left(\lvert P_{n}^{{}^{\prime }}\left(X_{1}\right)\rvert\geq e^{n^{1/2-\varepsilon}}\lvert A\right)\geq \mathbb{P}\left(\lvert P_{n}^{{}^{\prime}}\left(X_{1}\right)\rvert\geq e^{n^ {1/2-\varepsilon}}\right)-\frac{1}{n^{2}}\geq\frac{1}{2}-\frac{\hat{C}}{n^{ \varepsilon}},\]
where we used that \(\mathbb{P}(A\cap B)=\mathbb{P}(A)-\mathbb{P}(A\cap B^{c})\). Notice that on the event \(A\), we have for \(2\leq k\leq n\),
\[\left|\frac{P_{n}^{(k)}(X_{1})\frac{r_{n}^{k}}{k!}}{P_{n}^{\prime}(X_{1})\frac{r_{n}}{1!}}\right| \leq\frac{1}{n^{6(k-1)}k!}\sum_{i_{1},...,i_{k-1}}\frac{1}{\lvert X_{1}-X_{i_{1}}\rvert...\lvert X_{1}-X_{i_{k-1}}\rvert}\] \[\leq\frac{1}{n^{6(k-1)}k!}\sum_{i_{1},...,i_{k-1}}n^{3(k-1)}\] \[\leq\frac{k(n-1)(n-2)...(n-k+1)n^{3(k-1)}}{n^{6(k-1)}k!}\] \[\leq\frac{1}{2n^{2}}.\]
Therefore, \(\mathbb{P}(F_{k}\cap A)=\mathbb{P}(A)\) and as a result for \(2\leq k\leq n\), we have \(\mathbb{P}(F_{k}|A)=1\). Since \(A\subset F_{n+1}\), we get the conditional probability \(\mathbb{P}(F_{n+1}|A)=1\). Now using these bounds together with (51) we obtain,
\[\mathbb{E}\left[C(\Lambda_{n})\right]\geq n\mathbb{P}\left(S_{1} \lvert A\right)-\frac{2}{n}\geq\frac{n}{2}-Cn^{1-\varepsilon}\] \[\implies\liminf_{n\rightarrow\infty}\ \ \frac{\mathbb{E}\left[C(\Lambda_{n})\right]}{n}\geq\frac{1}{2}.\]
**(Upper limit)** The pairing of zeros and critical points does not occur in general if the law of the random variable does not have a density. Therefore when \(\mu\) is the uniform probability measure on \(\mathbb{S}^{1}\), we cannot proceed by exploiting the pairing result. We prove the upper limit by showing that, on average, roughly half of the roots lie in components containing at least \(n^{\varepsilon/2}\) roots, while the number of such large components is negligible. Let \(C_{k}(\Lambda_{n})\) denote the number of components containing exactly \(k\) roots. Then it immediately follows that
\[\sum_{1}^{n}C_{k}(\Lambda_{n})=C(\Lambda_{n}), \tag{52}\] \[\sum_{1}^{n}kC_{k}(\Lambda_{n})=n. \tag{53}\]
For \(i=1,...,n\), fix an \(\varepsilon>0\) small and define the events \(D_{i}:=\big{\{}\)There are at least \(n^{\varepsilon/2}\) many roots inside the component containing the root \(X_{i}\big{\}}\). Now we claim that,
\[C(\Lambda_{n})\leq n-\sum_{1}^{n}\mathbbm{1}_{D_{i}}+\sum_{k\geq n^{\varepsilon/ 2}}C_{k}(\Lambda_{n}). \tag{54}\]
Substituting (52) and (53) in (54), we have to verify that
\[\sum_{1}^{n}\mathbbm{1}_{D_{i}}\leq\sum_{k<n^{\varepsilon/2}}(k-1)C_{k}( \Lambda_{n})+\sum_{k\geq n^{\varepsilon/2}}kC_{k}(\Lambda_{n})\]
Since all the quantities are non-negative it is enough to show that
\[\sum_{1}^{n}\mathbbm{1}_{D_{i}}\leq\sum_{k\geq n^{\varepsilon/2}}kC_{k}( \Lambda_{n}). \tag{55}\]
Let \(X_{i_{1}}\) be a root that has more than \(n^{\varepsilon/2}\) roots in the component containing it. Assume that \(X_{i_{2}},...,X_{i_{m}}\) are the other roots in this component say \(C\). Then clearly,
\[\sum_{k=1}^{m}\mathbbm{1}_{D_{i_{k}}}=m. \tag{56}\]
Now choose another root from \(\{X_{1},...,X_{n}\}\backslash\{X_{i_{1}},...,X_{i_{m}}\}\) such that it has more than \(n^{\varepsilon/2}\) roots in the component containing it. Continuing this process and adding equations like (56) we get (55). Since the total number of roots is \(n\), we can obtain a bound on the rightmost term of (54) in the following way.
\[n =\sum_{k<n^{\varepsilon/2}}kC_{k}(\Lambda_{n})+\sum_{k\geq n^{\varepsilon/2}}kC_{k}(\Lambda_{n})\geq\sum_{k\geq n^{\varepsilon/2}}kC_{k}(\Lambda_{n})\geq n^{\varepsilon/2}\sum_{k\geq n^{\varepsilon/2}}C_{k}(\Lambda_{n})\] \[\implies n^{1-\varepsilon/2} \geq\sum_{k\geq n^{\varepsilon/2}}C_{k}(\Lambda_{n}). \tag{57}\]
After inserting the estimate (57) and taking expectations on both sides of (54) we arrive at,
\[\mathbb{E}[C(\Lambda_{n})]\leq n-n\mathbb{P}(D_{1})+n^{1-\varepsilon/2}. \tag{58}\]
To calculate the probability of the event \(D_{1}\), let us first calculate the probability of having at least \(n^{\varepsilon/2}\) roots in the ball \(B(rX_{1},\tilde{r})\), where \(r:=1-\frac{1}{n^{1-\varepsilon}}\), \(\tilde{r}:=\frac{2}{n^{1-\varepsilon}}\). For \(i=2,...,n\), define the events \(\mathcal{T}_{i}:=\{X_{i}\in B(rX_{1},\tilde{r})\}\), then by the Paley-Zygmund inequality,
\[\mathbb{P}\left(\sum_{2}^{n}\mathbbm{1}_{\mathcal{T}_{i}}\geq n^{\varepsilon/2}\right)\geq\left(1-\frac{n^{\varepsilon/2}}{\mathbb{E}\left[\sum_{2}^{n}\mathbbm{1}_{\mathcal{T}_{i}}\right]}\right)^{2}\frac{\mathbb{E}\left[\sum_{2}^{n}\mathbbm{1}_{\mathcal{T}_{i}}\right]^{2}}{\mathbb{E}\left[\left(\sum_{2}^{n}\mathbbm{1}_{\mathcal{T}_{i}}\right)^{2}\right]}. \tag{59}\]
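Here we use the Paley-Zygmund inequality in the following standard form: for a non-negative random variable \(Z\) with finite second moment and any \(\theta\in[0,1]\),
\[\mathbb{P}\left(Z\geq\theta\,\mathbb{E}[Z]\right)\geq(1-\theta)^{2}\,\frac{\mathbb{E}[Z]^{2}}{\mathbb{E}[Z^{2}]},\]
applied in (59) with \(Z=\sum_{2}^{n}\mathbbm{1}_{\mathcal{T}_{i}}\) and \(\theta=n^{\varepsilon/2}/\mathbb{E}[Z]\).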
Using the rotation invariance of the measure we get,
\[\mathbb{P}\left(X_{j}\in B\left(rX_{1},\tilde{r}\right)\right)=\int_{0}^{2 \pi}\mathbb{P}\left(X_{j}\in B(rX_{1},\tilde{r})\big{|}X_{1}=e^{i\phi}\right) \frac{d\phi}{2\pi}=\mathbb{P}\left(X_{j}\in B(re_{1},\tilde{r})\right),\]
where \(e_{1}=(1,0)\). Let us assume that \(B(re_{1},\tilde{r})\) intersects the unit circle at points \(W_{1},W_{2}\) and the angle \(\angle OW_{1}W_{2}=\theta\), where \(O\) is the origin. Then it is easy to see that \(\mathbb{P}(X_{j}\in B(rX_{1},\tilde{r}))=\theta\) and
\(\mathbb{P}(X_{j},X_{k}\in B(rX_{1},\tilde{r}))=\theta^{2}\), for all distinct \(j,k\neq 1\). Then
\[\mathbb{E}\left[\sum_{2}^{n}\mathbb{1}_{\tau_{i}}\right]=(n-1)\theta, \tag{60}\] \[\mathbb{E}\left[\left(\sum_{2}^{n}\mathbb{1}_{\tau_{i}}\right)^{2 }\right]=(n-1)\theta+(n-2)(n-1)\theta^{2}. \tag{61}\]
Equations (60) and (61) along with Bernoulli's inequality yield,
\[\frac{\mathbb{E}\left[\sum_{2}^{n}\mathbb{1}_{\tau_{i}}\right]^{2}}{\mathbb{E }\left[|\sum_{2}^{n}\mathbb{1}_{\tau_{i}}|^{2}\right]}=\frac{(n-1)^{2}\theta^ {2}}{(n-1)\theta+(n-2)(n-1)\theta^{2}}\geq 1-\frac{1}{\theta(n-1)}\geq 1-\frac{C}{n ^{\varepsilon}}, \tag{62}\]
where the last inequality follows from elementary geometry: for \(n\) large, \(\sin\left(\frac{\theta}{4}\right)\geq\frac{C}{n^{1-\varepsilon}}\), and using this we bound \((n-1)\theta\) as
\[(n-1)\theta\geq 4(n-1)\sin\left(\frac{\theta}{4}\right)\geq Cn^{\varepsilon}.\]
Plugging the bound (62) in (59) we have,
\[\mathbb{P}\left(\sum_{2}^{n}\mathbb{1}_{\tau_{i}}\geq n^{\varepsilon/2}\right) \geq\left(1-\frac{n^{\varepsilon/2}}{\mathbb{E}\left[\sum_{2}^{n}\mathbb{1}_{ \tau_{i}}\right]}\right)^{2}\left(1-\frac{C}{n^{\varepsilon}}\right)\geq\left( 1-\frac{C}{n^{\varepsilon/2}}\right). \tag{63}\]
If the ball \(B(rX_{1},\tilde{r})\) is inside the lemniscate and there are at least \(n^{\varepsilon/2}\) roots inside the ball \(B(rX_{1},\tilde{r})\), then the connected component containing \(X_{1}\) must have at least \(n^{\varepsilon/2}\) roots inside it. Now all we need is to estimate the probability that the ball \(B(rX_{1},\tilde{r})\) is inside the lemniscate, which follows from the next lemma.
**Lemma 4.1**.: _Let \(\{X_{i}\}_{i=1}^{\infty}\) be a sequence of i.i.d. random variables which are uniformly distributed on the unit circle. Fix \(\varepsilon\in(0,\frac{1}{4})\) and define \(r:=1-\frac{1}{n^{1-\varepsilon}}\), \(\tilde{r}:=\frac{2}{n^{1-\varepsilon}}\). Then there exists a constant \(C>0\), such that,_
\[\mathbb{P}\left(B(rX_{1},\tilde{r})\subset\Lambda_{n}\right)\geq\frac{1}{2}- \frac{C}{n^{\varepsilon}} \tag{64}\]
Proof of Lemma 4.1.: Define \(\tilde{Q}_{n}(z):=\frac{Q_{n}(z)}{(z-z_{1})}\) and assume that for some \(r_{1},\tilde{r_{1}}\) the following is satisfied.
\[\left\{\begin{array}{l}4\tilde{r}_{1}<1,\\ \left|\tilde{Q}_{n}(r_{1}z_{1})\right|\leq\exp\left(-n^{\frac{1}{2}-\varepsilon}\right),\\ \left|\frac{\tilde{Q}_{n}^{(k)}(r_{1}z_{1})\tilde{r}_{1}^{k}}{\tilde{Q}_{n}(r_{1}z_{1})}\right|\leq n\sqrt{(n-1)...(n-k)}\left(\frac{4}{n^{1-\varepsilon}}\right)^{k/2},\quad k\geq 1.\end{array}\right. \tag{65}\]
Then for \(z\in\partial B(r_{1}z_{1},\tilde{r}_{1})\) and \(n\) large, we have,
\[|Q_{n}(z)| =|z-z_{1}||\tilde{Q}_{n}(z)|\] \[\leq 2\tilde{r}_{1}\left(|\tilde{Q}_{n}(r_{1}z_{1})|+\left|\tilde{Q}_{n}^{\prime}(r_{1}z_{1})\frac{\tilde{r}_{1}}{1!}\right|+...+\left|\tilde{Q}_{n}^{(k)}(r_{1}z_{1})\frac{\tilde{r}_{1}^{k}}{k!}\right|+...+\left|\tilde{Q}_{n}^{(n-1)}(r_{1}z_{1})\frac{\tilde{r}_{1}^{n-1}}{(n-1)!}\right|\right)\] \[\leq|\tilde{Q}_{n}(r_{1}z_{1})|\left(1+\sum_{k=1}^{n-1}\left|\frac{\tilde{Q}_{n}^{(k)}(r_{1}z_{1})}{\tilde{Q}_{n}(r_{1}z_{1})}\right|\frac{\tilde{r}_{1}^{k}}{k!}\right)\] \[\leq\exp\left(-n^{\frac{1}{2}-\varepsilon}\right)\left(1+\sum_{k=1}^{n-1}\frac{n\sqrt{(n-1)...(n-k)}}{k!}\left(\frac{4}{n^{1-\varepsilon}}\right)^{k/2}\right),\]
where we got the last line using (65). Now factoring \(n\) out of the parentheses above and using the Cauchy-Schwarz inequality one has,
\[\leq n\exp\left(-n^{\frac{1}{2}-\varepsilon}\right)\left(1+\sum_{k =1}^{n-1}\frac{(n-1)...(n-k)}{k!}\left(\frac{1}{n^{1/2+\varepsilon/2}}\right) ^{k}\right)^{1/2}\left(1+\sum_{k=1}^{n-1}\frac{1}{k!}\left(\frac{4}{n^{1/2-3/2 \varepsilon}}\right)^{k}\right)^{1/2}\] \[\leq n\exp\left(-n^{\frac{1}{2}-\varepsilon}\right)\left(1+\sum_ {k=1}^{n-1}\binom{n-1}{k}\left(\frac{1}{n^{1/2+\varepsilon/2}}\right)^{k} \right)^{1/2}\left(1+\sum_{k=1}^{\infty}\frac{1}{k!}\left(\frac{4}{n^{1/2-3/2 \varepsilon}}\right)^{k}\right)^{1/2}\] \[\leq n\exp\left(-n^{\frac{1}{2}-\varepsilon}\right)\exp\left(n^{ \frac{1}{2}-\varepsilon/2}\right)\exp\left(2n^{-1/2+3/2\varepsilon}\right)<1.\]
By the maximum principle, this ensures that the disk \(B(r_{1}z_{1},\tilde{r}_{1})\) is inside the lemniscate. Now with \(r:=1-\frac{1}{n^{1-\varepsilon}}\), \(\tilde{r}:=\frac{2}{n^{1-\varepsilon}}\) and defining \(\tilde{P}_{n}\) similarly to \(\tilde{Q}_{n}\), we define the following events,
\[\begin{cases}\mathcal{G}_{1}:=\left\{|\tilde{P}_{n}(rX_{1})|\leq\exp\left(-n^{\frac{1}{2}-\varepsilon}\right)\right\}\\ \mathcal{G}_{k}:=\left\{\left|\frac{\tilde{P}_{n}^{(k)}(rX_{1})\tilde{r}^{k}}{\tilde{P}_{n}(rX_{1})}\right|\leq n\sqrt{(n-1)...(n-k)}\left(\frac{4}{n^{1-\varepsilon}}\right)^{k/2}\right\},\qquad\text{for }k=2,...,n.\end{cases} \tag{66}\]
By the conditions in (65) it immediately follows that,
\[\mathbb{P}\left(B(rX_{1},\tilde{r})\subset\Lambda_{n}\right)\geq\mathbb{P} \left(\cap_{1}^{n}\mathcal{G}_{k}\right). \tag{67}\]
Let us calculate the probabilities of the events \(\mathcal{G}_{1},...,\mathcal{G}_{n}\) individually. To estimate \(\mathbb{P}\left(\mathcal{G}_{1}\right)\), we take logarithm, use the fact that the mean of this random variable is \(0\), and apply the Berry-Esseen Theorem (2.1) as done in Lemma 2.6. Then it follows that for some constant \(C_{1}\),
\[\mathbb{P}\left(\mathcal{G}_{1}\right)\geq\frac{1}{2}-\frac{C_{1}}{n^{ \varepsilon}}. \tag{68}\]
For the events \(\mathcal{G}_{k},\) we use Chebyshev's inequality to obtain,
\[\mathbb{P}\left(\left|\frac{\tilde{P}_{n}^{(k)}(rX_{1})\tilde{r}^{k }}{\tilde{P}_{n}(rX_{1})}\right|\geq n\sqrt{(n-1)...(n-k)}\left(\frac{4}{n^{1- \varepsilon}}\right)^{k/2}\right)\] \[\leq\frac{1}{n^{2}(n-1)...(n-k)}\left(\frac{n^{1-\varepsilon}}{4} \right)^{k}\tilde{r}^{2k}\mathbb{E}\left[\left|\frac{\tilde{P}_{n}^{(k)}(rX_{ 1})}{\tilde{P}_{n}(rX_{1})}\right|^{2}\right]. \tag{69}\]
We estimate \(\mathbb{E}\left[\left|\frac{\tilde{P}_{n}^{(k)}(rX_{1})}{\tilde{P}_{n}(rX_{1} )}\right|^{2}\right]\) using the following facts
\[\mathbb{E}\left[\frac{1}{z-X_{1}}\right]=0,\ \ \forall z\in\mathbb{D}, \tag{70}\] \[\mathbb{E}\left[\frac{1}{|r-X_{1}|^{2}}\right]=\frac{1}{1-r^{2}}. \tag{71}\]
The identity (70) follows from the Cauchy integral formula, and (71) follows using standard integration techniques.
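One way to verify (71) is by averaging the Poisson kernel: for \(0\leq r<1\),
\[\frac{1}{2\pi}\int_{0}^{2\pi}\frac{d\theta}{|r-e^{i\theta}|^{2}}=\frac{1}{2\pi}\int_{0}^{2\pi}\frac{d\theta}{1-2r\cos\theta+r^{2}}=\frac{1}{1-r^{2}}\cdot\frac{1}{2\pi}\int_{0}^{2\pi}\sum_{k\in\mathbb{Z}}r^{|k|}e^{ik\theta}\,d\theta=\frac{1}{1-r^{2}},\]
since every term with \(k\neq 0\) integrates to zero.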
\[\mathbb{E}\left[\left|\frac{\tilde{P}_{n}^{(k)}(rX_{1})}{\tilde{P }_{n}(rX_{1})}\right|^{2}\right]=\mathbb{E}\left[\left|\sum_{2\leq i_{1}<i_{2}<...<i_{k}\leq n}\frac{1}{(rX_{1}-X_{i_{1}})...(rX_{1}-X_{i_{k}})}\right|^{2}\right]\] \[= \mathbb{E}\left[\sum_{2\leq i_{1}<...<i_{k}\leq n}\frac{1}{(rX_{1 }-X_{i_{1}})...(rX_{1}-X_{i_{k}})}\sum_{2\leq j_{1}<...<j_{k}\leq n}\frac{1}{ \overline{(rX_{1}-X_{j_{1}})...(rX_{1}-X_{j_{k}})}}\right]\] \[= \frac{1}{2\pi}\int_{0}^{2\pi}\mathbb{E}\left[\sum_{2\leq i_{1}<...<i_{k}\leq n}\frac{1}{(re^{i\theta}-X_{i_{1}})...(re^{i\theta}-X_{i_{k}})} \sum_{2\leq j_{1}<...<j_{k}\leq n}\frac{1}{\overline{(re^{i\theta}-X_{j_{1}})...(re^{i\theta}-X_{j_{k}})}}\right]d\theta\] \[= \mathbb{E}\left[\sum_{2\leq i_{1}<...<i_{k}\leq n}\frac{1}{(r-X_{ i_{1}})...(r-X_{i_{k}})}\sum_{2\leq j_{1}<...<j_{k}\leq n}\frac{1}{\overline{(r-X_ {j_{1}})...(r-X_{j_{k}})}}\right]\]
Notice that by the independence of the random variables, and identity (70), the cross terms will vanish. We estimate the remaining terms using (71) in the following way.
\[\mathbb{E}\left[\left|\frac{\tilde{P}_{n}^{(k)}(rX_{1})}{\tilde{P }_{n}(rX_{1})}\right|^{2}\right]= \mathbb{E}\left[\sum_{2\leq i_{1}<...<i_{k}\leq n}\frac{1}{|r-X_{ i_{1}}|^{2}...|r-X_{i_{k}}|^{2}}\right]=(n-1)...(n-k)\mathbb{E}\left[\frac{1}{|r-X_{1}|^{ 2}}\right]^{k}\] \[\leq (n-1)...(n-k)(1-r^{2})^{-k}\leq 2^{k}(n-1)...(n-k)n^{k(1- \varepsilon)}. \tag{72}\]
Now plugging the bound (72) in (69) and taking the complementary events we get,
\[\mathbb{P}\left(\left|\frac{\tilde{P}_{n}^{(k)}(rX_{1})\tilde{r}^{k}}{\tilde{P }_{n}(rX_{1})}\right|\leq n\sqrt{(n-1)...(n-k)}\left(\frac{4}{n^{1-\varepsilon }}\right)^{k/2}\right)\geq 1-\frac{1}{n^{2}}. \tag{73}\]
Making use of (68) and (73) in (67) we arrive at the required probability.
\[\mathbb{P}\left(B(rX_{1},\tilde{r})\subset\Lambda_{n}\right) \geq\mathbb{P}\left(\mathcal{G}_{1}\right)-\mathbb{P}\left( \mathcal{G}_{1}\cap(\cap_{2}^{n}\mathcal{G}_{k})^{c}\right)\] \[\geq\frac{1}{2}-\frac{C_{1}}{n^{\varepsilon}}-\sum_{2}^{n}\frac{ 1}{n^{2}}\] \[\geq\frac{1}{2}-\frac{C}{n^{\varepsilon}}.\]
Then using the bound (64) in Lemma 4.1 and (63) we get the required probability.
\[\mathbb{P}(D_{1}) \geq\mathbb{P}\left(\left\{\sum_{2}^{n}\mathbbm{1}_{\mathcal{T}_{i}}\geq n^{\varepsilon/2}\right\}\bigcap\left\{B(rX_{1},\tilde{r})\subset\Lambda_{n}\right\}\right)\] \[\geq\frac{1}{2}-\frac{C}{n^{\varepsilon}}-\frac{2C}{n^{\varepsilon/2}}\geq\frac{1}{2}-\frac{C}{n^{\varepsilon/2}}. \tag{74}\]
Now setting (74) in (58) and taking the limsup we get the asymptotic upper bound, i.e.,
\[\limsup_{n\to\infty}\!\!\frac{\mathbb{E}[C(\Lambda_{n})]}{n}\leq\limsup_{n \to\infty}\ \frac{1}{n}\left[n-n\left(\frac{1}{2}-\frac{2C}{n^{\varepsilon/2}}\right)+n^{1- \varepsilon/2}\right]\leq\frac{1}{2}.\]
### Acknowledgement
The author expresses gratitude to his thesis advisor Dr. Koushik Ramachandran for suggesting the problem and for feedback on the article. The author deeply appreciates the support, encouragement, and numerous stimulating conversations he received from his advisor throughout this project.
|
2305.09794 | Lattice Dynamics and Thermal Transport in Semiconductors with
Anti-bonding Valence Bands | Achieving high thermoelectric performance requires efficient manipulation of
thermal conductivity and a fundamental understanding of the microscopic
mechanisms of phonon transport in crystalline solids. One of the major
challenges in thermal transport is achieving ultralow lattice thermal
conductivity. In this study, we use the anti-bonding character of the
highest-occupied valence band as an efficient descriptor for discovering new
materials with an ultralow thermal conductivity. We first examined the
relationship between anti-bonding valence bands and low lattice thermal
conductivity in model systems PbTe and CsPbBr3. Then, we conducted a
high-throughput search in the Materials Project database and identified over
600 experimentally stable binary semi-conductors with a strong anti-bonding
character in their valence bands. From our candidate list, we conducted a
comprehensive analysis of the chemical bonds and the thermal transport in the
XS family, where X=K, Rb, and Cs are alkaline metals. These materials all
exhibit ultralow thermal conductivities less than 1 W/(m K) at room temperature
despite simple structures. We attributed the ultralow thermal conductivity to
the weakened bonds and increased phonon anharmonicity due to their anti-bonding
valence bands. Our results provide chemical intuitions to understand lattice
dynamics in crystals and open up a convenient venue towards searching for
materials with an intrinsically low lattice thermal conductivity. | Jiaoyue Yuan, Yubi Chen, Bolin Liao | 2023-05-16T20:31:04Z | http://arxiv.org/abs/2305.09794v1 | # Lattice Dynamics and Thermal Transport in Semiconductors with Anti-bonding Valence Bands
###### Abstract
Achieving high thermoelectric performance requires efficient manipulation of thermal conductivity and a fundamental understanding of the microscopic mechanisms of phonon transport in crystalline solids. One of the major challenges in thermal transport is achieving ultralow lattice thermal conductivity. In this study, we use the anti-bonding character of the highest-occupied valence band as an efficient descriptor for discovering new materials with an ultralow thermal conductivity. We first examined the relationship between anti-bonding valence bands and low lattice thermal conductivity in model systems PbTe and CsPbBr\({}_{3}\). Then, we conducted a high-throughput search in the Materials Project database and identified over 600 experimentally stable binary semiconductors with a strong anti-bonding character in their valence bands. From our candidate list, we conducted a comprehensive analysis of the chemical bonds and the thermal transport in the XS family, where X=K, Rb, and Cs are alkali metals. These materials all exhibit ultralow thermal conductivities less than 1 W/(m K) at room temperature despite simple structures. We attributed the ultralow thermal conductivity to the weakened bonds and increased phonon anharmonicity due to their anti-bonding valence bands. Our results provide chemical intuitions to understand lattice dynamics in crystals and open up a convenient avenue towards searching for materials with an intrinsically low lattice thermal conductivity.
anti-bonding valence bands, thermal conductivity, thermoelectrics
## I Introduction
Crystalline solids with a low thermal conductivity have been extensively studied for applications such as thermoelectric materials [1; 2; 3; 4], thermal insulation and thermal barrier coatings [5]. For example, to achieve highly efficient thermoelectric power generation or cooling, one has to overcome the challenge to minimize the lattice thermal conductivity \(\kappa\) while maintaining the electrical conductivity \(\sigma\) to maximize the thermoelectric figure of merit \(ZT=\frac{\sigma S^{2}T}{\kappa}\), wherein \(S\) is the Seebeck coefficient and \(T\) is the temperature. A high figure of merit \(ZT\) in crystalline solids requires that the electron flow remains unimpeded while phonons get heavily scattered, a scenario known as the "phonon-glass, electron-crystal" (PGEC) [6; 7; 8; 9]. Some of the most successful techniques developed for the manipulation of phonon transport include introducing defects [10; 11], nano/microstructural modifications using grain-boundary engineering [12; 13] and interfaces [14]. However, these extrinsic approaches all introduce defects and/or interfaces that can also interfere with electron transport [4]. In this light, crystalline materials with an intrinsic ultralow thermal conductivity are particularly desirable. In previous studies, intrinsically low thermal conductivities have been achieved by exploring phonon scattering mechanisms originating from chemical bonding and structural aspects [15; 16; 17; 18; 19]. These include layered structures [20; 21], liquid-like sublattices [22; 23; 24], local structural distortions [25], ferroelectric-instability-induced phonon softening [26; 27], rattling phonon modes [28; 29; 30; 31], and anharmonic lattice vibrations originating from electron lone pairs [32; 33; 34].
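As a rough numerical illustration of the figure of merit defined above, the short sketch below evaluates \(ZT\) for generic placeholder transport values; the numbers are not tied to any particular material.

```python
# Order-of-magnitude check of ZT = sigma * S^2 * T / kappa.
# All values below are generic placeholders, not data for a specific material.
sigma = 1.0e5      # electrical conductivity, S/m
S = 200e-6         # Seebeck coefficient, V/K
T = 300.0          # temperature, K
kappa = 1.2        # total thermal conductivity, W/(m K)

power_factor = sigma * S**2          # sigma*S^2, in W/(m K^2)
ZT = power_factor * T / kappa
print(f"power factor = {power_factor*1e3:.2f} mW/(m K^2), ZT = {ZT:.2f}")
```

With these illustrative inputs the snippet returns \(ZT\approx 1\), which shows directly how halving the thermal conductivity at fixed power factor would double the figure of merit.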
At a microscopic level, the intrinsic lattice thermal conductivity of crystalline materials is largely controlled by the chemical bonding strength, which significantly impacts the velocity of the heat-carrying acoustic phonons. This explains the much lower thermal conductivity in materials with dominant ionic or van der Waals bonds than those with dominant covalent bonds [35]. However, ionic and van der Waals bonds tend to have limited overlap between neighboring electronic orbitals, leading to weak electronic dispersion and low electron mobility. To balance the impact of the chemical bonds on the transport properties of both phonons and electrons, special types of covalent bonds with a mixed ionic character have been pursued. One prominent example is the abnormally low intrinsic lattice thermal conductivity of the IV-VI semiconductor family that hosts many of the best known thermoelectric materials, including PbTe, SnSe and GeTe [36; 37; 38; 39]. Despite their simple and
high-symmetry crystal structures, these materials exhibit much lower thermal conductivity than other materials with similar atomic masses [40]. One common feature underlying their low lattice thermal conductivity is the presence of soft optical phonon modes that can strongly scatter heat-carrying low-frequency acoustic phonons, which originate from the long-ranged interatomic interactions due to resonant chemical bonds [40; 41]. It is also understood that the lone electron pair associated with the group IV element in these materials plays an important role in their unusual bonding structure [32; 42; 43]. Due to their mixed ionic-covalent bonding character, the electron mobility is not compromised in these materials, making them promising platforms to realize PGEC. Another emerging example with an ultralow lattice thermal conductivity is the lead halide perovskites (LHPs), which have attracted significant recent attention for photovoltaic applications [44]. Both fully inorganic and inorganic-organic hybrid versions of these materials have a thermal conductivity below 1 W/(m K) at room temperature, which has been attributed to soft chemical bonds[45]. It has been increasingly recognized that the mixed ionic-covalent character of the chemical bonds in LHPs, likely also linked to the lone electron pair in Pb\({}^{2+}\) ions [46], is not only responsible for their strong lattice anharmonicity, low lattice thermal conductivity and facile ion migration [47], but also their extraordinary optoelectronic and electron transport properties. Given the detailed understanding of the unusual lattice properties of these materials, the remaining challenge is to efficiently identify other materials with similar characters that can be promising candidates for applications requiring an intrinsically low thermal conductivity.
Indeed, the quests for new materials with extremely low thermal conductivity have been abundant. Previous studies have utilized high-throughput screening techniques to identify potential candidates based on physical characteristics such as large atomic mass [48], structural information of rattling atoms [49], and the combination of machine-learning algorithms and automatic _ab initio_ calculations [50; 51]. However, a comprehensive search focusing on the chemical bonding character and its impact on lattice dynamics has not been carried out due to the lack of an effective descriptor. In this work, we focus on a chemical bonding signature: highest-occupied valence bands with a strong anti-bonding character in a semiconductor. Recent studies have suggested that anti-bonding chemical bonds are closely related to ultralow thermal conductivities in a range of materials [52; 53; 54]. The advantage of using the anti-bonding character of the highest-occupied valence band as a descriptor is that it can be efficiently analyzed using the crystal orbital Hamilton populations (COHP)
method [55; 56], which only requires ground-state density functional theory (DFT) calculations. This method can be applied to any inorganic crystal structure and requires only basic structural and compositional information as input and minimal time and computing resources. The low computational cost makes it a suitable method to screen materials in databases to identify new semiconductors with an ultralow lattice thermal conductivity. In this paper, we first show that the strong anti-bonding valence bands (ABVBs) underlie the unusual lattice dynamics in known example materials with an ultralow thermal conductivity, such as PbTe and CsPbBr\({}_{3}\). In particular, the highest occupied valence bands with a strong anti-bonding character not only lead to weakened chemical bonds, but also give rise to soft optical phonons. Then, we conduct a high-throughput screen within the Materials Project database [57] to search for binary semiconductors with a strong ABVB. As a result, we identified more than 600 binary semiconductors with an anti-bonding valence band from a pool of over 6,000 candidates, allowing for the possibility of realizing ultralow intrinsic thermal conductivities. Out of the identified candidates, we highlight the XS (X=Na, K, Rb, and Cs) semiconductor family, where the sulfur ions are in an unusual valence state (S\({}^{-}\)). We found that they all exhibit ultralow thermal conductivities despite having simple crystal structures, which can be attributed to their strong ABVBs. Our findings suggest that a highest-occupied valence band with a strong anti-bonding character is indicative of impeded thermal transport, and our high-throughput screening strategy also offers a novel approach for identifying materials with an intrinsically low thermal conductivity.
## II Methods
Our density functional theory (DFT) electronic structure calculations were performed using the Vienna Ab initio Simulation Package (VASP) [58; 59] with the projector augmented wave (PAW) method [60] and the Perdew-Burke-Ernzerhof form of the generalized gradient approximation (PBE-GGA) of the exchange-correlation functional [61]. The valence wavefunctions were expanded on a plane-wave basis with a 400 eV energy cutoff for all the materials studied in this work. The spin polarization and spin-orbit interaction were explicitly taken into account. The energy and force convergence criteria were set to be 1 \(\times\) 10\({}^{-7}\) eV and 0.01 eV/Å, respectively. The \(\Gamma\)-centered \(n\times n\times n\) (n = 1, 2, 3, or 4) \(\mathbf{k}\)-point grids were used to sample the first Brillouin zone depending on the unit cell size during high-throughput material screening.
For selected materials studied in this paper, further **k**-point convergence was tested to make sure the lattice parameters and forces were converged. Particularly, the bonding nature associated with various electron energy bands was analyzed using the COHP method [55; 56]. The COHP method partitions the energy of the band structure into interactions between pairs of atomic orbitals on adjacent atoms. It is a bond-weighted measure of the electronic density of states (DOS) and provides a quantitative measure of the bonding and anti-bonding contributions to the band energy. Importantly for this work, the sign of the COHP differs for bonds with bonding or anti-bonding nature: a positive (negative) sign corresponds to anti-bonding (bonding) interactions. By convention, COHP diagrams plot the negative value (-pCOHP) such that bonding (anti-bonding) states on the right (left) of the axis can be easily visualized. We further quantify the degree of anti-bonding using the integrated area under the COHP curves with respect to the electron band energy.
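As a minimal sketch of how such an integrated-COHP descriptor can be evaluated, the function below integrates a \(-\)pCOHP curve over a narrow window below the valence band maximum. The array names, the window width, and the toy curve are illustrative placeholders for whatever a COHP post-processing code actually outputs, not the interface of any specific tool.

```python
import numpy as np

def antibonding_strength(energies, minus_pcohp, e_vbm, window=0.1):
    """Integrate -pCOHP over [e_vbm - window, e_vbm].

    A negative integral (i.e., -pCOHP < 0 near the band edge) signals that the
    highest-occupied valence states are anti-bonding; the returned magnitude is
    used as a simple anti-bonding strength.
    energies     : 1D array of band energies (eV), ascending
    minus_pcohp  : 1D array of -pCOHP values on the same energy grid
    e_vbm        : valence band maximum (eV)
    window       : integration window below the VBM (eV)
    """
    mask = (energies >= e_vbm - window) & (energies <= e_vbm)
    area = np.trapz(minus_pcohp[mask], energies[mask])
    return -area  # positive value = anti-bonding character near the VBM

# Toy curve: bonding (-pCOHP > 0) deep in the valence band, anti-bonding near the top.
e = np.linspace(-6.0, 0.0, 601)          # energies relative to the VBM at 0 eV
curve = np.where(e < -1.0, 0.5, -0.8)
print(f"anti-bonding strength near VBM: {antibonding_strength(e, curve, 0.0):.3f}")
```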
The phonon dispersion relations were obtained by conducting finite-displacement force calculations [62], from which the harmonic interatomic force constants (IFCs) were extracted. Then the dynamic matrices were constructed and diagonalized using the Phonopy package to generate phonon eigenfrequencies [63]. Convergence of the phonon dispersions with respect to the supercell size and the \(\mathbf{k}\)-grid sampling was tested in materials selected for a detailed study. The phonon dispersion calculation of polar materials also included the non-analytic polar corrections. We calculated the lattice thermal conductivity \(\kappa_{\text{ph}}\) by iteratively solving the phonon Boltzmann transport equation using the ShengBTE package [64]. Using the finite displacement method, we computed the anharmonic 3rd-order IFCs. The 3rd-order finite displacement calculation employed the same supercell size and \(\mathbf{k}\)-grid sampling as the 2nd-order IFC calculations. To ensure convergence, we tested several neighbor interaction cutoffs. The q-mesh density, which is the phonon momentum space sampling grid, was set to \(10\times 10\times 10\) for most materials. We examined the convergence of grid density for all cases, and the reported values of thermal conductivity are all converged.
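For reference, the central quantity produced by this workflow is the mode-summed lattice thermal conductivity, which in the single-mode relaxation-time picture reads

\[\kappa_{\alpha\beta}=\frac{1}{NV}\sum_{\lambda}C_{\lambda}\,v_{\lambda\alpha}\,v_{\lambda\beta}\,\tau_{\lambda},\]

where the sum runs over phonon modes \(\lambda=(\mathbf{q},s)\), \(C_{\lambda}\) is the modal heat capacity, \(v_{\lambda\alpha}\) the group velocity component, \(\tau_{\lambda}\) the phonon lifetime, \(N\) the number of sampled \(\mathbf{q}\) points, and \(V\) the unit cell volume. The iterative solution of the linearized Boltzmann transport equation goes beyond this relaxation-time form by self-consistently correcting the mode mean free paths, but reduces to it when those corrections are neglected.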
## III Results and Discussion
### Anti-bonding valence bands in PbTe and CsPbBr3
In the chemical bonding theory [65; 66], molecular orbitals (MOs) are formed by the overlap of atomic orbitals. For example, Fig. 1(a) illustrates the \(\pi\) bond formation of atomic
Figure 1: **Schematics of anti-bonding states and anti-bonding valence bands.** (a) Molecular orbital diagram of a \(\pi\) bond formed by p orbitals. (b) Energy of typical bonding and anti-bonding orbitals as a function of atomic distances. (c) COHP diagram of silicon, showing the bonding nature of the valence band and the anti-bonding nature of the conduction band. (d) Electronic density of states of PbTe, projected onto atomic orbitals. (e) Molecular orbital diagram of PbTe, showing three stages of orbital interactions: (I) isolated Pb and Te atomic levels; (II) broadened atomic levels due to s-s and p-p mixing; (III) s-p hybridization introduces s orbital features to valence band maximum (VBM) and conduction band minimum (CBM) and an occupied anti-bonding valence band below the Fermi level \(E_{F}\). (f) COHP diagram of PbTe, showing the strong anti-bonding nature of the valence band.
\(p\) orbitals between two adjacent atoms. The overlap of electron wave functions splits the original atomic \(p\) orbital into two MOs. The lower energy orbital is the bonding MO, shown as the "bonding \(\pi\)" in Fig. 1(a). The higher energy orbital is the anti-bonding MO, shown as the "anti-bonding \(\pi^{*}\)". Due to the Pauli exclusion principle, the two electrons will occupy the two spin states of the bonding MO, leaving the anti-bonding MO unoccupied. Consider the energy level of the two MOs as a function of the atomic distance, as shown in Fig. 1(b). The occupation of the bonding MO lowers the total energy near the equilibrium interatomic distance and results in stabilization of the chemical bond, while the occupation of the anti-bonding MO raises the energy and, thus, weakens the bond and potentially increases the bond anharmonicity. Many of the covalently bonded semiconductors possess a valence band with occupied bonding states and a conduction band with empty anti-bonding states. For example, the valence band and the conduction band in silicon are formed by the bonding and anti-bonding states of the \(sp^{3}\) hybridized orbitals, respectively. This can be clearly seen from the COHP analysis of silicon, which is shown in Fig. 1(c).
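For readers less familiar with the molecular-orbital language used above, a minimal two-orbital (Hückel-type) estimate that neglects orbital overlap gives the bonding and anti-bonding energies for on-site levels \(\varepsilon_{A},\varepsilon_{B}\) coupled by a hopping integral \(t\) as

\[E_{\pm}=\frac{\varepsilon_{A}+\varepsilon_{B}}{2}\pm\sqrt{\left(\frac{\varepsilon_{A}-\varepsilon_{B}}{2}\right)^{2}+|t|^{2}},\]

with \(E_{-}\) the bonding and \(E_{+}\) the anti-bonding level; for two identical atoms (\(\varepsilon_{A}=\varepsilon_{B}=\varepsilon\)) this reduces to \(E_{\pm}=\varepsilon\pm|t|\), which is the splitting sketched in Fig. 1(a).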
In contrast, the bonding nature of the electronic bands in PbTe is qualitatively different. PbTe has been extensively studied due to its exceptional thermoelectric properties including a high Seebeck coefficient and a low thermal conductivity [10; 67; 68]. Despite its high-symmetry structure, the bulk lattice thermal conductivity in single-crystalline PbTe is unexpectedly low, around 2 W/(m K) at room temperature [68]. Fig. 1(d) shows the PbTe electronic density of states (DOS) that has been projected onto Pb-6p, Pb-6s, Te-5p, and Te-5s orbitals. Based on the electronic DOS, Fig. 1(e) shows a simplified linear combination of atomic orbitals (LCAO) picture of PbTe. Pb\({}^{2+}\) ions have empty 6p states, but the 6s states are occupied by a "lone pair" of electrons [69]. This unique "lone pair" configuration enables efficient s-p mixing between the Pb-6s states and the Te-5p states, forming bonding and anti-bonding states. Since both Pb-6s and Te-6p states are fully occupied, both the resulted bonding and anti-bonding states are occupied, leading to the highest-occupied valence band possessing a strong anti-bonding character, as shown in the COHP analysis shown in Fig. 1(f). This intimate relationship between the lone-pair electrons of the divalent group-IV cations and the ABVB was also discussed in previous studies [69]. However, a direct evaluation of the impact of the ABVB on the thermal transport is still lacking.
To quantify the ABVB effect on thermal transport properties in PbTe, we hypothesized that, by removing electrons from the anti-bonding states at the VBM and then relaxing the
lattice, the chemical bond strength will be increased and the bond anharmonicity will be reduced, leading to a stabilized lattice and an increased thermal conductivity. To test this hypothesis, we used a \(4\times 4\times 4\) supercell in the calculation, where electrons were removed in PbTe and the lattice was fully relaxed afterwards. We performed the calculation by removing 2 and 8 electrons (labeled "-2e" and "-8e" cases, respectively, in the following discussion) out of a total number of 640 valence electrons, which can be considered a small perturbation to the pristine PbTe electronic structure and only represent the anti-bonding states very close to the VBM. A 4-fold degeneracy exists for 8 VBM electrons located at L (0.5, 0.5, 0.5) points, which are shifted to the \(\Gamma\) point in the supercell calculation. Therefore, a single \(\Gamma\) point sampling was sufficient in the \(4\times 4\times 4\) supercell.
The calculated IFCs and the bond lengths are shown in Table 1, where we compare three cases: unperturbed PbTe, the "-2e" case and the "-8e" case. Here, \(K_{12}\) is the trace of the IFC tensor of the nearest Pb-Te bond. The negative sign of the \(K_{12}\) values indicates the overall stable bonding character of the Pb-Te bond and their magnitude reflects the bonding strength. From Table 1, it is clearly seen that removing electrons from the ABVB states decreases the length of the Pb-Te bond and increases its bonding strength, thus stabilizing the PbTe lattice. This effect is further illustrated in Fig. 2(a) as the total ground-state energy of the unperturbed PbTe and the "-2e" case is plotted as a function of the lattice constant. The minimum location, corresponding to the equilibrium lattice parameter, shifts towards a smaller value in the "-2e" case, with an increased quadratic coefficient from 0.0304 to 0.0319, indicating a strengthened bond. The difference in the ground-state energy between the two cases, \(\Delta E\), reflects the contribution from the anti-bonding component, showing a destabilizing trend.
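As a minimal illustration of how the quadratic coefficients quoted above (0.0304 versus 0.0319) can be extracted, the sketch below fits a parabola to an energy-versus-lattice-constant curve. The grid and energies are synthetic placeholders standing in for the DFT total energies of Fig. 2(a), not the actual computed values.

```python
import numpy as np

# Synthetic E(a) data: a toy parabola whose curvature is only chosen to be of the
# same order as the coefficients quoted in the text.
a = np.linspace(6.35, 6.65, 7)        # lattice constant grid (Angstrom), placeholder
E = 0.03 * (a - 6.50) ** 2            # toy ground-state energy (eV) with minimum at 6.50

c2, c1, c0 = np.polyfit(a, E, 2)      # fit E(a) = c2*a^2 + c1*a + c0
a_eq = -c1 / (2.0 * c2)               # equilibrium lattice constant of the fit
print(f"quadratic coefficient = {c2:.4f}, equilibrium lattice constant = {a_eq:.3f} Angstrom")
```

A stiffer bond shows up as a larger quadratic coefficient and, in the "-2e" case, a smaller equilibrium lattice constant.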
The impact of removing electrons from ABVB states on phonon properties is examined in Fig. 2(b), which illustrates the phonon dispersions of the unperturbed PbTe and the "-8e" case. Firstly, the acoustic phonon modes have an increased velocity in the "-8e" case
\begin{table}
\begin{tabular}{c|c c c} IFC & PbTe & -2e & -8e \\ \hline \hline K\({}_{12}\) & -0.786 & -0.869 & -1.18 \\ \hline Bond Length & 3.28Å & 3.26Å & 3.19Å \\ \end{tabular}
\end{table}
Table 1: Traces of the IFC Tensor and Bond Length of PbTe
compared to that in the unperturbed case, which is a consequence of the increased bonding strength consistent with the magnitude of \(K_{12}\) in Table 1. Secondly, removing electrons from the ABVB states significantly raises the frequency of the soft optical phonons near the \(\Gamma\) point, suggesting that the occupied ABVB in PbTe not only weakens the chemical bond, but also leads to long-ranged force interactions resulting in soft optical phonons. Both effects are expected to strongly affect the thermal conductivity. Fig. 2(c) shows the calculated thermal conductivity in unperturbed PbTe as compared to "-2e" and "-8e" cases as a function of the number of neighbor shells included in calculating the third-order IFCs. The corresponding scattering properties of PbTe are included in the Supplementary Material [70]. The thermal conductivity in the "-2e" case is approximately 1.5 times that of the unperturbed PbTe, while in the "-8e" case, the thermal conductivity is boosted by at least a factor of 3. These findings indicate that the presence of occupied ABVBs is the origin for the abnormally low lattice conductivity.
Figure 2: **Impact of the anti-bonding valence band on thermal transport in PbTe.** (a) Total ground-state energy of unperturbed PbTe and the “-2e” case as a function of the supercell lattice constant. The difference of the two energies, \(\Delta E\) is also shown, which reflects the contribution of the removed anti-bonding states near the VBM. The energy axis is shifted such that the energy of the “-2e” case at the equilibrium volume is zero. (b) Calculated phonon dispersion relations of the unperturbed PbTe and the “-8e” case, showing the impact of anti-bonding states on the acoustic phonon velocity and the soft optical phonon frequency.(c) Calculated lattice thermal conductivity of the unperturbed PbTe and the “-2e” and “-8e” cases as a function of the interacting neighbor shells included in the calculation, showing the significant impact of the occupied anti-bonding valence states on the thermal conductivity of PbTe.
thermal conductivity in PbTe from a chemical-bonding-theory point of view. To solidify these findings, we further applied the same strategy (removing valence electrons) to Si with bonding valence bands, and the results can be found in the Supplementary Material [70]. In contrast to PbTe with ABVB, removing valence electrons in the bonding states in Si leads to a weakened bond strength, a decrease in the speed of sound and a reduction in the thermal conductivity. This result further establishes the impact of ABVB on the intrinsic lattice thermal conductivity.
Similarly, we also examined the relationship between the ABVB and the thermal transport properties of the inorganic halide perovskite CsPbBr\({}_{3}\), which also contains the Pb\({}^{2+}\) ion. Halide perovskites have gained attention as a new class of photovoltaic and thermoelectric materials [71, 72, 73, 44]. While the ultralow thermal conductivity of organic-inorganic hybrid halide perovskites has often been attributed to the cation dynamic disorder, it is not well understood why thermal transport in crystalline all-inorganic halide perovskites such as CsPbBr\({}_{3}\) is also significantly suppressed [74, 25]. Similar to PbTe, the Pb\({}^{2+}\) ion in CsPbBr\({}_{3}\) hosts Pb-6s lone pair electrons [75, 76, 77] that promote the s-p mixing between Pb-6s and Br-4p orbitals,
Figure 3: **Anti-bonding valence bands in CsPbBr\({}_{3}\) and their impact on phonons.** (a) The COHP diagram for CsPbBr\({}_{3}\), where a strong anti-bonding feature is shown for the VBM. (b) Isosurfaces of the electronic wavefunctions at the VBM in CsPbBr\({}_{3}\). The s orbitals on Pb atoms and p orbitals on Br atoms show anti-bonding interactions as reflected by the opposite signs of the wavefunctions associated with facing Pb and Br atomic pairs. (c) The calculated phonon dispersion of the cubic phase of CsPbBr\({}_{3}\) at 400 K. A low acoustic velocity associated with the anti-bonding valence bands is observed, which drives the low intrinsic thermal conductivity.
leading to an ABVB in CsPbBr\({}_{3}\) that can be quantified by the COHP analysis shown in Fig. 3(a). A visualization of the Pb-Br anti-bonding interaction is given in Fig. 3(b), where isosurfaces of the electronic wavefunctions at the VBM of CsPbBr\({}_{3}\) are shown. Here, blue and yellow isosurfaces indicate opposite signs of the wavefunction, and the opposite signs of the isosurfaces associated with facing Pb and Br pairs confirm the anti-bonding nature of the valence band in CsPbBr\({}_{3}\). Figure 3(c) shows the phonon dispersion relation of the cubic phase of CsPbBr\({}_{3}\) at 400 K simulated by the temperature-dependent effective potential (TDEP) method [78], where the low acoustic phonon velocity originating from the bonds weakened by the ABVBs is clearly seen and drives the low lattice thermal conductivity.
### High-throughput Screening of Semiconductors with ABVBs
Results from PbTe and CsPbBr\({}_{3}\) suggest that a strong anti-bonding character in the highest occupied valence band, which can be efficiently evaluated by the COHP analysis, is a convenient indicator for weakened covalent bonds, large phonon anharmonicity, and a low lattice thermal conductivity. This motivates us to perform a high-throughput screening for semiconductors with strong ABVBs in the Materials Project database [57] by the COHP analysis. As a first step, we focused on binary compounds and found \(\sim\)1000 binary candidates in the Materials Project database that are stable and have a finite band gap between 0 and 3 eV. Materials containing heavy elements with unfilled \(f\) electrons were excluded from the candidate list due to the known inaccuracy of DFT in dealing with these materials. For each candidate, we calculated the COHP for all pair-wise interactions within 1.5 times the nearest-neighbor distance. Then the strength of the anti-bonding character was quantified by integrating the area under the COHP curves within 0.1 eV of the valence band edge, thus only focusing on the highest occupied valence band states. All candidates were then sorted by the strength of this anti-bonding character. A screened list consisting of 625 experimentally stable binary semiconductors is provided in Table S2 of the Supplementary Material [70]. Among these materials of interest, we present a comprehensive study of the thermal transport of the XS family, where X = Na, K, Rb, Cs are alkali metals. Their structural properties and thermal conductivity at room temperature are summarized in Table 2, and the convergence tests of their thermal conductivities are provided in the Supplementary Material [70].
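Once the integrated COHP near the VBM is available for each compound, the screening workflow described above reduces to a filter-and-rank operation. The Python sketch below shows that logic with toy entries; the `icohp_vbm` values and the sign convention are placeholders (only the band gaps of KS and NaS are taken from Table 2), and the COHP integration itself would be done by a separate bonding-analysis post-processing step.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    formula: str
    band_gap_ev: float
    is_stable: bool
    has_unfilled_f: bool
    icohp_vbm: float   # COHP integrated over 0.1 eV below the valence-band edge,
                       # summed over pairs within 1.5x the nearest-neighbor distance

def screen(candidates):
    """Keep stable binary semiconductors with 0 < Eg < 3 eV and no unfilled-f elements,
    then rank them by the anti-bonding character of the highest occupied valence states."""
    kept = [c for c in candidates
            if c.is_stable and not c.has_unfilled_f and 0.0 < c.band_gap_ev < 3.0]
    # Here a larger icohp_vbm is taken to mean stronger anti-bonding character
    # (the sign convention depends on whether COHP or -COHP is integrated).
    return sorted(kept, key=lambda c: c.icohp_vbm, reverse=True)

# Toy entries (values are placeholders, not the screened data of Table S2):
pool = [
    Candidate("KS",  1.47, True, False,  2.0),
    Candidate("NaS", 1.23, True, False,  1.1),
    Candidate("Si",  1.1,  True, False, -3.0),   # bonding valence band, ranked last
]
for c in screen(pool):
    print(c.formula, c.icohp_vbm)
```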
As shown in Table 2, with an increasing mass of the alkali metal, the room-temperature thermal conductivity decreases from about 5 W/(m K) in NaS to 0.1 W/(m K) in CsS. KS, RbS, and CsS all show an ultralow room-temperature thermal conductivity of \(\lesssim 0.8\) W/(m K) despite their relatively simple crystal structures. The results open up possibilities for their potential applications in thermoelectrics and also motivate further experimental studies. Notably, KS reaches an ultralow thermal conductivity of 0.8 W/(m K) at room temperature without heavy elements or complex structures. A detailed analysis of the bonding mechanism and the thermal transport was thus carried out for KS as an example to demonstrate the decisive impact of the ABVB on its ultralow thermal conductivity.
### Detailed Analysis of Bonding and Thermal Transport in KS
Despite its unusual chemical formula, KS (or K\({}_{2}\)S\({}_{2}\)) has been theoretically found to be as stable as K\({}_{2}\)S at room temperature under ambient pressure [79] and has been experimentally synthesized [80; 81]. Building upon these results, we conducted a more in-depth analysis of KS, focusing specifically on its chemical bonding properties. The COHP diagram shown in Fig. 4(a) attributes the strong anti-bonding character of the valence band to two kinds of S-S bonds. These two S-S bonds can be visualized in the isosurfaces of the VBM electron wavefunctions shown in Fig. 4(b), where the blue and yellow isosurfaces represent opposite signs of the wavefunction. From the isosurfaces, the two anti-bonding S-S bonds are \(\pi^{*}\) bonds of the S-3p orbitals, where the neighboring wavefunctions possess opposite signs. The longer bond (2.15 Å), marked in a blue box in Fig. 4(b), has a more prominent anti-bonding component closer to the VBM. The shorter bond, labeled in an orange box, shows stronger anti-bonding due to its shorter (2.13 Å) length.
Table 2: Basic properties and calculated room-temperature thermal conductivity of NaS, KS, RbS, and CsS.

| Material | Band gap [eV] | Crystal system | Space group | \(\kappa_{x}\) [W/(m K)] | \(\kappa_{y}\) [W/(m K)] | \(\kappa_{z}\) [W/(m K)] |
|---|---|---|---|---|---|---|
| NaS | 1.23 | hexagonal | P6\({}_{3}\)/mmc | 5.9 | 5.9 | 3.9 |
| KS | 1.47 | hexagonal | P62m | 0.84 | 0.84 | 0.76 |
| RbS | 1.58 | hexagonal | P62m | 0.65 | 0.65 | 0.64 |
| CsS | 1.73 | orthorhombic | Immm | 0.15 | 0.14 | 0.083 |
Figure 4: **Anti-bonding valence bands in KS.** (a) The COHP diagram of KS, where the S-S bonds show strong anti-bonding features near the VBM. The longer and shorter S-S bonds correspond to a bond length of 2.15 Å and 2.13 Å, respectively. (b) The isosurfaces of the VBM electronic wavefunctions in KS. The longer and shorter S-S bonds are marked in blue and orange boxes, respectively. (c) The calculated electronic band structure of KS. The S orbitals contribute dominantly to the valence band. (d) \(l\)-decomposed and site-projected electronic DOS for S orbitals in KS. \(p_{x}\) (equivalent to \(p_{y}\)) orbitals form the valence bands, and the highest occupied valence band consists of the anti-bonding \(\pi^{*}\) state formed by the \(p_{x}\) (\(p_{y}\)) orbitals. (e) Molecular bond analysis for the S-S bond. The 3p orbitals of a single S\({}^{-}\) ion contain 5 electrons. The covalent S-S bond forms \(\sigma\), \(\sigma^{*}\), \(\pi\), and \(\pi^{*}\) orbitals, of which the \(\sigma\), \(\pi\), and \(\pi^{*}\) orbitals are occupied. These occupied orbitals correspond to the electronic DOS below the Fermi level as shown in Fig. 4(d), and the highest occupied valence band is composed of \(\pi^{*}\) anti-bonding states.
The calculated electronic band structure in Fig. 4(c) shows that orbitals from the S atom make dominant contributions to electronic states near the band edges, while K orbitals only contribute to unoccupied higher-energy conduction bands. The effective masses of electrons and holes near the band edges can also be extracted from the calculated band structure: the electron effective mass is \(0.24m_{0}\) along the out-of-plane direction and \(1.48m_{0}\) along the in-plane direction; the hole effective mass is \(1.2m_{0}\) along the out-of-plane direction and \(3.4m_{0}\) along the in-plane direction, where \(m_{0}\) is the free electron mass. The relatively low effective masses indicate that good electronic transport is maintained even though the reduced bonding strength lowers the thermal conductivity. From the orbital-decomposed and site-projected electronic DOS for an S atom shown in Fig. 4(d), we can observe that the equivalent \(p_{x}\) and \(p_{y}\) orbitals form the anti-bonding \(\pi^{*}\) state that corresponds to the highest occupied valence band in KS. The molecular orbital diagram of the S-S bond in KS is illustrated in Fig. 4(e) to reveal the origin of the ABVB in KS. The outermost 4s electron in a K atom is transferred to an S atom to form an \(S^{-}\) ion in its unusual monovalent configuration. The 3p orbitals of a single \(S^{-}\) ion thus contain 5 electrons. The covalent S-S bond forms \(\sigma\), \(\sigma^{*}\), \(\pi\), and \(\pi^{*}\) orbitals, where the \(\sigma\), \(\pi\), and \(\pi^{*}\)
orbitals are occupied, yielding a bond order of 1. Here, the anti-bonding \(\pi^{*}\) state is occupied thanks to the additional electron transferred from the K atom. These occupied orbitals correspond to the electronic DOS below the Fermi level in Fig. 4(d). Therefore, the highest occupied valence band is composed of anti-bonding \(\pi^{*}\) states. This mechanism is in addition to the group-IV lone-pair electrons that can give rise to ABVBs. Recently, He et al. also pointed out that p-d hybridization can lead to ABVBs and ultralow thermal conductivities in Cu- and Ag-containing compounds [19]. These chemical insights into ABVB formation can further guide the search and design of new materials with an ultralow intrinsic thermal conductivity.
To complete our discussion, we report the calculated thermal transport properties of KS in Fig. 5. Figure 5(a) shows the phonon dispersion, suggesting that KS is dynamically stable. More importantly, flat phonon bands can be observed near 1.5 THz, which are expected to scatter acoustic phonons and hinder thermal transport. Figure 5(b) depicts the phonon-phonon scattering rates as a function of the phonon frequency, which are compared to the expected frequency-square scaling. The scattering rates of the low-frequency acoustic phonons follow the frequency-square trend, while the flat phonon bands lead to a peak in the scattering rates near 1.5 THz. The calculated lattice thermal conductivity of KS is 0.8 W/(m K) at room temperature, which is lower than that of most compounds with similar atomic mass and a simple crystal structure. The temperature-dependent thermal conductivity of KS from 100 K to 500 K is displayed in Figure 5(c). CsS, the compound in this family with the heaviest alkali metal, possesses a room-temperature thermal conductivity as low as 0.1 W/(m K) (more details are provided in the Supplementary Material [70]), even lower than many amorphous and organic materials. These materials are promising candidates for applications where an ultralow thermal conductivity is desired.
## IV Conclusion
In conclusion, we examined the connection between ABVBs and low thermal conductivities with first-principles calculations in PbTe and CsPbBr\({}_{3}\), and we found that the ABVBs are responsible for the weakened chemical bonds and the soft optical phonons that lead to abnormally low lattice thermal conductivities in both compounds. Guided by this observation, we conducted a high-throughput materials search based on ABVBs and found over
600 experimentally stable binary semiconductors with strong ABVBs. Among the candidate materials, we analyzed in detail the XS (X = Na, K, Rb, and Cs) family in terms of their strong ABVB states and evaluated their lattice thermal conductivities. With an ultralow thermal conductivity of 0.1 W/(m K), CsS is an exceptional example of a crystalline material with a lower thermal conductivity than most amorphous and organic materials. Several other materials on our list with strong ABVBs are potentially interesting as well, such as NaCl\({}_{3}\), and further studies are needed to fully evaluate the impact of the ABVBs on their electrical and thermoelectric transport properties. Our material search can also be expanded to more complicated materials containing three or more elements. Our results suggest that the intuitions offered by the chemical bonding theory can provide simple but powerful guidelines for understanding the thermal conductivity of materials as well as for discovering new materials with unusual thermal transport properties.
###### Acknowledgements.
We acknowledge Fanghao Zhang for fruitful discussions. This work is based on research supported by the U.S. Office of Naval Research under the award number N00014-22-1-2262. Y.C. also acknowledges the support from the Graduate Traineeship Program of the NSF Quantum Foundry via the Q-AMASE-i program under award number DMR-1906325 at the University of California, Santa Barbara (UCSB). This work used Stampede2 at Texas Advanced Computing Center (TACC) and Expanse at San Diego Supercomputer Center (SDSC) through allocation MAT200011 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants 2138259, 2138286, 2138307, 2137603, and 2138296. Use is also made of computational facilities purchased with funds from the National Science Foundation (award number CNS-1725797) and administered by the Center for Scientific Computing (CSC) at University of California, Santa Barbara (UCSB). The CSC is supported by the California NanoSystems Institute and the Materials Research Science and Engineering Center
(MRSEC; NSF DMR-1720256) at UCSB.
|
2309.00303 | Optical Probing of Ultrafast Laser-Induced Solid-to-Overdense-Plasma
Transitions | Understanding the target dynamics during its interaction with a relativistic
ultrashort laser pulse is a challenging fundamental multi-physics problem
involving at least atomic and solid-state physics, plasma physics, and laser
physics. Already, the properties of the so-called pre-plasma formed as the
laser pulse's rising edge ionizes the target are complicated to access in
experiments and modeling, and many aspects of this laser-induced transition
from solid to overdense plasma over picosecond time scales are still open
questions. At the same time, applications like laser-driven ion acceleration
require precise knowledge and control of the pre-plasma because the efficiency
of the acceleration process itself crucially depends on the target properties
at the arrival of the relativistic intensity peak of the pulse. By capturing
the dynamics of the initial stage of the interaction, we report on a detailed
visualization of the pre-plasma formation and evolution. Nanometer-thin
diamond-like carbon foils are shown to transition from solid to plasma during
the laser rising edge with intensities < 10^16 W/cm^2. Single-shot
near-infrared probe transmission measurements evidence sub-picosecond dynamics
of an expanding plasma with densities above 10^23 cm^-3 (about 100 times the
critical plasma density). The complementarity of a solid-state interaction
model and a kinetic plasma description provides deep insight into the interplay
of ionization, collisions, and expansion. | Yasmina Azamoum, Georg Alexander Becker, Sebastian Keppler, Guillaume Duchateau, Stefan Skupin, Mickael Grech, Fabrice Catoire, Sebastian Hell, Issa Tamer, Marco Hornung, Marco Hellwing, Alexander Kessler, Franck Schorcht, Malte Christoph Kaluza | 2023-09-01T07:26:52Z | http://arxiv.org/abs/2309.00303v1 | # Optical Probing of Ultrafast Laser-Induced Solid-to-Overdense-Plasma Transitions
###### Abstract
Understanding the target dynamics during its interaction with a relativistic ultrashort laser pulse is a challenging fundamental multi-physics problem involving at least atomic and solid-state physics, plasma physics, and laser physics. Already, the properties of the so-called pre-plasma formed as the laser pulse's rising edge ionizes the target are complicated to access in experiments and modeling, and many aspects of this laser-induced transition from solid to overdense plasma over picosecond time scales are still open questions. At the same time, applications like laser-driven ion acceleration require precise knowledge and control of the pre-plasma because the efficiency of the acceleration process itself crucially depends on the target properties at the arrival of the relativistic intensity peak of the pulse. By capturing the dynamics of the initial stage of the interaction, we report on a detailed visualization of the pre-plasma formation and evolution. Nanometer-thin diamond-like carbon foils are shown to transition from solid to plasma during the laser rising edge with intensities \(<10^{16}\) W/cm\({}^{2}\). Single-shot near-infrared probe transmission measurements evidence sub-picosecond dynamics of an expanding plasma with densities above \(10^{23}\) cm\({}^{-3}\) (about 100 times the critical plasma density). The complementarity of a solid-state interaction model and a kinetic plasma description provides deep insight into the interplay of ionization, collisions, and expansion.
## Introduction
Since the turn of the millennium, the interaction of ultraintense laser pulses with thin foils has been shown to produce ion beams with unique properties[1, 2], paving the way for groundbreaking applications like time-resolved radiography of electric and magnetic fields in plasmas[3], fast ignition in inertial confinement fusion[4, 5], material testing and analysis[6], proton and carbon ion radiobiology[7, 8] and cancer therapy[9]. The rapid progress in laser-driven ion acceleration has demonstrated the production of proton energies up to \(\sim\) 100 megaelectronvolts (MeV)[10], yet protons of \(\sim\) 200 MeV are required for radiation oncology[11]. Nevertheless, protons with energies of several hundred MeV were predicted theoretically[12, 13], further motivating ongoing experimental endeavors[10, 14].
Due to the complex nature of the interaction, several ion acceleration mechanisms can be triggered, which intricately depend on the laser pulse parameters, the properties of the target, the plasma, and their spatiotemporal evolution during the interaction. The laser pulse's temporal intensity profile, i.e., the laser contrast, exhibits a rising edge preceding the relativistic peak and may even include a pedestal due to amplified spontaneous emission (ASE) and ultra-short pre-pulses. Thus, in real-world experiments, the target is always ionized _before_ the relativistic intensities are reached, forming a so-called pre-plasma. Efficient ion acceleration may occur depending on the pre-plasma state. For instance, in the case of Radiation Pressure Acceleration (RPA)[15] using nanometer-thin foils, one tries to minimize pre-plasma expansion at all costs because the strong pressure of the relativistic pulse is supposed to accelerate a thin layer of ions and electrons of a _non-expanded_ target. Thus, RPA requires an ultrahigh laser contrast, which poses an extreme challenge for current state-of-the-art high-power lasers. In contrast, pre-plasma conditions can be tailored to optimize Target Normal Sheath Acceleration (TNSA)[16] employing \(\upmu\)m-thick foils. The electrostatic field, which accelerates the protons at the target rear side, is induced by hot electrons heated by the main laser pulse in the pre-plasma produced on the target front side. Although extensively investigated, to date, proton energies achieved in TNSA[17] and RPA[18] have not yet exceeded 100 MeV.
Alternative acceleration schemes proposed in [12, 13] are based on Relativistic-Induced Transparency (RIT)[19, 20]. In contrast to surface acceleration as in TNSA and RPA, the laser peak penetrates a near-critical overdense pre-plasma due to the relativistic increase of the electrons' mass, which leads to a change in the plasma's refractive index (RIT effect). Hence, efficient heating of the particles may occur in the now extended interaction volume. Though appealing, realizing RIT-based schemes in experiments is challenging. Pre-expanded nanofoils are usually employed to reach this regime[10, 14]. In this case, the pre-plasma density evolution must be tailored to the main pulse's rising edge.
The fine-tuning of the target state before the pulse peak's arrival requires accurate modeling to identify the tunable key parameters. In typical modeling approaches based on particle-in-cell (PIC) codes[21, 22], the interaction is described starting from a pre-plasma state, which is then irradiated by
the relativistic peak. The initial stage of the plasma formation from the solid state by the laser rising edge is usually ignored. However, several preponderant fundamental processes occur in this early interaction stage. Among others, these include initial ionization in the solid state, conduction electron heating by the laser, electron energy-coupling to the lattice or ions, phase transitions, and collisions. Simplifying the interplay of these processes is commonly done by making strong assumptions about the complex process of pre-plasma formation. Hydrodynamic codes[23, 24] are usually employed to infer pre-plasma properties considering the laser's rising edge and possible pre-pulses. Although these codes provide a reasonable estimate of the spatial density profiles, the distribution functions of the species, i.e., their temperatures and the ionization state, are based on approximations. For instance, only averaged physical quantities (densities, velocities, etc.) are considered in these codes, assuming the distribution functions of particles at equilibrium. This oversimplification may result in an inaccurate prediction of the pre-plasma properties and the acceleration process, motivating further experimental and theoretical investigations.
In experiments, capturing the ultrafast evolution of the pre-plasma during the steep laser rising edge, i.e., femtosecond (fs) dynamics on the nanometer (nm) scale, is challenging. Moreover, the shot-to-shot fluctuations inherent to high-power lasers make single-shot probing diagnostics desirable.
Optical probing is a convenient tool to investigate such plasmas. Light with wavelength \(\lambda\) can only propagate in a plasma with electron density \(n_{e}<n_{c}\), where \(n_{c}\) is the critical density given by \(n_{c}=\omega^{2}m_{e}\varepsilon_{0}/e^{2}\approx 1.11\times 10^{21}\,\mathrm{cm}^{-3}/[\lambda(\mu\mathrm{m})]^{2}\). Then, a non-collisional, non-magnetized, and non-relativistic plasma exhibits a real-valued refractive index \(\eta=\sqrt{1-n_{e}/n_{c}}\). The nm-scale pre-plasma expansion was recently measured with reflected visible probe light from the target[25]. In contrast, the density and temperature evolutions of plasmas which are overdense for near-infrared (NIR) light, \(n_{e}>10^{21}\,\mathrm{cm}^{-3}\), could only be diagnosed using laser-driven XUV or hard X-ray sources in combination with hydrodynamic codes or X-ray absorption spectroscopy with computationally demanding ab initio calculations[26, 27] (sub-ps time resolution), or using X-ray free-electron lasers[28] (fs and nm resolutions). While these techniques give access to overdense plasma properties and an accurate description at the atomic level, they imply using limited-access facilities or, at least, a rather complex setup for the probe. Single-shot probing has not been reported for any of these methods.
In this paper, we will exploit that longer probe wavelengths, e.g., in the near-infrared (NIR) regime, can still be used to diagnose such plasmas. When \(n_{e}>n_{c}\), the probe light is primarily reflected as \(\eta\) becomes imaginary. However, a fraction of the light still penetrates the plasma over the skin depth \(l_{s}\approx c/\omega_{p}\), where \(\omega_{p}=\sqrt{n_{e}e^{2}/(m_{e}\varepsilon_{0})}\) is the plasma frequency. If the plasma is sufficiently thin (\(<l_{s}\)), the NIR light tunnels through and can be detected to investigate the target's dynamics. Besides, as \(l_{s}\propto 1/\sqrt{n_{e}}\), a large electron density range can be probed in ultrathin targets. Furthermore, when the pulse is temporally chirped, the probe light allows the investigation of time-dependent plasma dynamics in single-shot measurements[29].
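To make these scales concrete, the short Python snippet below evaluates the critical density for an 800 nm probe and the collisionless skin depth \(l_{s}\approx c/\omega_{p}\) for a strongly overdense plasma; the choice of \(100n_{c}\) and of a 10 nm foil for comparison is illustrative.

```python
import numpy as np
from scipy.constants import c, e, epsilon_0, m_e

def critical_density(wavelength_m: float) -> float:
    """Critical electron density n_c = eps0 * m_e * omega^2 / e^2 (in m^-3)."""
    omega = 2.0 * np.pi * c / wavelength_m
    return epsilon_0 * m_e * omega**2 / e**2

def skin_depth(n_e_m3: float) -> float:
    """Collisionless skin depth l_s ~ c / omega_p (in m)."""
    omega_p = np.sqrt(n_e_m3 * e**2 / (m_e * epsilon_0))
    return c / omega_p

lam = 800e-9                          # probe wavelength
n_c = critical_density(lam)           # ~1.7e21 cm^-3
n_e = 100.0 * n_c                     # a strongly overdense plasma
print(f"n_c(800 nm) = {n_c * 1e-6:.2e} cm^-3")
print(f"skin depth at 100 n_c: {skin_depth(n_e) * 1e9:.1f} nm")  # ~13 nm, larger than a 10 nm foil
```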
To model such optical probing of thin, overdense plasma dynamics, we propose a novel and alternative approach to describe, with a limited number of assumptions, the pre-plasma formation from the initial laser-target interaction, namely, the transition from the solid state to the plasma state. The further pre-plasma evolution before the arrival of the main pulse peak can be readily described using well-established PIC codes. So far, such a transition has only been investigated at lower peak intensities (\(<10^{13}\) W/cm\({}^{2}\)), using laser pulses (\(\sim 100\) ps) with reported plasma dynamics on tens of picoseconds (ps) time scales[30]. In contrast, an ultrafast transition was only investigated in experiments with \(<100\) fs-resolution[31] using sub-picosecond laser pulses. However, to our knowledge, the ultrafast solid-to-plasma transition during a steep laser rising edge has not yet been described in detail.
Using a pump-probe approach comprising single-shot NIR probe light transmission measurements and a two-step interaction model, we report on the experimental observation of an ultrafast transition from solid to highly overdense and expanding plasma. The latter is triggered during the laser rising edge when irradiating nm-thin diamond-like carbon (DLC) foils with femtosecond laser pulses with peak intensities of up to \(\sim 10^{16}\) W/cm\({}^{2}\). Similar parameters can readily be employed when using a controlled pre-pulse interacting with a relativistic peak in the context of RIT, which additionally motivates this study.
## Results
The evolution of the laser contrast of the pump pulses, used to irradiate nm-thick DLC foils, is depicted in Fig. 1a. The profile is well described by the two fitting curves in red, which are used in the upcoming data analysis and modeling. Particularly relevant will be the steep rising edge in the time window \(-3.7\) ps \(\leq t_{\mathrm{pump}}\leq-0.2\) ps, described by the contrast ratio CR = exp(\(-\left|t_{\mathrm{pump}}\right|/277\) fs) with an intensity profile \(I=I_{\mathrm{peak}}\times\mathrm{CR}\), where \(I_{\mathrm{peak}}\) is the maximum intensity at the pulse's peak. To ensure that the plasma is formed during this steep rising edge only (around an intensity \(I\sim 10^{12}\) W/cm\({}^{2}\)), \(I_{\mathrm{peak}}\) is reduced to \(\sim 10^{15}\) W/cm\({}^{2}\) so that significant ionization starts after \(t_{\mathrm{pump}}\!\sim-4\) ps. As the steep rising edge covers \(\sim 5\) orders of magnitude in
intensity, varying the peak intensity by one order of magnitude may result in a relative time shift of the plasma formation. However, due to the particular functional form of the contrast ratio CR, besides this shift, the temporal evolution of the plasma remains unchanged. The experimental setup is shown in Fig. 1b. The interaction region was diagnosed in the transverse \(y\)-direction and in time, using a temporally-chirped broadband probe pulse whose different wavelengths arrive at different interaction times. With this approach, the plasma formation and evolution can be recorded within a single shot, achieving a sub-ps time resolution [32] (see details for pulse characterization and probing in Methods).
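A small helper, assuming the fitted rising edge \(I(t)=I_{\mathrm{peak}}\exp(-\left|t_{\mathrm{pump}}\right|/277\,\mathrm{fs})\), makes this timing argument explicit: it returns the instant at which a given intensity threshold is crossed, so that changing \(I_{\mathrm{peak}}\) only translates the curve in time.

```python
import numpy as np

TAU_FS = 277.0  # decay constant of the measured rising edge, CR = exp(-|t|/tau)

def crossing_time_fs(i_peak: float, i_threshold: float) -> float:
    """Time (in fs, negative = before the peak) at which the rising edge
    I(t) = I_peak * exp(-|t|/tau) reaches i_threshold."""
    return -TAU_FS * np.log(i_peak / i_threshold)

i_peak = 1e15  # W/cm^2, reduced peak intensity used in the experiment
for i_thr in (1e12, 1e13):
    t = crossing_time_fs(i_peak, i_thr)
    print(f"I = {i_thr:.0e} W/cm^2 is reached at t = {t / 1000:.2f} ps")
# Raising I_peak by one order of magnitude shifts these times by tau*ln(10) ~ 0.6 ps
# while leaving the shape of the subsequent plasma evolution unchanged.
```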
A typical space- and time-resolved probe transmission map measured with a 1D spatially-resolved imaging spectrometer through a main-pulse irradiated, 10-nm thick DLC foil as the target is shown in Fig. 1c. The map reveals the transition from the target being transparent to an opaque state where the probe is blocked for \(\lambda\lesssim 820\) nm and hence \(T_{r}\sim 0\). The plasma profile in the transverse y-direction exhibits a shape and size similar to the focal spot, cf. inset of Fig. 1b, indicating that the laser's first low-intensity Airy ring beyond the first minimum induces these wings.
In the upcoming analysis, we focus on the plasma dynamics in the central high-intensity region of the focal spot at \(y=0\) \(\mu\)m. For the sake of simplicity, throughout the paper the particle densities are expressed as a function of the critical plasma density \(n_{c}=1.72\times 10^{21}\) cm\({}^{-3}\) for the probe wavelength \(\lambda=800\) nm. We observed in our simulations that the plasma dispersion is negligible in the probe wavelength range \(\lambda\approx 700-900\) nm. DLC foils of various thicknesses (5, 10, 20, 50 nm) were used.
The measured absolute transmissions \(T\) for all DLC thicknesses are depicted as the blue lines in Fig. 2 (see Methods for measurements and processing details). The transmission profiles for each foil are reproducible when varying the peak intensity over one order of magnitude, which contrasts with previous works [30, 31]. This result confirms that the plasma always forms during the steep rising edge described by CR = exp(\(-\left|t_{\mathrm{pump}}\right|/277\) fs). Varying the peak intensity only shifts the plasma formation in time, i.e., by \(\sim 1\) ps, but the temporal evolution remains the same. Therefore, the knowledge of the absolute timing of the probe's arrival is not required in the framework of this investigation. The time axes in Fig. 2 are, hence, expressed in relative times where \(t_{\text{relative}}=0\) ps is set to the value \(T_{r}\sim 50\) %, corresponding to the inflection point of the transmission profile. The profiles show a sub-picosecond transition from a transparent, solid-state target foil represented by the plateau (\(T=T_{0}\)) at early times to an overdense plasma state at later times (\(T\sim 0\)). The transmission dynamics can be characterized by the thickness-dependent time \(\tau\sim 400-700\) fs required for the transmission to drop from 90 % to 10 %.
As the targets are very thin (thickness \(\ll\lambda\)), the low measured probe transmissions (\(T\sim 0.01\)) imply highly overdense plasmas with electron densities \(n_{e}\gg n_{c}\). To evaluate the impact of the plasma electron density on the probe transmission, we calculate the optical tunneling of the probe through a homogeneous plasma slab analytically (see Methods). For a plasma slab of 10 nm thickness with electron density \(n_{e}=50n_{c}\), this estimation leads to \(T\sim 0.26\). Hence, a significant fraction of the probe intensity is expected to tunnel through the target. However, our measurements yield even lower transmission values, indicating that the plasma is highly overdense (\(>50n_{c}\)). Furthermore, assuming a plasma slab of 5 nm thickness with electron density \(n_{e}^{\text{fil}}=371n_{c}\), corresponding to full ionization of the DLC foil (see Methods for target properties), the previous plasma slab model would give \(T\sim 0.024\), which is significantly higher than the measured value of 0.01. For comparison, a density of \(\sim 2n_{e}^{\text{fil}}\) would decrease the transmission to 0.5 %. Since the
Figure 1: Single-shot space and time-resolved probe transmission measurement. (a) The pump laser’s temporal intensity contrast. The shaded regions (1) and (2) are related to the modeling discussed in Fig. 3. The feature at \(t_{\text{pump}}\sim-5\) ps is an artifact from the measurement, cf. Methods. (b) The experimental arrangement. The pump pulse is at normal incidence on the target, while the probe pulse is obliquely incident under an angle \(\alpha=37^{\circ}\). The inset shows the pump focus’s normalized spatial intensity profile. (c) 1D-spatially and temporally resolved relative transmission \(T_{r}\) of the probe for a 10 nm-thick DLC foil measured using a 1D-spatially resolving (SR) imaging spectrometer and a chirped probe pulse. The wavelength on the top axis is converted into a relative time on the bottom axis. The inserted yellow curve is the normalized spatial intensity profile of the pump pulse in the transverse y-direction.
plasma density cannot exceed \(n_{e}^{\text{fil}}\), yet another process must be responsible for the reduction of probe light transmission in the experiment. The simple plasma slab model shows that for \(n_{e}\gg n_{c}\), increasing the plasma thickness \(d\) while keeping the product \(n_{e}d\) constant decreases the transmission. Therefore, our measurements confirm that significant plasma expansion already occurs on sub-ps time scales, and thus needs to be considered in the modeling.
## Discussion
Comprehensive numerical simulations were carried out to explain the experimental findings. We used two complementary interaction models to compute the time-dependent free electron density \(n_{e}(t)\). As the transmission profiles in the experiment were found to be insensitive to the peak intensity and the plasma forms during the steep rising edge of the laser pulse, the laser intensity is described by \(I=I_{0}\exp(-\left|t_{\rm pump}\right|/277\,\rm fs)\) in the simulations with \(I_{0}=10^{15}\) W/cm\({}^{2}\). The transmission of a probe plane wave propagating through the generated plasma with density \(n_{e}(z,t)\) was calculated to compare the simulation results to the experimental measurements. Further details of our computational methodology are provided in Methods.
In a first attempt, the interaction was simulated using the one-dimensional (1D) PIC code SMILEI [22], considering a cold DLC foil interacting with the pump pulse (simulation parameters are given in Methods). Fig. 3a shows the resulting transmission profiles using the electron densities obtained from the SMILEI PIC code alone. For all target thicknesses, the transmission drops quickly with \(\tau\leq 100\) fs, i.e. much faster than experimentally measured. In addition, Fig. 3b shows that the plasma formation (ionization) starts at \(t_{\rm pump}\sim-1\,\rm ps\) where \(I\sim 10^{13}~{}\rm W/cm^{2}\), which is the intensity threshold for photo-ionization of carbon atoms. This ultrafast transition from a cold target to a highly overdense plasma, i.e., \(>100n_{c}\) over a time interval \(<100\) fs, corresponds to the abrupt drop of the predicted probe transmission.
This discrepancy with the experiment can be attributed to the inadequate description of the pristine target. The PIC code treats the target as an ensemble of individual carbon atoms, not accounting for the solid state. Hence, only the ionization of the atoms is considered. However, as a solid carbon foil, DLC may already be ionized at lower energies. Indeed, the target is a semiconductor with a band gap of \(\sim 1.1\) eV. This energy value is much smaller than the first ionization energy of carbon atoms of \(\sim 11.3\) eV. The plasma is, thus, expected to be formed earlier and at lower laser intensities. Therefore, the actual ionization dynamics of the solid foil need to be included in the modeling.
To correctly account for ionization in solids, a simulation was carried out with a solid-state interaction (SSI) model adapted from Ref. [33]. The ionization is described by solving multiple rate equations [34] (details are given in Methods). This model was already used to successfully interpret an experiment of laser-induced plasma formation from a dielectric solid presented in Ref. [30]. The SSI simulations take into account the full pump laser's temporal intensity profile (Fig. 1a, solid red line), yielding the temporal evolution of the plasma density \(n_{e}\left(t\right)\) as shown by the blue line in Fig. 4a.
As expected, significant plasma generation occurs in the steep rising edge \(t_{\rm pump}>-3\,\rm ps\). By assuming homogeneous, non-expanding plasmas with thicknesses corresponding to the target foils, the transmission profiles can be computed and are shown in Fig. 3c. The SSI model stops at \(t_{\rm pump}\sim-2\,\rm ps\) in the rising edge where the maximum density \(n_{e}\sim 70n_{c}\) is achieved. The original SSI model was developed for dielectric materials such as fused silica SiO\({}_{2}\) with a high band gap of \(\sim 9\) eV and a maximum density \(n_{e}\sim 20n_{c}\). Because DLC is a semiconductor with a much lower band gap (\(\sim 1.1\) eV), further ionization of inner shells in the band structure may occur. Thus, a higher electron density may be produced, and the model validity has been reasonably extended to \(n_{e}\sim 70n_{c}\) for our configuration (this value will be further
Figure 2: Measured (blue) and calculated (red) absolute transmission \(T(t)\) for DLC foils of thicknesses from 5 to 50 nm. The measurements are averaged over four shots with peak intensities \(I_{\rm peak}\sim 10^{15}-10^{16}\) W/cm\({}^{2}\). The shaded region is the standard deviation over all shots for each foil. The red curve is computed using the TSI model. The measured and calculated curves are superimposed at their inflection points at \(t_{\rm relative}=0\) ps, corresponding to 50 % relative transmission. The red and the black double arrows delimit the time intervals for SSI and PIC and the extended region of the SSI to reach the melted state, respectively.
discussed below). Therefore, such a highly overdense plasma may be described by the model.
However, going beyond this value would certainly break the model's validity, e.g., because of the ionization of the inner shells of carbon atoms. This process is not adequately accounted for in the SSI model. Therefore, except for the 50 nm foil, the transmission dynamics are not fully described by this model alone, either, since total opaqueness \(T\sim 0\) is not reached at \(t_{\mathrm{pump}}\sim-2\) ps. Nevertheless, we observe a significantly slower transmission decrease in the SSI model compared to the previous PIC results in Fig. 3a, closer to our experimental observations. Furthermore, the density evolution in Fig. 3d indicates that ionization starts earlier and, thus, at lower intensities than predicted by the PIC code alone involving atomic ionization rates.
### Solid-state and kinetic plasma description: Two-step model
To overcome the limitations of both models and provide a better description of the target dynamics, we propose a combination of the SSI model at earlier times and the PIC description at later times. On the one hand, the SSI model describes the laser interaction with solids well, including the initial ionization. On the other hand, the PIC code correctly handles the kinetics of a highly overdense plasma, including plasma expansion and inner shells' ionization process. We will refer to the combination of SSI and PIC as the two-step interaction (TSI) model. In the TSI model, the simulation starts with the SSI model and is continued by a PIC simulation after an overdense plasma is formed. As the PIC description considers only free particles like electrons, ions, and atoms, a reasonable switching point is when the melting state of DLC is reached. At this point, the band structure disappears, and the ions start to be free.
A semiconductor under femtosecond laser excitation may undergo non-thermal melting [35; 36]. In contrast to thermal melting, a significant fraction of the electrons is promoted abruptly from the valence band to the conduction band. Consequently, the lattice bonds are rapidly weakened, and the ions or atoms start to move before reaching the thermal melting point. This scenario may occur as an exponentially increasing laser intensity continuously irradiates the DLC. The non-thermal melting may start at \(n_{e}\sim 10n_{c}\)[35], the threshold for Si, a semiconductor with a band gap of \(\sim 1.12\) eV similar to DLC. Additionally, ions require a few 100 fs to be entirely free [35]. Therefore, to ensure an initial plasma state composed of entirely free ions as assumed in the PIC description, we extrapolate the SSI model to \(n_{e}^{m}\sim 70n_{c}\) so that the PIC simulation starts about 0.5 ps after the beginning of the melting process. To bridge the SSI ionization dynamics to the PIC simulations when non-thermal melting occurs, the electron and lattice temperatures, \(T_{e}\) and \(T_{l}\), respectively, must be determined. Following Refs. [30; 33] and references therein, we use a standard two-temperature model (TTM). The results shown in Fig. 4a indicate that, at melting, \(n_{e}^{m}\) is reached at \(t_{\mathrm{pump}}\sim-1.93\) ps, corresponding to an intensity of \(I\sim 10^{12}\) W/cm\({}^{2}\), with \(T_{e}^{m}\sim 4.6\) eV and \(T_{l}^{m}\sim 0.34\) eV (further details on the TSI parameters are provided in Methods).
The calculated transmission dynamics using the densities \(n_{e}\) (\(z,t\)) from the TSI model are shown in Fig. 2. All transmission profiles reach \(T\sim 0\) in the PIC stage of the model, where plasma expansion is considered. The characteristic times \(\tau\) of the transmission dynamics are a few hundreds of fs, which agrees well with the measurements. The calculated transmission curves are superimposed with their experimental counterparts for each foil thickness by overlapping their respective inflection points at \(t_{\mathrm{relative}}=0\) ps. It is worth noting that extrapolating the SSI model to \(n_{e}\sim 70n_{c}\) shows an excellent agreement with the experiment, further validating the SSI model's application to lower band gap materials such as DLC. The extended SSI domain is indicated by the black double arrows in Fig. 2. Some slight discrepancies between the TSI results and our measurements can be observed, such as a transmission shift in the plateau region (5, 20, and 50 nm cases), a higher transmission predicted by the TSI model for the PIC results with the thinnest foils, and the reverse behavior for the 50 nm case. Possible reasons may be the experimental target thickness uncertainty of about 20 % or a systematic underestimation of the ionization yield and thus \(n_{e}\) in the
Figure 3: Computed time-dependent probe transmission and the corresponding electron densities for initially cold DLC foils with thicknesses ranging from 5 to 50 nm interacting with the pump laser using different models. (a) and (b) show the \(T(t)\) and the maximum density \(n_{e}^{\mathrm{max}}\) along the z-axis as a function of time, respectively, computed using the SMILEI PIC code. (c) and (d) give the \(T(t)\) and \(n_{e}^{\mathrm{max}}\), respectively, computed using the SSI model, where \(n_{e}^{\mathrm{max}}=n_{e}\), since the plasma is assumed to be spatially homogeneous (extracted from Fig. 4a). The SSI model validity stops at \(n_{e}\sim 70\,n_{c}\) at \(I\sim 10^{12}\) W/cm\({}^{2}\); see details in the text. The shaded regions (1) and (2) correspond to time intervals indicated in the laser temporal intensity contrast in Fig. 1(a).
PIC simulations due to the abrupt switch to a pure particle description. Besides, an overestimation of \(n_{e}\) in the SSI description can also be attributed to an inhomogeneous ionization likely occurring for thicker foils because \(l_{s}\) is shorter than the plasma thickness. Such target inhomogeneity is currently not accounted for in the first part of the TSI model. Moreover, during its transition from solid to plasma, the target passes through the highly nonlinear and - for our conditions - ultrafast regime of warm dense matter (WDM), which neither SSI nor PIC models adequately describe. Considering and mitigating these limitations is beyond the scope of this work. Nevertheless, our TSI approach confirms that an ultrafast solid-to-overdense plasma occurs in the experiments. To correctly describe this early stage of the interaction, both the solid (SSI model) and the plasma (PIC approach) properties are essential, and only the combination of both models yields an accurate description of the experimental measurements.
### Initial plasma expansion and interplay of ionization processes
To emphasize the role of the plasma expansion, Fig. 4b,c shows the spatiotemporal evolution of the plasma properties for the 5 nm and the 50 nm-thick foils. For the thinnest foil in Fig. 4b, \(n_{e}(z,t)\) exhibits a strongly expanded profile at the end of the simulation, caused by the rapid heating of the plasma as the intensity approaches the peak. At \(t_{\text{pump}}=0\) ps, the plasma thickness with density \(n_{e}\geq n_{c}\) is estimated to be \(\sim 300\) nm, about 60 times the original target thickness. Besides, the relatively low maximum density of \(n_{e}\sim 7n_{c}\) highlights the importance of the plasma expansion for lowering the probe transmission. Additionally, the density profile evolves symmetrically in time, in contrast to the strong asymmetry observed for the 50 nm-thick foil shown in Fig. 4c, where a high-density region reaches a maximum of \(\sim 270n_{c}\), with extended low-density tails on the target front and back sides and a steeper drop at the back. These results point out that, on the one hand, our measurements evidenced the plasma expansion for the thinnest foil, and, on the other hand, applying the TSI model is crucial for correctly describing the interaction in its initial phase. Such detailed knowledge of the target evolution is paramount to match the laser contrast conditions to the target (or plasma) thickness to achieve efficient laser-driven ion acceleration.
Finally, the interplay of fundamental processes such as ionization and collisions, which eventually determine the target properties at the peak arrival, are accessible using our experimental measurements and their comparison to our modeling strategy. During the laser-induced solid-to-plasma transition investigated in this work, the free electrons are produced in the SSI step by the MPI process in the solid state. Due to the low electron temperature during this step (\(T_{e}<5\) eV) inferred from the TTM, cf. Fig. 4a, collisional ionization (CI) is negligible, estimated to be \(\sim 1\) %. In fact, although \(n_{e}\sim 70n_{c}\) at \(t_{\text{pump}}\sim-2\) ps, the laser intensity \(I\sim 10^{12}\) W/cm\({}^{2}\) is not sufficient to heat the electrons to induce a significant number of ionizing collisions. In the PIC step, however, collisions occur as free electrons are available and heated by the exponentially increasing laser intensity. Consequently, CI starts at a lower intensity than the threshold intensity \(I\sim 10^{13}\) W/cm\({}^{2}\) for photo-ionization of carbon ions and becomes quickly dominant. The abrupt behavior of CI, being negligible at the end of SSI and being dominant at the beginning of the PIC phase, suggests that CI starts to become dominant during the highly non-linear and ultrafast WDM transition, which our TSI model does not describe. Therefore, despite being in an intensity range suitable for photo-ionizing carbon ions, this ionization process does not play a role in our experiments. Simulations without photo-ionization in SMILEI show the same results as in Fig. 2 and thus confirm this interpretation. While the ionization charge state of the pre-plasma is usually based on assumptions when modeling relativistic laser-matter interaction, in this work,
Figure 4: Computed plasma properties during the interaction from the TSI model for DLC foils of 5 and 50 nm thicknesses. Here, the pump laser propagates in positive z-direction. (a) Time-dependent electron density from the SSI model, lattice (\(T_{l}\)), and electron (\(T_{e}\)) temperatures from the TTM model for all thicknesses and \(T_{e}\) for 5 and 50 nm thicknesses from PIC simulations. (b) and (c) show the spatiotemporal dynamics of the electron and the carbon ion densities for 5 and 50 nm thick foils, respectively. The blue dotted lines correspond to the maximum electron density on the z-axis as a function of time.
gaining insight into the ionization processes leads naturally to the detailed knowledge of different ion species and their dynamics in the plasma. For example, the final average charge state \(C^{4+}\) shown in Fig. 4b,c could not have been predicted without considering collisions and CI being the dominant ionization mechanism.
## Conclusions
In summary, our investigation sheds light on the sub-picosecond transition from a solid target to a highly overdense plasma (\(n_{e}>100\,n_{c}\)) produced with nm-thin DLC foils during the laser rising edge with intensities increasing up to \(I\sim 10^{16}\) W/cm\({}^{2}\). Even though this stage of pre-plasma formation is crucial in setting the conditions for the subsequent ion acceleration during a relativistic laser-thin foil interaction, this transition has neither been studied in detail in simulations nor detected in experiments. We demonstrated an all-optical single-shot technique that characterizes the complete target evolution. Because our technique relies on optical tunneling, accessing the overdense plasma regime is possible. Our findings indicate that correctly describing the target transition from a solid to a plasma state is crucial for understanding the plasma evolution in such laser-solid interactions. Our single-shot NIR probe transmission measurements evidence a non-negligible plasma expansion that can significantly reduce the probe intensity tunneling through very thin foils. We develop a general picture of the evolution of the plasma by employing a two-step interaction model comprising a combination of a solid-state interaction model and a PIC code. A detailed description of the pre-plasma properties before the peak arrival is achieved, going well beyond the previous modeling of relativistic laser-matter interactions. Our approach can readily provide a detailed description of the plasma formed by pre-expanding a thin foil using a controlled pre-pulse, usually in the intensity range studied in this work. Besides being of fundamental interest, such insight is crucial to finding the matching laser-target conditions required for the RIT-based acceleration regime. Our experimental findings and the application of our modeling approaches might therefore help to bring laser-accelerated ion technologies to societal applications.
## Methods
### Laser system and pulse characterization
The experiments were carried out using the all-diode-pumped high-power laser system POLARIS[37], operated by the Helmholtz-Institute Jena and the Institute of Optics and Quantum Electronics in Jena. The temporal intensity contrast of the pump laser, as measured with a third-order cross-correlator (Amplitude, Sequoia), is shown in Fig. 1a. The profile is well described by the two fitting curves in red. The indicated artifact at \(t_{\text{pump}}\sim-5\) ps is due to a post-pulse induced by a glass wafer inserted in the beam path for debris shielding; it does not affect the interaction and is therefore ignored throughout our analysis.
Particularly relevant is the steep rising edge in the time window \(-3.7\) ps \(\leq t_{\text{pump}}\leq-0.2\) ps, described by the contrast ratio CR = exp(\(-\big{|}t_{\text{pump}}\big{|}/277\) fs) with an intensity profile \(I=I_{\text{peak}}\times\text{CR}\), where \(I_{\text{peak}}\) is the intensity of the peak characterized by \(\sim 150\) fs Full Width at Half Maximum (FWHM) pulse duration. In the experiments, the peak intensity is reduced to \(I_{\text{peak}}\sim 10^{15}\) W/cm\({}^{2}\) by inserting a half-inch aperture in the beam path of the few J-energy, \(140\) mm-diameter, and linearly polarized pulses centered at \(\lambda_{p}=1030\) nm before being focused with an off-axis parabola (300 mm-focal length) at normal incidence on the DLC foil. Thus, the relative contrast profile is kept similar to that for the relativistic pulses. The resulting \(\approx 3\) mJ energy pump pulses are focused to a spot showing an Airy pattern (see inset in Fig. 1b) with a \(\approx 40\) um FWHM diameter containing \(\sim 60\) % of the pulse energy after the aperture.
### Target
The diamond-like carbon (DLC) targets used in this experiment are free-standing foils of pure carbon, produced by the pulsed laser deposition technique[38] with a mass density of \(\rho_{\text{DLC}}=2.15\) g/cm\({}^{3}\). The DLC foil is an amorphous semiconductor[39], characterized by an electronic band structure with a band gap of \(\sim 1.1\) eV. The latter was estimated using the Tauc method[40]. The target refractive index is given by \(\eta_{\text{DLC}}=n_{\text{DLC}}+i\kappa_{\text{DLC}}\), where \(n_{\text{DLC}}\approx 2.65\)[41] and the extinction coefficient \(\kappa_{\text{DLC}}\approx 0.5\) were obtained from a wavelength-dependent transmission measurement carried out using a Shimadzu SolidSpec-3700 spectrometer. The carbon ionization energies are \(11.3\), \(24.4\), \(47.8\), \(64.5\), \(392\), and \(490\) eV.
### Single-shot space and time-resolved probe transmission diagnostic
The plasma dynamics are investigated by longitudinally irradiating a \(\sim 100\) \(\mu\)m-extended region of the interaction with \(p\)-polarized broadband (\(\Delta\lambda\approx 150\) nm centered at \(\lambda\approx 840\) nm under an incidence angle of \(\alpha=37^{\circ}\)) and \(\approx 12\) \(\mu\)J-energy probe pulses produced in a Non-collinear Optical Parametric Amplifier (NOPA)[42]. When optimally compressed, the probe pulses have a duration of \(\approx 14\) fs. However, by applying a positive chirp to the pulses, their duration is stretched to \(\sim 6\) ps so that their different wavelength components arrive at different interaction times. With this approach, the plasma formation and evolution can be recorded within a single shot, achieving a sub-ps time resolution[32]. An extent of \(\Delta x\approx 3\) \(\mu\)m of the interaction region is imaged onto the entrance slit of the 1D-spatially resolving spectrometer. The relative timing between POLARIS-main and NOPA probe pulses (seeded by the same oscillator) can be adjusted using a delay stage with sub-ps resolution. We measure the relative probe
transmission through the plasma \(T_{r}=T/T_{0}\), where \(T_{0}\) and \(T\) are the measured transmission values without and with the interaction induced by the pump pulse, respectively. The conversion of the probe wavelength to time is calibrated using an additional pre-pulse with an adjustable time delay. The measured absolute transmission \(T\) in Fig. 2 is obtained as the lineout at \(y=0\) \(\mu\)m averaged over 3 \(\mu\)m. The measurements in the same figure are averaged over four shots with intensities in the range \(I\sim 10^{15}-10^{16}\) W/cm\({}^{2}\) for each foil thickness.
### Modeling
#### Analytical model for optical tunneling
We assume a \(p\)-polarized probe plane wave at an angle of incidence of \(\alpha=37^{\circ}\) at the overdense and homogeneous plasma slab of thickness \(d\) with the dielectric function \(\varepsilon<0\). Then, exploiting Maxwell's boundary conditions [43] at the front and rear side of the slab yields the transmission
\[\mathrm{T}=\frac{4e^{-4\pi\gamma d/\lambda}}{(1+C^{2})(1+e^{-8\pi\gamma d/ \lambda})+2(1-C^{2})e^{-4\pi\gamma d/\lambda}}\]
with \(\gamma=\sqrt{\sin^{2}\alpha-\varepsilon}\), \(C=\frac{\gamma^{2}-\varepsilon^{2}\cos^{2}\alpha}{2\gamma\varepsilon\cos\alpha}\), and \(\lambda\) being the central probe wavelength. For a highly overdense plasma, we can assume \(\varepsilon\approx 1-n_{e}/n_{c}\).
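A minimal numerical sketch of this estimate (Python, with the probe wavelength and incidence angle of the experiment as defaults) reproduces the transmission values quoted in the Results, \(T\approx 0.26\), \(\approx 0.024\), and \(\approx 0.5\) %; it is only valid for overdense slabs (\(\varepsilon<0\)).

```python
import numpy as np

def slab_transmission(d_nm, ne_over_nc, wavelength_nm=800.0, alpha_deg=37.0):
    """Tunneling transmittance of a p-polarized plane wave through a homogeneous
    overdense plasma slab in vacuum (the analytical expression above)."""
    eps = 1.0 - ne_over_nc                        # dielectric function (eps < 0 when ne > nc)
    alpha = np.deg2rad(alpha_deg)
    gamma = np.sqrt(np.sin(alpha)**2 - eps)
    C = (gamma**2 - eps**2 * np.cos(alpha)**2) / (2.0 * gamma * eps * np.cos(alpha))
    g = np.exp(-4.0 * np.pi * gamma * d_nm / wavelength_nm)
    return 4.0 * g / ((1.0 + C**2) * (1.0 + g**2) + 2.0 * (1.0 - C**2) * g)

print(slab_transmission(10.0, 50.0))    # ~0.26 : 10 nm slab at 50 n_c
print(slab_transmission(5.0, 371.0))    # ~0.024: 5 nm slab at full DLC ionization
print(slab_transmission(5.0, 742.0))    # ~0.005: twice the fully ionized density
```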
#### Numerical simulations
We used two complementary interaction models, discussed in detail below, to compute the time-dependent free electron density \(n_{e}(t)\). As the transmission profiles in the experiment were found to be insensitive to the peak intensity and the plasma forms during the steep rising edge of the laser pulse, the laser intensity is described by \(I=I_{0}\exp(-\big{|}t_{\mathrm{pump}}\big{|}/277\;\mathrm{fs})\) in the simulations, and \(I_{0}=10^{15}\;\mathrm{W/cm^{2}}\). The transmission of a probe plane wave propagating through the generated plasma with density \(n_{e}(z,t)\) was calculated by solving Maxwell's equations. To this end, the matrix method presented, e.g., in Refs. [44; 45] was adapted. The Drude Model is used to compute the complex dielectric function \(\varepsilon\) expressed as a function of \(n_{e}\) as
\[\varepsilon=\eta_{\mathrm{DLC}}^{2}-\frac{n_{e}}{n_{c}}(1+i\frac{\nu_{c}}{ \omega})\]
with \(\eta_{\mathrm{DLC}}\) being the refractive index of the pristine target (see the Target section); the temperature-averaged electron-ion collision frequency was chosen as \(\nu_{c}=5\times 10^{14}\;\mathrm{s}^{-1}\), consistent with what is discussed and used in the two-temperature model; \(\omega\) is the probe's angular frequency.
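The sketch below illustrates such a matrix-method calculation for a stack of homogeneous layers between vacuum half-spaces (p-polarization, oblique incidence). It is a simplified stand-in for the actual implementation, assuming that the plasma profile \(n_{e}(z,t)\) has already been discretized into thin slices and, in the demo call, that collisions are switched off.

```python
import numpy as np

def transfer_matrix_T(eps_layers, d_layers_nm, wavelength_nm=800.0, alpha_deg=37.0):
    """Transmittance of a p-polarized plane wave through a stack of homogeneous
    layers (complex dielectric functions eps_layers, thicknesses d_layers_nm in nm)
    sandwiched between vacuum half-spaces."""
    k0 = 2.0 * np.pi / wavelength_nm
    alpha = np.deg2rad(alpha_deg)
    s2 = np.sin(alpha) ** 2
    q0 = np.cos(alpha)                        # vacuum admittance factor for p-polarization
    M = np.eye(2, dtype=complex)
    for eps, d in zip(eps_layers, d_layers_nm):
        kz = k0 * np.sqrt(eps - s2 + 0j)      # normal wavevector component in the layer
        if kz.imag < 0:                       # pick the decaying branch for evanescent waves
            kz = -kz
        q = kz / (k0 * eps)
        phi = kz * d
        layer = np.array([[np.cos(phi), 1j * np.sin(phi) / q],
                          [1j * q * np.sin(phi), np.cos(phi)]])
        M = layer @ M
    t = 2.0 / (M[0, 0] + M[1, 1] - q0 * M[0, 1] - M[1, 0] / q0)
    return abs(t) ** 2

def drude_eps(ne_over_nc, eta_host=1.0, nu_over_omega=0.0):
    # eps = eta^2 - (ne/nc) * (1 + i*nu_c/omega), the Drude form used in the text
    return eta_host ** 2 - ne_over_nc * (1.0 + 1j * nu_over_omega)

# Cross-check against the analytical slab estimate: a 10 nm slab at 50 n_c
print(transfer_matrix_T([drude_eps(50.0)], [10.0]))   # ~0.26
```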
### Particle-in-cell simulation (PIC)
We use the one-dimensional (1D) PIC code SMILEI [22]. The relevant implemented processes include multiphoton ionization (MPI), field ionization (tunneling ionization, TI) and collisional ionization (CI) for atoms, and binary collisions between electrons and ions. In the simulation box of \(L=1\) \(\mu\)m length with a cell size of \(\Delta z=0.156\) nm, the target was modeled as a slab of cold carbon atoms at solid density \(n_{a}=62n_{c}\), positioned at \(z=0\) \(\mu\)m (cf. Fig. 4b, c). The pump laser with a central wavelength \(\lambda_{p}=1030\) nm enters the box from \(z<0\). Its temporal intensity envelope follows the steep rising edge mentioned above. The temporal resolution was \(\Delta t=5\times 10^{-4}\;\mathrm{fs}\). The number of particles per cell was initialized as follows: 2000 carbon atoms per cell were used in the simulation using a cold target, and 1452, 274, and 2000 particles per cell were used in the TSI model for the carbon ions \(C^{1+},C^{2+}\) and electrons, respectively.
### Solid-state interaction model (SSI)
To correctly account for ionization in solids, we use a solid-state interaction (SSI) model adapted from Ref. [33]. In this model, the ionization is described by solving state-of-the-art multiple rate equations [34] where the target band structure is described by a set of states accounting for the electron dynamics in the conduction band. An electron density \(n_{i}\) is associated with each state (where \(i\in 0,1,2\)), and the coupled system reads
\[\frac{\partial n_{0}}{\partial t} = W_{\mathrm{Pl}}+2\bar{\alpha}n_{2}-W_{1}n_{0}-n_{0}/\tau_{r},\] \[\frac{\partial n_{1}}{\partial t} = W_{1}n_{0}-W_{1}n_{1}-n_{1}/\tau_{r},\] \[\frac{\partial n_{2}}{\partial t} = W_{1}n_{1}-\bar{\alpha}n_{2}-n_{2}/\tau_{r}.\]
The first conduction state, \(n_{0}\) is filled with a \(W_{\mathrm{Pl}}\) rate (MPI or TI depending on the intensity) obtained from the Keldysh theory [46]. This stage describes the primary photo-ionization process. Each conduction state is bridged through one-photon absorption similar to the mechanism of inverse Bremsstrahlung absorption through the rate \(W_{1}=3.5\times 10^{-7}\;E_{L}^{2}\) in units of \(\mathrm{s}^{-1}\), where \(E_{L}\) is the laser electric field in units of \(\mathrm{V/m}\). The SSI model includes CI and possible electron avalanche ionization as highly energetic electrons in the conduction band may transfer a fraction of their energy by collisions with electrons in the valence band. The last state corresponds to the minimum energy required to induce impact ionization (i.e., at least 1.5 times the bandgap [34]) with the rate \(\bar{\alpha}=10^{15}\;\mathrm{s}^{-1}\). Conduction electrons can also recombine within a characteristic time-scale of \(\tau_{r}=1\) ps. These parameter values were already used in various studies compatible with the present conditions [47; 48; 49; 50]. The free electron density is \(n_{e}=n_{0}+n_{1}+n_{2}\). Since the emptying of the valence band is not accounted for, \(n_{e}\) can exceed tens of critical plasma densities. This approach remains valid as long as
the band structure remains intact, i.e., before melting occurs.
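To illustrate how the coupled rate equations behave, a toy integration is sketched below. The rates \(W_{1}\), \(\bar{\alpha}\), and \(\tau_{r}\) follow the expressions and values above, but the photo-ionization source and the constant field amplitude are simplified placeholders (no Keldysh rate is evaluated), so the numbers are indicative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy integration of the three-state multiple-rate-equation system above.
# W_1, alpha_bar and tau_r follow the text; the photo-ionization source (the
# W_Pl term in the text) and the constant field E_L are crude placeholders.
E_L = 2.7e9                  # assumed laser field [V/m], roughly 1e12 W/cm^2
W1 = 3.5e-7 * E_L**2         # one-photon bridging rate [1/s]
alpha_bar = 1e15             # impact-ionization rate [1/s]
tau_r = 1e-12                # recombination time [s]
W_PI = 1e26                  # placeholder photo-ionization source [m^-3 s^-1]

def rhs(t, n):
    n0, n1, n2 = n
    dn0 = W_PI + 2.0 * alpha_bar * n2 - W1 * n0 - n0 / tau_r
    dn1 = W1 * n0 - W1 * n1 - n1 / tau_r
    dn2 = W1 * n1 - alpha_bar * n2 - n2 / tau_r
    return [dn0, dn1, dn2]

sol = solve_ivp(rhs, (0.0, 200e-15), [0.0, 0.0, 0.0], method="LSODA", max_step=1e-16)
print(f"n_e = n0 + n1 + n2 after 200 fs: {sol.y[:, -1].sum():.3e} m^-3")
```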
### Two-temperature model (TTM)
Following Refs. [30, 33] and references therein, we use a standard two-temperature model (TTM),
\[C_{e}\,\frac{\partial T_{e}}{\partial t}=\,\frac{\partial U}{ \partial t}-\frac{3}{2}k_{B}\,\frac{\partial n_{e}}{\partial t}T_{e}-G(T_{e}-T _{l}),\] \[C_{l}\,\frac{\partial T_{l}}{\partial t}=\,\,G(T_{e}-T_{l}).\]
The heat capacities are \(C_{e}=3n_{e}k_{B}/2\) and \(C_{l}=3n_{a}k_{B}/2\). The electron-ion energy exchange factor \(G\) is evaluated by \(G=C_{e}v_{c}m_{e}/m_{a}\). \(n_{a}\) and \(m_{a}\) are the carbon atomic density and mass, respectively. The electron-to-ion mass ratio weights the collision frequency to account for energy exchange (\(v_{c}\) accounts for momentum transfer). The source term is evaluated with the Drude model:
\[\frac{\partial U}{\partial t}=\frac{e^{2}n_{e}v_{c}}{m_{e}( \omega^{2}+v_{c}^{2})}E_{L}^{2}\]
where \(v_{c}=v_{\rm ph}\) is the electron-phonon collision frequency as the collisions are mainly driven by phonons in the solid state. It then reads \(v_{\rm ph}=v_{\rm ph0}\,T_{l}/T_{0}\), where \(v_{\rm ph0}\) is the electron-phonon collision frequency at room temperature \(T_{0}=300\) K. It is set to \(v_{\rm ph0}=10^{14}\) s\({}^{-1}\) [48]. \(v_{c}\) is limited to \(5\times 10^{15}\) s\({}^{-1}\) to account for the upper value of the collision frequency imposed by the electron mean free path [33].
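A compact numerical sketch of this two-temperature system is given below for a fixed free-electron density, so the \(\partial n_{e}/\partial t\) term vanishes; the expressions for \(C_{e}\), \(C_{l}\), \(G\), the source term, and the phonon-scaled collision frequency follow the text, while \(n_{e}\), \(E_{L}\), and the integration window are illustrative assumptions.

```python
import numpy as np

# Sketch of the two-temperature model above for a fixed free-electron density
# (so the dn_e/dt term drops out).  C_e, C_l, G, dU/dt and the phonon-scaled
# collision frequency follow the text; n_e, E_L and the time window are assumptions.
kB, me, e, eps0, c = 1.381e-23, 9.109e-31, 1.602e-19, 8.854e-12, 2.998e8
omega = 2.0 * np.pi * c / 1030e-9             # pump angular frequency [rad/s]
n_crit = eps0 * me * omega**2 / e**2          # critical density at 1030 nm [m^-3]
n_a = 62.0 * n_crit                           # carbon atomic density (text)
m_a = 12.0 * 1.661e-27                        # carbon atomic mass [kg]
n_e = 1.0 * n_crit                            # assumed fixed electron density
E_L = 1e9                                     # assumed field amplitude [V/m]
nu_ph0, T0 = 1e14, 300.0                      # room-temperature e-ph rate and T0

Te, Tl = T0, T0
dt = 1e-16
for _ in range(2000):                         # integrate 200 fs with forward Euler
    nu = min(nu_ph0 * Tl / T0, 5e15)          # capped, phonon-scaled collision rate
    Ce, Cl = 1.5 * n_e * kB, 1.5 * n_a * kB
    G = Ce * nu * me / m_a
    dUdt = e**2 * n_e * nu / (me * (omega**2 + nu**2)) * E_L**2
    Te += dt * (dUdt - G * (Te - Tl)) / Ce
    Tl += dt * G * (Te - Tl) / Cl
print(f"After 200 fs: T_e ~ {Te:.0f} K, T_l ~ {Tl:.1f} K")
```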
### Two-step interaction model (TSI)
In our TSI model, the PIC simulation starts with a homogeneous plasma slab of the initial target thickness with a density \(n_{e}^{m}\approx 70n_{c}\). The electron and carbon ion species are initialized with Maxwell-Boltzmann distribution functions with temperatures \(T_{e}^{m}\) and \(T_{l}^{m}\), respectively, computed in the TTM. Since \(n_{e}^{m}\) exceeds the carbon solid atomic density \(n_{a}=62n_{c}\), the plasma is modeled as partially ionized with a mixture of single and double ionization states of carbon, \(C^{1+}\) and \(C^{2+}\), with \(n_{c^{1+}}=54n_{c}\) and \(n_{c^{2+}}=8n_{c}\), respectively. In the low-intensity range with \(I<10^{12}\) W/cm\({}^{2}\) where the SSI description holds, we expect full single ionization of carbon atoms by MPI reaching the density \(n_{c^{1+}}=n_{a}\) before a significant number of \(C^{2+}\) is produced. Collisions are of minor importance because electrons are only moderately heated by the laser in this intensity range. With increasing intensity, a fraction of these ions is further ionized to \(C^{2+}\) to reach \(n_{e}^{m}\). The other simulation parameters are kept the same as in the PIC section above.
## Acknowledgements
The research leading to these results has received funding from LASERLAB-EUROPE (Grant Agreement No. 871124, European Union's Horizon 2020 research and innovation program) and from the Bundesministerium für Bildung und Forschung (BMBF, Grants Agreement No. 03VNE2068D, No. 03Z1H531, No. 05K16SJC, No. 05F19SJC, No. 05P15SJFA1, and No. 05P19SJFA1).
|
2301.02322 | Redder than Red: Discovery of an Exceptionally Red L/T Transition Dwarf | We present the discovery of CWISE J050626.96$+$073842.4 (CWISE J0506$+$0738),
an L/T transition dwarf with extremely red near-infrared colors discovered
through the Backyard Worlds: Planet 9 citizen science project. Photometry from
UKIRT and CatWISE give a $(J-K)_{\rm MKO}$ color of 2.97$\pm$0.03 mag and a
$J_{\rm MKO}-$W2 color of 4.93$\pm$0.02 mag, making CWISE J0506$+$0738 the
reddest known free-floating L/T dwarf in both colors. We confirm the extremely
red nature of CWISE J0506$+$0738 using Keck/NIRES near-infrared spectroscopy
and establish that it is a low-gravity late-type L/T transition dwarf. The
spectrum of CWISE J0506$+$0738 shows possible signatures of CH$_4$ absorption
in its atmosphere, suggesting a colder effective temperature than other known,
young, red L dwarfs. We assign a preliminary spectral type for this source of
L8$\gamma$-T0$\gamma$. We tentatively find that CWISE J0506$+$0738 is variable
at 3-5 $\mu$m based on multi-epoch WISE photometry. Proper motions derived from
follow-up UKIRT observations combined with a radial velocity from our
Keck/NIRES spectrum and a photometric distance estimate indicate a strong
membership probability in the $\beta$ Pic moving group. A future parallax
measurement will help to establish a more definitive moving group membership
for this unusual object. | Adam C. Schneider, Adam J. Burgasser, Justice Bruursema, Jeffrey A. Munn, Frederick J. Vrba, Dan Caselden, Martin Kabatnik, Austin Rothermich, Arttu Sainio, Thomas P. Bickle, Scott E. Dahm, Aaron M. Meisner, J. Davy Kirkpatrick, Genaro Suarez, Jonathan Gagne, Jacqueline K. Faherty, Johanna M. Vos, Marc J. Kuchner, Stephen J. Williams, Daniella Bardalez Gagliuffi, Christian Aganze, Chih-Chun Hsu, Christopher Theissen, Michael C. Cushing, Federico Marocco, Sarah Casewell, the Backyard Worlds, :, Planet 9 Collaboration | 2023-01-05T22:40:13Z | http://arxiv.org/abs/2301.02322v1 | # Redder than Red: Discovery of an Exceptionally Red L/T Transition Dwarf
###### Abstract
We present the discovery of CWISE J050626.96+073842.4 (CWISE J0506+0738), an L/T transition dwarf with extremely red near-infrared colors discovered through the Backyard Worlds: Planet 9 citizen science project. Photometry from UKIRT and CatWISE give a \((J-K)_{\rm MKO}\) color of 2.97\(\pm\)0.03 mag and a \(J_{\rm MKO}\)\(-\)W2 color of 4.93\(\pm\)0.02 mag, making CWISE J0506+0738 the reddest known free-floating L/T dwarf in both colors. We confirm the extremely red nature of CWISE J0506+0738 using Keck/NIRES near-infrared spectroscopy and establish that it is a low-gravity late-type L/T transition dwarf. The spectrum of CWISE J0
han et al., 2011; Helling et al., 2014). Red near-infrared colors have been efficiently utilized to characterize and discover new young brown dwarfs and planetary-mass objects (e.g., Kellogg et al., 2015; Schneider et al., 2017). There also exists a population of red L dwarfs that do not have obvious signs of youth (e.g., Looper et al., 2008; Kirkpatrick et al., 2010; Marocco et al., 2014). While the exact reasons for the red colors of these relatively high-gravity objects are not entirely clear, their spectra have been well-reproduced by the presence of micron or submicron-sized grains in their upper atmospheres (Marocco et al., 2014; Hiranaka et al., 2016; Charnay et al., 2018). This high-altitude dust suppresses emission at shorter wavelengths much more efficiently than longer wavelengths, leading to significantly reddened spectra compared to "normal" brown dwarfs. There is evidence that the strength of silicate absorption features in the mid-infrared correlates with the near-infrared colors of L dwarfs (Burgasser et al., 2008; Suarez & Metchev, 2022), indicating that variations in silicate cloud thickness also play a role. Further, viewing angle (Vos et al., 2017) and variability (Ashraf et al., 2022) have been shown to be related to the colors of substellar objects. There is also evidence that convective instabilities can produce similar effects as clouds in young red L dwarfs (Tremblin et al., 2017). In any case, young red L dwarfs and old reddened L dwarfs have proven to be compelling laboratories for the study of low temperature substellar atmospheres.
The vast majority of the current population of directly-imaged planetary-mass companions are also young and have similar effective temperatures, masses, and radii as young L dwarfs, as well as observed properties, including unusually red near-infrared colors. Examples include 2M1207b (Chauvin et al., 2004, 2005; Patience et al., 2010), HD 206893B (Milli et al., 2017; Delorme et al., 2017; Kammerer et al., 2021; Meshkat et al., 2021; Ward-Duong et al., 2021), VHS J125601.92\(-\)125723.9B (Gauza et al., 2015), 2MASS J22362452+4751425b (Bowler et al., 2017), BD+60 1417B (Faherty et al., 2021), HR8799bcd (Marois et al., 2008), and HD 203030B (Metchev & Hillenbrand, 2006). Young, red L dwarfs in the field provide an opportunity to study the physical properties of giant exoplanet-like atmospheres without the technical challenge of blocking host star light.
In this article, we present the discovery of CWISE J050626.96+073842.4 (CWISE J0506+0738), an exceptionally red brown dwarf discovered as part of the Backyard Worlds: Planet 9 (BYW) citizen science project (Kuchner et al., 2017). We detail its discovery in Section 2, present Keck/NIRES spectroscopic follow-up observations in Section 3, analyze these data in Section 4, and discuss CWISE J0506+0738 in the context of other red brown dwarfs in Section 5.
## 2 Discovery of CWISE J0506+0738
CWISE J0506+0738 was submitted as an object of interest to the BYW project by citizen scientists Austin Rothermich, Arttu Sainio, Sam Goodman, Dan Caselden, and Martin Kabatnik because it had notable motion amongst epochs of WISE observations. BYW uses unWISE images (Lang, 2014; Meisner et al., 2018) covering the 2010-2016 time frame and is typically sensitive to objects with proper motions \(\gtrsim\) 0\(\farcs\)05-0\(\farcs\)1 yr\({}^{-1}\). As part of the initial investigation to evaluate whether or not CWISE J0506+0738 was a newly discovered substellar object, we gathered available photometry from the Two Micron All-Sky Survey (2MASS) reject catalog (Skrutskie et al., 2006; 2MASS Team, 2006), the United Kingdom Infrared Telescope (UKIRT) Hemisphere Survey DR1 (UHS; Dye et al., 2018), and the CatWISE 2020 main catalog (Marocco et al., 2021), and determined a photometric spectral type of \(\sim\)L7.5 using the method described in Schneider et al. (2016a). It was noted during the initial evaluation of this object that its \(J-K\) color, using UHS \(J\)- and 2MASS \(K\)-band photometry, was exceptionally red (\(J-K\) = 3.17\(\pm\)0.21 mag), more than half a magnitude redder than the reddest known free-floating L dwarf, PSO J318.5338\(-\)22.8603 (\(J-K\) = 2.64\(\pm\)0.02 mag; Liu et al., 2013). An inspection of 2MASS, UHS, WISE, and Pan-STARRS DR2 (Magnier et al., 2020) images showed no sources of contamination, suggesting that the near-infrared colors accurately reflect the true spectral energy distribution of the source (Figure 1).
The astrometry and photometry of CWISE J0506+0738 were further analyzed using measurements from the UHS DR2 catalog, which will provide \(K\)-band photometry for much of the northern hemisphere (Bruursema et al. in prep.). CWISE J0506+0738 was found to have a \(K\)-band magnitude of 15.513\(\pm\)0.022 mag, consistent with the previous 2MASS measurement but significantly more precise. This measurement results in a UHS \((J-K)_{\rm MKO}\) color of 3.24\(\pm\)0.10 mag, slightly redder but consistent with UHS and 2MASS photometry. We therefore considered this candidate a high-priority target for follow-up spectroscopic observations.
## 3 Observations
### Ukirt/WFCAM
In an effort to refine the astrometry and photometry of CWISE J0506+0738, we observed it with the \(J_{\rm MKO}\) filter on the infrared Wide-Field Camera (WFCAM; Casali et al., 2007) on UKIRT on 20 September 2022. Observations were performed using a 3 \(\times\) 3 microstepping pattern, with the resulting 9 images interleaved (Dye et al., 2006) to provide improved sampling over that of a single WFCAM exposure. The microstepping sequence was repeated five times, resulting in 45 single exposures each lasting 20 seconds, for a total exposure time of 900 seconds. We re-registered the world coordinate system (WCS) of each interleaved frame using the Gaia DR3 catalog (Gaia Collaboration et al., 2022). Images were then combined using the imstack routine from the CASUTOOLS package\({}^{1}\) (Irwin et al., 2004). The position and photometry of CWISE J0506+0738 were extracted using the CASUTOOLS imcore routine.
Footnote 1: [http://casu.ast.cam.ac.uk/surveys-projects/software-release](http://casu.ast.cam.ac.uk/surveys-projects/software-release)
Combining the position of this \(J\)-band observation with the UHS \(K\)-band observation, we calculated proper motion components of \(\mu_{\alpha}\) = 31.5\(\pm\)2.6 mas yr\({}^{-1}\) and \(\mu_{\delta}\) = -82.7\(\pm\)2.7 mas yr\({}^{-1}\). CatWISE 2020 reports proper motions of \(\mu_{\alpha}\) = 44.2\(\pm\)7.9 mas yr\({}^{-1}\) and \(\mu_{\delta}\) = -97.5\(\pm\)8.4 mas yr\({}^{-1}\) (with offset corrections applied according to Marocco et al., 2021). The proper motion calculated from our UKIRT observations is significantly more precise than the proper motion measurements from CatWISE 2020, and we adopt the former for our analysis.
We measure a \(J_{\rm MKO}-\)band magnitude of 18.487\(\pm\)0.017 mag from these observations, which is \(>\)2\(\sigma\) brighter than the value from the UKIRT Hemisphere Survey (18.76\(\pm\)0.10 mag). To verify our measured photometry, we compared the photometry for other sources found to have similar magnitudes (18.4 \(<J<\) 18.6 mag) in our images to UHS values. We found a median \(J\)-band difference for the 52 objects in this sample to be -0.03 mag, with a median absolute deviation of 0.07 mag, showing that differences as large as that measured for this object (0.27 mag) are relatively rare. The origin of the difference between these \(J\)-band measurements is unclear, though we note that variability may be a contributing factor, as young (and red) objects are often found to have larger amplitude variability than field-age objects with similar spectral types (e.g., Vos et al., 2022). While this new \(J\)-band
Figure 1: Images of CWISE J0506+0738 from 2MASS (upper left and center), UHS (bottom left and center), Pan-STARRS (upper right, three-color image with \(g/i/y\) bands), and WISE (lower right, three-color image with \(W1/W2/W3\) bands). The position of CWISE J0506+0738 as determined in the UHS \(K\)-band images is denoted by a red circle. Note that CWISE J0506+0738 is undetected at 2MASS \(J\) and in the Pan-STARRS 3-color image, but clearly detected in the 2MASS \(K\)-band, UHS, and WISE images. The greenish hue of CWISE J0506+0738 in the WISE images shows that this object is significantly brighter at WISE channel W2 (4.6 \(\mu\)m) than WISE channel W1 (3.4 \(\mu\)m) or W3 (12 \(\mu\)m), typical of brown dwarfs with late-L or later spectral types.
measurement results in bluer \((J-K)_{\rm MKO}\) = 2.97\(\pm\)0.03 mag and \(J_{\rm MKO}\)\(-\)W2 = 4.94\(\pm\)0.02 mag colors, they both remain significantly redder than those of any previously identified free-floating brown dwarf.
All UKIRT photometry and astrometry for CWISE J0506+0738 are provided in Table 1.
### Keck/NIRES
CWISE J0506+0738 was observed with the Near-Infrared Echellette Spectrometer (NIRES; Wilson et al., 2004) mounted on the Keck II telescope on UT 19 January 2022. NIRES provides a resolution \(\lambda/\Delta\lambda\approx\) 2700 over five cross-dispersed orders spanning a wavelength range of 0.9-2.45 \(\mu\)m. CWISE J0506+0738 was observed in four 250 second exposures nodded in an ABBA pattern along the slit, which was aligned with the parallactic angle, for a total on-source integration time of 1000 seconds. The spectrum was extracted using a modified version of the SpeXTool package (Vacca et al., 2003; Cushing et al., 2004), with the A0 V star HD 37887 (\(V\) = 7.67) used for telluric correction. The large \(J-K\) color of CWISE J0506+0738 resulted in significant signal to noise (S/N) differences across the final reduced spectrum, with a S/N\(\sim\)25 at the \(J\)-band peak (\(\sim\)1.3 \(\mu\)m) and a S/N\(\sim\)200 at the \(K\)-band peak (\(\sim\)2.2 \(\mu\)m).
The inter-band flux calibration for Keck/NIRES orders is occasionally skewed by seeing or differential refraction slit losses. In particular, there is a gap between the third (\(K\)-band) and fourth (\(H\)-band) orders spanning 1.86 to 1.89 \(\mu\)m2, and the overlap between the fourth and fifth (\(J\)-band) orders lies in a region of strong telluric and stellar H\({}_{2}\)O absorption. We therefore re-scaled the resulting spectrum to have a \(J-K\) synthetic color consistent with UKIRT \(J\)-band and UHS \(K\)-band photometry by applying small multiplicative constants to the \(H\)- and \(K\)-band portions of the spectrum. The final reduced spectrum is shown in Figure 2.
Footnote 2: [https://www2.keck.hawaii.edu/inst/nires/genspecs.html](https://www2.keck.hawaii.edu/inst/nires/genspecs.html)
## 4 Analysis
### Spectral Type
As with many of the known, young, late-type red L dwarfs, none of the L dwarf spectral standards (Kirkpatrick et al., 2010; Cruz et al., 2018) provide a suitable match to the near-infrared spectrum of CWISE J0506+0738. The best match to the \(J\)-band portion of the spectrum is the L7 standard 2MASSI J0825196+211552 (Kirkpatrick et al., 1999; Cruz et al., 2018), which is shown in the top panel of Figure 2. CWISE J0506+0738 shows much stronger H\({}_{2}\)O absorption around 1.1 \(\mu\)m, a feature commonly seen in low-gravity L dwarfs. This comparison also shows how red CWISE J0506+0738 is compared to a normal, field-age/field-gravity late-L dwarf. The bottom panel of Figure 2 shows a comparison of CWISE J0506+0738 with PSO J318.5338\(-\)22.8603 (Liu et al., 2013), which is typed as L7 VL-G in that work. These two objects match relatively well across the \(J\)-band portion of the spectrum, though the extreme redness of CWISE J0506+0738 can still be seen in this comparison via the mismatch in the \(H\)- and \(K\)-band portions of their spectra.
We also note that the spectrum of CWISE J0506+0738 has a noticeable absorption feature at the \(H\)-band peak. There is also a second, less-pronounced absorption feature present in the \(K\)-band portion of CWISE J0506+0738's spectrum between 2.2 and 2.3 \(\mu\)m. While we cannot _a priori_ rule out systematic noise or a data reduction artifact for these features, we note that no similar features have been seen in Keck/NIRES spectra of L dwarfs obtained and reduced by our group (e.g., Meisner et al., 2021; Schapera et al., 2022; Softich
\begin{table}
\begin{tabular}{l c c}
\hline \hline
\multicolumn{1}{c}{Parameter} & Value & Ref. \\
\hline
R.A. (\({}^{\circ}\)) (epoch=2022.7)\({}^{a}\) & 76.6124377 & 1 \\
Dec. (\({}^{\circ}\)) (epoch=2022.7)\({}^{a}\) & 7.6449299 & 1 \\
R.A. (\({}^{\circ}\)) (epoch=2017.8)\({}^{a}\) & 76.6123885 & 2 \\
Dec. (\({}^{\circ}\)) (epoch=2017.8)\({}^{a}\) & 7.6450716 & 2 \\
\(\mu_{\alpha}\) (mas yr\({}^{-1}\)) & 31.5\(\pm\)2.6 & 1 \\
\(\mu_{\delta}\) (mas yr\({}^{-1}\)) & -82.7\(\pm\)2.7 & 1 \\
\(d^{b}\) (pc) & 32\({}^{+4}_{-3}\) & 1 \\
RV (km s\({}^{-1}\)) & +16.3\({}^{+8.8}_{-7.7}\) & 1 \\
\(J_{\rm MKO}\) (mag) & 18.487\(\pm\)0.017 & 1 \\
\(K_{\rm MKO}\) (mag) & 15.513\(\pm\)0.022 & 2 \\
W1 (mag) & 14.320\(\pm\)0.015 & 3 \\
W2 (mag) & 13.552\(\pm\)0.013 & 3 \\
Sp. Type & L8\(\gamma\)–T0\(\gamma\) & 1 \\
\hline
\end{tabular}
\end{table}
Table 1: Properties of CWISE J050626.96+073842.4
et al., 2022; Theissen et al., 2022). We also note that these features occur at the approximate locations of CH\({}_{4}\) absorption seen in model spectra of low-surface gravity brown dwarfs with effective temperatures \(\lesssim\)1400 K. Figure 3 compares solar-metallicity model spectra from Marley et al. (2021) with fixed low-surface gravities (log(g)=3.5) and varying effective temperatures. Prominent methane absorption features can be seen in the \(H\)- and \(K\)-bands for \(T_{\rm eff}\lesssim\)1400 K. While these models are informative for (potentially) identifying the source of some of the absorption features seen in the spectrum of CWISE J0506+0738, we were unable to find any models that successfully reproduced the overall shape of CWISE J0506+0738's spectrum, similar to previous studies of young brown dwarfs (e.g., Manjavacas et al., 2014).
The presence of CH\({}_{4}\) in the \(H\)- and \(K\)-band peaks of CWISE J0506+0738's spectrum would suggest that this source is early T dwarf (Burgasser et al., 2006), although these features are fairly weak in strength. Charnay et al. (2018) showed that the presence of clouds can greatly reduce the abundance of CH\({}_{4}\) in the photospheres of low-gravity objects, a possible explanation for the absence of CH\({}_{4}\) bands in the spectra of 2M1207b and HR8799bcd (Barman et al., 2011a,b; Konopacky et al., 2013). If the same effect holds here, it would argue for a particularly low temperature for CWISE J0506+0738, below that of the \(T_{\rm eff}\approx 1200\) K planetary-mass L dwarf PSO J318.5338\(-\)22.8603 and VHS 1256\(-\)1257B which originally showed no indica
Figure 2: The Keck/NIRES spectrum of CWISE J0506+0738, shown in the original resolution (grey lines) and smoothed to a resolution of \(\lambda/\Delta\lambda\approx 100\) (black lines). CWISE J0506+0738 is compared to the L7 spectral standard 2MASSI J0825196+211552 (Kirkpatrick et al., 2000; Cruz et al., 2018) in the top panel, and the young L7 VL-G dwarf PSO J318.5338\(-\)22.8603 (Liu et al., 2013) in the bottom panel. Both comparisons highlight the extremely red nature of CWISE J0506+0738. All spectra are normalized between 1.27 and 1.29 \(\mu\)m, and prominent absorption features have been labeled.
tion of CH\({}_{4}\) absorption in the 1-2.5 \(\mu\)m region3. (Liu et al., 2013; Gauza et al., 2015). These two sources do have detectable absorption in the 3.3 \(\mu\)m \(\nu_{3}\) CH\({}_{4}\) fundamental band (Miles et al., 2018), and cloud scattering opacity is likely responsible for muting the 1.6 \(\mu\)m and 2.2 \(\mu\)m bands in these red L dwarfs (Charnay et al., 2018; Burningham et al., 2021). Indeed, it has been noted previously that PSO J318.5338\(-\)22.8603 is just on the warmer side of the transition to CH\({}_{4}\) becoming the dominant carbon-bearing molecule in its atmosphere (Tremblin et al., 2017). We tentatively assert that both \(H\)- and \(K\)-band features in the spectrum of CWISE J0506+0738 are due to CH\({}_{4}\) absorption, which may be tested with more detailed analysis (e.g., atmospheric retrievals; Burningham et al., 2017, 2021) and higher S/N moderate-resolution data. Given the similarity of the \(J\)-band portion of CWISE J0506+0738's spectrum to PSO J318.5338\(-\)22.8603 (L7 VL-G), and likely detection of CH\({}_{4}\) in the \(H\)- and \(K\)-bands, we assign a near-infrared spectral type of L8\(\gamma\)-T0\(\gamma\) to CWISE J0506+0738, where the \(\gamma\) signifies very low surface gravity (Kirkpatrick, 2005).
Footnote 3: Recent high S/N _JWST_/NIRSPEC observations of VHS 1256\(-\)1257B have revealed the presence of weak 1.6 \(\mu\)m absorption in its spectrum (Miles et al., 2022).
### Spectral Evidence of Youth
The characterization of brown dwarfs and planetary mass objects as "low surface gravity" or "young" typically arises from gravity-sensitive (or more specifically, photosphere pressure-sensitive) spectral features quantified by spectral indices (e.g., Steele & Jameson, 1995; Martin et al., 1996; Luhman et al., 1997; Gorlova et al., 2003; McGovern et al., 2004; Kirkpatrick et al., 2006; Allers et al., 2007; Manjavacas et al., 2020). Many of these spectral indices, however, are designed for optical spectra (e.g., Cruz et al., 2009) or are only applicable to objects with spectral types earlier than \(\sim\)L5 (e.g., Allers & Liu, 2013; Lodieu et al., 2018). The \(H\)-cont index is a gravity-sensitive index defined in Allers & Liu (2013) that is one of the few gravity-sensitive indices applicable to spectral types later than L5. This index is designed to approximate the slope of the blue side of the \(H\)-band peak, with low-gravity objects exhibiting a much steeper slope than field-age brown dwarfs. However, this index is defined using a band centered at 1.67 \(\mu\)m, which is where a feature potentially attributable to CH\({}_{4}\) occurs in our spectrum. Thus the \(H\)-cont index does not provide an accurate assessment of the slope of the blue side of the \(H\)-band peak for this object.
We have created a modified slope index for the blue side of the \(H\)-band peak by computing a simple linear least-squares fit to the 1.45-1.64 \(\mu\)m region after normalizing to the \(J\)-band peak between 1.27 and 1.29 \(\mu\)m. We measured this slope (normalized flux/\(\mu\)m) for several late-L and early-T dwarfs, both field and young association members, as shown in Figure 4. We note that the largest slope for the entire sample belongs to WISE J173859.27+614242.1, an object that has been difficult to classify (Mace et al., 2013), but is most consistent with an extremely red L9 (Thompson et al., 2013). It is unclear if this object is young, has an extremely dusty photosphere, or both. For typical L7-T0 dwarfs, \(H\)-slope values for field objects range from 2-4, while equivalently classified young L dwarfs have values that range over 3-5. For CWISE J0506+0738, we find a slope of 4.38, significantly larger than field-age late-L dwarfs. The known population of young, very red L dwarfs simi
Figure 3: Model spectra from Marley et al. (2021) with varying effective temperatures and surface gravity fixed at log(g)=3.5. The gray bands highlight the approximate regions of the absorption features seen in the spectrum of CWISE J0506+0738.
larly has larger \(H\)-slope values than their field-age counterparts.
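The index definition can be made concrete with a few lines of code; the sketch below applies the same normalization and linear fit described above to a synthetic placeholder spectrum rather than to the NIRES data.

```python
import numpy as np

# Modified H-band slope index as described above: normalize to the mean flux of
# the J-band peak (1.27-1.29 um), then fit a straight line over 1.45-1.64 um.
# The spectrum below is a synthetic placeholder, not the NIRES spectrum.
wave = np.linspace(0.95, 2.45, 3000)                    # wavelength [um]
flux = np.exp(-0.5 * ((wave - 1.70) / 0.45)**2)         # toy red-sloped spectrum

j_peak = (wave >= 1.27) & (wave <= 1.29)
flux_norm = flux / flux[j_peak].mean()

h_blue = (wave >= 1.45) & (wave <= 1.64)
slope, intercept = np.polyfit(wave[h_blue], flux_norm[h_blue], 1)
print(f"H-band slope index: {slope:.2f} (normalized flux per micron)")
```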
Schneider et al. (2014) also showed that the H\({}_{2}(K)\) index defined in Canty et al. (2013) could distinguish young, low-gravity late-Ls from the field late-L population. The H\({}_{2}(K)\) index determines the slope of the \(K\)-band between 2.17 \(\mu\)m and 2.24 \(\mu\)m. CWISE J0506+0738 has an H\({}_{2}(K)\) value of 1.030, which is again consistent with the known population of low-gravity late-type L dwarfs (\(1.029\leq\) H\({}_{2}(K)\)\(\leq 1.045\)) compared to field-age L6-L8 brown dwarfs (H\({}_{2}(K)\)\(\gtrsim 1.05\)).
Another spectral feature that has been used to distinguish low-surface gravity late-L dwarfs are the K I absorption lines between 1.1 and 1.3 \(\mu\)m (McGovern et al., 2004; Allers and Liu, 2013; Miles et al., 2022). Our Keck/NIRES spectrum does not have sufficient S/N around the \(J\)-band peak to investigate these lines. A higher S/N spectrum would help to ensure no ambiguity regarding the surface gravity of CWISE J0506+0738.
### Radial Velocity
The resolution of the Keck/NIRES data is sufficient to obtain a coarse measure of the radial velocity (RV) of CWISE J0506+0738, particularly in the vicinity of strong molecular features. We followed a procedure similar to that described in Burgasser et al. (2015) (see also Blake et al., 2010; Hsu et al., 2021), forward-modeling the wavelength-calibrated spectrum prior to telluric correction in the 2.26-2.38 \(\mu\)m region. This spectral band contains the prominent 2.3 \(\mu\)m CO 2-0 band present in L dwarf spectra, as well as strong telluric features that allow refinement of the spectral wavelength calibration (cf. Newton et al., 2014). We used a \(T_{\rm eff}=1300\) K, \(\log g=4.5\) dex (cgs) BTSettl atmosphere model (\(M[\lambda]\)) from Allard et al. (2012) which provides the best match to the CO band strength, and a telluric absorption model (\(T[\lambda]\)) from Livingston and Wallace (1991). We forward modeled the data (\(D[\lambda]\)) using four parameters: the barycentric radial velocity of the star (RV\({}_{\oplus}\)), the strength of telluric absorption (\(\alpha\)), the instrumental gaussian broadening profile width (\(\sigma_{broad}\)), and the wavelength offset from the nominal SpeXtool solution (\(\Delta\lambda\)):
\[D[\lambda]=(M[\lambda^{*}+\Delta\lambda]\times T[\lambda+\Delta\lambda]^{\alpha })\,\otimes\kappa_{G}(\sigma_{broad}) \tag{1}\]
with \(\lambda^{*}=\lambda(1+RV_{\oplus}/c)\) accounting for the radial motion of the star and \(\kappa_{G}\) representing the gaussian broadening kernel. Preliminary fits that additionally included rotational broadening of the stellar spectrum indicated that this parameter was equal to the instrumental broadening and is likely unresolved (\(v\sin i\lesssim 65\) km/s), so it was ignored in our final fit.
After an initial "by-eye" optimization of parameters, we used a simple Markov Chain Monte Carlo (MCMC) algorithm to explore the parameter space, evaluating goodness of fit between model and data using a \(\chi^{2}\) statistic. Figure 5 displays the posterior distribution of our fit parameters after removing the first half of the MCMC chain ("burn-in"), which are normally distributed. There is a small correlation between RV\({}_{\oplus}\) and \(\Delta\lambda\) which is expected given that stellar and telluric features are intermixed in this region. This correlation increases the uncertainties of these parameters. We find that the best-fit model from this analysis is an excellent match to the NIRES spectrum, with residuals consistent with uncertainties. After correction for barycentric motion (\(-19.2\) km/s), we determine a heliocentric radial velocity of \(+16.3^{+8.8}_{-7.7}\) km s\({}^{-1}\) for CWISE J0506+0738.
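A stripped-down sketch of the forward model in Equation 1 is shown below. The stellar and telluric templates are placeholder arrays standing in for the BTSettl model and the telluric atlas, and only the model evaluation is illustrated, not the MCMC sampling or the \(\chi^{2}\) evaluation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

C_KMS = 2.998e5  # speed of light [km/s]

def forward_model(wave, model_flux, telluric, rv, alpha, sigma_broad_kms, dlam):
    """Evaluate D = (M[lambda* + dlam] x T[lambda + dlam]^alpha) convolved with kappa_G.

    wave, model_flux and telluric are placeholder arrays on a common, nearly
    uniform wavelength grid [um]; rv and sigma_broad_kms are in km/s, dlam in um.
    """
    m = np.interp(wave * (1.0 + rv / C_KMS) + dlam, wave, model_flux)  # shifted stellar model
    t = np.interp(wave + dlam, wave, telluric) ** alpha                # scaled telluric model
    dv_per_pix = C_KMS * np.median(np.diff(wave) / wave[:-1])          # velocity per pixel
    return gaussian_filter1d(m * t, sigma_broad_kms / dv_per_pix)      # instrumental broadening

# Toy call with flat placeholder templates, just to show the interface.
w = np.linspace(2.26, 2.38, 2000)
d = forward_model(w, np.ones_like(w), np.ones_like(w),
                  rv=16.3, alpha=1.0, sigma_broad_kms=30.0, dlam=0.0)
print(d.shape)
```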
## 5 Discussion
### Redder than Red
CWISE J0506+0738 has exceptionally red colors compared to the known brown dwarf population. Figure 6 highlights this by comparing CWISE J0506+0738 to other UHS DR2 L and T dwarfs (Schneider et al. in prep.) and red L dwarfs not covered by the UHS survey. Table 2 summarizes photometric and spectral type information for all known free-floating L dwarfs with \(J-K\) colors greater than 2.2 mag. All photometry is on the MKO system and comes from the VISTA Hemisphere Survey (VHS; McMahon et al., 2013), Liu et al. (2016), or Best et al. (2021). WISE J173859.27+614242.1 has no near-infrared MKO photometry in the literature or
Figure 4: \(H\)-band slope index versus spectral type for field late-L and T dwarfs (colored circles) based on data from the SPLAT archive (Burgasser and Splat Development Team, 2017), with colors corresponding to spectral type. Young L and T dwarfs are represented by purple squares. CWISE J0506+0738 (blue diamond) is an outlier amongst field-age late-Ls, similar to the young, late-type L dwarf population. Small offsets have been added to spectral type values for differentiation purposes.
in available catalogs. For this source, we used its low-resolution near-infrared spectrum published in Mace et al. (2013) normalized to its most precise \(K\)-band photometric measurement (2MASS \(K_{\rm S}\); Skrutskie et al., 2006), and then computed synthetic \(J_{\rm MKO}\) and \(K_{\rm MKO}\) photometry. Even amongst known red L dwarfs, CWISE J0506+0738 stands out as exceptionally red, being \(\sim\)0.3 mag redder in both \((J-K)_{MKO}\) and \(J_{MKO}\)\(-\)W2 color than all other known free-floating L dwarfs.
Directly imaged planetary-mass companions also have exceptionally red near-infrared colors. Some of the L-type companions (Table 2) do not have _WISE_ W1 (3.4 \(\mu\)m) and W2 (4.6 \(\mu\)m) photometry, but have equivalent Spitzer/IRAC photometry in ch1 (3.6 \(\mu\)m) and ch2 (4.5 \(\mu\)m). For HD 203030B, we use \(J\)- and \(K\)-band photometry from Metchev & Hillenbrand (2006) and Miles-Paez et al. (2017), and convert Spitzer/IRAC ch1 and ch2 photometry from Martinez & Kraus (2022) using the Spitzer-WISE relations from Kirkpatrick et al. (2021). For VHS 1256\(-\)1257B, we use \(J\)- and \(K\)-band photometry from Gauza et al. (2015), and convert Spitzer/IRAC ch2 photometry from Zhou et al. (2020) to W2 using the Kirkpatrick et al. (2021) relation. We chose not to use the published W1 photometry of VHS 1256\(-\)1257B from Gauza et al. (2015) because of its large uncertainty (0.5 mag). For BD+60 1417B, all photometry comes directly from Faherty et al. (2021). Both HD 203030B and BD+60 1417B are included in both panels of Figure 6, while VHS 1256\(-\)1257B is included in the left panel of Figure 6. We note that none of these companions have \((J-K)_{MKO}\) or \(J_{MKO}\)\(-\)W2 colors as red as CWISE J0506+0738. Of the remaining planetary-mass companions that lack 3-5 \(\mu\)m photometry, only 2M1207b (\(J-K\)=3.07\(\pm\)0.23 mag; Chauvin et al., 2004,
Figure 5: MCMC forward model fit of the normalized 2.26–2.38 \(\mu\)m spectrum of CWISE J0506+0738 for RV measurement. The panels along the diagonal show the posterior distributions for our four fitting parameters: the barycentric radial velocity of the star (RV\({}_{\earth}\) in km/s), the strength of the telluric absorption (\(\alpha\)), the instrumental gaussian broadening profile width (\(\sigma_{broad}\) in km/s), and the wavelength offset from the nominal SpeXtool solution (\(\Delta\lambda\) in Å). The lower left panels illustrate correlations between parameters; only the RV and \(\Delta\lambda\) parameters show a modest inverse correlation, effectively expanding the uncertainty on the RV measurement. The upper right corner shows the NIRES spectrum of CWISE J0506+0738 prior to telluric correction (black line) and the best-fit model spectrum (magenta line) composed of stellar model and telluric absorption components (offset lines above fit). Residuals (data minus model, blue line) are consistent with measurement uncertainties (grey band).
2005; Mohanty et al., 2007; Patience et al., 2010) and HD 206893B (\(J-K\)=3.36\(\pm\)0.08 mag; Milli et al., 2017; Delorme et al., 2017; Kammerer et al., 2021; Meshkat et al., 2021; Ward-Duong et al., 2021) have redder \(J-K\) colors than CWISE J0506+0738.
### WISE Photometric Variability
Young brown dwarfs have been shown to have enhanced photometric variability compared to field-age brown dwarfs (Biller et al., 2015; Metchev et al., 2015; Schneider et al., 2018; Vos et al., 2020, 2022). Most brown dwarfs with detected variability at 3-5 \(\mu\)m, measured largely with Spitzer/IRAC, have amplitudes of a few percent or less (see compilation in Vos et al., 2020). Multi-epoch photometry from WISE generally does not have the precision to detect such variability (Mace, 2015, Brooks et al. submitted). However, objects with extremely high-amplitude variability could be distinguished in multi-epoch WISE data.
Given tentative evidence of near-infrared photometric variability (see Section 3.1), we investigated WISE (Wright et al., 2010) and NEOWISE (Mainzer et al., 2011, 2014) data for evidence of mid-infrared variability for CWISE J0506+0738. WISE/NEOWISE has been scanning the mid-infrared sky for over 10 years, and a typical location on the sky has been observed with the W1 and W2 filters every six months since early 2010.4 During each \(\sim\)1 day visit, 10-15 individual exposures are typically acquired. We chose to analyze these single exposures as opposed to epochal coadds (e.g. "unTimely"; Meisner et al., 2022) because CWISE J0506+0738 is brighter than the nominal threshold where single exposure photometry becomes unreliable, especially at W2 (\(\sim\)14.5 mag; Schneider et al., 2016), and because of the concern that the coadded frames would dilute any traces of photometric variability. Such coadded photometry may prove useful for future investigations of long-term/long-period variability.
Footnote 4: With the exception of a \(\sim\)3 year gap between the initial WISE mission and reactivation as NEOWISE from February 2011 to December 2013.
We gathered photometry from the WISE/NEOWISE Single Exposure Source Catalogs (WISE Team, 2020, 2020, 2020, 2020) for CWISE J0506+0738 and the same set of known L, T, and Y dwarfs shown in Figure 6. Collectively, these objects should have comparable levels of low-amplitude variability generally undetectable by WISE. For each source, we measured the average and standard deviation of both W1 and W2 magnitudes. We omit frames with _qual_frame_ values equal to zero, as these frames likely have contaminated flux measurements. Because single exposure frames are subject to astronomical transients (e.g., cosmic ray hits, satellite streaks), we excluded 4\(\sigma\) outliers from the set of single exposure photometry for each source. We also excluded sources that were either blended or contaminated (e.g., bright star halos, diffraction spikes).
Figure 7 compares mean and standard deviation values, which show clear trends in both W1 and W2 photometry. We immediately identify four objects with magnitudes between 12 and 14.5 that have photometric scatter above the 5-95% confidence interval (\(\gtrsim\)2\(\sigma\)) in either W1 or W2.
Figure 6: Color-color diagrams showing known brown dwarfs recovered in the UKIRT Hemisphere Survey (Schneider et al. in prep), supplemented with known red L dwarfs from Table 2. CWISE J0506+0738 is a clear outlier, being significantly redder than other known L dwarfs both in \(J-K\) and \(J-\)W2 color.
_2MASS J21392676+0220226 (2MASS J2139+0220)_ is a T1.5 dwarf (Burgasser et al., 2006) that is well-known for its large-amplitude infrared variability. Radigan et al. (2012) monitored 2MASS J2139+0220 and found \(J\)-band variability with a peak-to-peak amplitude of \(\sim\)26%, which until recent observations of VHS 1256\(-\)1257B (Zhou et al., 2022) was the highest amplitude variability found for any brown dwarf. Since the Radigan et al. (2012) study, this object has been the subject of numerous variability investigations (Apai et al., 2013; Khandrika et al., 2013; Karalidi et al., 2015), with Yang et al. (2016) finding variability of 11-12% in Spitzer/IRAC ch1 and ch2 photometry. The extreme variability of 2MASS J2139+0220 is attributed to variations in the thickness of silicate clouds (Apai et al., 2013; Karalidi et al., 2015; Vos et al., 2022). This object has also been shown to have a nearly edge-on inclination (Vos et al., 2017), and is a kinematic member of the \(\sim\)200 Myr-old Carina-Near moving group (Zhang et al., 2021).
_WISE J052857.68+090104.4 (WISE J0528+0901)_ is a clear W1 outlier, originally classified as a late-M giant by Thompson et al. (2013) but later reclassified as a very low-gravity L1 brown dwarf member of the \(\sim\)20 Myr 32 Orionis group (Burgasser et al., 2016). This planetary-mass object has an anomalous \(J-\)W2 color, suggestive of excess flux at 5 \(\mu\)m, although Burgasser et al. (2016) found no evidence of circumstellar material or cool companions. The source may also be a variable in the W2 band, but its fainter magnitude here makes it less distinct than comparably bright L and T dwarfs. Nevertheless, these data suggest that WISE J0528+0901 has an unusually dusty and variable atmosphere, making it a compelling source for future photometric monitoring.
_PSO J318.5338\(-\)22.8603_ is a clear W2 outlier and exceptionally red \(\beta\) Pic member that has been shown to have large-amplitude infrared variability in the infrared (Biller et al., 2015; Vos et al., 2019), with a peak-to-peak amplitude of 3.4% in Spitzer/IRAC ch2 photometry (Biller et al., 2018). Interestingly, PSO J318.5338\(-\)22.8603 is an outlier in W2 and not in W1, which may indicate a cloud depth effects given that the W1 and W2 bands probe different depths in the atmosphere.
_2MASSW J0310599+164816 (2MASS J0310+1648AB)_ is another W2 outlier, and is an optically classified L8 (Kirkpatrick et al., 2000). This object is a resolved (0\(\farcs\)2) \(\sim\)equal brightness binary (Stumpf et al., 2010) that shows evidence of high amplitude variability in the near-infrared (Buenzli et al., 2014). While the variability observations were not long enough to determine a true amplitude or period, the brightening rate of \(\sim\)2% per hour was the largest measured in the sample. While there is no clear evidence of youth for 2MASS J0310+1648AB in the literature, this object was typed as L9.5 (sl. red) in Schneider et al. (2014). Further investigation of the potential youth and cloud properties of this object may be warranted.
Figure 7: Standard deviation (\(\sigma\)) versus average magnitude over all single-exposure WISE/NEOWISE W1 (left) and W2 (right) detections of known brown dwarfs. Color contours indicate 16–84% and 5–95% confidence intervals in 0.5 magnitude bins. The insets on each panel show the difference between measured \(\sigma\) values and polynomial fits to the magnitude trend. 2MASS J2139+0220 (dark green square), PSO J318.5338\(-\)22.8603 (light green circle), 2MASSW J0310599+164816 (light purple hexagon), CWISE J0506+0738 (cyan diamond), and WISE J052857.68+090104.4 (dark purple pentagon) are all highlighted as clear deviants from these trends.
CWISE J0506+0738 joins this group of variability outliers, as one of very few objects with both W1 and W2 scatter outside the 16-84% confidence interval of comparable-brightness L and T dwarfs. To estimate the amplitude of variability associated with these deviations, we fit tenth-order polynomials to the scatter versus magnitude trends in W1 and W2, and calculated RMS values by finding the magnitude offset (in quadrature) for our outlying targets. Assuming sinusoidal variability, RMS values can be converted to peak-to-peak amplitudes with a multiplicative factor of 2\(\sqrt{2}\). Using the 16-84% confidence region as uncertainties for the predicted values from the polynomial fits, we find peak-to-peak variability on the order of 13\(\pm\)1% for W1 and 12\(\pm\)2% for W2 for 2MASS J2139+0220, which is generally consistent with results from Spitzer (Yang et al., 2016). For CWISE J0506+0738, we estimate 15\(\pm\)5% variability for W1 and 23\(\pm\)9% variability for W2. Variability at these levels would certainly be extraordinary; however, we caution that the relatively low precision of WISE/NEOWISE single exposure measurements may inflate these results. Future photometric and/or spectroscopic monitoring would help to explore the variability properties of CWISE J0506+0738.
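The amplitude estimate described above reduces to a quadrature subtraction and a factor of 2\(\sqrt{2}\); the sketch below uses illustrative scatter values rather than the actual catalog measurements.

```python
import numpy as np

# Peak-to-peak amplitude from excess single-exposure scatter, assuming sinusoidal
# variability: subtract the trend-predicted scatter in quadrature, multiply by 2*sqrt(2).
# The sigma values are illustrative placeholders, not the actual catalog numbers.
sigma_measured = 0.075    # measured scatter at this magnitude [mag]
sigma_predicted = 0.045   # scatter expected from the polynomial trend [mag]

rms_excess = np.sqrt(sigma_measured**2 - sigma_predicted**2)
p2p_mag = 2.0 * np.sqrt(2.0) * rms_excess
p2p_pct = 100.0 * (10**(0.4 * p2p_mag) - 1.0)
print(f"peak-to-peak: {p2p_mag:.3f} mag (~{p2p_pct:.0f}% in flux)")
```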
### Distance
CWISE J0506+0738 is faint at optical wavelengths and was therefore undetected by the Gaia mission (Gaia Collaboration et al., 2022). The currently available astrometry for CWISE J0506+0738 is insufficient for a parallax measurement. Because CWISE J0506+0738 has such an unusually shaped spectrum, standard spectral-type versus absolute magnitude relations for normal, field-age brown dwarfs are not applicable. There have been efforts to create relations between absolute magnitudes and spectral types for low-gravity brown dwarfs; however, these are typically valid for spectral types earlier than L7 (e.g., Faherty et al., 2016; Liu et al., 2016). Faherty et al. (2013) found absolute photometry of the young L5 dwarf 2MASS J03552337+1133437 was fainter than field L5 dwarfs at wavelengths shorter than \(\sim\)2.5 \(\mu\)m, and brighter at longer wavelengths. Schneider et al. (2016) investigated other young, red L dwarfs with measured parallaxes and found that \(K\)-band photometry produced photometric distances that aligned well with parallactic distances. This trend was also noted in Filippazzo et al. (2015), Faherty et al. (2016), and Liu et al. (2016).
Here, we use nine young, free-floating brown dwarfs (Table 2) with measured parallaxes (Liu et al., 2016; Best et al., 2020; Kirkpatrick et al., 2021; Gaia Collaboration et al., 2022) to compare measured distances to photometric distances based on absolute magnitude-spectral type relations for \(J_{\rm MKO}\), \(K_{\rm MKO}\), W1, and W2 (Dupuy and Liu, 2012; Kirkpatrick et al., 2021; Figure 8). Consistent with prior results, we find that \(K_{\rm MKO}\)-band photometric distances (average offset \(\Delta\)d = \(-\)0.8 pc, scatter \(\sigma_{d}\) = 3.3 pc) are generally more accurate than \(J_{\rm MKO}\) (\(\Delta\)d = \(-\)10 pc, \(\sigma_{d}\) = 5.1 pc), W1 (\(\Delta\)d = \(+\)2.6 pc, \(\sigma_{d}\) = 3.8 pc), or W2 (\(\Delta\)d = \(+\)4.5 pc, \(\sigma_{d}\) = 4.1 pc) photometric distances. To ensure these values are not biased, we also evaluated the fractional difference for each photometric band, defined as \(\Delta\)d/d\({}_{\rm pk}\), and find that \(K\)-band photometric distances are typically within 5% for this sample, compared to 52%, 11%, and 20% for \(J_{\rm MKO}\), W1, and W2, respectively.
Using the absolute magnitude-spectral type relation from Dupuy and Liu (2012), a spectral type of L9\(\pm\)1, and its measured \(K_{MKO}\) photometry, we estimate a photometric distance of 32\({}^{+4}_{-3}\) pc for CWISE J0506+0738. Again, given the exceptional nature of this source, and its unknown multiplicity, we advise that this distance estimate be used with caution until it can be confirmed with a trigonometric parallax.
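The distance estimate itself is a one-line application of the distance modulus; in the sketch below the absolute magnitude \(M_{K}=13.0\) is an assumed stand-in for the spectral-type relation (chosen to be consistent with the 32 pc quoted above), not the actual polynomial fit.

```python
# Photometric distance from the distance modulus, d = 10**((m - M + 5) / 5).
# m_K is the UHS K_MKO magnitude from Table 1; M_K = 13.0 is an assumed absolute
# magnitude standing in for the spectral-type relation, not the actual polynomial.
m_K = 15.513
M_K = 13.0
d_pc = 10.0**((m_K - M_K + 5.0) / 5.0)
print(f"photometric distance: {d_pc:.0f} pc")
```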
### Moving Group Membership
Young brown dwarfs are often associated both spatially and kinematically with young, nearby moving groups, thereby serving as invaluable age benchmarks.
To assess the potential moving group membership of CWISE J0506+0738, we use the BANYAN \(\Sigma\) algorithm (Gagne et al., 2018), which deploys a Bayesian classifier to assign probabilities of moving group membership through 6D coordinate alignment (position and velocity) to 26 known moving groups in the solar neighborhood.
Figure 8: A comparison of photometric and parallactic distances for free-floating objects from Table 2 with measured parallaxes. Objects are labeled on the x-axis. Dashed lines show average differences between photometric and parallactic distances for each band, with colors corresponding to those given in the legend.
We used the position and proper motion of CWISE J0506+0738 from UKIRT and UHS measurements (Table 1), and our measured radial velocity from the NIRES spectrum (Section 4.3). With these values alone, we find an 82% membership probability in the \(\beta\) Pictoris moving group (BPMG; Zuckerman et al., 2001), a 3% membership probability in the AB Doradus moving group (ABDMG; Zuckerman et al., 2004), and a 15% probability of being unassociated with any moving group. The predicted/optimal distances for membership in BPMG and ABDMG are 32 pc and 64 pc, respectively; our estimated distance clearly aligns with the former. If we include the distance estimate in the BANYAN \(\Sigma\) algorithm, the probability of BPMG membership goes up to 99%.
We also tested the kinematic membership of CWISE J0506+0738 using the LACEwING analysis code (Riedel et al., 2017). Again, using just the position, proper motion, and radial velocity of CWISE J0506+0738, we find non-zero probabilities for ABDMG (56%), the Argus Moving Group (71%), BPMG (28%), the Columba Association (52%), and the Tucana-Horologium Association (6%). Note that LACEwING is stricter in assigning membership probabilities than BANYAN, with bona fide BPMG members having a maximum membership probability of \(\sim\)70% when only proper motion and radial velocity are used (Riedel et al., 2017). If we use our photometric distance as an additional constraint, BPMG is returned as the group with the highest probability of membership at 86%.
Membership in the \(\beta\) Pictoris moving group is clearly favored for CWISE J0506+0738, although a directly measured distance is necessary for confirmation. If confirmed, CWISE J0506+0738 would have the latest spectral type and lowest mass amongst free-floating BPMG members, following PSO J318.5338\(-\)22.8603 (Liu et al., 2013). Several candidate members with L7 or later spectral types have also been proposed (Best et al., 2015; Schneider et al., 2017; Kirkpatrick et al., 2021; Zhang et al., 2021; however, see Hsu et al., 2021). PSO J318.5338\(-\)22.8603 has proven to be an exceptionally valuable laboratory for studying planetary-mass object atmospheres (Biller et al., 2015, 2018; Allers et al., 2016; Faherty et al., 2016). A second planetary-mass object in this group that bridges the L/T transition will further contribute to these studies.
Assuming \(\beta\) Pic membership, we can use the group age of 22\(\pm\)6 Myr (Shkolnik et al., 2017) to estimate the mass of CWISE J0506+0738. To do this, we must first estimate the luminosity (\(L_{\rm bol}\)) or effective temperature (\(T_{\rm eff}\)) of the source. For the former, we used the empirical \(K\)-band bolometric correction/spectral type relation for young brown dwarfs quantified in Filippazzo et al. (2015). Combining this with the UHS \(K\)-band magnitude and our distance estimate, we infer a bolometric luminosity of log(\(L_{\rm bol}\)/\(L_{\odot}\)) = -4.55\(\pm\)0.12. We caution that this value is based on our estimated distance from Section 5.3, and will need to be updated when a measured parallax becomes available. We then used the solar metallicity evolutionary models of Marley et al. (2021) to infer a mass of 7\(\pm\)2 \(M_{\rm Jup}\). The evolutionary models also provide a radius of 1.32\(\pm\)0.03 \(R_{\rm Jup}\) for these parameters, consistent with the radii of low-gravity late-type L dwarfs (Filippazzo et al., 2015). Combining this radius with our bolometric luminosity, we find \(T_{\rm eff}\) = 1140\(\pm\)80 K. This is \(\sim\)130 K cooler than a field-age L9 (Kirkpatrick et al., 2021), consistent with previous works showing low-gravity late-Ls tend to be \(\sim\)100-200 K cooler than field-age objects at the same spectral type (Filippazzo et al., 2015; Faherty et al., 2016). In particular, this temperature is 50-100 K cooler than \(T_{\rm eff}\) estimates of PSO J318.5338\(-\)22.8603 (Liu et al., 2013; Miles et al., 2018), consistent with the appearance of CH\({}_{4}\) absorption at lower temperatures.
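The quoted effective temperature follows directly from the Stefan-Boltzmann law once the luminosity and the evolutionary-model radius are adopted, as the short check below illustrates.

```python
import numpy as np

# T_eff from the Stefan-Boltzmann law, L = 4*pi*R^2*sigma*T^4, using the
# luminosity and radius quoted above: log(L/Lsun) = -4.55 and R = 1.32 R_Jup.
L_SUN = 3.828e26      # W
R_JUP = 6.9911e7      # m
SIGMA_SB = 5.670e-8   # W m^-2 K^-4

L_bol = 10.0**(-4.55) * L_SUN
R = 1.32 * R_JUP
T_eff = (L_bol / (4.0 * np.pi * R**2 * SIGMA_SB))**0.25
print(f"T_eff ~ {T_eff:.0f} K")  # ~1150 K, consistent with the 1140 +/- 80 K quoted above
```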
The predicted mass of 7\(\pm\)2 \(M_{\rm Jup}\) is well below the deuterium-fusion minimum mass of 14 \(M_{\rm Jup}\) commonly used to distinguish brown dwarfs from planetary mass objects. As such, this object helps bridge the mass gap between the lowest mass free-floating \(\beta\) Pic members and directly imaged exoplanets, such as 51 Eri b (\(\sim\)T6.5; Macintosh et al., 2015; Rajan et al., 2017). CWISE J0506+0738 could also help to constrain the effective temperature of the L/T transition at an age of \(\sim\)20-25 Myr (Binks & Jeffries, 2014; Bell et al., 2015; Messina et al., 2016; Nielsen et al., 2016; Shkolnik et al., 2017; Miret-Roig et al., 2020). CWISE J0506+0738 would be one of the youngest objects to join a small but growing number of benchmark substellar objects with known ages at the L/T transition such as HD 203030B (30-150 Myr; Metchev & Hillenbrand, 2006; Miles-Paez et al., 2017), 2MASS J13243553+6358281 (\(\sim\)150 Myr; Looper et al., 2007; Gagne et al., 2018), HIP 21152B and other T-type Hyades members (\(\sim\)650 Myr; Kuzuhara et al., 2022; Schneider et al., 2022), \(\epsilon\) Indi Ba (\(\sim\)3.5 Gyr; Scholz et al., 2003; Chen et al., 2022), and the white dwarf companion COCONUTS-1 (\(\sim\)7 Gyr; Zhang et al., 2020).
## 6 Summary
We have presented the discovery and analysis of an exceptionally red brown dwarf, CWISE J0506+0738, identified as part of the Backyard Worlds: Planet 9 citizen science project. The near-infrared spectrum of CWISE J0506+0738 is highly reddened and shows signatures of low-surface gravity, as well as weak absorption features
that we associate with methane bands. This object has the reddest \(J-K\) and \(J-\)W2 colors of any free-floating L-type brown dwarf, and we tentatively assign a near-infrared spectral type of L8\(\gamma\)-T0\(\gamma\). The exceptionally red color of CWISE J0506+0738 may be due to several factors. Objects with low surface gravities have inefficient gravitational settling of silicate dust grains, which can remain high in the atmospheres. Such grains can be directly detected at long wavelengths (e.g., Cushing et al., 2006; Burgasser et al., 2008; Suarez & Metchev, 2022) and could be constrained for CWISE J0506+0738 with future long-wavelength observations (e.g., Miles et al., 2022). The angle at which a brown dwarf is viewed has also been shown to affect its near-infrared colors, with objects viewed equator-on tending to have redder colors than those viewed pole-on (Vos et al., 2017). A measurement of CWISE J0506+0738's rotational period combined with its rotational velocity (e.g., \(v\)sin\(i\)) from a high-resolution spectrum could determine whether or not CWISE J0506+0738 is viewed closer to pole-on or equator-on. A high-resolution spectrum would also allow for a higher precision radial velocity measurement and a more detailed probe of gravity-sensitive features.
CWISE J0506+0738's astrometry and kinematics point to likely membership in the 22 Myr \(\beta\) Pictoris moving group, to be confirmed or rejected with future trigonometric parallax and higher precision radial velocity measurements. If associated, CWISE J0506+0738 would be the lowest-mass \(\beta\) Pictoris member found to date, with an estimated mass of 7\(\pm\)2 \(M_{\rm Jup}\), well within the planetary-mass regime. The extreme colors of this object, and its relatively low proper motion (\(<\)100 mas yr\({}^{-1}\)), suggest the existence of other extremely red L dwarfs that may have been missed by previous searches due to assumptions about brown dwarf colors or selection requirements for large proper motions. Recent large-scale near-infrared surveys such as UHS (Dye et al., 2018) and VHS (McMahon et al., 2013) that push several magnitudes deeper than previous efforts (e.g., 2MASS) may be able to confidently detect the faint \(J\)-band magnitudes of similar objects.
Because of this object's unique spectroscopic properties, and the fact that young brown dwarfs often display large-amplitude variability (e.g., Vos et al., 2022), CWISE J0506+0738 is an intriguing target for future photometric or spectroscopic variability monitoring. Longer wavelength observations with the James Webb Space Telescope would have the additional advantage of further constraining the existence and abundance of CH\({}_{4}\) and analyzing the presence and properties of dust grains through silicate absorption features (Miles et al., 2022).
## Acknowledgments
The Backyard Worlds: Planet 9 team would like to thank the many Zooniverse volunteers who have participated in this project. We would also like to thank the Zooniverse web development team for their work creating and maintaining the Zooniverse platform and the Project Builder tools. This research was supported by NASA grant 2017-ADAP17-0067. This material is supported by the National Science Foundation under Grant No. 2007068, 2009136, and 2009177. This publication makes use of data products from the UKIRT Hemisphere Survey, which is a joint project of the United States Naval Observatory, The University of Hawaii Institute for Astronomy, the Cambridge University Cambridge Astronomy Survey Unit, and the University of Edinburgh Wide-Field Astronomy Unit (WFAU). UHS is primarily funded by the United States Navy. The WFAU gratefully acknowledges support for this work from the Science and Technology Facilities Council through ST/T002956/1 and previous grants. The authors acknowledge the support provided by the US Naval Observatory in the areas of celestial and reference frame research, including the USNO's postdoctoral program. (Some of) The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. This publication makes use of data products from the _Wide-field Infrared Survey Explorer_, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE which is a project of the Jet Propulsion Laboratory/California Institute of Technology. _WISE_ and NEOWISE are funded by the National Aeronautics and Space Administration. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has benefitted from the SpeX Prism Spectral Libraries, maintained by Adam Burgasser. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
UKIRT/WFCAM, Keck/NIRES, WISE, NEOWISE
BANYAN \(\Sigma\) (Gagne et al., 2018), CASUTOOLS (Irwin et al., 2004), LACEwING (Riedel et al., 2017), SpeXTool (Cushing et al., 2004), SPLAT (Burgasser, 2014), WiseView (Caselden et al., 2018) |
2302.14010 | Tuning the bulk behavior and 2D interfacial self-assembly of microgels
by Keggin-type polyoxometalate ionic specificity | Finding new ways to tune the behavior of thermoresponsive microgels in bulk
and confined at 2D liquid interfaces is key to achieving a deeper understanding
and control of these smart materials. We studied the interaction of positively
charged pNIPAM microgels with the Keggin-type polyoxometalate
$Na_{3}PW_{12}O_{40}$ (POM). In bulk, we observed charge inversions below and
above the volume phase transition temperature (VPTT) at POM concentrations as
low as $5\cdot10^{-5}$ M. In the presence of POM, the microgels
exhibited a deswelling-swelling-deswelling behaviour below the VPTT, and a
two-step further deswelling above the VPTT. When microgels were confined at 2D
water/air interfaces, adding $10^{-5}$ M of POM below the VPTT was equivalent
to heating above the VPTT and compressing the monolayer from $5$ to
$20\,\text{mN m}^{-1}$. Above the VPTT, the diameter at the interface did not change while
the portion immersed in the subphase further deswelled, in agreement with the
behavior in bulk. Adding more POM did not change the diameter at the interface
nor the height of the microgels, showing a saturation effect in 2D. The
restructuring of the pNIPAM polymeric network by the POM was characterized by
EDS mapping and XPS. The microgel monolayers with POM improved their resistance
to plasma etching, which could be useful for soft colloidal lithography. | Antonio Rubio-Andrés, Delfi Bastos-González, Miguel Angel Fernandez-Rodriguez | 2023-02-27T18:06:33Z | http://arxiv.org/abs/2302.14010v2 | Tuning the bulk behaviour and 2D interfacial self-assembly of microgels by polyoxometalate ionic specificity
###### Abstract
Finding new ways to tune the behaviour of thermoresponsive microgels in bulk and at interfaces is key to achieving a deeper understanding and control of these smart materials. We studied the ionic specificity of a Keggin polyoxometalate (POM) on positively charged pNIPAM microgels. In bulk, we observed an inversion of charge and further dehydration in their already collapsed state. At water/air interfaces, we found a reduction of the microgel diameter and a collapse of the portion immersed in the water subphase at \(10^{-5}\) M. Above the microgel collapsing temperature, the diameter at the interface did not change but the portion immersed in the subphase collapsed even more. The restructuring of the pNIPAM polymeric network by the POM was observed by EDS mapping and XPS. This restructuring enhanced the ability of the microgel monolayers to perform as soft colloidal lithography masks by improving their resistance to plasma etching.
## I Introduction
Microgels are soft colloidal micro- and nanoparticles composed of crosslinked polymers that are swollen in a good solvent. Poly-(N-isopropylacrylamide) (pNIPAM) microgels dispersed in water are widely used thanks to their thermoresponsiveness, exhibiting a Volume Phase Transition Temperature (VPTT). The microgel swells below the VPTT and collapses above it, expelling water and stiffening in the process. [1; 2]. It is possible to tune their response to other external stimuli during the synthesis, e.g. they can become pH-responsive by adding amphoteric co-monomers [1; 3; 4].
One of the usual synthesis routes to obtain pNIPAM microgels is the precipitation polymerization, where they develop a Gaussian profile in the crosslinking density, with a highly crosslinked core and a less crosslinked corona [1; 5]. When a microgel adsorbs at a liquid interface, the portion in contact with the interface stretches due to the surface tension, only counterbalanced by the internal elasticity of the polymeric network, which is proportional to the crosslinking density. As a result, the microgel exhibits a _fried-egg_ shape when it is seen from above the interface, with the central portion of the microgel still well solvated in the water subphase [6].
When adsorbed at interfaces, microgels can be used as Pickering emulsion stabilizers, where their responsiveness allows the emulsions to be destabilized by an external stimulus [2; 7; 8; 9; 10]. Moreover, when transferred to a silicon substrate, the deposited microgels can act as lithography masks to fabricate arrays of vertically aligned silicon nanowires [11].
In order to improve their performance in these applications, we need to understand the behaviour of microgels adsorbed at interfaces. Indeed, the portion of the microgel still immersed in the water subphase will keep its responsiveness to external stimuli, while the portion stretched at the interface will be dominated by the surface tension [3; 12; 13]. Regarding the thermoresponsiveness, the diameter of the microgel at the interface remains the same regardless of the temperature. Nevertheless, as it occurs for microgels in bulk, the portion immersed in the water subphase collapses above the VPTT, decreasing the size of that portion [12; 13; 14; 15]. Upon deposition on a substrate this results in an increase of the microgel height [12].
Furthermore, Schmidt et al. showed that for pH-responsive microgels adsorbed at interfaces the behaviour is different upon pH-swelling/deswelling depending on the size of the microgel [3]. For big microgels (800 nm) they found a similar behavior compared to thermoresponsiveness, with the diameter at the interface being stable while the portion immersed in the water subphase was pH-responsive. Interestingly, for small microgels (250 nm) the diameter at the interface decreased upon pH-swelling of the microgels.
Another tool that can be used to tune the properties of microgels in dispersion, e.g. their charge and size, is the addition of salt. While this already provides electrostatic screening of charges, we focus here on stronger ionic-specific interactions. The influence of the ionic specificity on pNIPAM has been widely analyzed in bulk for both free chains and microgels. In both cases, the properties of pNIPAM are deeply affected by salts belonging to the Hofmeister series, especially when anions act as counterions [16; 17]. However, there is a lack of experimental works studying how salts influence the interfacial properties of pNIPAM, probably due to the difficulty in obtaining reliable and accurate data in 2D systems.
There are many interesting ion candidates with complex behaviours, such as cobaltabisdicarbollide anions, but these exhibit interfacial activity [18], and therefore they would compete with the adsorption of microgels at
interfaces. An interesting class of anions is the Keggin polyoxometalates (POMs), nanosized metal-oxide clusters with high valences and well-defined structures [19]. They do not exhibit interfacial activity by themselves, but they can adsorb in the presence of interfacially active molecules [20; 21]. Moreover, they can induce the self-assembly of free polymer chains in bulk into sheets and globules [19; 22], but a comprehensive study on how this effect translates to microgels, both in bulk and at interfaces, is missing.
This strong interaction of POMs with pNIPAM has also been observed with proteins and arises as a non-classical quantum phenomenon [23]. The parallelism comes from the amide groups of pNIPAM, which resemble the amino acid structure. Furthermore, POMs have been gaining attention due to their interesting and versatile applications, ranging from catalytic reactions [24] to pollution removal [25].
In this work, we explored the effect of the POM\({}^{3-}\) anion \([PW_{12}O_{40}]^{3-}\) on the behaviour of pNIPAM microgels in bulk and how this translates to their behaviour at water/air interfaces, with potential applications in Soft Colloidal Lithography.
## Results and Discussion
We studied the interaction between positively charged pNIPAM microgels (1.6 wt%-BIS crosslinking density, AEMH at 2.6 wt% as co-monomer bringing positive charges, and V50 initiator) and the \([PW_{12}O_{40}]^{3-}\) Keggin polyoxometalate anion (POM\({}^{3-}\)). The latter acts as a counterion and thus enhances its affinity for the microgel [16]. Please refer to the Supplementary Information (SI) for Materials and Methods. We kept the pH at 3 in all experiments to ensure the integrity of the POM\({}^{3-}\)[26; 27]. In bulk, we characterized their hydrodynamic diameter \(D_{h}\) by Dynamic Light Scattering (DLS), and electrophoretic mobility \(\mu_{e}\) by laser Doppler microelectrophoresis, as a function of the temperature and for different ionic concentrations (see Figure **S1** for the usual thermoresponsiveness characterization by DLS).
In Figure **1**a we represent the \(\mu_{e}\) trend from 25 \({}^{\circ}\)C to 49 \({}^{\circ}\)C, below and above the VPTT, for a microgel dispersion in the presence of \(10^{-4}\) M of POM\({}^{3-}\). In order to make comparisons, we also performed this characterization in the absence of salt, and with \(10^{-4}\) M of \(NaCl\). The green curve without salt shows positive \(\mu_{e}\) values as expected for the positive charge of our microgels, with no significant changes after adding \(10^{-4}\) M of \(NaCl\). Around the VPTT there was a sharp increase in \(\mu_{e}\) due to the collapse of the microgel, which resulted in more charges per unit area. In the presence of the anionic POM\({}^{3-}\) at \(10^{-4}\) M the microgel charge was inverted even at 25 \({}^{\circ}\)C. We previously observed this charge inversion with the monovalent tetraphenyl borate anion (\(Ph_{4}B^{-}\)) at \(10^{-3}\) M, but only above the VPTT, where the microgels show a more hydrophobic nature [16]. This might be due to the trivalent charge of POM\({}^{3-}\), but we did not find charge inversion when using a trivalent citrate anion (see Figure **S2**).
Moreover, we modelled the curves in Figure **1**a as sigmoids (see SI for further details), obtaining similar VPTT values for the case with no salt and with \(NaCl\): \(34.3\pm 0.8\,^{\circ}C\). The VPTT in the presence of POM\({}^{3-}\) changed to \(36.2\pm 0.4\,^{\circ}C\), pointing once more to a strong interaction between the pNIPAM and the POM\({}^{3-}\), compared to \(NaCl\) at the same concentration. Interestingly, while the citrate anion did not cause charge inversion, it displaced the VPTT by the same degree as POM\({}^{3-}\), pointing to ionic specific effects in both cases (see Table S1 in SI).
Therefore, our results reflect that the anionic POM\({}^{3-}\) and positively charged pNIPAM microgels show a high ionic specificity beyond a pure electrostatic attraction, inverting their charge even with the microgel in the swollen state and increasing the value of the VPTT by \(2^{\circ}C\). Moreover, upon the collapse, the absolute value of \(\mu_{e}\) increased to a higher extent thanks to the POM\({}^{3-}\).
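The sigmoidal modelling used to extract the VPTT values above is detailed in the SI; as a rough illustration of the idea, the sketch below fits a generic logistic step to synthetic mobility-versus-temperature data with scipy. The functional form, parameter names and data are illustrative assumptions, not the actual fit described in the SI.

```python
# Hypothetical sketch: extracting a VPTT from mobility-vs-temperature data by
# fitting a generic logistic step with scipy. The functional form, parameter
# names, and synthetic data are illustrative assumptions, not the SI procedure.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(T, mu_low, mu_high, vptt, width):
    """Logistic step between two mobility plateaus, centred at the VPTT."""
    return mu_low + (mu_high - mu_low) / (1.0 + np.exp(-(T - vptt) / width))

rng = np.random.default_rng(0)
T = np.linspace(25.0, 49.0, 13)                              # temperatures (deg C)
mu_e = sigmoid(T, 1.0, 2.5, 34.3, 1.2) + rng.normal(0, 0.05, T.size)

popt, pcov = curve_fit(sigmoid, T, mu_e, p0=[1.0, 2.5, 35.0, 1.0])
vptt, vptt_err = popt[2], np.sqrt(np.diag(pcov))[2]
print(f"fitted VPTT = {vptt:.1f} +/- {vptt_err:.1f} deg C")
```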
Next, we conducted \(\mu_{e}\) and \(D_{h}\) measurements as a function of the POM\({}^{3-}\) concentration. At 25 \({}^{\circ}\)C, the results in Figure **1**b show no significant changes in the microgel charge below \(10^{-5}\) M. At \(5\cdot 10^{-5}\) M we found the isoelectric point, indicating a full screening of the microgel charge. By increasing the POM\({}^{3-}\) concentration, a strong adsorption of POM\({}^{3-}\) was observed as the
Figure 1: **a)** Electrophoretic mobility (\(\mu_{e}\)) as a function of temperature for positively charged pNIPAM microgels at pH 3 in the absence of salt (\(\blacksquare\)), with \(10^{-4}\) M of \(NaCl\) (\(\blacksquare\)) and \(10^{-4}\) M of POM\({}^{3-}\) (\(\blacksquare\)). The lines are sigmoidal fittings (see SI for details). **b-c)**\(\mu_{e}\) and hydrodynamic diameter \(D_{h}\) for the same microgels as in a) as a function of the concentration of POM\({}^{3-}\) at 25 \({}^{\circ}\)C (\(\blacksquare\)) and at 50 \({}^{\circ}\)C (\(\blacksquare\)). The lines are guides to the eye.
\(\mu_{e}\) became negative. Above \(5\cdot 10^{-5}\) M, despite the electrostatic repulsion between the already negative microgel and the POM\({}^{3-}\) anions, the anions kept adsorbing on the microgel, increasing their negative charge. At 50 \({}^{\circ}\)C, the microgels collapsed with the corresponding increase in charge per unit area, exhibiting the same trend in \(\mu_{e}\) as at 25 \({}^{\circ}\)C. However, the charge inversion was significantly more pronounced above \(5\cdot 10^{-5}\) M, indicating that the interaction of the POM\({}^{3-}\) with the pNIPAM was even stronger in its collapsed state. These results are in good agreement with our previous studies on the adsorption of POM\({}^{3-}\) and charge inversion for hard nanoparticles, where we observed a stronger interaction of POMs when surfaces were more hydrophobic [27].
We show the influence of POM\({}^{3-}\) on the size of microgels in Figure **1c**. At 25 \({}^{\circ}\)C, the size of the microgels slightly decreased until \(5\cdot 10^{-5}\) M, where a minimum was observed. This minimum in size matched with the isoelectric point observed in Figure **1b**, indicating that the strong interaction between the POM\({}^{3-}\) and the microgels was also reflected in their partial collapse at 25 \({}^{\circ}\)C and low POM\({}^{3-}\) concentrations.
We already found in previous results that it is possible to collapse the pNIPAM microgels below the VPTT thanks to ionic specific interactions, but at significantly higher ionic concentrations than those used in this study [28]. This strong adsorption of POM\({}^{3-}\) on the microgels became evident with the increase in POM\({}^{3-}\) concentration, as the microgels swelled again until the original \(D_{h}\) was reached at \(5\cdot 10^{-4}\) M. At these concentrations, \(\mu_{e}\) showed a charge reversal. It is reasonable to expect that the high amount of POM\({}^{3-}\) anions penetrating to some extent within the microgel polymer network could result in an electrostatic repulsion between the POM\({}^{3-}\) anions inside the microgel, causing their re-swelling. By increasing the POM\({}^{3-}\) concentration up to \(10^{-3}\) M we observed once more the partial collapse of the microgels. Our hypothesis is that the hydration of the POM\({}^{3-}\) ions causes a competition for the water molecules that hydrate the pNIPAM chains, causing a deswelling of the microgel, which becomes more evident as the electrolyte concentration increases [29; 30].
At 50 \({}^{\circ}\)C, when the microgel was in the collapsed state, the most relevant result was the further deswelling of the particles at \(5\cdot 10^{-5}\) M, indicating that the microgel expelled even more water above the VPTT due to the presence of POM\({}^{3-}\) anions. A further 30% size reduction remained constant over the whole range of tested concentrations above \(5\cdot 10^{-5}\) M. In this further collapsed state, we did not observe the increase in size as in the swollen state, which suggests that the increase was possible due to the softness of the microgel in the swollen state. To our knowledge, this is the first time that such significant microgel de-swelling above the VPTT has been observed.
Next, we will discuss the effect of the interaction of POM\({}^{3-}\) anions with pNIPAM microgels adsorbed at water/air interfaces by performing Langmuir-Blodgett experiments, where the monolayers were deposited on silicon substrates and characterized by atomic force microscopy (AFM). In these experiments, both the subphase in the Langmuir trough and the microgel dispersion were kept at pH 3 and at the same POM\({}^{3-}\) concentration.
Recent studies showed a plethora of new behaviours happening when microgels are confined at 2D fluid interfaces [9]. Therefore, before analysing the role of POM\({}^{3-}\), we characterized the compression curve of our microgels at pH 3 without salt, below and above the VPTT (see Figure **2** and Figures **S3-S4**).
We reproduced the behaviour for small microgels that we reported in a previous work [31], where the small size was accompanied by polydispersity that frustrated the crystallization of the monolayer, as seen in the AFM images from Figure **2**. We also reproduced the results that we reported in a previous work, finding that the microgels did not change their diameter at the interface regardless of being above or below the VPTT [12; 14], as reflected by the overlapping compression curves at 25 and 50 \({}^{\circ}\)C. As stated before, the portion immersed in the subphase was still thermoresponsive, resulting in the collapse of that portion above the VPTT, which was reflected in an increase in height of the microgel after the deposition on a silicon substrate (see the inset in Figure **2**).
As explained in the introduction, new experimental evidence has recently been found on the role of charges on microgels adsorbed at fluid interfaces, in this case by changing the pH of pH-responsive microgels [3]. Our present study fills the gap on the role of ionic specificity in the behavior of microgels adsorbed at fluid interfaces. By size, our microgels would be within the size range of both our previous study [31] and the one by Schmidt et al. [3].
In Figure **3a**, we studied the role of the ionic specificity of the POM\({}^{3-}\) in the self-assembly of microgels at fluid interfaces at 5 and 20 mN m\({}^{-1}\) at 25\({}^{\circ}\)C, i.e. below the VPTT. We tracked the position of each microgel in the AFM images (see SI for details), and we characterized the nearest neighbour distance (NND) and the maximum height of the deposited microgel monolayers as the average of these values over all the microgels in an image. The uncertainty associated with each measurement was calculated from the full width at half maximum of the height and NND distributions. This uncertainty arose as a consequence of the polydispersity of the microgels at the interface.
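A minimal sketch of the monolayer statistics just described (NND and maximum height per microgel, with FWHM-based uncertainties) is given below, assuming the particle centres and heights have already been extracted from an AFM image; the array names, the Gaussian FWHM conversion and the toy data are illustrative assumptions.

```python
# Hypothetical sketch of the monolayer statistics: nearest-neighbour distance
# (NND) and maximum height per microgel, with the spread of each distribution
# reported through its full width at half maximum (FWHM, Gaussian assumption).
# Particle centres and heights are assumed to come from prior AFM tracking.
import numpy as np
from scipy.spatial import cKDTree

def monolayer_stats(centres_nm, heights_nm):
    """centres_nm: (N, 2) positions in nm; heights_nm: (N,) maximum heights in nm."""
    tree = cKDTree(centres_nm)
    dists, _ = tree.query(centres_nm, k=2)   # k=2: the first neighbour is the point itself
    nnd = dists[:, 1]
    fwhm = lambda x: 2.0 * np.sqrt(2.0 * np.log(2.0)) * np.std(x)
    return (nnd.mean(), fwhm(nnd)), (heights_nm.mean(), fwhm(heights_nm))

# Toy example: a slightly disordered square-like arrangement of 100 microgels
rng = np.random.default_rng(0)
grid = np.array([(i * 600.0, j * 600.0) for i in range(10) for j in range(10)])
centres = grid + rng.normal(0.0, 40.0, grid.shape)
heights = rng.normal(10.0, 2.0, len(centres))

(nnd_mean, nnd_fwhm), (h_mean, h_fwhm) = monolayer_stats(centres, heights)
print(f"NND = {nnd_mean:.0f} +/- {nnd_fwhm:.0f} nm; height = {h_mean:.1f} +/- {h_fwhm:.1f} nm")
```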
In the absence of POM\({}^{3-}\), in Figure **3a** we found the usual compression of the monolayer from 5 to 20 mN m\({}^{-1}\), with the NND being reduced from 650\(\pm\)70 nm to 540\(\pm\)50 nm, and the maximum height increasing from 7\(\pm\)3 nm to 10\(\pm\)4 nm. When the POM\({}^{3-}\) was added at 10\({}^{-5}\) M, the lowest concentration at which we found effects in bulk (see Figure **1b**-c), at 5 mN m\({}^{-1}\) the NND decreased from 650\(\pm\)70 nm to 540\(\pm\)80 nm and the height increased from 7\(\pm\)3 nm to 13\(\pm\)7 nm, similar to compressing the monolayer to 20 mN m\({}^{-1}\) in the absence of POM\({}^{3-}\).
Furthermore, the already compressed microgels at 20
mN m\({}^{-1}\) further reduced the interparticle distance in the presence of \(10^{-5}\,M\,\mathrm{POM}^{3-}\), reducing the NND from 540\(\pm\)50 nm to 470\(\pm\)70 nm, and increasing the height from 10\(\pm\)4 to 17\(\pm\)8 nm. As the concentration of \(\mathrm{POM}^{3-}\) was increased above \(10^{-5}\) M, the NND and height remained practically constant, reflecting that at \(10^{-5}\) M the effect at the interface saturated. This behaviour was different from that observed for microgels in bulk in Figures **1**b-c, where in the range of the concentrations measured, significant variations of the \(\mu_{e}\) and \(D_{h}\) were observed. Thus, we can conclude that the addition of \(\mathrm{POM}^{3-}\) at concentrations as low as \(10^{-5}\) M produced an effect on the microgels adsorbed at water/air interfaces which was equivalent to mechanical compression.
In Figure **3**b-c, we investigated the effect of temperature for microgel monolayers in the presence of \(\mathrm{POM}^{3-}\). In its absence, we observed that the NND was not modified upon heating above the VPTT, as discussed for Figure **2** [14]. When the \(\mathrm{POM}^{3-}\) was present at \(10^{-5}\) M, the NND remained the same both below and above the VPTT. Nevertheless, we observed a two-step increase in their height. First, the addition of \(\mathrm{POM}^{3-}\) at 25 \({}^{\circ}\)C increased the height of the deposited microgels, matching the one achieved by heating the sample above the VPTT in the absence of \(\mathrm{POM}^{3-}\), as reflected by the overlapping distribution functions in Figure **3**c. Thus, by adding \(10^{-5}\) M of \(\mathrm{POM}^{3-}\) we accomplished an effect equivalent to heating. Furthermore, when we heated the sample in the presence of \(\mathrm{POM}^{3-}\) up to 50 \({}^{\circ}\)C, we observed a further increase in their height up to \(23\pm 9\) nm.
Thus, the collapse of the swollen part of the microgel coming from the increase of the temperature above the VPTT only affected the height of the deposited microgels, increasing it when compared to their corresponding depositions at 25 \({}^{\circ}\)C.
Interestingly, this goes in a different direction to the work by Schmidt et al., since they found for small microgels that upon pH-swelling they decreased their diameter at the interface [3]. We found a different effect in the presence of \(\mathrm{POM}^{3-}\), where the collapse of the portion immersed in the subphase was accompanied by a decrease of their diameter at the interface, with a further collapse of the portion immersed in the subphase upon heating above the VPTT.
Comparing with \(NaCl\) at a significantly higher concentration, \(0.1\,M\), we did not find changes in either the NND or the height of the microgels with respect to the absence of salt (Figure **S5** and Table 3).
Our hypothesis for the effects observed in the presence of \(\mathrm{POM}^{3-}\) is that they could be due to the restructuring
Figure 2: Surface pressure \(\Pi\) versus area per particle A\({}_{p}\) and representative AFM images obtained by the simultaneous compression and deposition of a monolayer of microgels from a water/air interface on a silicon wafer. The inset shows the microgel height distributions measured at a surface pressure of \(\simeq\)20 mN m\({}^{-1}\) for 25 \({}^{\circ}\)C (\(\bullet\)) and 50 \({}^{\circ}\)C (\(\bullet\)). The color of the image frames indicates the temperature at which the microgels were deposited. Lines are guides to the eye.
of the polymer network, an effect observed for free pNIPAM chains [32]. This restructuring would be induced by the adsorbed POM\({}^{3-}\) both in the portion of the microgel immersed in the subphase and on the stretched corona at the interface. Nevertheless, the restructuring process would be fundamentally different from the one observed for free pNIPAM chains, since the crosslinked polymeric network of microgels would hinder the rich structuring observed for free pNIPAM chains. Furthermore, from previous studies of the strong interaction of POMs and amino acids, this interaction seems to arise as a non-classical quantum phenomenon [23], but further studies are needed to elucidate whether this is also the case for microgels.
In order to test the hypothesis of the restructuring of the pNIPAM polymeric network, we characterized the location of the pNIPAM and POM\({}^{3-}\) on the deposited monolayers by high-angle annular dark-field imaging (HAADF) scanning transmission electron microscopy (STEM, see details in SI and Figures **S6, S7, and S8**), and energy-dispersive X-ray spectroscopy (EDS).
Since the \([PW_{12}O_{40}]^{3-}\) POM\({}^{3-}\) contains wolframium (\(W\)), we used it as an indicator of the POM\({}^{3-}\) concentration, while nitrogen (\(N\)) was used as an indicator of the presence of pNIPAM. In Figure **4a**-b, we present the HAADF-STEM and EDS mapping of the microgels deposited at \(\simeq\)20 mN m\({}^{-1}\) in the presence of \(10^{-5}\) and \(10^{-3}\) M of POM\({}^{3-}\), respectively. In Figure **4a**, the denser and more crosslinked core revealed a higher \(N\) concentration than in the surrounding stretched corona. Figure **4b** shows that the POM\({}^{3-}\) was adsorbed both at the microgel core and the stretched corona, with more POM\({}^{3-}\) on the core compared to the stretched corona. Figure **4c** shows the corresponding spectra, both in the core of one of the microgels compared to its corona (see more details in Figures **S9**, **S10** and **S11**). While it is not easy to see by eye in Figure **4a** that there is more \(W\) in the core of the microgels at \(10^{-3}\) M, Figure **4c** quantitatively shows this.
These results showed that there was more concentration of POM\({}^{3-}\) in the core of the microgels and less in the corona, which would be due to the quantity of polymer available in each case to interact with the POM\({}^{3-}\). While at \(10^{-3}\) M there was more POM\({}^{3-}\) adsorbed compared to \(10^{-5}\) M, the collapse of the part immersed in the subphase and the decrease of the diameter at the interface saturated at \(10^{-5}\) M, as shown in Figure **3a**.
By themselves, POM\({}^{3-}\) anions are not interfacially active, but their adsorption at the interface was promoted by the interfacial activity of the microgels [20; 21]. We might then expect a higher relative concentration of POM\({}^{3-}\) at the interface compared to the bulk. This would explain the saturation effect that we observe in our experiments at lower POM\({}^{3-}\) concentrations when compared to microgels in bulk. Despite the existence of an already crosslinked polymeric network that prevents the structuring observed for free pNIPAM chains in dispersion in the presence of POM\({}^{3-}\)[32], we observed a restructuring of the polymeric networks of the microgels promoted by the presence of POM\({}^{3-}\).
In Figure **S8**, the bridging of two microgel cores is accompanied by the presence of a higher concentration of POM\({}^{3-}\) in the pNIPAM bridge. This bridging was never observed in the absence of POM\({}^{3-}\) (see Figures **2**, **S3** and **S4**). Therefore, compared to free pNIPAM chains in bulk [32], the restructuring ability of the POM\({}^{3-}\) was significantly reduced when interacting with microgels due to the crosslinked polymer network, but still present (see Movie S1). Nevertheless, its strong affinity with pNIPAM resulted in the modification of the microgel architecture both in bulk and at interfaces, as discussed so far.
We also studied the integrity of the POM\({}^{3-}\) after the deposition of a monolayer from the water/air interface onto the silicon substrates by XPS (see details in the SI). The presence of \(W^{(6+)}4f_{7/2}\) and \(W^{(6+)}4f_{5/2}\) doublet peaks at binding energies of 35.95 and 38.00 eV revealed the presence of unaltered \(WO_{3}\), indicating that the POM\({}^{3-}\) structure remained unaltered [33]. A slight reduction to the \(W^{5+}\) oxidation state was observed as new \(W^{(5+)}4f_{7/2}\) and \(W^{(5+)}4f_{5/2}\) signals arose at 34.45 and 26.44 eV, respectively, signaling the partial
Figure 3: **a)** Nearest Neighbour Distance (NND) between microgels (top) and maximum height of the microgels (bottom) against concentration of POM\({}^{3-}\) at \(\Pi=5\) (\(\blacksquare\)) and 20 mN m\({}^{-1}\) (\(\uparrow\)), at 25 \({}^{\circ}\)C. **b)** Nearest Neighbour Distance (NND) at \(\Pi=20\) mN m\({}^{-1}\) of microgels without POM\({}^{3-}\) (left) and with \(10^{-5}\) M POM\({}^{3-}\) (right), at 25 \({}^{\circ}\)C (blue) and 50 \({}^{\circ}\)C (red). **c)** Maximum height distribution at \(\Pi=20\) mN m\({}^{-1}\) of microgels, open symbols correspond to measurements in the absence of salt at 25 \({}^{\circ}\)C (\(\square\)) and 50 \({}^{\circ}\)C (\(\square\)), while solid symbols are in the presence of \(10^{-5}\) M of POM\({}^{3-}\) at 25 \({}^{\circ}\)C (\(\blacksquare\)) and 50 \({}^{\circ}\)C (\(\blacksquare\)). Lines are guides to the eye.
degradation of the POM\({}^{3-}\) (see Figure **S12**). We present the relative concentration of each element for the analysed sample in Table 1 at \(10^{-5}\) M of POM\({}^{3-}\), showing that \(\simeq 22\%\) of the POM\({}^{3-}\) partially degraded during the harsh process of depositing and drying on a silicon substrate.
In order to showcase the improved ability of the microgels restructured by POM\({}^{3-}\) to perform as lithography masks, we subjected microgel monolayers to air plasma treatments to compare their resistance to incineration. To do this, substrates with microgels deposited at \(\Pi=20\) mN m\({}^{-1}\) with no POM\({}^{3-}\) and with \(10^{-5}\) M POM\({}^{3-}\) were subjected to air plasma at 100 W for 30 min. We acquired AFM images before and after the treatment.
In Figure **5**, the compression of the monolayer induced by the POM\({}^{3-}\) is visible, with a reduction of the NND and brighter, i.e. higher, microgels. After the plasma treatment, we noticed that the addition of \(10^{-5}\) M POM\({}^{3-}\) prevented the full incineration of the microgel monolayer, which hints at an improved performance as lithography masks for plasma dry etching.
## Conclusion
In this work, we explored the specific ionic effects between POM\({}^{3-}\) and pNIPAM microgels, as a tool for tuning both their bulk and 2D interfacial behaviour. Our results in bulk reinforce those reported in the literature, where the interaction of pNIPAM microgels and POM\({}^{3-}\) goes significantly beyond an electrostatic interaction. It may require a quantum description to explain such a strong interaction. Below the VPTT, the high adsorption on the microgel was reflected in a charge inversion and partial collapse at a low \(5\cdot 10^{-5}\) M POM\({}^{3-}\) concentration. We believe that by increasing the POM\({}^{3-}\) concentration, the soft nature of the microgel in the swollen state allows the penetration of POM\({}^{3-}\) inside the microgel with a subsequent re-swelling thanks to the electrostatic repulsion between the POM\({}^{3-}\) anions. Higher POM\({}^{3-}\) concentrations lead again to a partial collapse caused by the adsorbed POM\({}^{3-}\) anions inside the microgel, competing for the water hydration within the polymer network. Above the VPTT, the POM\({}^{3-}\) adsorption was even stronger, showing a more pronounced charge inversion and a novel
Figure 4: HAADF-STEM images and EDS mapping of the microgel monolayers deposited at 20 mN m\({}^{-1}\) with **a)**\(10^{-5}\) M of POM\({}^{3-}\) and **b)**\(10^{-3}\) M of POM\({}^{3-}\). Top row pictures show \(N\) EDS mapping, while bottom row pictures show \(W\) EDS mapping. **c)** Spectra obtained from EDS. Red lines correspond to the core (solid line) and corona (dashed line) of a microgel deposited at \(10^{-5}\) M. Black lines correspond to the core (solid line) and corona (dashed line) of the microgels deposited at \(10^{-3}\) M. The bottom image shows a zoom-in of the region of interest for \(10^{-5}\) M.
effect: a further dehydration of the microgel in its already collapsed state.
At the interface, the Langmuir-Blodgett microgel monolayers deposited on silicon substrates revealed a POM\({}^{3-}\) saturation effect at \(10^{-5}\) M. The POM\({}^{3-}\) restructured both the stretched microgel corona at the interface and the portion of the microgel immersed in the subphase, with a higher concentration in the latter, as confirmed by EDS mapping. In agreement with the results in bulk, the adsorption of the POM\({}^{3-}\) caused a collapse of the immersed part which was later reflected in an increase of the height of the deposited microgels, equivalent to heating above the VPTT in the absence of POM\({}^{3-}\). Furthermore, the adsorption of POM\({}^{3-}\) at 5 mN m\({}^{-1}\) was equivalent to compressing the monolayer in the absence of POM\({}^{3-}\) up to 20 mN m\({}^{-1}\). Upon heating above the VPTT, the microgel diameter at the interface did not change in any case, but the height increased corresponding to the further deswelling observed in bulk above the VPTT. Furthermore, the POM\({}^{3-}\) was only partially degraded after the deposition and drying process, as characterized by XPS.
Finally, we found an improvement of the microgel monolayers restructured by POM\({}^{3-}\) to perform as soft lithography masks, since they resisted better a harsh plasma treatment of 30 min at 100 W. We envision taking advantage of the effects presented in this work to enhance the stability of Pickering emulsions, or using them to destabilize them on demand.
## Acknowledgements
We acknowledge Prof. Jordi Faraudo Gener for insightful conversations, the CIC from University of Granada for the STEM-HAADF and EDS measurements, and the SCAI from the University of Malaga for the XPS measurements. This work was supported by the projects PID2020-116615RA-I00, PY20-00241, A-FQM-90-UGR20 and grants IJC2018-035946-I funded by MCIN/AEI/ 10.13039/501100011033, and EMERGIA grant with reference EMC21_00008 funded by Consejeria de Universidad, Investigacion e Innovacion de la Junta de Andalucia.
|
2307.00863 | Thompson Sampling under Bernoulli Rewards with Local Differential
Privacy | This paper investigates the problem of regret minimization for multi-armed
bandit (MAB) problems with local differential privacy (LDP) guarantee. Given a
fixed privacy budget $\epsilon$, we consider three privatizing mechanisms under
Bernoulli scenario: linear, quadratic and exponential mechanisms. Under each
mechanism, we derive stochastic regret bound for Thompson Sampling algorithm.
Finally, we simulate to illustrate the convergence of different mechanisms
under different privacy budgets. | Bo Jiang, Tianchi Zhao, Ming Li | 2023-07-03T09:04:41Z | http://arxiv.org/abs/2307.00863v1 | # Thompson Sampling under Bernoulli Rewards with Local Differential Privacy
###### Abstract
This paper investigates the problem of regret minimization for multi-armed bandit (MAB) problems with a local differential privacy (LDP) guarantee. Given a fixed privacy budget \(\epsilon\), we consider three privatizing mechanisms under the Bernoulli scenario: the linear, quadratic and exponential mechanisms. Under each mechanism, we derive a stochastic regret bound for the Thompson Sampling algorithm. Finally, we simulate to illustrate the convergence of the different mechanisms under different privacy budgets.
Machine Learning, ICML
## 1 Introduction
The multi-armed bandit (MAB) problem addresses the trade-off between exploration and exploitation and has been widely applied in many real-world scenarios, from recommendation systems and information retrieval to healthcare and finance. In the setting of a MAB model, there are \(N\) arms available to the agent, and each arm's reward follows a particular distribution with an unknown mean. At each time step, the agent selects one arm. Then a reward is observed by the agent. The agent's ultimate goal is to gather as much cumulative reward as possible or, equivalently, to minimize the total regret, i.e., to design a strategy that can explore different arms and exploit well-rewarding arm(s).
Nevertheless, personalized MAB implementations such as recommendation systems are a double-edged sword: the gained utility also comes with the risk of privacy violation. Compared to offline learning models, online learning methods directly interact with sensitive user data, e.g., user clicks or purchasing history, and update the models in a timely manner to adjust their output, which makes privacy an even more serious concern. For example, a physician wants to test the effect of different treatments and collects patients' health conditions after a certain treatment. However, one's heartbeat data may reveal one's living routine, such as daily exercise time, sleeping time, etc. Another example is stock recommendation. The system (agent) periodically suggests different stocks (arms) to the user. After the suggestion, he wants to learn how many shares (possibly 0) the user has bought. However, directly revealing this feedback may leak the user's buying power, personal preferences, and what kind of risks he is hedging against. In this paper, we leverage privacy protection in the MAB problem, i.e., the MAB problem where the observable reward at each time satisfies certain privacy constraints.
Among all the privacy notions, Differential Privacy (Dwork et al., 2006; Dwork, 2008) has been accepted as the _de facto_ standard for quantifying privacy leakage in the privacy community. The advantage of DP is that it provides a rigorous privacy guarantee against worst-case adversaries and is amenable to mechanism design. Recently, the local version of DP, local differential privacy (LDP), has gained significant attention. The server/aggregator who collects data is considered untrusted by the users, who perturb their data before sending it to the server. LDP-based mechanisms have been successfully adopted by Google's RAPPOR (Ulfar Erlingsson et al., 2014) for collecting web browsing behavior, and by Apple's MacOS to identify popular emojis and media preferences in Safari (Cormode et al., 2018; Team). LDP also enables a variety of privacy-preserving mechanisms for both discrete and continuous-valued data, such as the randomized response mechanism (Wang et al., 2017), the Laplace mechanism (Dwork et al., 2006), and the Gaussian mechanism (Liu, 2019).
Non-private MAB problems have been studied for decades, among which, either frequentist methods like UCB (Upper Confidence Bound) or Bayesian methods like Thompson Sampling (Agrawal and Goyal, 2012) have been shown to achieve optimal regret performance (up to constant factors). There is also a line of works related to the regret bound of MAB algorithm (Auer et al., 2002). A privacy-preserving MAB can be described as, in each round, a privatized/perturbed version of the reward(s) is (are) observable, and each perturbed reward satisfies certain privacy requirements. The earliest work that studied LDP bandits is (Gajane et al., 2018), which proposed an LDP bandit algorithm that works for arms with Bernoulli rewards. In
(Basu et al., 2019), for bandits with bounded rewards, a Laplace mechanism and a Bernoulli mechanism are proposed, and corresponding UCB algorithms are developed. The upper and lower bounds are derived based on UCB algorithms. In (Wang et al., 2020), statistical regret bounds are derived under either DP or LDP for the collaborative Lin-UCB algorithm in the context of collaborative MAB. However, few of these works derive theoretical regret bounds for privacy-preserving Thompson Sampling algorithms. The challenge is that TS is a Bayesian approach that involves posterior updating at the agent by observing the reward. Under the privacy-preserving framework, however, the observable reward is noisy, and the posterior distribution is not fixed but depends on the concrete mechanism (noise distribution). In this paper, we consider different noise models providing LDP guarantees. Under each mechanism, we derive the posterior distribution and bound the corresponding probabilities of sub-optimal selections. For a given LDP privacy budget, we then derive upper regret bounds for the Thompson Sampling algorithm.
The main contributions of this work are summarized as follows: 1). We propose different privacy-preserving MAB mechanisms under Thompson Sampling satisfying Local Differential Privacy; 2). We derive Cumulative Regret Bounds (CRB) for these mechanisms; 3). We simulate with synthetic data to support our analysis and compare the performance with \(UCB\).
## 2 Model Setup and Problem Formulation
In this problem, we consider \(N\) arms in the system, and each arm's reward \(R\in\mathcal{R}\) follows a sub-Gaussian distribution with mean \(\mu\). We use \(\mu_{i}\) to denote the mean value of \(R^{i}\), where \(i\in\{1,2,...,N\}\) is the index of an arm. The agent, at each timestamp, selects one specific arm to play. The selected arm and the corresponding reward at time \(k\) are denoted as \(I_{k}\in\{1,2,...,N\}\) and \(R_{k}\) respectively. Note that, in this problem, we assume the user (all the arms belong to one user) wants to cooperate with the agent to minimize the cumulative regret (in terms of TS algorithm, help the agent better learn the mean value). On the other hand, the user also wants each of his instantaneous reward's privacy to be protected. Therefore, the reward at each time is protected by a privacy-preserving mechanism \(M\) (\(M\) is assumed to be time-invariant). We define different kinds of privacy-preserving mechanisms later. Denote \(Y_{k}\in\mathcal{Y}\) as the privatized version of \(R_{k}\), which is also the output of \(M\). The agent, after observing \(Y_{k}\), can further update his belief state and the corresponding strategy for the next time.
In this work, we investigate the Thompson Sampling algorithm, which is a classic algorithm for MAB problems. The algorithm can be summarized as follows: the agent maintains an estimated reward distribution for each arm (usually starting from a uniform distribution); denote \(\hat{\mu}_{i}^{k}\) as the estimated mean reward for arm \(i\) at time \(k\). At each time, he randomly samples a reward for each arm from his estimated distribution and selects the arm that has the maximal sampled reward. After each round, by observing \(Y_{k}\), he updates his belief state accordingly (the distribution of the reward of the arm that was just played).
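For concreteness, a minimal sketch of the standard (non-private) Beta-Bernoulli Thompson Sampling loop is given below; the privatized variants introduced later change only the observation the agent receives, not this posterior-sampling structure. The prior choice, horizon and arm means are illustrative assumptions.

```python
# Minimal sketch of standard (non-private) Thompson Sampling with Beta priors
# and Bernoulli rewards; the LDP variants change only what the agent observes,
# not this posterior-sampling structure. Horizon and priors are illustrative.
import numpy as np

def thompson_sampling(true_means, horizon=5000, seed=0):
    rng = np.random.default_rng(seed)
    n_arms = len(true_means)
    successes = np.ones(n_arms)   # Beta(1, 1) uniform priors
    failures = np.ones(n_arms)
    regret = 0.0
    for _ in range(horizon):
        theta = rng.beta(successes, failures)       # one posterior sample per arm
        arm = int(np.argmax(theta))                 # play the best sampled arm
        reward = rng.binomial(1, true_means[arm])   # observe a Bernoulli reward
        successes[arm] += reward
        failures[arm] += 1 - reward
        regret += max(true_means) - true_means[arm]
    return regret

print(thompson_sampling([0.9, 0.8, 0.7]))
```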
To provide strict privacy protection to every instantaneous reward at each time, we let \(M\) satisfy \(\epsilon\)-LDP, which can be defined as, for any \(r,r^{\prime}\in\mathcal{R}\), \(y\in\mathcal{Y}\):
\[Pr(Y_{k}=y|R_{k}=r)\leq e^{\epsilon}Pr(Y_{k}=y|R_{k}=r^{\prime}), \tag{1}\]
where \(\epsilon\) is the privacy budget, the smaller \(\epsilon\), the stronger privacy guarantee the mechanism provides. Note that the privacy-preserving mechanism is protecting the privacy of each instantaneous reward (sampled from a fixed distribution), not the distribution itself. On the contrary, TS algorithm requires an estimation of the posterior distribution, which tries to infer the data distribution by nature. From the privacy perspective, the privacy leakage of each instantaneous reward is guaranteed to be upper bounded by \(\epsilon\), and the leakage of the distribution after observing \(k\) samples from the same arm is upper bounded by \(k\epsilon\) by the composability of LDP. The system model is depicted in Fig.1.
## 3 LDP-based Binomial Mechanisms
In this section, we first introduce three LDP-based Binomial privacy-preserving mechanisms: the linear, quadratic, and exponential mechanisms. Then, we discuss how to incorporate these mechanisms into the TS algorithm. In the remainder of this paper, we assume that the reward of each arm is bounded and that the first arm \(I_{1}\) is the optimal arm (\(\mu_{1}=\max_{i}\mu_{i}\)).
The Bernoulli mechanism converts a bounded reward into a Bernoulli-distributed one, i.e., \(Y_{k}=Bernoulli(p(r))\), where \(p(r)\) denotes the probability that \(Y_{k}=1\) given that the reward is \(r\). Algorithm 1 details the LDP-based TS algorithm with the Bernoulli mechanism.
Next, in the following theorem, we present the sufficient condition to satisfy the \(\epsilon\)-LDP.
**Theorem 1**.: _For a bounded Bernoulli mechanism, to satisfy \(\epsilon\)-LDP, the following conditions must hold: (1) \(p(0)\geq\frac{1}{e^{\epsilon}+1}\); (2) \(p(1)\leq\frac{e^{\epsilon}}{e^{\epsilon}+1}\); (3) \(p(r)\) is monotonically increasing._
In this paper, we consider three different probability functions under the Bernoulli mechanism: the linear, quadratic, and exponential probability functions. Based on the sufficient conditions described in Theorem 1, \(p(r)\) for each mechanism is stated in the following Corollary.
**Corollary 1**.: _For the linear probability function, \(p(r)=\frac{1}{1+e^{\epsilon}}[(e^{\epsilon}-1)r+1]\); for the quadratic probability function, \(p(r)=\frac{1}{1+e^{\epsilon}}[(e^{\epsilon}-1-b)r^{2}+br+1]\), where \(b\in[0,2(e^{\epsilon}-1)]\); for the exponential probability function, \(p(r)=\frac{e^{\epsilon r}}{e^{\epsilon}+1}\)._
**Remark 1**.: _It is worth noting that the non-linear probability functions are preferable to the linear under certain circumstances. One scenario is that the mean rewards of different arms are very close to each other, the non-linear probability functions provide a better convergence rate compared to the linear model (it discriminates the optimal arm faster than the linear model)._
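The probability functions of Corollary 1 can be written down directly. The sketch below implements the three maps, numerically checks the sufficient conditions of Theorem 1 on a grid, and draws a privatized observation; the choice of \(b\) for the quadratic map and the grid resolution are illustrative assumptions.

```python
# Sketch of the three Bernoulli response maps p(r) from Corollary 1, a numerical
# check of the sufficient epsilon-LDP conditions of Theorem 1, and a privatized
# draw. The value of b for the quadratic map is one arbitrary admissible choice.
import numpy as np

def p_linear(r, eps):
    return ((np.exp(eps) - 1) * r + 1) / (np.exp(eps) + 1)

def p_quadratic(r, eps, b=None):
    if b is None:
        b = (np.exp(eps) - 1) / 2            # any b in [0, 2(e^eps - 1)] is admissible
    return ((np.exp(eps) - 1 - b) * r ** 2 + b * r + 1) / (np.exp(eps) + 1)

def p_exponential(r, eps):
    return np.exp(eps * r) / (np.exp(eps) + 1)

def satisfies_ldp(p, eps, grid=np.linspace(0.0, 1.0, 101)):
    """Check Theorem 1 on a grid: p(0) >= 1/(e^eps+1), p(1) <= e^eps/(e^eps+1), p monotone."""
    q = p(grid, eps)
    return (q[0] >= 1 / (np.exp(eps) + 1) - 1e-12
            and q[-1] <= np.exp(eps) / (np.exp(eps) + 1) + 1e-12
            and bool(np.all(np.diff(q) >= -1e-12)))

def privatize(r, eps, p, rng):
    """Return the privatized observation Y ~ Bernoulli(p(r))."""
    return int(rng.binomial(1, p(r, eps)))

rng = np.random.default_rng(0)
for name, p in [("linear", p_linear), ("quadratic", p_quadratic), ("exponential", p_exponential)]:
    print(name, satisfies_ldp(p, eps=1.0), privatize(0.7, eps=1.0, p=p, rng=rng))
```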
## 4 Cumulative Regret Bounds
Next, we derive CRBs for TS-LDP-B. We consider problem-dependent regret, where the regret at each time \(t\) depends on the gap between the mean rewards of arms \(1\) and \(i\).
### Problem-dependent regret bound for linear probability function
The problem-dependent regret bound for the linear probability function is stated in the following theorem, with the proof provided in Appendix A.
**Theorem 2**.: _Given any non-zero \(\Delta_{i}=\mu_{1}-\mu_{i}\), the cumulative regret for the linear probability function is upper bounded by (\(\epsilon>0\)):_
\[(1+\gamma)^{2}\left(\frac{e^{\epsilon}+1}{e^{\epsilon}-1}\right)^{2}\left\{ \sum_{i\neq i^{*}}\frac{\log(T)}{2\Delta_{i}}+O\left(\frac{N}{2\Delta_{min}} \right)\right\}. \tag{2}\]
_Where \(\Delta_{min}=\min_{i\in[N]}\Delta_{i}\), \(\gamma\in(0,1)\) is a threshold which helps to prove the regret bound._
**Proof outline for Theorem 2**: The basic idea of the proof is similar to (Agrawal and Goyal, 2013). The difference in the proof for TS-LDP-B (linear probability function) is that we change the term in the denominator (from \(d(\mu_{i},\mu_{1})\) to \(\Delta_{i}\), which is based on Pinsker's inequality). In this way, we can express how the Bernoulli mechanism affects the regret bound.
**Remark:** Note that the regret bound of non-private TS is \((1+\gamma)^{2}\left\{\sum_{i\neq i^{*}}\frac{\log(T)}{2\Delta_{i}}+O(\frac{N}{2\Delta_{min}})\right\}\), wherein we change the denominator from \(d(\mu_{i},\mu_{1})\) to \(\Delta_{i}\) (a loose version of the regret bound in (Agrawal and Goyal, 2013)). Compared to non-private TS, the regret has the term \(\left(\frac{e^{\epsilon}+1}{e^{\epsilon}-1}\right)^{2}\), which can be viewed as the cost of preserving privacy. When \(\epsilon\) approaches infinity, this factor approaches 1, and the regret approaches that of the non-private version. When \(\epsilon\) approaches zero, the regret becomes \(O(T)\).
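To get a feel for the multiplicative privacy cost \(\left(\frac{e^{\epsilon}+1}{e^{\epsilon}-1}\right)^{2}\) appearing in (2), the short snippet below evaluates it for a few budgets; the chosen \(\epsilon\) values are arbitrary.

```python
# Sketch: multiplicative regret inflation of TS-LDP-B (linear) relative to
# non-private TS, i.e. ((e^eps + 1) / (e^eps - 1))^2 from the bound in (2).
import math

for eps in [0.1, 0.5, 1.0, 2.0, 5.0]:
    factor = ((math.exp(eps) + 1) / (math.exp(eps) - 1)) ** 2
    print(f"eps = {eps:4.1f}: privacy cost factor = {factor:8.2f}")
```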
### Problem-dependent regret bound for non-linear probability function
The problem-dependent regret bound for the non-linear probability functions is stated in the following theorem, with the proofs provided in Appendices B and C.
Figure 1: System model for privacy-preserving MAB: instantaneous rewards are privatized independently.
**Theorem 3**.: _Given any non-zero \(\Delta_{i}=\mu_{1}-\mu_{i}\), the cumulative regret for the non-linear probability function is upper bounded by (\(\epsilon>0\)):_
\[(1+\gamma)^{2}\sum_{i\neq 1}\frac{\log(T)+1}{2\Delta_{i,\epsilon}^{2}}\Delta_{i} +O(N), \tag{3}\]
_where \(\Delta_{i,\epsilon}\) is the noisy difference between optimal arm and selected arm \(i\)._
**Proof outline for Theorem 3**: The proof for the non-linear probability functions follows a similar idea to that of the linear probability function, replacing the linear probability function with a non-linear one.
**Remark:** We can apply the quadratic and exponential probability functions to (3). For the quadratic probability function, \(\Delta_{i,\epsilon}=\mu_{1,\epsilon,quad}-\mu_{i,\epsilon,quad}=\frac{\{(e^{\epsilon}-1-b)(\mu_{1}+\mu_{i})+b\}(\mu_{1}-\mu_{i})+(e^{\epsilon}-1-b)(\sigma_{1}^{2}-\sigma_{i}^{2})}{e^{\epsilon}+1}\), where \(\mu_{i,\epsilon,quad}\) is the expected mean value of arm \(i\) after applying the quadratic probability function and \(\sigma_{i}^{2}\) is arm \(i\)'s reward variance. We can see that it reduces to (2) when \(e^{\epsilon}-1-b=0\). This result conforms to our expectation because the linear probability function is a special case of the quadratic probability function. For the exponential probability function, \(\Delta_{i,\epsilon,exp}=\mu_{1,\epsilon,exp}-\mu_{i,\epsilon,exp}=\frac{e^{\epsilon\mu_{i}}(e^{\epsilon\Delta_{i}}-1)+\tau_{1}(\epsilon)-\tau_{i}(\epsilon)}{e^{\epsilon}+1}\), where \(\mu_{i,\epsilon,exp}\) is the expected mean value of arm \(i\) after applying the exponential probability function and \(\tau_{i}(\epsilon)\) is the Jensen gap between \(e^{\epsilon\mu_{i}}\) and \(E[e^{\epsilon r}]\). For comparison with TS-LDP-B, we also provide the regret bound of UCB-LDP-B under a non-linear probability function:
\[R(T)\leq\sum_{i\neq 1}\left\{\frac{8\log(T)}{\Delta_{i,\epsilon}^{2}}+1+ \frac{\pi^{2}}{3}\right\}\Delta_{i}. \tag{4}\]
Our analysis is based on the proof structure of Ren et al. (Ren et al., 2020). However, the difference between our algorithm and theirs (LDP-UCB-B linear) is that we change the linear probability function to a non-linear one.
## 5 Numerical Analysis
In this section, we illustrate the numerical results of our algorithms. Due to computation limitations, we only present the results for bandits with the Bernoulli mechanism. In the comparison, we compare the LDP-TS-B algorithm studied in this paper with LDP-UCB-B (Ren et al., 2020). We also include the performance of the non-private UCB and TS algorithms (\(\epsilon=\infty\)) as a baseline to see the cost of preserving \(\epsilon\)-LDP.
The numerical results are illustrated in Fig. 2. In each experiment, we set the number of arms \(N=20\). The optimal arm has a mean reward of \(0.9\); five arms have \(0.8\); another five arms have \(0.7\); another five arms have \(0.6\); and the remaining four arms have \(0.5\). We also let the rewards of arms follow different types of distributions: arms with mean rewards of \(0.9\) or \(0.6\) generate rewards from Bernoulli distributions; arms with mean rewards of \(0.8\) generate rewards from the Beta\((4,1)\) distribution; arms with mean rewards of \(0.7\) generate rewards from \(\{0.4,1\}\) uniformly at random; and arms with mean rewards of \(0.5\) generate rewards from \([0,1]\) uniformly at random. Curves in each figure are averaged over 50 independent trials.
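A self-contained sketch of one trial of this experiment (20 arms with the mixed reward distributions above, the linear Bernoulli mechanism, and Beta-Bernoulli Thompson Sampling on the privatized observations) is given below; the averaging over trials, the UCB baselines and the plotting are omitted, and the horizon, seed and privacy budget are illustrative assumptions.

```python
# Hypothetical sketch of one trial: 20 arms with the mixed reward distributions
# described above, privatized with the linear Bernoulli mechanism, and played by
# Beta-Bernoulli Thompson Sampling on the noisy observations (LDP-TS-B, linear).
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0                                   # privacy budget (illustrative)

samplers = ([lambda: float(rng.binomial(1, 0.9))] +                   # optimal arm, mean 0.9
            [lambda: rng.beta(4, 1) for _ in range(5)] +              # mean 0.8
            [lambda: rng.choice([0.4, 1.0]) for _ in range(5)] +      # mean 0.7
            [lambda: float(rng.binomial(1, 0.6)) for _ in range(5)] + # mean 0.6
            [lambda: rng.uniform(0.0, 1.0) for _ in range(4)])        # mean 0.5
means = np.array([0.9] + [0.8] * 5 + [0.7] * 5 + [0.6] * 5 + [0.5] * 4)

def p_linear(r):
    """Linear Bernoulli response map of Corollary 1 (eps-LDP)."""
    return ((np.exp(eps) - 1) * r + 1) / (np.exp(eps) + 1)

successes = np.ones(len(samplers))          # Beta(1, 1) priors on the noisy means
failures = np.ones(len(samplers))
regret, horizon = 0.0, 20000
for _ in range(horizon):
    arm = int(np.argmax(rng.beta(successes, failures)))
    y = rng.binomial(1, p_linear(samplers[arm]()))   # the agent only observes y
    successes[arm] += y
    failures[arm] += 1 - y
    regret += means.max() - means[arm]
print(f"cumulative regret after {horizon} rounds: {regret:.1f}")
```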
Fig. 2(a) and Fig. 2(d) show the effect of different \(\epsilon\) under the linear model. We can see that the regret increases when \(\epsilon\) decreases. This result is consistent with our theoretical results: a smaller \(\epsilon\) provides stronger privacy, and the regret becomes larger. Meanwhile, LDP-TS-B (linear) has lower regret than LDP-UCB-B (linear) under the same \(\epsilon\).
Fig. 2(b) and Fig. 2(e) show the effect of different \(\epsilon\) under the quadratic model. As with the linear model, we can see that the regret increases when \(\epsilon\) decreases, and LDP-TS-B (quadratic) has lower regret than LDP-UCB-B (quadratic) under the same \(\epsilon\).
Fig. 2(c) and Fig. 2(f) show the effect of different \(\epsilon\) under the exponential model. Similar to the previous observations, we can see that the regret increases when \(\epsilon\) decreases, and LDP-TS-B (exponential) has lower regret than LDP-UCB-B (exponential) under the same \(\epsilon\).
## 6 Conclusion and Future Work
In this paper, we studied the Thompson Sampling algorithm with a local differential privacy guarantee. We considered three privatizing mechanisms under Bernoulli rewards and proved regret upper bounds for the Thompson Sampling algorithm. Numerical results also confirmed our theoretical analysis. For future work, we plan to derive a lower regret bound for general mechanisms.
|
2310.04012 | Self-injective algebras under derived equivalences | The Nakayama permutations of two derived equivalent, self-injective Artin
algebras are conjugate. A different but elementary approach is given to showing
that the weak symmetry and self-injectivity of finite-dimensional algebras over
an arbitrary field are preserved under derived equivalences. | Changchang Xi, Jin Zhang | 2023-10-06T04:44:29Z | http://arxiv.org/abs/2310.04012v3 | # Self-injective algebras under derived equivalences
###### Abstract.
The Nakayama permutations of two derived equivalent, self-injective Artin algebras are conjugate. A different but elementary approach is given to showing that the weak symmetry and self-injectivity of finite-dimensional algebras over an _arbitrary_ field are preserved under derived equivalences.
2020 Mathematics Subject Classification: Primary 18G80, 16E35, 16D50, Secondary 16G10, 18F30, 16P10. Keywords: Derived equivalence, Grothendieck group, Nakayama permutation, self-injective algebra, symmetric algebra, weakly symmetric algebra.
The paper is organized as follows: In Section 2 we fix notation and recall basic facts on derived equivalences. In Section 3 we study Grothendieck groups of triangulated categories. In Section 4 we prove that the Nakayama permutations of derived equivalent, self-injective Artin algebras are conjugate. Also, we point out that Rickard's result on derived equivalences preserving symmetry for finite-dimensional algebras can be generalized to the one for Artin algebras (see Remark 4.3). In Section 5 we show that finite-dimensional, weakly symmetric algebras over an arbitrary field are closed under derived equivalences, and provide an elementary proof of Rickard-Rouquier's result on self-injective algebras under derived equivalences. Finally, we deduce a series of consequences of our main results.
## 2. Preliminaries
In this section we fix notation and recall some definitions and results on derived equivalences.
Throughout the paper, all modules are assumed to be left modules. For a (unitary associative) ring \(\Lambda\), we denote by \(\Lambda\)-mod the category of finitely generated \(\Lambda\)-modules, by \(\Lambda\)-proj the full subcategory of \(\Lambda\)-mod consisting of projective \(\Lambda\)-modules, and by \(\mathcal{K}^{b}(\Lambda\)-proj) the bounded homotopy category of complexes over \(\Lambda\)-proj. As usual, we write \(\mathcal{D}^{b}(\Lambda)\) for the bounded derived category of \(\Lambda\)-mod.
Artin algebras \(A\) and \(B\) are _derived equivalent_ if \(\mathcal{D}^{b}(A)\) and \(\mathcal{D}^{b}(B)\) are equivalent as triangulated categories. Derived equivalences can be described by tilting complexes [14]. We recall the descriptions just for Artin algebras below.
Let \(A\) be an Artin algebra. A complex \(X^{\bullet}\) in \(\mathcal{K}^{b}(A\text{-proj})\) is called a _tilting complex_ (see [14]) if \(\text{Hom}_{\mathcal{K}^{b}(A\text{-proj})}(X^{\bullet},X^{\bullet}[i])=0\) for \(i\neq 0\) and \(X^{\bullet}\) generates \(\mathcal{K}^{b}(A\text{-proj})\) as a triangulated category.
The description of derived equivalences by tilting complexes is given by the following theorem (see [7, 10, 14, 15]).
**Theorem 2.1**.: _Suppose that \(A\) and \(B\) are Artin algebras over a commutative Artin ring \(R\). Then the following are equivalent._
(1) \(A\) _and \(B\) are derived equivalent._
(2) _There exists a tilting complex \(T^{\bullet}\in\mathcal{K}^{b}(A\text{-proj})\) such that \(B\simeq\text{End}_{\mathcal{D}^{b}(A)}(T^{\bullet})^{op}\) as algebras._
(3) _There is a triangle equivalence from \(\mathcal{K}^{b}(A\text{-proj})\) to \(\mathcal{K}^{b}(B\text{-proj})\)._
An Artin algebra \(A\) is said to be _symmetric_ if \({}_{A}A_{A}\simeq DA\) as \(A\text{-}A\)-bimodules, where \(D\) is the usual duality of the Artin algebra \(A\); _weakly symmetric_ if the injective hull and projective cover of every simple \(A\)-module are isomorphic; _Frobenius_ if \({}_{A}A\simeq DA\) as \(A\)-modules; and _self-injective_ if \({}_{A}A\) is injective. A basic self-injective algebra is a Frobenius algebra. By a _basic_ algebra we mean an Artin algebra \(A\) such that \({}_{A}A\) is a direct sum of pairwise non-isomorphic indecomposable modules.
Let \(n\) be a positive integer. We denote by \(\Sigma_{n}\) the symmetric group of permutations on \(\{1,2,\cdots,n\}\). For an object \(X\) in an additive category, \(X^{\oplus n}\) stands for the direct sum of \(n\) copies of \(X\).
## 3. Grothendieck groups of triangulated categories
In this section we study basic properties of the Grothendieck groups of triangulated categories, and their behaviors under triangle equivalences. We start with the following definition in [6, 7].
Let \(\mathcal{C}\) be a triangulated category with the shift functor [1]. Assume further that \(\mathcal{C}\) is essentially small, that is, the isomorphism classes of objects of \(\mathcal{C}\) form a set. For \(X\in\mathcal{C}\), we denote by \([X]\) the isomorphism class containing \(X\). Let \(\widetilde{\mathcal{C}}\) be the set of the isomorphism classes \([X]\) of objects
\(X\) in \(\mathcal{C}\). Let \(\mathsf{F}(\mathcal{C})\) be the free abelian group generated by all elements of \(\widetilde{\mathcal{C}}\), and let \(\mathsf{F}_{0}(\mathcal{C})\) be the subgroup of \(\mathsf{F}(\mathcal{C})\) generated by \([X]-[Y]+[Z]\) for all triangles
\[X\longrightarrow Y\longrightarrow Z\longrightarrow X[1]\]
in \(\mathcal{C}\). The _Grothendieck group_\(K_{0}(\mathcal{C})\) of \(\mathcal{C}\) is defined to be the quotient group \(\mathsf{F}(\mathcal{C})/\mathsf{F}_{0}(\mathcal{C})\). We write \(\overline{[X]}\) for the coset of \([X]\) in \(K_{0}(\mathcal{C})\).
We denote by
\[d:\widetilde{\mathcal{C}}\longrightarrow K_{0}(\mathcal{C})\]
the composition of the canonical maps \(\widetilde{\mathcal{C}}\hookrightarrow\mathsf{F}(\mathcal{C})\twoheadrightarrow K _{0}(\mathcal{C})\). Then \(d([X])=\overline{[X]}\) for any object \(X\) in \(\mathcal{C}\).
For a triangle functor \(F:\mathcal{C}\to\mathcal{D}\) of essentially small triangulated categories \(\mathcal{C}\) and \(\mathcal{D}\), one has naturally a map \(\widetilde{F}:\widetilde{\mathcal{C}}\to\widetilde{\mathcal{D}}\) defined by \(\widetilde{F}([X])=[F(X)]\) for \([X]\) in \(\widetilde{\mathcal{C}}\). Since the images of two isomorphic objects in \(\mathcal{C}\) under \(F\) are still isomorphic in \(\mathcal{D}\), the map \(\widetilde{F}\) is well defined.
If \(A\) is a ring and \(\mathcal{C}=\mathcal{K}^{b}(A\text{-proj})\), we simply write \(K_{0}(A)\) for \(K_{0}(\mathcal{C})\).
**Lemma 3.1**.: _Let \(\mathcal{C}\) be an essentially small triangulated category._
(1) _For objects \(X\) and \(Y\) in \(\mathcal{C}\),_
\[d([X\oplus Y])=d([X])+d([Y])\ \ \text{and}\ \ d([X[i]])=(-1)^{i}d([X])\ \ \text{for}\ \ i\in\mathbb{Z}.\]
(2) _The map \(d\) is surjective._
_Proof._ For objects \(X\) and \(Y\) in \(\mathcal{C}\), there is a canonical triangle \(X\to X\oplus Y\to Y\stackrel{0}{\to}X[1]\). Thus \(d([X\oplus Y])=d([X])+d([Y])\). In particular, for the zero object \(0\) of \(\mathcal{C}\), there holds \(d([0])=0\) in \(K_{0}(\mathcal{C})\). The triangle \(X\to 0\to X[1]\stackrel{-\mathrm{id}_{X[1]}}{\to}X[1]\) shows \(d([X[1]])=-d([X])\). This implies \(d([X[i]])=(-1)^{i}d([X])\) for \(i\in\mathbb{Z}\).
Let \(\alpha\in K_{0}(\mathcal{C})\). Without loss of generality, we may assume that \(\alpha\) is the coset of an element
\[r_{1}[X_{1}]+\cdots+r_{m}[X_{m}]+r_{m+1}[X_{m+1}]+\cdots+r_{n}[X_{n}]\]
in \(\mathsf{F}(\mathcal{C})\), where all \(X_{j}\) are objects in \(\mathcal{C}\), \(r_{j}<0\) for \(1\leqslant j\leqslant m\), and \(r_{j}>0\) for \(m+1\leqslant j\leqslant n\). Then
\[d([X_{1}[1]^{\oplus-r_{1}}\oplus\cdots\oplus X_{m}[1]^{\oplus-r_{m}}\oplus X _{m+1}^{\oplus r_{m+1}}\oplus\cdots\oplus X_{n}^{\oplus r_{n}}])=\alpha.\]
So \(d\) is surjective. \(\Box\)
Now, assume that \(A\) is a semiperfect ring, that is, every finitely generated left (or right) \(A\)-module has a projective cover, or equivalently, \(A\) has a complete orthogonal set \(\{e_{1},\cdots,e_{n}\}\) of idempotents with each \(e_{i}Ae_{i}\) a local ring.
Let \(X^{\bullet}\in\mathcal{K}^{b}(A\text{-proj})\) be of the form
\[X^{\bullet}=\qquad\cdots\longrightarrow 0\longrightarrow X^{i}\stackrel{{ d_{X}^{i}}}{{ \longrightarrow}}X^{i+1}\stackrel{{ d_{X}^{i+1}}}{{\longrightarrow}} \cdots\longrightarrow X^{i+m}\longrightarrow 0\longrightarrow\cdots\]
Denote by \(\sigma_{\leqslant t}X^{\bullet}\) the brutal truncation of \(X^{\bullet}\) at degree \(t\), that is, \((\sigma_{\leqslant t}X^{\bullet})^{j}=X^{j}\) for \(j\leqslant t\) and \((\sigma_{\leqslant t}X^{\bullet})^{j}=0\) otherwise. Then there is a series of triangles:
\[\Delta_{j}:\quad\sigma_{\leqslant j-1}X^{\bullet}[-1]\stackrel{f_{j}^{\bullet}}{\longrightarrow}X^{j}[-j]\longrightarrow\sigma_{\leqslant j}X^{\bullet}\longrightarrow\sigma_{\leqslant j-1}X^{\bullet}\]
for \(i+1\leq j\leq i+m\), where \(\sigma_{\leqslant i+m}X^{\bullet}=X^{\bullet}\) and \(f_{j}^{\bullet}\) is defined by \(f_{j}^{j}=d_{X}^{j-1}\) and \(f_{j}^{s}=0\) for \(s\neq j\). By Lemma 3.1, \(d([X^{\bullet}])=\sum_{i\leqslant j\leqslant i+m}(-1)^{j}d([X^{j}])\).
Since \(A\) is a semiperfect ring, there are finitely many pairwise non-isomorphic, indecomposable projective \(A\)-modules. Let \(\{P_{1},\cdots,P_{n}\}\) be a complete set of pairwise non-isomorphic, indecomposable projective \(A\)-modules. For \(i\leq j\leq i+m\), we write \(X^{j}\simeq\bigoplus_{1\leq s\leq n}P_{s}^{\oplus t_{js}}\) with \(t_{js}\in\mathbb{N}\), and set \(\lambda_{s}:=\sum_{i\leq j\leq i+m}(-1)^{j}t_{js}\). Then \(d([X^{\bullet}])=\sum_{1\leq s\leq n}\lambda_{s}d([P_{s}])\). As \(d\) is surjective, we see that \(K_{0}(A)\) is an abelian group generated by these \(\overline{[P_{s}]}\), \(1\leq s\leq n\).
Next, we define \(\mathbf{dim}(X^{\bullet})=(\lambda_{1},\cdots,\lambda_{n})\in\mathbb{Z}^{n}\) for the complex \(X^{\bullet}\). If \(Y^{\bullet}\) is a complex in \(\mathcal{K}^{b}(A\text{-proj})\) such that \(Y^{\bullet}\simeq X^{\bullet}\) in \(\mathcal{K}^{b}(A\text{-proj})\), then \(\mathbf{dim}(Y^{\bullet})=\mathbf{dim}(X^{\bullet})\). Moreover, for any morphism \(f^{\bullet}:X^{\bullet}\to Z^{\bullet}\) in \(\mathcal{K}^{b}(A\text{-proj})\), there holds \(\mathbf{dim}(\text{cone}(f^{\bullet}))=\mathbf{dim}(Z^{\bullet})-\mathbf{ dim}(X^{\bullet})\), where \(\text{cone}(f^{\bullet})\) stands for the mapping cone of \(f^{\bullet}\). Hence we get a homomorphism of abelian groups:
\[\mathbf{dim}:K_{0}(A)\longrightarrow\mathbb{Z}^{n},\quad\overline{[X^{\bullet }]}\mapsto\mathbf{dim}(X^{\bullet}).\]
Since \(\mathbf{dim}(\overline{[P_{s}]})\) is the \(s\)-th standard basis vector of \(\mathbb{Z}^{n}\), the set \(\{\mathbf{dim}(\overline{[P_{1}]}),\mathbf{dim}(\overline{[P_{2}]}),\cdots,\mathbf{dim}(\overline{[P_{n}]})\}\) forms a basis of the free abelian group \(\mathbb{Z}^{n}\), and therefore \(K_{0}(A)\) is a free abelian group generated by \(\overline{[P_{1}]},\overline{[P_{2}]},\cdots,\overline{[P_{n}]}\).
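The following small computational sketch (our illustration, not part of the paper) makes the bookkeeping explicit: a bounded complex of projectives over a semiperfect ring is recorded by the multiplicities \(t_{js}\) of each \(P_{s}\) in each degree \(j\), and \(\mathbf{dim}(X^{\bullet})\) is the alternating sum of these multiplicity vectors. All names in the code are hypothetical.

```python
# Illustrative sketch: dim(X) as the alternating sum of multiplicity vectors.
# A complex is encoded as {degree j: [t_j1, ..., t_jn]}, where t_js is the
# multiplicity of the indecomposable projective P_s in the degree-j term X^j.

def dim_vector(complex_mults, n):
    """Return dim(X) = (lambda_1, ..., lambda_n), lambda_s = sum_j (-1)^j * t_js."""
    lam = [0] * n
    for j, mults in complex_mults.items():
        sign = -1 if j % 2 else 1
        for s in range(n):
            lam[s] += sign * mults[s]
    return tuple(lam)

# Example: 0 -> P_1 -> P_1 (+) P_2 -> 0, concentrated in degrees 0 and 1.
X = {0: [1, 0], 1: [1, 1]}
print(dim_vector(X, 2))   # (0, -1)

# The identity dim(cone(f)) = dim(Z) - dim(X) is immediate from this description,
# since the mapping cone of f: X -> Z has (cone f)^j = X^{j+1} (+) Z^j.
```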
Not all Grothendieck groups of (essentially small) triangulated categories are free. For example, if \(A\) is a finite-dimensional, self-injective algebra such that the Cartan matrix of \(A\) has an elementary divisor different from \(0\) and \(1\), then the Grothendieck group of the stable module category \(A\text{-mod}\) (as a triangulated category) is not free. For more details, see [17, Section 5.7.1].
**Proposition 3.2**.: _Suppose that \(A\) and \(B\) are semiperfect rings. If \(F:\mathcal{K}^{b}(A\text{-proj})\rightarrow\mathcal{K}^{b}(B\text{-proj})\) is a triangle equivalence, then \(F\) induces a group isomorphism \(\overline{F}:K_{0}(A)\to K_{0}(B)\) such that the following diagram of maps is commutative:_
\[(*)\qquad\begin{array}{ccc}\widetilde{\mathcal{K}^{b}(A\text{-proj})}&\stackrel{\widetilde{F}}{\longrightarrow}&\widetilde{\mathcal{K}^{b}(B\text{-proj})}\\ {\scriptstyle d}\big\downarrow&&\big\downarrow{\scriptstyle d}\\ K_{0}(A)&\stackrel{\overline{F}}{\longrightarrow}&K_{0}(B)\end{array}\]
_where \(\widetilde{\mathcal{K}^{b}(A\text{-proj})}\) stands for the set of the isomorphism classes of objects in \(\mathcal{K}^{b}(A\text{-proj})\)._
_Proof._ Let \(Y^{\bullet}_{s}:=F(P_{s})\) in \(\mathcal{K}^{b}(B\text{-proj})\) for \(1\leq s\leq n\). We define a group homomorphism
\[\overline{F}:K_{0}(A)\longrightarrow K_{0}(B),\quad\overline{[P_{s}]}\mapsto d ([Y^{\bullet}_{s}])\;\text{ for }1\leq s\leq n.\]
Let \(X^{\bullet}\) be a complex in \(\mathcal{K}^{b}(A\text{-proj})\) of the above form with \(X^{j}:=\bigoplus_{1\leq s\leq n}P_{s}^{\oplus t_{js}}\) and \(\lambda_{s}:=\sum_{j}(-1)^{j}t_{js}\). Then \(d([X^{\bullet}])=\sum_{1\leq s\leq n}\lambda_{s}d([P_{s}])\) and
\[\overline{F}d([X^{\bullet}])=\sum_{1\leq s\leq n}\lambda_{s}\overline{F}d([P_ {s}])=\sum_{1\leq s\leq n}\lambda_{s}\overline{F}(\overline{[P_{s}]})=\sum_{1 \leq s\leq n}\lambda_{s}d([Y^{\bullet}_{s}]).\]
Applying \(F\) to the triangle \(\Delta_{j}\), we have a triangle
\[F(\Delta_{j}):\quad F(\sigma_{\leq j-1}X^{\bullet})[-1]\overset{F(f_{j}^{\bullet})}{\longrightarrow}F(X^{j})[-j]\longrightarrow F(\sigma_{\leq j}X^{\bullet})\longrightarrow F(\sigma_{\leq j-1}X^{\bullet}),\;\;i+1\leq j\leq i+m.\]
Since \(F(\sigma_{\leq i+m}X^{\bullet})=F(X^{\bullet})\), it follows from Lemma 3.1 that
\[d([F(X^{\bullet})])=\sum_{i\leq j\leq i+m}(-1)^{j}d([F(X^{j})]).\]
Then
\[d\widetilde{F}([X^{\bullet}])=d([F(X^{\bullet})])=\sum_{1\leq s\leq n}\lambda_{s}d([F(P_{s})])=\sum_{1\leq s\leq n}\lambda_{s}d([Y^{\bullet}_{s}])=\overline{F}d([X^{\bullet}]).\]
Hence the above square (\(*\)) is commutative.
It remains to show that \(\overline{F}\) is bijective. In fact, we consider a quasi-inverse \(F^{-1}\) of \(F\). In this case, we have the group homomorphism \(\overline{F^{-1}}:K_{0}(B)\to K_{0}(A)\) induced by \(F^{-1}\). Then
\[\overline{F^{-1}}\,\overline{F}(\overline{[P_{s}]})=\overline{F^{-1}}(d([Y_{s}^{\bullet}]))=d\,\widetilde{F^{-1}}([Y_{s}^{\bullet}])=d([F^{-1}(Y_{s}^{\bullet})])=d([P_{s}])=\overline{[P_{s}]}\]
for \(1\leqslant s\leqslant n\). Hence \(\overline{F^{-1}}\,\overline{F}=\mathrm{id}_{K_{0}(A)}\). Similarly, \(\overline{F}\,\overline{F^{-1}}=\mathrm{id}_{K_{0}(B)}\). So \(\overline{F}\) is bijective.
## 4. Nakayama permutations of self-injective algebras
In this section we will prove that Nakayama permutations of derived equivalent, self-injective Artin algebras are conjugate.
Let \(A\) be an Artin algebra over a commutative Artin ring \(R\). The Nakayama functor \(\nu_{A}:A\text{-mod}\to A\text{-mod}\) is defined by \(\nu_{A}:=(DA)\otimes_{A}-\), where \(D\) is the usual duality of an Artin algebra. Clearly, \(\nu_{A}\) induces a left derived functor
\[\mathbf{L}\nu_{A}:\mathscr{D}^{b}(A)\longrightarrow\mathscr{D}^{b}(A),\]
which restricts to a triangle equivalence
\[\mathbf{L}\nu_{A}:\mathscr{K}^{b}(A\text{-proj})\longrightarrow\mathscr{K}^{ b}(A\text{-inj}),\]
where \(A\)-inj denotes the category of finitely generated injective \(A\)-modules.
Now, assume that \(A\) is a self-injective Artin algebra. Since \({}_{A}(DA)\) and \((DA)_{A}\) are projective generators, \(\nu_{A}\) is a self-equivalence on \(A\)-mod, and restricts to a self-equivalence on \(A\)-proj. Let \(\{P_{1},\cdots,P_{n}\}\) be a complete set of pairwise non-isomorphic, indecomposable projective \(A\)-modules. Then \(\nu_{A}\) induces a permutation on \(\{P_{1},\cdots,P_{n}\}\), called the Nakayama permutation of \(A\). Precisely, the _Nakayama permutation_\(\sigma_{A}\) is defined on \(\{1,\cdots,n\}\) by
\[\nu_{A}(P_{i})\simeq P_{\sigma_{A}(i)}\]
for \(i\in\{1,\cdots,n\}\). Clearly, up to conjugation, the Nakayama permutation \(\sigma_{A}\) of \(A\) is uniquely determined by \(\{P_{1},\cdots,P_{n}\}\).
Let \(B\) be another self-injective Artin algebra over \(R\), and let \(\{Q_{1},\cdots,Q_{m}\}\) be a complete set of pairwise non-isomorphic, indecomposable projective \(B\)-modules. Assume that \(A\) and \(B\) are derived equivalent. Then \(m=n\) and \(\sigma_{B}\) is again a permutation of \(\{1,\cdots,n\}\). Our first main result reveals a precise relation between \(\sigma_{A}\) and \(\sigma_{B}\).
**Theorem 4.1**.: _If \(A\) and \(B\) are derived equivalent, self-injective Artin algebras, then \(\sigma_{A}\) and \(\sigma_{B}\) are conjugate._
To prove Theorem 4.1, we first show a technical lemma on the left derived functors of Nakayama functors.
Let \(A\) be a self-injective Artin algebra. Then both \({}_{A}(DA)\) and \((DA)_{A}\) are projective. By definition, the left derived functor of the Nakayama functor \(\nu_{A}\) is given explicitly as follows:
\[\mathbf{L}\nu_{A}:\mathscr{K}^{b}(A\text{-proj})\longrightarrow\mathscr{K}^{ b}(A\text{-proj}),\ \ X^{\bullet}=(X^{i},d_{X}^{i})\mapsto\big{(}\nu_{A}(X^{i}),\nu_{A}(d_{X}^{i}) \big{)}.\]
As \(\nu_{A}\) is a self-equivalence of \(A\)-proj (or \(A\)-mod), we see that \(\mathbf{L}\nu_{A}\) is a triangle self-equivalence of \(\mathscr{K}^{b}(A\text{-proj})\). Clearly, \(\mathbf{L}\nu_{A}(P)\simeq\nu_{A}(P)\) for \(P\in A\)-proj.
**Lemma 4.2**.: _Suppose that \(A\) and \(B\) are Artin algebras. If \(F:\mathscr{D}^{b}(A)\rightarrow\mathscr{D}^{b}(B)\) is a triangle equivalence, then for any \(X^{\bullet}\) in \(\mathscr{K}^{b}(A\text{-proj})\), \(F\mathbf{L}\nu_{A}(X^{\bullet})\simeq\mathbf{L}\nu_{B}F(X^{\bullet})\) in \(\mathscr{D}^{b}(B)\) which is natural in \(X^{\bullet}\). In particular, if \(A\) and \(B\) are self-injective Artin algebras and \(F:\mathscr{K}^{b}(A\text{-proj})\rightarrow\mathscr{K}^{b}(B\text{-proj})\) is a triangle equivalence, then there is a natural isomorphism \(F\mathbf{L}\nu_{A}\simeq\mathbf{L}\nu_{B}F:\mathscr{K}^{b}(A\text{-proj}) \rightarrow\mathscr{K}^{b}(B\text{-proj})\)._
_Proof._ For \(X^{\bullet}\) in \(\mathcal{K}^{b}(A\)-proj) and \(Y^{\bullet}\) in \(\mathcal{D}^{b}(A)\), we may consider \(\operatorname{Hom}_{\mathcal{D}^{b}(A)}(X^{\bullet},Y^{\bullet})\) and
\(\operatorname{Hom}_{\mathcal{D}^{b}(A)}(Y^{\bullet},\mathbf{L}\nu_{A}(X^{ \bullet}))\) as the degree-zero homologies of the total complexes of the double complexes \(\operatorname{Hom}_{A}^{\bullet\bullet}(X^{\bullet},Y^{\bullet})\) and \(\operatorname{Hom}_{A}^{\bullet\bullet}(Y^{\bullet},\mathbf{L}\nu_{A}(X^{ \bullet}))\), respectively. It is well known that, for any \(X\) in \(A\)-proj and \(Y\) in \(A\)-mod, \(D\mathrm{Hom}_{A}(X,Y)\simeq\operatorname{Hom}_{A}(Y,\nu_{A}(X))\) which is natural in \(X\) and \(Y\). Thus \(D\mathrm{Hom}_{A}^{\bullet\bullet}(X^{\bullet},Y^{\bullet})\simeq\operatorname {Hom}_{A}^{\bullet\bullet}(Y^{\bullet},\mathbf{L}\nu_{A}(X^{\bullet}))\) naturally as double complexes for \(X^{\bullet}\in\mathcal{K}^{b}(A\)-proj) and \(Y^{\bullet}\in\mathcal{D}^{b}(A)\). Taking homology in degree zero, we obtain
\[(1)\quad D\mathrm{Hom}_{\mathcal{D}^{b}(A)}(X^{\bullet},Y^{\bullet})\simeq \operatorname{Hom}_{\mathcal{D}^{b}(A)}(Y^{\bullet},\mathbf{L}\nu_{A}(X^{ \bullet}))\]
which is natural in \(X^{\bullet}\in\mathcal{K}^{b}(A\)-proj) and \(Y^{\bullet}\in\mathcal{D}^{b}(A)\). On the other hand, as \(F\) is an equivalence, there are natural isomorphisms:
\[(2)\quad D\mathrm{Hom}_{\mathcal{D}^{b}(A)}(X^{\bullet},Y^{\bullet})\simeq D \mathrm{Hom}_{\mathcal{D}^{b}(B)}(F(X^{\bullet}),F(Y^{\bullet}))\]
and
\[(3)\quad\operatorname{Hom}_{\mathcal{D}^{b}(A)}(Y^{\bullet},\mathbf{L}\nu_{A} (X^{\bullet}))\simeq\operatorname{Hom}_{\mathcal{D}^{b}(B)}(F(Y^{\bullet}),F( \mathbf{L}\nu_{A}(X^{\bullet}))).\]
Using the \(B\)-module version of \((1)\), we have the natural isomorphism:
\[(4)\quad D\mathrm{Hom}_{\mathcal{D}^{b}(B)}(F(X^{\bullet}),F(Y^{\bullet})) \simeq\operatorname{Hom}_{\mathcal{D}^{b}(B)}(F(Y^{\bullet}),\mathbf{L}\nu_{B }(F(X^{\bullet}))).\]
Thus it follows from \((3),(1),(2)\) and \((4)\) that
\[\operatorname{Hom}_{\mathcal{D}^{b}(B)}(F(Y^{\bullet}),F(\mathbf{L}\nu_{A}(X^ {\bullet})))\simeq\operatorname{Hom}_{\mathcal{D}^{b}(B)}(F(Y^{\bullet}), \mathbf{L}\nu_{B}(F(X^{\bullet})))\]
which is natural in \(X^{\bullet}\in\mathcal{K}^{b}(A\)-proj) and \(Y^{\bullet}\in\mathcal{D}^{b}(A)\). As \(F\) is an equivalence, we obtain \(F\mathbf{L}\nu_{A}(X^{\bullet})\simeq\mathbf{L}\nu_{B}F(X^{\bullet})\) in \(\mathcal{D}^{b}(B)\) which is natural in \(X^{\bullet}\in\mathcal{K}^{b}(A\)-proj). When \(A\) and \(B\) are self-injective and when \(F\) restricts to a triangle equivalence \(\mathcal{K}^{b}(A\)-proj) \(\to\mathcal{K}^{b}(B\)-proj), we have the last statement of Lemma 4.2.
**Remark 4.3**.: Lemma 4.2 can be applied to generalize [15, Corollary 5.3] for finite-dimensional algebras over a field to the one for Artin algebras, namely an Artin algebra \(B\) derived equivalent to a symmetric Artin algebra \(A\) is itself symmetric. Indeed, let \(F:\mathcal{D}^{b}(A)\to\mathcal{D}^{b}(B)\) be a triangle equivalence. Since \(A\) is symmetric, \(DA\simeq A\) as \(A\)-\(A\)-bimodules, and therefore \(\mathbf{L}\nu_{A}\simeq\operatorname{id}\) naturally on \(\mathcal{D}^{b}(A)\). By Lemma 4.2, \(\mathbf{L}\nu_{B}F(X^{\bullet})\simeq F\mathbf{L}\nu_{A}(X^{\bullet})\simeq F (X^{\bullet})\) in \(\mathcal{D}^{b}(B)\) naturally for \(X^{\bullet}\) in \(\mathcal{K}^{b}(A\)-proj). As \(F\) is an equivalence, \(\mathbf{L}\nu_{B}\simeq\operatorname{id}\) naturally on \(\mathcal{K}^{b}(B\)-proj). Hence \(DB\simeq B\) as \(B\)-modules. If we apply the natural isomorphism \(\mathbf{L}\nu_{B}\simeq\operatorname{id}\) to morphisms \(B\to B\) in \(\mathcal{K}^{b}(B\)-proj) given by right multiplication of elements in \(B\), then the isomorphism \(DB\simeq B\) is actually an isomorphism of \(B\)-\(B\)-bimodules, and therefore \(B\) is a symmetric algebra.
Given \(\sigma\in\Sigma_{n}\), we may write \(\sigma=\sigma_{1}\sigma_{2}\cdots\sigma_{s}\) with \(\sigma_{i}\) a cyclic permutation of length \(\lambda_{i}\geq 1\), such that the contents of these \(\sigma_{i}\) are pairwise disjoint. In this case, we may assume that \(\lambda_{1}\geqslant\lambda_{2}\geqslant\cdots\geqslant\lambda_{s}\). Then \(\lambda:=(\lambda_{1},\cdots,\lambda_{s})\) is a partition of \(n\), called the _cycle type_ of \(\sigma\). It is well known that two permutations in \(\Sigma_{n}\) are conjugate if and only if they have the same cycle type.
For a unitary ring \(A\), we denote by \(M_{n}(A)\) the full \(n\times n\) matrix ring over \(A\). If \(\sigma\in\Sigma_{n}\), then _the permutation matrix_\(c_{\sigma}\) of \(\sigma\) over \(\mathbb{C}\) is the \(n\times n\) matrix with \(1\) in the \((i,\sigma(i))\)-entry for \(1\leq i\leq n\) and with \(0\) for all other entries.
The following result seems to be known. For the convenience of the reader, we provide a proof.
**Lemma 4.4**.: _Let \(\sigma_{1}\) and \(\sigma_{2}\) be permutations in \(\Sigma_{n}\). Then \(\sigma_{1}\) and \(\sigma_{2}\) are conjugate in \(\Sigma_{n}\) if and only if \(c_{\sigma_{1}}\) and \(c_{\sigma_{2}}\) are similar in \(M_{n}(\mathbb{C})\)._
_Proof._ Clearly, \(c_{\sigma_{1}}c_{\sigma_{2}}=c_{\sigma_{1}\sigma_{2}}\) and \(c_{\sigma}^{-1}=c_{\sigma^{-1}}\) in \(M_{n}(\mathbb{C})\). Thus, if \(\sigma_{1}\) and \(\sigma_{2}\) are conjugate in \(\Sigma_{n}\), then \(c_{\sigma_{1}}\) and \(c_{\sigma_{2}}\) are similar in \(M_{n}(\mathbb{C})\). Here, \(\mathbb{C}\) can be replaced by any field.
Conversely, suppose that \(c_{\sigma_{1}}\) and \(c_{\sigma_{2}}\) are similar in \(M_{n}(\mathbb{C})\). Let \(\lambda=(\lambda_{1},\cdots,\lambda_{u})\) and \(\mu=(\mu_{1},\cdots,\mu_{v})\) be the cycle types of \(\sigma_{1}\) and \(\sigma_{2}\), respectively. Since the similarity of matrices and conjugation of permutations are equivalence relations and since conjugate permutations have similar permutation matrices, we may assume that
\[\sigma_{1}=(1\;2\;\cdots\;\lambda_{1})(\lambda_{1}+1\;\lambda_{1}+2\;\cdots\; \lambda_{1}+\lambda_{2})\cdots(\sum_{1\leqslant i\leqslant u-1}\lambda_{i}+ 1\;\sum_{1\leqslant i\leqslant u-1}\lambda_{i}+2\;\cdots\;n),\]
\[\sigma_{2}=(1\;2\;\cdots\;\mu_{1})(\mu_{1}+1\;\mu_{1}+2\;\cdots\;\mu_{1}+\mu_{ 2})\cdots(\sum_{1\leqslant i\leqslant v-1}\mu_{i}+1\;\sum_{1\leqslant i \leqslant v-1}\mu_{i}+2\;\cdots\;n)\]
where the \(i\)-tuple \((a_{1}\;\cdots\;a_{i})\) means the cyclic permutation on the set \(\{a_{1},\cdots,a_{i}\}\). By computations, the characteristic polynomials of \(c_{\sigma_{1}}\) and \(c_{\sigma_{2}}\) are
\[\Phi_{1}(x)=(x^{\lambda_{1}}-1)(x^{\lambda_{2}}-1)\cdots(x^{\lambda_{u}}-1)\in \mathbb{C}[x]\;\;\mbox{and}\;\;\Phi_{2}(x)=(x^{\mu_{1}}-1)(x^{\mu_{2}}-1) \cdots(x^{\mu_{v}}-1)\in\mathbb{C}[x],\]
respectively. Since \(c_{\sigma_{1}}\) and \(c_{\sigma_{2}}\) are similar in \(M_{n}(\mathbb{C})\), we have \(\Phi_{1}(x)=\Phi_{2}(x)\), that is, they have the same eigenvalues with the same multiplicities. We show \(\lambda_{1}=\mu_{1}\). This follows from the 3 facts:
(i) All \(\lambda_{1}\)-th roots of unity are eigenvalues of \(c_{\sigma_{1}}\), while all \(\mu_{1}\)-th roots of unity are eigenvalues of \(c_{\sigma_{2}}\).
(ii) There exists a \(q\)-th root of unity different from any \(w\)-th root of unity if \(q>w\), and
(iii) \(\lambda_{1}\) and \(\mu_{1}\) are maximal in \(\lambda\) and \(\mu\), respectively.
By repeating this process, we finally get \(u=v\) and \(\lambda_{i}=\mu_{i}\) for \(1\leq i\leq u\). Hence \(\sigma_{1}\) and \(\sigma_{2}\) have the same cycle type, and therefore are conjugate in \(\Sigma_{n}\). \(\Box\)
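Lemma 4.4 can also be checked numerically; the sketch below (our illustration, using the numpy library, not part of the paper) compares cycle types and verifies that the corresponding permutation matrices have equal characteristic polynomials.

```python
import numpy as np

def perm_matrix(sigma):
    """Permutation matrix with 1 in the (i, sigma(i)) entry (0-indexed)."""
    n = len(sigma)
    c = np.zeros((n, n))
    for i in range(n):
        c[i, sigma[i]] = 1.0
    return c

def cycle_type(sigma):
    """Cycle type of sigma as a descending partition of n."""
    seen, lengths = set(), []
    for i in range(len(sigma)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = sigma[j]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

# sigma1 = (0 1 2)(3 4) and sigma2 = (1 3 4)(0 2) both have cycle type (3, 2),
# hence they are conjugate and their permutation matrices are similar.
sigma1 = [1, 2, 0, 4, 3]
sigma2 = [2, 3, 0, 4, 1]
print(cycle_type(sigma1) == cycle_type(sigma2))   # True
p1 = np.poly(perm_matrix(sigma1))   # coefficients of the characteristic polynomial
p2 = np.poly(perm_matrix(sigma2))
print(np.allclose(p1, p2))          # True: both equal (x^3 - 1)(x^2 - 1)
```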
**Proof of Theorem 4.1:** The functors \(\mathbf{L}\nu_{A}\) and \(\mathbf{L}\nu_{B}\) are triangle self-equivalences of \(\mathcal{K}^{b}(A\mbox{-proj})\) and \(\mathcal{K}^{b}(B\mbox{-proj})\), respectively. Since \(A\) and \(B\) are derived equivalent, there is a triangle equivalence \(F:\mathcal{K}^{b}(A\mbox{-proj})\to\mathcal{K}^{b}(B\mbox{-proj})\) by Theorem 2.1. These functors fit into a cube-shaped diagram: its top face consists of the induced maps \(\widetilde{F}\), \(\widetilde{\mathbf{L}\nu_{A}}\) and \(\widetilde{\mathbf{L}\nu_{B}}\) between the sets \(\widetilde{\mathcal{K}^{b}(A\mbox{-proj})}\) and \(\widetilde{\mathcal{K}^{b}(B\mbox{-proj})}\) of isomorphism classes, its bottom face consists of the induced homomorphisms \(\overline{F}\), \(\overline{\mathbf{L}\nu_{A}}\) and \(\overline{\mathbf{L}\nu_{B}}\) between the Grothendieck groups \(K_{0}(A)\) and \(K_{0}(B)\), and its vertical edges are the canonical maps \(d\).
The vertical squares in this diagram are commutative by Proposition 3.2, and the top square is commutative by Lemma 4.2. We shall show that the bottom square of homomorphisms of abelian groups is also commutative, that is, \(\overline{F}\,\overline{\mathbf{L}\nu_{A}}=\overline{\mathbf{L}\nu_{B}}\,\overline{F}\).
Indeed, take \(\alpha\in K_{0}(A)\). By Lemma 3.1(2), there is a complex \(X^{\bullet}\) in \(\mathcal{K}^{b}(A\mbox{-proj})\) such that \(d([X^{\bullet}])=\alpha\). Then
\[\begin{array}{rl}\overline{F}\,\overline{\mathbf{L}\nu_{A}}(\alpha)&=\overline{F}\,\overline{\mathbf{L}\nu_{A}}(d([X^{\bullet}]))=\overline{F}\,d\,\widetilde{\mathbf{L}\nu_{A}}([X^{\bullet}])=d\,\widetilde{F}\,\widetilde{\mathbf{L}\nu_{A}}([X^{\bullet}])\\ &=d\,\widetilde{\mathbf{L}\nu_{B}}\,\widetilde{F}([X^{\bullet}])=\overline{\mathbf{L}\nu_{B}}\,d\,\widetilde{F}([X^{\bullet}])=\overline{\mathbf{L}\nu_{B}}\,\overline{F}\,d([X^{\bullet}])=\overline{\mathbf{L}\nu_{B}}\,\overline{F}(\alpha).\end{array}\]
Hence the bottom square of the diagram is commutative.
Now, consider the Nakayama permutations \(\sigma_{A}\) and \(\sigma_{B}\) as elements in \(\Sigma_{n}\), which are defined by \(\nu_{A}(P_{i})\simeq P_{\sigma_{A}(i)}\) and \(\nu_{B}(Q_{i})\simeq Q_{\sigma_{B}(i)}\) for \(1\leq i\leq n\). Let \(c_{\sigma_{A}}\) and \(c_{\sigma_{B}}\) be the permutation matrices
of \(\sigma_{A}\) and \(\sigma_{B}\), respectively. The Grothendieck groups \(K_{0}(A)\) and \(K_{0}(B)\) are free abelian groups generated by these \(\overline{[P_{i}]}\) and these \(\overline{[Q_{i}]}\), respectively. Moreover, by Proposition 3.2,
\[\overline{\mathbf{L}\nu_{A}}(\overline{[P_{i}]})=\overline{\mathbf{L}\nu_{A}}( d([P_{i}]))=d\widetilde{\mathbf{L}\nu_{A}}([P_{i}])=d([\nu_{A}(P_{i})])=d([P_{ \sigma_{A}(i)}])=\overline{[P_{\sigma_{A}(i)}]}.\]
Hence, with respect to the basis \(\{\overline{[P_{1}]},\cdots,\overline{[P_{n}]}\}\), the group homomorphism \(\overline{\mathbf{L}\nu_{A}}\) has the corresponding matrix \(c_{\sigma_{A}}\). Similarly, with respect to the basis \(\{\overline{[Q_{1}]},\cdots,\overline{[Q_{n}]}\}\), the group homomorphism \(\overline{\mathbf{L}\nu_{B}}\) has the corresponding matrix \(c_{\sigma_{B}}\). Since \(\overline{F}\) is a group isomorphism by Proposition 3.2, it corresponds to an invertible matrix \(c\in M_{n}(\mathbb{C})\) with respect to the basis \(\{\overline{[P_{1}]},\cdots,\overline{[P_{n}]}\}\) of \(K_{0}(A)\) and the basis \(\{\overline{[Q_{1}]},\cdots,\overline{[Q_{n}]}\}\) of \(K_{0}(B)\). Due to \(\overline{F}\,\overline{\mathbf{L}\nu_{A}}=\overline{\mathbf{L}\nu_{B}}\,\overline{F}\), there holds \(cc_{\sigma_{A}}=c_{\sigma_{B}}c\). This means that \(c_{\sigma_{A}}\) and \(c_{\sigma_{B}}\) are similar in \(M_{n}(\mathbb{C})\). By Lemma 4.4, \(\sigma_{A}\) and \(\sigma_{B}\) are conjugate in \(\Sigma_{n}\). \(\Box\)
## 5. Self-injective and Weakly symmetric algebras over a field are closed under derived equivalences
Al-Nofayee and Rickard [2] proved that if \(A\) and \(B\) are derived equivalent, finite-dimensional algebras over an algebraically closed field and if \(A\) is self-injective, then \(B\) is self-injective. This result then seems to have been extended to finite-dimensional algebras over an _arbitrary_ field by Rickard and Rouquier in [16, Corollary 3.12], but we have difficulty understanding an argument in the proof there; see the words just above [16, Corollary 3.12]: "Assume now \(H^{<0}(B)=0\). Then, viewed as an object of \(D^{b}(B)\), \(\nu(P_{B}(S))\) is concentrated in degree \(0\)".
In this section we give a different, but very elementary approach to Rickard-Rouquier's result, and we show that a finite-dimensional algebra over an arbitrary field derived equivalent to a weakly symmetric algebra is itself weakly symmetric. This is known for weakly symmetric algebras over an algebraically closed field in [3, Proposition 3.1].
An Artin ring \(R\) is called a _Frobenius ring_ if \({}_{R}R\) is injective and the socle of \({}_{R}R\) is isomorphic to the top of \({}_{R}R\).
**Lemma 5.1**.: _Let \(\Lambda\) be an Artin algebra over a Frobenius and commutative Artin ring \(R\), and let \(E\) be a commutative \(R\)-algebra such that \({}_{R}E\) is a free \(R\)-module and \({}_{E}E\) is an injective \(E\)-module. Assume that \(M\in\Lambda\)-\(\mathrm{mod}\) is a projective \(R\)-module. Then \({}_{\Lambda}M\) is injective if and only if so is the \(\Lambda\otimes_{R}E\)-module \(M\otimes_{R}E\)._
_Proof._ Let \(-^{*}=\mathrm{Hom}_{R}(-,R):\Lambda\)-\(\mathrm{mod}\to\Lambda^{op}\)-\(\mathrm{mod}\). Since \(R\) is a Frobenius ring, \(-^{*}\) is a duality by [4, Theorem 3.3]. Since \(M\) is a finitely generated projective \(R\)-module, so is \(M^{*}\), and therefore \(\mathrm{Hom}_{R}(M^{*},R)\otimes_{R}X\simeq\mathrm{Hom}_{R}(M^{*},X)\) as \(\Lambda\)-\(\Gamma\)-bimodules for any \(R\)-\(\Gamma\)-bimodule \(X\) with \(\Gamma\) a ring. Hence there are natural isomorphisms of functors:
\[\begin{array}{rl}\mathrm{Hom}_{\Lambda\otimes_{R}E}(-,M\otimes_{R}E)&\simeq\mathrm{Hom}_{\Lambda\otimes_{R}E}(-,(M^{**})\otimes_{R}E)\\ &\simeq\mathrm{Hom}_{\Lambda\otimes_{R}E}(-,\mathrm{Hom}_{R}(M^{*},E))\\ &\simeq\mathrm{Hom}_{E}(M^{*}\otimes_{\Lambda}-,E)\qquad\text{(by the adjoint isomorphism)}\\ &=\mathrm{Hom}_{E}(-,E)\circ(M^{*}\otimes_{\Lambda}-)\end{array}\]
Thus if \({}_{\Lambda}M\) is injective, then it follows from the duality \(-^{*}\) that \(M^{*}\) is a projective right \(\Lambda\)-module, and therefore \(\mathrm{Hom}_{\Lambda\otimes_{R}E}(-,M\otimes_{R}E)\) is a composition of two exact functors. Thus \(\mathrm{Hom}_{\Lambda\otimes_{R}E}(-,M\otimes_{R}E)\) is itself an exact functor and \(M\otimes_{R}E\) is an injective \(\Lambda\otimes_{R}E\)-module.
Conversely, suppose that \({}_{\Lambda\otimes_{R}E}(M\otimes_{R}E)\) is injective. By [5, Corollary IX.2.4a], the \(\Lambda\)-module \(M\otimes_{R}E\) is injective. Assume that \(\{x_{i}\mid i\in I\}\) is an \(R\)-basis of \(E\) for some indexing set \(I\). We take a
fixed element \(0\in I\). Then \(E=x_{0}R\oplus\bigoplus_{i\in I\setminus 0}x_{i}R\) and
\[{}_{\Lambda}M\otimes_{R}E\simeq M\otimes_{R}\big{(}\bigoplus_{i\in I}x_{i}R \big{)}\simeq M\otimes_{R}(x_{0}R)\oplus M\otimes_{R}\big{(}\bigoplus_{i\in I \setminus 0}x_{i}R\big{)}\simeq M\oplus M\otimes_{R}\big{(}\bigoplus_{i\in I \setminus 0}x_{i}R\big{)}\]
as \(\Lambda\)-modules. Hence \({}_{\Lambda}M\) is injective. \(\Box\)
The following is an immediate consequence of Lemma 5.1.
**Corollary 5.2**.: _Let \(\Lambda\) be a finite-dimensional algebra over a field \(k\), and let \(E/k\) be an extension of fields. Then \(\Lambda\) is self-injective if and only if so is the tensor product \(\Lambda\otimes_{k}E\) of the \(k\)-algebras \(\Lambda\) and \(E\)._
The following result is observed by Rickard and Rouquier in [16].
**Corollary 5.3**.: _[_16_, Corollary 3.12]_ _Suppose that \(A\) and \(B\) are finite-dimensional algebras over an arbitrary field such that they are derived equivalent. If \(A\) is self-injective, then so is \(B\)._
_Proof._ Assume that \(A\) and \(B\) are finite-dimensional algebras over a field \(k\). Let \(\overline{k}\) be an algebraic closure of \(k\). Since \(A\) and \(B\) are derived equivalent, \(A\otimes_{k}\overline{k}\) and \(B\otimes_{k}\overline{k}\) are derived equivalent by [15, Theorem 2.1]. Suppose that \(A\) is self-injective. By Corollary 5.2, \(A\otimes_{k}\overline{k}\) is self-injective. It is easy to see that \(A\otimes_{k}\overline{k}\) and \(B\otimes_{k}\overline{k}\) are finite-dimensional algebras over \(\overline{k}\) because \(\dim_{\overline{k}}(A\otimes_{k}\overline{k})=\dim_{k}(A)\). Now, the \(\overline{k}\)-algebra \(B\otimes_{k}\overline{k}\) is self-injective by [2, Theorem 2.1], which states that finite-dimensional self-injective algebras over an algebraically closed field are preserved under derived equivalences. It then follows from Corollary 5.2 that \(B\) is a self-injective algebra. \(\Box\)
Derived equivalences preserve finite-dimensional symmetric algebras over an arbitrary field [15, Corollary 5.3]. We point out that this is true also for weakly symmetric algebras over an arbitrary field. For weakly symmetric algebras over an algebraically closed field, this was proved in [3].
**Corollary 5.4**.: _Suppose that \(A\) and \(B\) are finite-dimensional algebras over an arbitrary field such that they are derived equivalent. If \(A\) is weakly symmetric, then so is \(B\)._
_Proof._ A finite-dimensional, self-injective algebra \(\Lambda\) is weakly symmetric if and only if the Nakayama permutation of \(\Lambda\) is the identity map. This follows from the definition of the Nakayama functor \(\nu_{\Lambda}\).
Suppose that \(A\) and \(B\) are derived equivalent and that \(A\) is weakly symmetric. Then \(A\) is self-injective, and by Corollary 5.3, \(B\) is also self-injective. Since \(A\) is weakly symmetric, its Nakayama permutation is the identity map, so the Nakayama permutation of \(B\) is also the identity map by Theorem 4.1. Hence \(B\) is weakly symmetric. \(\Box\)
**Corollary 5.5**.: _Suppose that \(A,B,\Lambda\) and \(\Gamma\) are finite-dimensional algebras over a field \(k\). Assume that \(A\) and \(\Lambda\) are derived equivalent and that \(B\) and \(\Gamma\) are derived equivalent. If both \(A\) and \(B\) are weakly symmetric (or self-injective), then the tensor product \(\Lambda\otimes_{k}\Gamma\) is also weakly symmetric (or self-injective)._
_Proof._ If \(A\) and \(B\) are self-injective, then so is the tensor product \(A\otimes_{k}B\). Since \(\nu_{A\otimes_{k}B}(P\otimes_{k}Q)\simeq\nu_{A}(P)\otimes_{k}\nu_{B}(Q)\) as \(A\otimes_{k}B\)-modules for any \(P\) in \(A\)-proj and \(Q\) in \(B\)-proj, it follows from \(A\) and \(B\) being weakly symmetric that \(A\otimes_{k}B\) is weakly symmetric. As derived equivalences are preserved under taking tensor products (see [15]), we see that \(A\otimes_{k}B\) and \(\Lambda\otimes_{k}\Gamma\) are derived equivalent. Now, Corollary 5.5 follows from Corollary 5.3 and Corollary 5.4. \(\Box\)
Let \(\mathcal{C}\) be an additive category, \(\mathcal{D}\) a full subcategory of \(\mathcal{C}\), and \(X\) an object in \(\mathcal{C}\). A morphism \(f:D\to X\) in \(\mathcal{C}\) is called a right \(\mathcal{D}\)-approximation of \(X\) if \(D\in\mathcal{D}\) and the induced map \(\operatorname{Hom}_{\mathcal{C}}(D^{\prime},f):\operatorname{Hom}_{\mathcal{C }}(D^{\prime},D)\to\operatorname{Hom}_{\mathcal{C}}(D^{\prime},X)\) is surjective for every object \(D^{\prime}\in\mathcal{D}\). Dually, there is defined the left \(\mathcal{D}\)-approximation of \(X\).
A sequence
\[X\stackrel{{ f}}{{\longrightarrow}}M\stackrel{{ g}}{{ \longrightarrow}}Y\]
in \(\mathcal{C}\) is called a \(\mathcal{D}\)_-split sequence_ if \(M\in\mathcal{D}\), \(f\) is both a kernel of \(g\) and a left \(\mathcal{D}\)-approximation of \(X\), and \(g\) is both a cokernel of \(f\) and a right \(\mathcal{D}\)-approximation of \(Y\).
For an object \(M\) in \(\mathcal{C}\), \(\operatorname{add}(M)\) stands for the full subcategory of \(\mathcal{C}\) consisting of all objects isomorphic to direct summands of direct sums of finitely many copies of \(M\).
As a consequence of Corollary 5.3 and Corollary 5.4 together with [9, Theorem 3.5] and [15, Corollary 5.3], we get the following.
**Corollary 5.6**.: _Let \(M\) be an object of an additive \(k\)-category \(\mathcal{C}\) with \(k\) a field. If \(X\to M^{\prime}\to Y\) is an \(\operatorname{add}(M)\)-split sequence in \(\mathcal{C}\), then \(\operatorname{End}_{\mathcal{C}}(X\oplus M)\) is a self-injective (symmetric, weakly symmetric) algebra if and only if so is \(\operatorname{End}_{\mathcal{C}}(Y\oplus M)\)._
Let \(A\) be an Artin algebra. A complex \(T^{\bullet}\) in \(\mathcal{K}^{b}(A\text{-proj})\) is called a _basic complex_ if it is a direct sum of pairwise non-isomorphic, indecomposable complexes in \(\mathcal{K}^{b}(A\text{-proj})\). A complex \(X^{\bullet}\) in \(\mathcal{K}^{b}(A\text{-proj})\) is said to be _radical_ if all differentials of \(X^{\bullet}\) are radical homomorphisms.
**Corollary 5.7**.: _If \(A\) is a finite-dimensional, self-injective algebra over a field, then, for any basic tilting complex \(X^{\bullet}\), \(\mathbf{L}\nu_{A}(X^{\bullet})\simeq X^{\bullet}\) in \(\mathcal{K}^{b}(A\text{-proj})\)._
_Proof._ Let \(X^{\bullet}\) be a basic tilting complex and \(B:=\operatorname{End}_{\mathcal{K}^{b}(A\text{-proj})}(X^{\bullet})^{op}\). Then \(B\) is a basic self-injective algebra by Corollary 5.3, and therefore \(B\) is a Frobenius algebra. By definition, \({}_{B}(DB)\simeq{}_{B}B\) as \(B\)-modules. Moreover, there is a triangle equivalence \(F:\mathcal{K}^{b}(B\text{-proj})\to\mathcal{K}^{b}(A\text{-proj})\) such that \(F({}_{B}B)=X^{\bullet}\) (see [14]). Then it follows from Lemma 4.2 that
\[\mathbf{L}\nu_{A}(X^{\bullet})=\mathbf{L}\nu_{A}(F({}_{B}B))\simeq F\mathbf{L} \nu_{B}({}_{B}B)=F({}_{B}(DB))\simeq F({}_{B}B)=X^{\bullet}\]
in \(\mathcal{K}^{b}(A\text{-proj})\). \(\Box\)
**Corollary 5.8**.: _Suppose that \(A\) and \(B\) are finite-dimensional algebras over a field such that they are derived equivalent. If \(A\) is self-injective and its Nakayama permutation \(\sigma_{A}\) is transitive (that is, \(\sigma_{A}\) has only one orbit), then \(A\) and \(B\) are Morita equivalent._
_Proof._ Without loss of generality, we assume that \(B\) is basic. Then there is a basic tilting complex \(X^{\bullet}\) in \(\mathcal{K}^{b}(A\text{-proj})\) such that \(B\simeq\operatorname{End}_{\mathcal{K}^{b}(A\text{-proj})}(X^{\bullet})^{op}\) by Theorem 2.1. Further, by [8, (a), p.112], we may assume that the complex \(X^{\bullet}\) is radical. Now it suffices to prove that \(X^{\bullet}\) is concentrated in a single degree because this will imply that \(X^{\bullet}\) is a projective generator, and therefore \(A\) and \(B\) are Morita equivalent.
Indeed, assume that \(X^{\bullet}\) is of the form (up to shift)
\[X^{\bullet}=\qquad\cdots\longrightarrow 0\longrightarrow X^{0}\stackrel{{ d^{0}_{X}}}{{ \longrightarrow}}X^{1}\stackrel{{ d^{1}_{X}}}{{\longrightarrow}} \cdots\longrightarrow X^{m}\longrightarrow 0\longrightarrow\cdots\]
with \(X^{0}\neq 0\neq X^{m}\). Suppose \(m\neq 0\). Since the Nakayama permutation of \(A\) is transitive, there is a number \(n\) such that each indecomposable projective \(A\)-module is isomorphic to a direct summand of the terms of \(\bigoplus_{1\leq s\leq n}\mathbf{L}\nu^{s}_{A}(X^{\bullet})\) in degrees \(0\) and \(m\). By Corollary 5.7,
\(\bigoplus_{1\leqslant s\leqslant n}\mathbf{L}\nu_{A}^{s}(X^{\bullet})\simeq(X^{\bullet})^{\oplus n}\) in \(\mathcal{K}^{b}(A\text{-proj})\). Since both \(\bigoplus_{1\leqslant s\leqslant n}\mathbf{L}\nu_{A}^{s}(X^{\bullet})\) and \((X^{\bullet})^{\oplus n}\) are radical complexes, it follows from [8, (b), p.113] that \(\bigoplus_{1\leqslant s\leqslant n}\mathbf{L}\nu_{A}^{s}(X^{\bullet})\simeq(X^{\bullet})^{\oplus n}\) as complexes. Then each indecomposable projective \(A\)-module is isomorphic to a direct summand of \((X^{0})^{\oplus n}\) and \((X^{m})^{\oplus n}\). Thus \(\text{Hom}_{\mathcal{K}^{b}(A\text{-proj})}((X^{\bullet})^{\oplus n},(X^{\bullet})^{\oplus n}[m])\neq 0\). This contradicts the fact that \((X^{\bullet})^{\oplus n}\) is a tilting complex. Hence \(m=0\) and \(X^{\bullet}\) has only one nonzero term. \(\Box\)
**Acknowledgements**: The research work was supported partially by the National Natural Science Foundation of China (Grant 12031014 and 12226314). The authors thank Professor Wei Hu from Beijing Normal University for comments on the primary version of the manuscript, and Yiping Chen from Wuhan University for pointing out the reference [16].
|
2308.11133 | Learning the solution operator of a nonlinear parabolic equation using
physics informed deep operator network | This study focuses on addressing the challenges of solving analytically
intractable differential equations that arise in scientific and engineering
fields such as Hamilton-Jacobi-Bellman. Traditional numerical methods and
neural network approaches for solving such equations often require independent
simulation or retraining when the underlying parameters change. To overcome
this, this study employs a physics-informed DeepONet (PI-DeepONet) to
approximate the solution operator of a nonlinear parabolic equation.
PI-DeepONet integrates known physics into a deep neural network, which learns
the solution of the PDE. | Daniel Sevcovic, Cyril Izuchukwu Udeani | 2023-08-22T02:27:39Z | http://arxiv.org/abs/2308.11133v1 | Learning the solution operator of a nonlinear parabolic equation using physics informed deep operator network
###### Abstract
This study focuses on addressing the challenges of solving analytically intractable differential equations that arise in scientific and engineering fields such as Hamilton-Jacobi-Bellman. Traditional numerical methods and neural network approaches for solving such equations often require independent simulation or retraining when the underlying parameters change. To overcome this, this study employs a physics-informed DeepONet (PI-DeepONet) to approximate the solution operator of a nonlinear parabolic equation. PI-DeepONet integrates known physics into a deep neural network, which learns the solution of the PDE.
Keywords:Deep learning, PI-DeepONet, Nonlinear parabolic equation
## 1 Introduction
It is well-known that various nonlinear parabolic equations arise from various applied problems in industry. However, most of these differential equations are analytically intractable. Classical methods, such as the finite volume method, the finite difference method, and spectral methods, have been widely used to solve such equations. The corresponding finite-dimensional algebraic systems are often solved by iterative methods. Although these methods are efficient and well-studied, they require a lot of memory space and time, leading to high computational costs. Furthermore, a slight change in the input parameter leads to a new numerical simulation. To overcome these challenges, many researchers have replaced traditional numerical discretization methods with artificial neural networks (ANNs) to approximate the PDE solution. Recently, deep neural networks (DNNs) have been widely used to solve classical applied mathematical problems, including PDEs, utilizing machine learning and artificial intelligence approaches [1]. Due to significant nonlinearities, convection dominance, or shocks, some PDEs are difficult to solve using standard numerical approaches. To this end, deep learning has recently emerged as a new paradigm of scientific computing thanks to the universal approximation theorem and the great expressivity of neural networks [7]. Recent studies have shown that deep learning is a promising method for building metamodels for fast predictions of dynamic systems. In
particular, neural networks (NNs) have been shown to represent the underlying nonlinear input-output relationship in complex systems. In an attempt to approximate the solution of PDEs, one can employ the deep Galerkin method [1] involving DNNs to solve nonlinear PDEs. More recently, Lu et al. [5] introduced an efficient technique called physics-informed neural networks (PINN) to approximate the solution of PDEs. Although PINNs are faster than traditional numerical methods, they also have some limitations; e.g., a slight change in the underlying parameters could result in the retraining of the model. To overcome the shortcoming of PINNs, Lu et al. [4] further introduced the concept of DeepONet, which is an NN-based model that can learn linear and nonlinear PDE solution operators with a small generalization error via the universal approximation theorem for operators. DeepONet consists of two parts: a deep neural network that learns the solution of the PDE and an operator network that enforces the PDE at each iteration. The operator network acts as a constraint to ensure that the neural network outputs satisfy the underlying PDE. DeepONet maps input functions with infinite dimensions to output functions belonging to infinite-dimensional space. It can efficiently and accurately solve PDE with any initial and boundary conditions without retraining the network. PI-DeepONet approximates the PDE solution operator using two networks: one network that encodes the discrete input function space (branch net) and one that encodes the domain of the output functions (trunk net) (cf. [4]). It can effectively approximate the solution of different PDEs without requiring a large amount of training data by introducing a regularization mechanism that biases the output of DeepONet models to ensure physical consistency. PI-DeepONet can efficiently solve parametric linear and nonlinear PDEs compared to other variants of PINN since it can take source term parameters (including other parameters) as input variables. It can also break the curse of dimensionality in the input space, making it more suitable than other traditional approaches. Inspired by the above development and studies, we apply the PI-DeepONet approach for solving the following parabolic equation.
\[\partial_{\tau}\varphi-\partial_{x}^{2}\alpha(\varphi)=g(\tau,x),\;(\tau,x) \in\Omega\equiv(0,T)\times(-L,L). \tag{1}\]
For simplicity, we consider zero initial and boundary conditions for the solution \(\varphi(\tau,x)\). Here \(g\) is the source term. This model equation arises from the Hamilton-Jacobi-Bellman (HJB) equation describing the stochastic optimization problem (see Sevcovic and Kilianova [2] and Sevcovic and Udeani [6]). The diffusion function \(\alpha\) is the value function arising from a convex parametric optimization problem (see Sevcovic and Kilianova [2] and Kilianova and Trnovska [3] for details).
## 2 Methodology of PI-DeepONet
In this section, we introduce and discuss the methodology of PI-DeepONet. Consider the following equation:
\[\mathcal{F}(g,\varphi)=0, \tag{2}\]
where \(\mathcal{F}\) is a differential operator for the governing PDE of some underlying physics laws, \(g\) denotes its source term, and \(\varphi\) is its solution. The differential equation (2) is assumed to have zero initial and boundary conditions. Note that the same idea can be applied to any initial and boundary conditions. Let \(G:g\to G(g)\) be an operator between two infinite-dimensional function spaces, where \(g\) and \(G(g)\) are two functions. This mapping is called the solution operator of equation (2), and it can be evaluated at a random location \(y\). In learning an operator in a more general setting, the inputs usually consist of two independent parts: the input function \(g\) and the location variable(s) \(y\). This learning can be done directly using traditional neural networks such as feedforward neural networks (FNNs), recurrent neural networks (RNNs), or convolutional neural networks (CNNs), by combining the two inputs into a single network input \((i.e.,\{g,y\})\). However, it is not necessarily advisable to directly use RNNs or CNNs, since the input does not have a definite structure. Therefore, it is recommended to use FNNs as the baseline model. Furthermore, the DeepONet consists of branch and trunk nets. The branch net takes \(g\) as the input function evaluated at a collection of fixed sensors \(\{x_{i}\}_{i=1}^{m}\) and outputs a feature embedding of \(q\) dimensions. The trunk net takes \(y\) as input and also outputs a feature embedding of \(q\) dimensions. Note that the dimensions of \(y\) and \(g\) need not be the same, indicating that \(g\) and \(y\) need not be treated as a single input as in a traditional NN. In general, the DeepONet for learning an operator takes \(g\) and \(y\) as inputs and outputs \(G(g)(y)\), which is obtained by taking the dot product of the outputs of the two subnetworks. This dot product plays a crucial role in determining how well the learned solution operator aligns with the actual solution of the PDE: it measures the similarity or alignment between the two networks' outputs, which helps to improve the accuracy of the learned solution operator. Consequently, the PI-DeepONet is trained by minimizing the loss function \(\mathcal{L}(\theta)\) (see (3)) over all the input-output triplets \(\{g,y,G(g)(y)\}\), where \(\theta\) is the set of the weight matrices and bias vectors of the networks. The first goal is to find such an approximator \(G_{\theta}(g)\); the universal approximation theorem for operators [7, Theorem 5] guarantees the existence of such a function, i.e., \(G_{\theta}(g)(y)\approx G(g)(y)=\varphi(y)\in\mathbb{R}\). The final objective is to find the best parameters that minimize the loss function (3) using suitable optimization techniques. The universal approximation theorem admits two network realizations: the stacked and the unstacked DeepONet. The stacked network has one trunk net and \(P\) stacked branch nets, whereas the unstacked network has one trunk net and one branch net, which are two independent fully connected networks. For more details, see T. Chen and H. Chen [7]. Fig. 1a shows the schematics of an unstacked DeepONet. In this study, we use an unstacked DeepONet to solve a parametric parabolic equation arising from portfolio selection problems.
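As a concrete illustration of the unstacked architecture, a minimal PyTorch sketch of the forward pass follows (this is our example, not the authors' code; the hidden-layer widths are arbitrary placeholders): the branch net consumes the \(m\) point-wise evaluations of \(g\), the trunk net consumes the query location \(y\), and the prediction is the dot product of the two \(q\)-dimensional embeddings.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Unstacked DeepONet: G_theta(g)(y) = <branch(g), trunk(y)>."""
    def __init__(self, m=100, y_dim=2, q=64):
        super().__init__()
        # Branch net: takes g evaluated at m fixed sensors.
        self.branch = nn.Sequential(nn.Linear(m, 128), nn.Tanh(), nn.Linear(128, q))
        # Trunk net: takes the query coordinates y = (tau, x).
        self.trunk = nn.Sequential(nn.Linear(y_dim, 128), nn.Tanh(), nn.Linear(128, q))

    def forward(self, g_sensors, y):
        b = self.branch(g_sensors)                  # (batch, q)
        t = self.trunk(y)                           # (batch, q)
        return (b * t).sum(dim=-1, keepdim=True)    # dot product -> (batch, 1)

model = DeepONet()
g = torch.randn(8, 100)   # 8 sampled input functions at 100 sensors
y = torch.rand(8, 2)      # 8 query points (tau, x)
print(model(g, y).shape)  # torch.Size([8, 1])
```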
## 3 Problem formulation
To employ PI-DeepONet to solve the nonlinear parabolic equation (1), we first define an operator that maps the input function to the PDE solution as \(G(g)=\varphi\).
The novelty of DeepONet is that it takes any arbitrary source term function as the input variables, making it more suitable than the PINN approach. Since \(\varphi\) is also a function, we can evaluate it at some point, say \(y\), to obtain \(G(g)(y)=\varphi(y)\). In our application, \(y=(\tau,x)\) denotes the point in the computational domain \(\Omega\) where the network predicts the solution of the PDE (1). In general, the branch (with \(g\) as input function) and trunk (with \(y\) as input variable) networks are given by \(\mathcal{B}(\mathbf{g}(\mathbf{\tilde{x}}))=\mathbf{c}\cdot\sigma(\mathbf{W}_ {\mathcal{B}}\cdot\mathbf{g}(\mathbf{\tilde{x}})+\mathbf{b}_{\mathcal{B}})\) and \(\mathcal{T}(y)=\sigma(\mathbf{W}_{\mathcal{T}}\cdot\mathbf{y}+\mathbf{b}_{ \mathcal{T}})\), respectively. Here, \(\mathbf{\tilde{x}}=(\tau,\mathbf{x})\); \(c\) is some positive constant; \(\sigma\) is the activation function; \(\mathbf{W}_{\mathcal{B}}\) and \(\mathbf{W}_{\mathcal{T}}\) represent the weight matrices of branch and trunk networks, respectively; \(\mathbf{b}_{\mathcal{B}}\) and \(\mathbf{b}_{\mathcal{T}}\) represent the bias vector of branch and trunk networks, respectively.
Now, letting \(g^{i},i=1,\ldots,N\), be any given input function representing the source term in (1), then equation (1) becomes \(g^{i}=\partial_{\tau}\varphi^{i}-\partial_{x}^{2}\alpha(\varphi^{i})\). According to [7, Theorem 5], there exists \(G_{\theta}(g^{i})\) such that \(G_{\theta}(g^{i})(y)\approx G(g^{i})(y)=\varphi^{i}(y)\). For a fixed \(i\), the approximator in the DeepONet solution operator is the dot product of the outputs of the branch and trunk networks, i.e., \(G_{\theta}(g^{i})(y)=\mathcal{B}(\mathbf{g}(y))\cdot\mathcal{T}(y)\). Hence, \(g^{i}\approx\partial_{\tau}G_{\theta}(g^{i})(y)-\partial_{x}^{2}\alpha(G_{ \theta}(g^{i})(y))\). Therefore, the physics loss evaluated at the \(Q\) collocation points in the interior of the domain is
\[\mathcal{L}_{Physics}(\theta)=\frac{1}{NQ}\sum_{i=1}^{N}\sum_{j=1}^{Q}|R^{i}_{ \theta}(y^{i}_{r,j})-g^{i}(x^{i}_{r,j})|^{2}.\]
Here, \(R^{i}_{\theta}(y^{i}_{r,j})=\partial_{\tau}G_{\theta}(g^{i})(y^{i}_{r,j})- \partial_{x}^{2}\alpha(G_{\theta}(g^{i})(y^{i}_{r,j}))\) represents the residual that satisfies the underlying PDE, and \(y^{i}_{r,j}=(\tau^{i}_{r,j},x^{i}_{r,j})\) denotes the collocation points where the PDE is evaluated. Next, we use the zero boundary and initial conditions to obtain the second loss as follows:
\[\mathcal{L}_{Operator}(\theta)=\frac{1}{NP}\sum_{i=1}^{N}\sum_{k=1}^{P}|G_{ \theta}(g^{i})(y^{i}_{g,k})-G(g^{i})(y^{i}_{g,k})|^{2}\]
where \(y^{i}_{g,k}=(\tau^{i}_{g,k},x^{i}_{g,k})\) denotes points from the initial and boundary conditions. Hence, the total loss becomes
\[\mathcal{L}(\theta)=\mathcal{L}_{Physics}(\theta)+\mathcal{L}_{Operator}( \theta). \tag{3}\]
Figure 1: Schematics of DeepONet a), and physics informed DeepONet b)
It follows that by minimizing the loss function (3) the network can effectively predict the solution of the HJB equation. Fig. 1b shows the schematics of physics-informed DeepONet connected in a feedforward manner.
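To illustrate how the residual \(R^{i}_{\theta}\) and the two loss terms can be assembled in practice, a hedged sketch follows; it assumes the `DeepONet` module sketched in Section 2, uses automatic differentiation for the derivatives, and hard-codes the test choice \(\alpha(\varphi)=\varphi^{2}\) used in Section 4. It is an illustration only, not the authors' implementation.

```python
import torch

def physics_loss(model, g_sensors, y_col, g_at_col):
    """Residual loss |d_tau G - d_xx alpha(G) - g|^2 at interior collocation points y_col = (tau, x)."""
    y_col = y_col.clone().requires_grad_(True)
    u = model(g_sensors, y_col)                                   # G_theta(g)(y)
    du = torch.autograd.grad(u.sum(), y_col, create_graph=True)[0]
    u_tau = du[:, 0:1]                                            # d_tau G
    alpha = u ** 2                                                # test example: alpha(phi) = phi^2
    a_x = torch.autograd.grad(alpha.sum(), y_col, create_graph=True)[0][:, 1:2]
    a_xx = torch.autograd.grad(a_x.sum(), y_col, create_graph=True)[0][:, 1:2]
    residual = u_tau - a_xx
    return ((residual - g_at_col) ** 2).mean()

def operator_loss(model, g_sensors, y_bc):
    """Zero initial/boundary conditions: G_theta(g) should vanish at y_bc."""
    return (model(g_sensors, y_bc) ** 2).mean()

# Total loss (3): L = physics_loss + operator_loss, minimized with Adam.
```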
## 4 Results and Discussion
The PI-DeepONet exhibits small optimization and generalization errors, as it is easy to train and generalizes well to unseen data. In our approach, we did not use any paired input-output data; rather, we only used the zero boundary and initial conditions. We approximate the PDE solution operator using branch and trunk nets. As a test example, we consider the diffusion function \(\alpha(\varphi)=\varphi^{2}\). First, the input function of the branch net is discretized in a finite-dimensional space using a finite number of points called sensors. Then, the discretized input function is evaluated at fixed sensors to obtain point-wise evaluations. The trunk net takes the spatial and temporal coordinates and evaluates the solution operator to obtain the loss function. To generate our training data, we randomly sample \(N=500\) source term functions as input functions of the branch net from a zero-mean Gaussian process with an exponentiated quadratic kernel with length scale 0.2. The kernel function defines the covariance between two points in the process as a function of the distance between them. The parameter \(l>0\) determines how quickly the covariance between two points decays as the distance between them increases. In this study, we set \(l=0.2\). A smaller length scale results in a higher correlation between nearby points, whereas a larger length scale results in a lower correlation between nearby points. Then, the selected input functions are evaluated at \(m=100\) points as input sensors. The \(m\) outputs of the source term functions are sent to the branch network. Next, we select the \(P=100\) output sensors from the initial and boundary conditions, which are sent to the trunk nets. Our operator is then approximated by computing the dot product between the branch and trunk networks, and the corresponding operator loss is computed. After that, we select \(Q=100\) collocation points inside the domain, and the error related to the underlying physics is computed. Finally, the total loss is obtained by combining the two losses and is minimized using the adaptive moment estimation (Adam) optimizer with a learning rate of \(10^{-3}\). Similarly, the test set is generated using the same approach. In Fig. 2, we compare a solution obtained by a physics-informed DeepONet method using the ReLU activation function for 10000 iterations with a numerical solution constructed by means of the finite difference numerical method.
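For concreteness, the random source terms can be generated as in the following sketch (our illustration with numpy, not the authors' data pipeline): draws from a zero-mean Gaussian process with an exponentiated quadratic kernel of length scale \(l=0.2\), evaluated at \(m=100\) fixed sensors; the sensor interval below is a placeholder.

```python
import numpy as np

def sample_source_terms(n_funcs=500, m=100, length_scale=0.2, x_min=-1.0, x_max=1.0):
    """Draw n_funcs source terms g ~ GP(0, k) evaluated at m fixed sensor locations."""
    x = np.linspace(x_min, x_max, m)
    diff = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (diff / length_scale) ** 2)   # exponentiated quadratic kernel
    K += 1e-10 * np.eye(m)                          # jitter for numerical stability
    L = np.linalg.cholesky(K)
    return x, (L @ np.random.randn(m, n_funcs)).T   # shape (n_funcs, m)

sensors, g_train = sample_source_terms()
print(g_train.shape)   # (500, 100)
```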
## 5 Conclusions
In this study, we employed a physics-informed DeepONet to approximate the solution operator of a parametric parabolic equation arising from portfolio selection problems. The input function of the branch net was discretized in a finite-dimensional space using a fixed number of sensors. The discretized input functions were evaluated at fixed sensors to obtain point-wise evaluations. The
operator was approximated by computing the dot product between the branch and trunk networks, and the corresponding operator loss was computed. We applied the physics-informed DeepONet to solve the model nonlinear parabolic equation obtained from the Hamilton-Jacobi-Bellman equation of a stochastic dynamic optimization problem.
**Acknowledgments.** The research was supported by the APVV-20-0311 (C.U.) and VEGA 1/0611/21 (D.S.) projects.
|
2310.12670 | Fault-Tolerant Hybrid-Parallel Training at Scale with Reliable and
Efficient In-memory Checkpointing | To efficiently scale large model (LM) training, researchers transition from
data parallelism (DP) to hybrid parallelism (HP) on GPU clusters, which
frequently experience hardware and software failures. Existing works introduce
in-memory checkpointing optimizations that snapshot parameters to device memory
for rapid failure recovery. However, these methods introduce severe resource
competition between checkpointing and training, which can work under DP but can
hardly scale under resource-intensive HP. To ensure low checkpointing overhead
for hybrid-parallel training, this paper introduces a distributed in-memory
checkpointing system with near-zero in-memory saving overhead. It strives from
two aspects to mitigate the on-host resource competition caused by in-memory
checkpointing: (1) It introduces Hierarchical Asynchronous Snapshotting
Coordination in the checkpoint saving stage. This approach uses three-level
asynchronous on-device scheduling to enhance parallelism between snapshotting
and training, thereby minimizing snapshotting overhead. (2) It proposes Hybrid
In-memory Checkpoint Protection to enhance checkpoint completeness during
hardware failures. Unlike methods that require inter-node communications, which
may block training under HP, it creates intra-node redundancy with efficient
resource utilization, protecting training against hardware failures with
minimal overhead. With these methods, this work enables fast restart for failed
HP training with Distributed In-memory Checkpoint Loading, bypassing
inefficiencies in NFS reads. In our evaluation, we achieve zero in-memory
checkpoint saving overhead on Frontier while training Llama-2-34B on 256 MI250X
devices (512 GPUs). | Yuxin Wang, Xueze Kang, Shaohuai Shi, Xin He, Zhenheng Tang, Xinglin Pan, Yang Zheng, Xiaoyu Wu, Amelie Chi Zhou, Bingsheng He, Xiaowen Chu | 2023-10-19T11:59:01Z | http://arxiv.org/abs/2310.12670v4 | # Reliable and Efficient In-Memory Fault Tolerance of
###### Abstract
Extensive system scales (i.e. thousands of GPUs/TPUs) and prolonged training periods (i.e. months of pretraining) significantly escalate the probability of failures when training large language models (LLMs). Thus, efficient and reliable fault-tolerance methods are urgently needed. Checkpointing is the primary fault-tolerance method: it periodically saves parameter snapshots from GPU memory to disks via CPU memory. In this paper, we identify that the frequency of existing checkpoint-based fault tolerance is significantly limited by storage I/O overheads, which results in hefty re-training costs when restarting from the nearest checkpoint.
In response to this gap, we introduce an in-memory fault-tolerance framework for large-scale LLM pretraining. The framework boosts the efficiency and reliability of fault tolerance from three aspects: (1) Reduced Data Transfer and I/O: By asynchronously caching parameters, i.e., sharded model parameters, optimizer states, and RNG states, to CPU volatile memory, our framework significantly reduces communication costs and bypasses checkpoint I/O. (2) Enhanced System Reliability: Our framework enhances parameter protection with a two-layer hierarchy: snapshot management processes (SMPs) safeguard against software failures, while Erasure Coding (EC) protects against node failures. This double-layered protection greatly improves the survival probability of the parameters compared to existing checkpointing methods. (3) Improved Snapshotting Frequency: Our framework achieves more frequent snapshotting than asynchronous checkpointing optimizations under the same saving time budget, which improves fault-tolerance efficiency.
In our testbed, our framework achieves over 14\(\times\) faster parameter saving compared to state-of-the-art asynchronous checkpointing methods. Empirical results demonstrate that our framework minimizes the overhead of fault tolerance in LLM pretraining by effectively leveraging redundant CPU resources.
**Keywords:** Fault Tolerance, Checkpoint Optimization, Large Language Model, 3D parallelism.
## 1 Introduction
Researchers have been focusing on scaling large language models (LLMs) [1] like GPT [2, 3], T5 [4], Megatron [5], BART [6], LLAMA [7], and OPT [8]. The pretraining of LLMs requires extensive GPU resources in 3D parallel training [5] and is error-prone due to hardware limitations, infrastructure issues, and experimental instability [8]. The frequent failures lead to longer training time, resulting in a huge gap between a theoretical carbon cost estimate that assumes no hardware failures or training instabilities and the real-life
LLM pretraining development. For instance, the OPT-175B training [8] employed significant computational resources, i.e., 992 80GB A100 GPUs. However, the training process experienced frequent terminations, requiring over **105** restarts over **60** GPU days [8]. The longest healthy training period was only **2.8** days. Hardware issues [9], e.g., overheating and power failures, together with software failures [9], e.g., MPI [10] errors and checkpointing errors [9], are common in GPU clusters. Each fault causes training interruptions and the loss of all parameters held in volatile GPU memory [9].
To enhance fault tolerance, checkpointing is often used to periodically save parameters [11], allowing system recovery from recent states. A detailed procedure of checkpointing consists of three steps: (1) Snapshotting: copy data from GPU to CPU memory, including model parameters, optimizer states, and any other relevant information required to resume the training process; (2) Serialization: convert the data into byte-stream representation, which can be easily stored and transmitted; (3) Persisting: save the serialized data to storage, such as a remote disk server (as heavyweight checkpointing) or a local disk (as lightweight checkpointing). In practice, the above checkpointing steps consume considerable GPU idling time and greatly increase the carbon cost. Figure 1 demonstrates the overheads of fault tolerance, including (1) checkpoint saving time \(O_{save}\), (2) restarting time \(O_{restart}\), which includes rescheduling time \(O_{sch}\), checkpoint loading time \(O_{load}\), and the lost computation time \(O_{lost}\) from the latest checkpoint, which is the most expensive depending on checkpointing intervals. For LLM pretraining, we have \(O_{restart}=O_{lost}+O_{sch}+O_{load}\). The \(O_{save}\) and \(O_{restart}\) could be severe in practice. For instance, they constitute up to **77%** of GPT-2's training time on 64 EC2 spot instances, as reported in [12]. However, reducing \(O_{save}\) and \(O_{restart}\) in checkpointing is a formidable challenge for LLM pretraining.
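To make the three steps concrete, the sketch below shows a minimal synchronous checkpointing routine in PyTorch; the function names and paths are illustrative, and the blocking `torch.save` call corresponds to the \(O_{save}\) overhead discussed above.

```python
import torch

def save_checkpoint(model, optimizer, iteration, path):
    # (1) Snapshotting: copy model parameters (and keep optimizer states) into a
    #     CPU-side dictionary; the training loop is stalled while this runs.
    state = {
        "iteration": iteration,
        "model": {k: v.detach().cpu() for k, v in model.state_dict().items()},
        "optimizer": optimizer.state_dict(),
    }
    # (2) Serialization and (3) Persisting: torch.save pickles the state into a
    #     byte stream and writes it to local or remote storage. The wall-clock
    #     time of this call is the saving overhead O_save.
    torch.save(state, path)

def restore_checkpoint(model, optimizer, path):
    # On restart, reload the most recent persisted state and resume training
    # from that iteration; everything computed after it is lost (O_lost).
    state = torch.load(path, map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["iteration"]
```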
Checkpointing optimizations typically fall into two methodologies: lossy and lossless. Each, however, confronts limitations when applied to LLMs. While sophisticated lossy checkpointing techniques can achieve data compression rates as high as 90% [13], the pronounced sensitivity of LLMs to parameter variations [14] renders them incompatible with lossy strategies. Conversely, lossless methods like asynchronous and
Figure 1: The overhead of checkpointing \(O_{save}\) and restarting \(O_{restart}\) are expensive in LLM training. On failure, training may lose thousands of GPU hours when it restarts from an old iteration.
Figure 2: An example of the workflow of REFT in a four-node system before and after a single node failure.
distributed checkpointing [15, 16, 17] may suit expansive checkpoints. However, the huge data transfer potentially saturates the network bandwidth between switches and GPU nodes. This becomes even more severe when large-scale LLMs undergo checkpointing to cloud storage for scalability and reliability merits [11], hindering the effectiveness of such checkpoint-based optimizations. For example, in the pretraining of Megatron [5], **13 terabytes** of checkpoints will be uploaded to the parallel storage system, occupying **53 seconds** of the network bandwidth during a single checkpointing. The I/O speed and bandwidth occupation greatly hinder the checkpoint frequency even in asynchronous checkpointing.
Given these challenges, it is essential to explore alternative approaches. One promising direction is snapshotting parameters to CPU memory, which is considerably faster than traditional methods that require storage I/O [15][16][18][19]. This raises the possibility of leveraging volatile CPU memory to provide rapid in-memory fault tolerance. However, due to the property of volatile memory, snapshotted parameters are not persistent. They will be released upon termination of the training process. To mitigate this challenge, we propose REFT, a fault-tolerance framework that utilizes volatile CPU memory to protect snapshots independently from the training processes. REFT decouples GPU and CPU processes with snapshot management processes (SMPs) and recovery strategies to protect parameters from software failures. It also employs erasure coding [20] to safeguard the parameters, enabling the preservation of parameters in the event of hardware failures. Overall, REFT exhibits the following capabilities for improving system reliability and minimizing fault-tolerance overheads:
* The system capitalizes on the full potential of parallel device-to-host communication and storage I/O capabilities to minimize saving overhead of checkpointing. It achieves swift parameter storage through asynchronous snapshotting distributed across all 3D parallel ranks, thereby minimally affecting the training process.
* The system employs a two-layer in-memory workflow to safeguard parameters. As depicted in Figure 2, the primary layer elastically preserves parameters on SMPs, mitigating risks from software failures. Meanwhile, the secondary layer shields parameter shards on SMPs using the erasure coding strategy, defending against single-node hardware malfunctions at each pipeline stage.
* By increasing snapshotting frequencies, the system optimizes the saving overhead and reduces the cumulative fault-tolerance overhead.
To the best of our knowledge, existing distributed asynchronous checkpoint optimizations [15][16] are built for data parallelism only. REFT is the first in-memory fault-tolerant approach for 3D parallel training. We build REFT on PyTorch [21] and evaluate it on LLM models by pretraining on a six-node NVIDIA V100 GPU cluster. Our experimental setup involved pretraining in data parallel configurations and in 3D parallel configurations. As benchmarks, we consider two leading asynchronous checkpointing optimizations as baselines: (1) CheckFreq, i.e., fully asynchronous checkpointing with overlapped device-to-host copy and storage I/O *[15]. While CheckFreq is tailored for data parallelism, it can be adapted to various parallel training forms; and (2) TorchSnapshot, i.e., asynchronous checkpointing with paralleled storage I/O [16], exclusively designed for data parallelism.
Footnote *: Several optimizations in CheckFreq unrelated to the efficiency of asynchronous checkpointing are excluded from our baseline benchmarks.
In our design, REFT outperforms the above approaches [15][16] in efficiency and reliability. In data parallelism experiments, REFT delivers up to \(14\times\) faster saving speed than TorchSnapshot in weak scaling.
The upcoming sections will provide both theoretical and practical illustrations of REFT's functionality and advantages.
## 2 Pretraining v.s. Failures
This section presents the background and related work on 3D parallel pretraining and its failures. Additionally, it discusses the limitations of existing fault-tolerant strategies specifically for 3D parallel pretraining.
### 3D-Parallel Pretraining
3D Parallelism is the common distributed deep learning (DL) training method [5] to scale up the training by combining three different parallelism approaches: data parallelism (DP), tensor model parallelism (TP), and pipeline parallelism (PP). **DP**[22, 23, 24, 25] is the most commonly used parallel method. It replicates the model across the devices (e.g., GPUs or TPUs) and splits the dataset into a set of subsets. **TP**[26, 27, 28, 29] is for distributing the computation of large tensors across multiple devices. It handles large models that cannot fit within a single device's memory. Each device performs its own computations simultaneously, collaboratively handling the workload. Meanwhile, **PP**[30, 31, 11] divides the model into multiple stages, each containing one or more layers. These stages are assigned to different devices, where each processes the intermediate results from the previous device and then passes its output to the next device.
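As a rough illustration of how the three dimensions tile a cluster, the snippet below maps a flat global rank onto (DP, PP, TP) coordinates; the dimension ordering is an assumption made for illustration and differs across frameworks.

```python
def rank_to_3d_coords(global_rank, tp_size, pp_size, dp_size):
    """Map a flat rank in [0, dp*pp*tp) to (dp_idx, pp_idx, tp_idx).

    Assumes TP is the fastest-varying dimension (TP peers live on the same
    node), then PP, then DP -- a common but not universal convention."""
    assert 0 <= global_rank < tp_size * pp_size * dp_size
    tp_idx = global_rank % tp_size
    pp_idx = (global_rank // tp_size) % pp_size
    dp_idx = global_rank // (tp_size * pp_size)
    return dp_idx, pp_idx, tp_idx

# Example: 24 GPUs arranged as 2-way DP x 3-stage PP x 4-way TP.
coords = [rank_to_3d_coords(r, tp_size=4, pp_size=3, dp_size=2) for r in range(24)]
print(coords[:5])   # ranks 0-3 share a TP group; rank 4 starts the next PP stage
```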
**Communication Types** Considering the communication costs, in LLM pretraining, TP is often assigned intra-node, while DP and PP are assigned inter-node [5]. For **DP**, each device trains its replicated model on a distinct subset concurrently and employs the all-reduce method [23] to synchronize model parameters through inter-node communication. As for **TP**, the communication overhead chiefly arises from the gathering of intra-node intermediate values (encompassing both activations and gradients) between devices. Such exchanges are transmitted over GPU interconnects, e.g., NVLink [32] or PCIe. Similarly, **PP** mandates the transmission of intermediate values among GPUs during both forward and backward passes, mainly leveraging inter-node communications.
**Hardware Utilization** During pretraining, 3D parallel computing attains significant communication bandwidth and maximizes GPU memory utilization. Such operations significantly reduce the strain on CPU resources since the GPUs efficiently manage the bulk of computational tasks. Figure 3 demonstrates that during 3D parallel pretraining, the CPU predominantly manages tasks with a minimal workload. The surplus CPU resources grant us the ability to utilize both CPU memory and computational capacities for fault tolerance.
**Failure Types** As GPU systems expand to encompass thousands of GPUs, the survival probability over time diminishes rapidly, typically aligning with either Gamma [13] or Weibull [33] distributions. This decline is attributed to the fact that a failure in a single node necessitates a complete system restart from the closest checkpoint. In real-world scenarios, the system might experience failures every few hours [8], leading to substantial overheads due to restarts and recomputations from the relevant checkpoint. GPU systems mainly encounter two categories of failures: hardware and software failures [8][9][34]. Hardware failures, influenced by factors like temperature fluctuations, power disruptions, storage inconsistencies, etc., remain predominant during LLM pretraining [8][34]. The constant inter-device communication coupled with high memory demands also exerts significant stress on software components such as MPI [10] and PyTorch memory management [21].
Figure 3: An example of observed average CPU and GPU average resource utilization during 3D parallel pretraining (2 DP, 4 TP, 3 PP) of OPT-2.7B [8] on six 32GB 4\(\times\)V100 GPU server; CPU: Intel(R) Xeon(R) Silver 4114 @2.20GHz; CPU memory: 256GB.
### Fault Tolerance
**Asynchronous Checkpointing and Elasticity** Fault tolerance strategies for distributed computing have been rigorously explored, particularly emphasizing the optimization of checkpointing methodologies suitable for data centers. Expedited checkpointing diminishes the overhead associated with parameter storage, facilitating more frequent checkpoint storage and thereby minimizing recomputation during restarts. Hierarchical or asynchronous methods like Lamport's algorithm and the Scalable Checkpoint/Restart (SCR) library have been proposed to enhance checkpointing speeds [35, 36, 19]. Notable asynchronous checkpointing techniques include CheckFreq [15] and TorchSnapshot [16].
While CheckFreq introduces asynchronous computation and communication, TorchSnapshot refines this by endorsing parameter sharding among data parallel groups. Also, to diminish the overhead of rescheduling, current distributed frameworks offer elastic training APIs facilitating status monitoring, failure identification, and automated restarts [37]. However, the checkpointing and elasticity schemes predominantly cater to data-parallel checkpointing.
**Limitations** Regrettably, even with the elastic training mechanism, existing checkpoint methods don't cater easily to 3D parallel training.
* _Inefficiency:_ existing checkpointing methods must save parameters to remote storage for unified checkpoint management in case of node failures.
* _Non-dedicated Implementation:_ state-of-the-art checkpointing schemes are built for DP only, e.g., PyTorch DDP [21]. An efficient asynchronous checkpointing method for 3D parallelism must not interfere with inter and intra-node communications while achieving high saving speed. Existing checkpointing schemes do not take the above properties of 3D parallel training into consideration.
In the next section, we will introduce the overview of REFT, an innovative fault tolerance framework utilizing redundant CPU memory to offer in-memory fault tolerance optimized for rapid parameter storage during 3D parallel training.
## 3 Design Overview
REFT is designed to co-work with Torchelastic [37], enabling elastic restart of training from a proximate checkpoint in the event of failures. We specifically use redundant CPU memory to amplify the reliability and efficiency of fault tolerance in three ways:
* _Sharded and Parallel Snapshotting:_ This employs parallel device-to-host communication across all ranks asynchronously to augment snapshotting speed, with the minimum PCIe bandwidth utilization;
* _Snapshot Management Process (SMPs)_: Snapshots are promptly transferred to and maintained within SMPs rather than consistently in storage, significantly elevating the snapshotting frequency. SMPs remain resilient to software failures that may disrupt training processes;
* _Erasure Coding (EC) Protection_: It ensures the security of snapshots on SMPs against a constrained number of node failures.
The elastic workflow of REFT during hardware failure is depicted in Figure 2. When all nodes are in a HEALTHY state, each node independently transfers tensors (step 1) in compact buckets to the CPU's shared memory (step 2). Subsequently, these tensors are stored in the buffers on the volatile CPU memory provided by SMP (step 3). Tensors duplicated for redundancy are then sent to erasure coding to achieve hardware failure tolerance (step 4). All intermediary tensors are released after use to preserve GPU and CPU memory. In the event that a node fails, all parameters on the GPU memory will be released. A substitute node will be introduced elastically [37] and retrieves the decoded parameters from functioning nodes (step 5). Finally, training restarts from the reconstructed parameters. Also, if the failure is restricted to the software level, training can directly resume using the parameters stored on the SMPs. However, in rare events where the number of failed nodes surpasses the protection capacity of EC, training will revert to a pre-existing checkpoint.
The subsequent section will offer an in-depth design overview. An analytical study in the next section will elucidate how REFT improves reliability using the aforementioned techniques.
## 4 Design Details
### Sharded and Parallel Snapshotting
In a high-level overview, REFT leverages redundant CPU memory to provide efficient and reliable parameter protections. The subsequent sections introduce its detailed design. We first introduce how REFT swiftly snapshots sharded parameters with minimum system interference. Then we illustrate hierarchical fault tolerance protections to safeguard in-memory parameters. Lastly, we theoretically analyze that REFT greatly increases the survival probability and saving frequency compared to any checkpointing method.
As highlighted in Section 2, pretraining LLMs involves high-capacity GPU computing, intensive inter-node communication, and substantial GPU memory allocation. In terms of efficiency, REFT aims to (1) maximize the utilization of the distributed parallelized device-to-host communication and storage I/O of the entire system, and (2) implement asynchronous snapshotting with minimal data transfer, thereby causing minimal disruption to LLM pretraining. To achieve these objectives, REFT executes parameter sharding across ranks in 3D parallelism. This proceeds in two steps: intra-pipeline-stage sharding and parallel snapshotting.
**Intra-Pipeline-Stage Sharding** Sharding, or partitioning, refers to the process of allocating distinct partitions of model parameters prior to snapshotting. As is shown in Figure 5, a sharding group (\(SG\)) is assigned as the same PP stage across all DP paths. Then, during pretraining, if only DP is employed, REFT partitions model parameters across DP paths. Each path will be assigned one shard of the parameters. If PP is also incorporated, REFT facilitates parameter partitioning within each PP stage across DP groups. As depicted in Figure 5, four TP partitions are evenly distributed over four GPUs, and the model is divided into \(n\)\(SG\)s. Each \(SG\) is required to cache only one copy (for both snapshot and checkpoint) during training. It should be noted that the number of \(SG\) equals the number of PP stages. Hence, REFT partitions parameters equally across all nodes in the same \(SG\) stage, with each node subsequently snapshotting the parameter partition.
Figure 4: This figure provides an intuitive comparison among fully asynchronous checkpointing (Async-ckpt, represented by CheckFreq), sharded asynchronous checkpointing (Async-shackpt, represented by TorchSnapshot), and REFT in a synchronous parallel training setting. In Async-ckpt, the asynchronous snapshotting could not be fully overlapped with forward and backward periods (Fwd & Bwd). In REFT and Async-shackpt, where parameter sharding shortens the snapshotting time, the snapshotting overhead can possibly be eliminated. Still, all persisting processes in the above methods encounter the same I/O time (persisting time), which limits the maximum snapshotting frequency of Async-shackpt and Async-ckpt. The actual snapshotting frequency of Async-shackpt and Async-ckpt will be further limited by the checkpointing failure rates, storage limitations, etc. However, in REFT, the snapshotting is allowed to happen multiple times before a persisting process. This is because, with fast in-memory fault tolerance, the snapshotting frequency of REFT does not encounter the above limitations.

**Parallel Snapshotting** Parameters to be divided in the \(n\)-th \(SG\) are denoted as \(\mathcal{W}^{n}\). Given a total of \(m\) DP paths, i.e., nodes within the same PP stage, each stage replica on a DP path snapshots a fragment of \(\frac{\mathcal{W}^{n}}{m}\) parameters. The shards spanning DP paths are orthogonal and identical in size, thereby reducing data transmission on each node by a factor of \(m\) during snapshotting. In this way, parameters on all GPUs on all nodes can be snapshotted in parallel with no I/O bottlenecks. Note that string parameters, such as arguments of the optimizer and RNG states of pretraining, will merely be duplicated. Of course, partitioned parameters are not safe in memory on their own; we illustrate how to protect and repair parameters from other nodes in subsequent sub-sections.
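A minimal sketch of how such an intra-stage partition could be computed is shown below: the tensors of one PP stage are greedily split into \(m\) roughly equal shards, one per DP path. The greedy heuristic and the names used here are illustrative assumptions, not REFT's exact algorithm.

```python
import torch

def shard_stage_params(named_params, num_dp_paths):
    """Greedily assign each tensor of one PP stage to one of `num_dp_paths`
    shards so that shard sizes (in elements) stay roughly balanced."""
    shards = [[] for _ in range(num_dp_paths)]
    loads = [0] * num_dp_paths
    # Placing the largest tensors first keeps the partition well balanced.
    for name, p in sorted(named_params, key=lambda kv: -kv[1].numel()):
        target = loads.index(min(loads))
        shards[target].append(name)
        loads[target] += p.numel()
    return shards   # shards[d] = names of tensors that DP path d snapshots

# Example: a toy "stage" with four tensors split over three DP paths.
stage = [("w0", torch.empty(1024, 1024)), ("w1", torch.empty(512, 512)),
         ("b0", torch.empty(1024)), ("b1", torch.empty(512))]
print(shard_stage_params(stage, num_dp_paths=3))
```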
**Minimal Interference** As introduced in Section 2, PP necessitates intensive inter-node communications and intra-node weight stashing [38], while TP and data loading instigates restricted intra-node communication during forward and backward passes [5], thereby claiming portions of the PCIe bandwidth. Asynchronous snapshotting also exerts pressure on GPU and CPU memory by duplicating tensors on GPU memory and occupying PCIe bandwidths. We introduce a _Tiny-buckets Snapshotting_ strategy. This approach snapshots the sharded parameters in small buckets, mitigating PCIe interference between GPUs and optimizing GPU memory consumption. The bucket size is determined by the maximum throughput within the constraints of available GPU memory. The potential interference of REFT with training processes will be further examined in Section 6.2.
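The sketch below illustrates the idea of bucketed, asynchronous device-to-host snapshotting with pinned CPU buffers and a side CUDA stream; the bucket size and helper names are assumptions for illustration rather than REFT's implementation.

```python
import torch

def snapshot_in_buckets(gpu_tensors, bucket_bytes=32 * 2**20):
    """Copy CUDA tensors to pinned CPU buffers in small buckets on a side
    stream; call stream.synchronize() before reading the returned copies."""
    stream = torch.cuda.Stream()
    stream.wait_stream(torch.cuda.current_stream())  # respect pending writes
    cpu_copies, bucket, size = [], [], 0

    def flush():
        nonlocal bucket, size
        with torch.cuda.stream(stream):
            for t in bucket:
                dst = torch.empty(t.shape, dtype=t.dtype, pin_memory=True)
                dst.copy_(t, non_blocking=True)      # async DMA over PCIe
                cpu_copies.append(dst)
        bucket, size = [], 0

    for t in gpu_tensors:
        bucket.append(t)
        size += t.numel() * t.element_size()
        if size >= bucket_bytes:   # small buckets cap PCIe and memory pressure
            flush()
    if bucket:
        flush()
    return cpu_copies, stream
```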
### Snapshot Management Process (SMP)
Parameters within the training processes are vulnerable, as the failure of a single node or training process can lead to the disappearance of all parameters across all nodes. We designed the SMP to safeguard the parameters on each node. With independent multiprocessing management, the life cycle of the SMP is decoupled from the training processes. This enhances the efficiency and reliability of fault tolerance by enabling: (1) Caching intra-node parameters, (2) protection of parameters via redundant backups, and (3) reconstruction and checkpointing of parameters in the event of system failures. As a lightweight multiprocessing application, the SMP maintains low program failure rates from a low-level design perspective.
**Hierarchical Parameter Management** REFT provides a hierarchical saving mechanism based on the storage location and manages the parameters with the SMP on all nodes in a non-blocking manner.
* REFT-Sn stores parameters on CPU memory with two-layer protection against failures.
* REFT-Ckpt persists snapshotted parameters to disks only when the user requests to save checkpoint on failures or at pre-determined frequencies.
We will further present analytical results to demonstrate how REFT optimizes the frequency of snapshotting and minimizes the frequency of checkpointing to reduce the overheads of fault tolerance. The right combination of frequencies will ensure fault tolerance in LLM training under a pre-determined failure rate.
Figure 5: 3D parallel pretraining with sharding groups. An \(SG\) refers to a sharding group, in which the parameters of the assigned partitions will be saved. In this example, we have TP intra-node and PP inter-node, which is a widely accepted 3D parallel setting [5, 8]. The model partition across all TP and PP in the same DP is different. Micro batches will be fed from the left side into the LLM. The activations will go through the forward passes and backward passes. All nodes in the same PP stage formulate an \(SG\), e.g., all \(PP_{0}\) nodes formulate \(SG_{0}\).
**Shared Memory** Established in-memory storage frameworks, such as Redis and RamCloud [39], are tailored for smaller and more frequent data I/O as opposed to the voluminous tensor bucket I/O prevalent in our system. For other in-memory storage, e.g., the tmpfs system [40], the saving speed is constrained by the serialization speed. In our design, as we present in Figure 6, parameters are asynchronously snapshotted from the GPU to the CPU shared memory for two primary reasons: (1) Communication through shared memory is rapid. Firstly, as the pretraining process writes tensors to a shared memory area, all processes sharing this memory can asynchronously access the content. Secondly, it prevents the substantial overhead of serialization, especially during checkpointing. (2) Parameters can be securely read and managed via the shared memory on the SMP. In the event of a failed training process on one healthy node, parameters on all nodes are released from the GPU memory. However, parameters on SMPs remain protected, barring node failures.
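The toy example below illustrates why shared memory avoids serialization: a trainer process writes a snapshot directly into POSIX shared memory, and the parent process (standing in for an SMP) reads it back after the trainer has exited. The fixed-size buffer and names are illustrative assumptions.

```python
import numpy as np
from multiprocessing import Process, shared_memory

def trainer(shm_name, shape, dtype):
    # The trainer writes a parameter snapshot straight into shared memory;
    # no pickling/serialization of the tensor payload is involved.
    shm = shared_memory.SharedMemory(name=shm_name)
    np.ndarray(shape, dtype=dtype, buffer=shm.buf)[:] = np.random.rand(*shape).astype(dtype)
    shm.close()

if __name__ == "__main__":
    shape, dtype = (1024, 1024), np.float32
    nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
    shm = shared_memory.SharedMemory(create=True, size=nbytes)
    p = Process(target=trainer, args=(shm.name, shape, dtype))
    p.start(); p.join()
    # A manager process (here the parent) can still read the snapshot after the
    # trainer process has exited, which is what SMP-based protection relies on.
    snapshot = np.ndarray(shape, dtype=dtype, buffer=shm.buf).copy()
    shm.close(); shm.unlink()
    print(snapshot.mean())
```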
**Multi Snapshots** In REFT-Sn, snapshots are protected independently from pretraining processes on redundant CPU memory. Normally, it maintains at least one clean copy and a dirty snapshot on the SMP. The number of clean copies is limited by the assigned CPU memory to avoid CPU OOM. The clean and dirty snapshots are divided to maintain parameter consistency under the following conditions:
* _Saving:_ The dirty snapshot will accept the flushed parameters from shared memory. When the saving is complete on the dirty snapshot, the clean snapshot will be replaced by the new copy of the dirty snapshot, as is shown in Figure 6. This cycle prevents parameter inconsistency that may hurt LLM convergence performance. After snapshotting, the SMP may further save the parameters to local or remote disk servers at large intervals.
* _Loading:_ When the pretraining processes suddenly fail/stop, the SMP will receive the signal and persist the latest clean snapshots to the storage and synchronize. If the failure is hardware level, which causes nodes to restart or shut down, the SMP will manage to reconstruct the parameters from redundant backups and save them to the cloud storage for restarting. If the reconstruction fails, the training will have to restart from an existing checkpoint. The detailed snapshot protection and reconstruction methods will be illustrated in the following subsections.
**Elastic Functionality** SMPs operate based on the rendezvous status received from TorchElastic [37] for all nodes. (1) Upon training initiation, SMPs will receive a HEALTHY signal, triggering the launch of buffers
Figure 6: Data flow of REFT. Parameters from various GPUs are sharded and asynchronously snapshotted to the CPU server. All parameters within a clean snapshot maintain consistency. During parameter protection, REFT-Sn transfers data from the GPU to the CPU shared memory before shifting it to the data structure on the dirty snapshot tensor by tensor. Following this, the dirty snapshot is re-designated as a clean snapshot, and redundant parities are computed on it. Please note that these nodes maintain a connection with the cloud storage.
for data structures of parameter shards. Training processes receive all-reduced parameters from SMPs in the same \(SG\), or from the latest checkpoint. Once it receives the SNAP signal for snapshotting to begin, SMP begins to receive parameters asynchronously. The shards will be flagged CLEAN when all tensors are updated and copied from the shared memory. (2) During training, REFT periodically snapshots parameters to the CPU shared memory, in small buckets, in accordance with the assigned sharding strategy. The snapshotting process is executed asynchronously with the training process in a blocking style for each iteration. (3) On failure, SMPs receive an UNHEALTHY (i.e. software failure) or OFFLINE (node failure) signal from the affected node. It synchronizes the node status and decides whether parameters require recovery, don't, or are considered unrecoverable. SMPs in a faulty \(SG\) will conduct the RAIM5 decoding if necessary.
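A schematic of the SMP control loop reacting to these signals might look like the following; the queue-based transport, message fields, and return values are assumptions made for illustration.

```python
from queue import Queue

def smp_loop(status_queue: Queue):
    """Toy snapshot-management loop driven by rendezvous status messages.
    It keeps one clean (consistent) and one dirty (in-progress) snapshot."""
    clean, dirty = None, {}
    while True:
        msg = status_queue.get()
        if msg["status"] == "HEALTHY":
            dirty = {}                            # set up buffers for a new run
        elif msg["status"] == "SNAP":
            dirty[msg["name"]] = msg["tensor"]    # shards arrive asynchronously
            if msg.get("last_shard"):
                clean, dirty = dirty, {}          # promote only when complete
        elif msg["status"] == "UNHEALTHY":
            return ("RESUME_FROM_SMP", clean)     # software failure on a node
        elif msg["status"] == "OFFLINE":
            return ("DECODE_WITH_RAIM5", clean)   # node failure: join RAIM5 decoding
```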
### Erasure Coding (EC) Protection
Storing parameters on CPU memory can also be susceptible to hardware failures that corrupt the stored data. So, we propose to leverage erasure coding (EC) to enhance the reliability of REFT.
Redundant Array of Independent Disk 5 (RAID5) [20] offers a reliable EC method that promotes fault tolerance for data on disks by capitalizing on resource redundancy. Disk array structures amalgamate into a virtual logical disk, with the stored data dispersed across various hard drives. Each hard disk allocates storage space for computation and fault tolerance. When a hard disk in RAID5 malfunctions or becomes unreadable, the remaining disks are recalculated to retrieve the data on the defective disk. Taking inspiration from RAID5, we design a Redundant Array of Independent Memory 5 (RAIM5) to leverage the redundancy of CPU memory for data protection. Figure 7 illustrates an example of RAIM5 in a system with four nodes. RAIM5 encompasses two stages: encoding and decoding.
**RAIM5 Encoding:** As shown in Figure 7, parameters are sharded per PP stage into distinct groups on each node. To prevent inter-node communication from interrupting the PP communications, we snapshot \(a_{0}\), \(a_{1}\), \(a_{2}\) on separate GPUs and \(b_{2}\), \(c_{1}\) and \(d_{0}\) on a distinct GPU. After copying them to shared memory, REFT computes the parity \(p_{a}\) of \(a_{0}\), \(a_{1}\), \(a_{2}\) by encoding the parity unit with XOR calculations as \(p_{a}=a_{0}\oplus a_{1}\oplus a_{2}\) on CPU. Subsequently, REFT releases the memory occupied by \(a_{0}\), \(a_{1}\), \(a_{2}\); only \(b_{2}\), \(c_{1}\), \(d_{0}\), and \(p_{a}\) are persisted on the CPU memory of \(node_{0}\). The same RAIM5 process is executed for other nodes. Notably, RAIM5 necessitates redundant parameter snapshotting that doubles the snapshotting parameter size, as is shown in Figure 4.
RAIM5 collaborates with REFT-Sn with two techniques: 1) It minimizes interference of training by conducting RAIM5 Encoding on the same node, thereby minimizing the snapshotting overhead. 2) The sharding of models among the virtual logical nodes follows a heuristic approach. For instance, when there are merely three nodes in a Data Parallelism (DP) configuration, parameters will be sharded unevenly while
Figure 7: An example of RAIM5 on a system with four nodes. Each node contains 4 GPUs and a CPU with sufficient memory.
ensuring the encoding of parities.
**RAIM5 Decoding:** Assume \(node_{0}\) is offline. The system retrieves \(b_{2}\) using the subtraction decoder: \(b_{2}=p_{b}\oplus b_{0}\oplus b_{1}\). The same procedure applies for \(c_{1}\) and \(d_{0}\) in the event of other nodes failing. As common failures (e.g., CUDA faults, PyTorch data loader faults, etc.) do not result in a machine shutdown, REFT-Sn will remain effective for the majority of the time. When a node in an SG experiences offline failure, REFT can restore the parameters on the node with XOR calculations. In case multiple nodes within an \(SG\) go offline, we need to employ REFT-Ckpt to retrieve the parameters.
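The XOR parity arithmetic behind RAIM5 encoding and decoding can be demonstrated on raw byte arrays as below; viewing shards as equally sized uint8 arrays is a simplifying assumption.

```python
import numpy as np

def xor_parity(chunks):
    """Encode a RAID5/RAIM5-style parity block: p = c0 ^ c1 ^ ... ^ ck."""
    parity = np.zeros_like(chunks[0])
    for c in chunks:
        parity ^= c
    return parity

def recover_missing(parity, surviving_chunks):
    """Decode the single missing chunk from the parity and the survivors."""
    return xor_parity([parity] + list(surviving_chunks))

# Example: three equally sized parameter shards viewed as uint8 byte arrays.
rng = np.random.default_rng(0)
a0, a1, a2 = (rng.integers(0, 256, size=64, dtype=np.uint8) for _ in range(3))
p_a = xor_parity([a0, a1, a2])                              # parity stored on another node
assert np.array_equal(recover_missing(p_a, [a1, a2]), a0)   # a0's node went offline
```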
RAIM5 offers protection as long as no more than one node failure occurs within each \(SG\). In our design, REFT greatly improves the snapshotting frequency, thereby reducing the loss of GPU work when restarting from a stale checkpoint. In the next sub-section, we will utilize this conclusion to schedule the frequencies of snapshotting and checkpointing.
### Implementation
We have implemented REFT on PyTorch v1.10.1. REFT is a pluggable framework designed for LLM pretraining on PyTorch. Communication between training processes and the SMP is accomplished through the Python multiprocessing library. REFT employs coroutines for the asynchronous snapshotting of parameters. As previously discussed, the majority of REFT's communications are device-to-host in shards, resulting in small extra communication interference for 3D parallelism. Regarding snapshot protection, RAIM5 is deployed byte-wise on the CPU.
## 5 System Reliability Analysis
**Assumption 1**.: _In a multi-node GPU system, failure probabilities of all nodes are independently distributed. The Time-to-Failure (TTF) conforms to a Weibull distribution as is in various muliti-node GPU system failure modeling [33][42]._
Time-to-failure (TTF) refers to the duration of time that a system or component operates reliably before experiencing a failure. We assign failure rates as \(\lambda_{fail}\). Given that the system has been in pretraining for time \(t\) and Assumption 1, the cumulative probability of survival can be represented as.
\[P=e^{-\lambda_{fail}t^{c}}, \tag{1}\]
Figure 8: We compare REFT’s parameter survival probability with checkpointing on a 3072-GPU system with 6 DP paths, similar to Megatron. With a hardware failure rate of 0.0001, a software failure rate of 0.00001, and varying shape parameters (c = 1.0, 1.3, 1.5, 2.0, sampled from [41]). REFT significantly boosts survival probability. For example, given a survival threshold of 0.9 and a parameter c = 1.3, REFT necessitates a checkpoint only once every 16.22 days thanks to the erasure coding that improves the survival rate in Equation 3. In contrast, checkpoint-based methods call for one every 0.5 days. This implies that with REFT, parameters can persist safely in the volatile CPU memory for 16.22 days, whereas without REFT, they last just around 0.5 days before becoming unsafe. Note that the numbers in this figure are based on assumptions of the failure rate. The actual failure rate could be larger and require more frequent checkpointing, based on the observation in [8].
where \(c\) is the shape parameter of the Weibull distribution. And we have \(P_{k}=e^{-k\lambda_{fail}t^{c}}\). Suppose there are \(k/n\) SGs in the \(k\) node system. As is shown in Figure 4, the frequency of snapshotting in REFT is allowed to be much higher than checkpointing. We refer to the newly snapshotted parameters as the current parameters. The surviving probability of REFT \(P_{re-survive}\) (_re_ stands for REFT.) describes that REFT successfully protects current parameters from vanishing under the condition that all SMPs are healthy and at most one node fails in an \(SG\) of \(n\) nodes. \(P_{re-survive}\) is independent from training processes. In our design, REFT can protect the training from software failures. Then, the overall probability of the parameters surviving system failure with REFT can be expressed as:
\[P_{re-survive}=(P_{s}^{n}+n(1-P_{s})P_{s}^{(n-1)})^{\frac{k}{n}}P_{re}^{k} \tag{2}\]
Here, \(P_{s}\) is the cumulative probability that a single node survives hardware failures, and \(P_{re}\) is the probability that a single-node SMP program survives. \(1-P_{s}\) refers to the probability that a node has failed by time \(t\). The survival probability of REFT itself, \(P_{re}^{k}\), can be regarded as 1 compared with the failure rate of training nodes.
The probability of a pretraining surviving without REFT but with any checkpoint-based fault tolerance is based on all nodes being healthy:
\[P_{ck-survive}=P_{s}^{k}P_{tr}^{k} \tag{3}\]
where \(P_{tr}\) represents the cumulative probability of a single-node survival from software failures. The \(P_{ck-survive}\) and \(P_{re-survive}\) over time \(t\) is depicted in Figure 8. We can safely conclude that REFT significantly enhances the survival probability by safeguarding parameters in SMPs.
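The comparison in Figure 8 can be reproduced with a few lines implementing Equations 1-3; the failure rates, shape parameter, and node counts below are illustrative assumptions, not measured values.

```python
import numpy as np

def p_node_survive(t, lam, c):
    """Eq. 1: Weibull-style cumulative survival probability of one node."""
    return np.exp(-lam * t**c)

def p_ckpt_survive(t, k, lam_hw, lam_sw, c):
    """Eq. 3: checkpoint-only training survives iff all k nodes avoid
    both hardware and software failures."""
    return p_node_survive(t, lam_hw, c)**k * p_node_survive(t, lam_sw, c)**k

def p_reft_survive(t, k, n, lam_hw, c):
    """Eq. 2 with P_re ~ 1: each of the k/n SGs tolerates at most one node failure."""
    ps = p_node_survive(t, lam_hw, c)
    per_sg = ps**n + n * (1.0 - ps) * ps**(n - 1)
    return per_sg**(k / n)

t = np.linspace(0.0, 20.0, 201)    # time in days
k, n, c = 384, 6, 1.3              # illustrative node count, DP paths per SG, shape parameter
reft = p_reft_survive(t, k, n, lam_hw=1e-4, c=c)
ckpt = p_ckpt_survive(t, k, lam_hw=1e-4, lam_sw=1e-5, c=c)
print(float(reft[-1]), float(ckpt[-1]))   # REFT keeps a far higher survival probability
```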
## 6 Evaluations
### Evaluation Setups
**Testbed** We evaluate REFT on a six-node GPU server, each node with four 32GB V100 GPUs. Despite the limited computing resources available (24 V100 GPUs in total), we managed to evaluate the efficiency and reliability of REFT on this server with careful experimental settings. The specific hardware configurations are detailed in Table 1. These nodes are connected to a unified cloud storage system, each with a network bandwidth of 10 Gbps.
**Baselines** For data parallel pretraining experiments, we contrast REFT-Ckpt and REFT-Sn with the following fully asynchronous checkpointing baselines:
* CheckFreq (Fully Asynchronous Checkpointing): This methodology conducts device-to-host copy and storage I/O asynchronously, independent of GPU training [15]. However, as shown in Figure 4, this method could also induce significant checkpointing overhead. It works in both DP and PP pretraining experiments.
* TorchSnapshot (Sharded Asynchronous Checkpointing): An embodiment of the state-of-the-art lossless asynchronous checkpointing techniques, this method shards data along DP paths [16]. It showcases fully parallel I/O performance during the pretraining of smaller LLMs using DP. Specifically designed for checkpointing within DP, it lacks support for 3D parallelism. In this research, REFT-Ckpt can be perceived as the pioneering extension of sharded asynchronous checkpointing from DP to 3D parallelism.
| Server | CPU | PCIe Bwd | CPU Mem | #GPUs × #nodes |
|---|---|---|---|---|
| V100 | Intel(R) Xeon(R) Silver 4114 @2.20GHz | 15.7 GB/s | 512 GB | 4 × 6 |

Table 1: Hardware Specifications
**Models and Datasets** We evaluate the performance of REFT by pretraining OPT [8] models across various scales, including standard configurations of OPT-125M, OPT-350M, OPT-1.3B, and OPT-2.7B. These configurations provide a broad spectrum, facilitating the evaluation of workloads with parameter sizes ranging from small to large. The data corpus employed for these models is the Wikipedia dataset [43].
For weak scaling experiments, we pretrain OPT-125M and OPT-350M using 1, 4, 12, and 24 DP paths. For strong scaling experiments, given hardware constraints, we pretrain OPT-1.3B and OPT-2.7B under four PP configurations: (1) 1 PP paths, 1 DP paths, and 4 TP paths; (2) 2 PP paths, 1 DP paths, and 4 TP paths; (3) 4 PP paths, 1 DP paths, and 4 TP paths and (4) 6 PP paths, 1 DP paths, and 4 TP paths. We use PP-1, PP-2, PP-4 and PP-6 to represent the configurations.
**Evaluation Metrics** We assess REFT using three average metrics: (1) Saving Speed (measured in GB/second), including the speed of individual steps and the comprehensive saving process; (2) Overheads (measured in seconds); (3) Saving Intervals, indicative of the permissible saving frequency under overall overhead limitations. Each metric is measured distinctly.
Given that REFT is designed for lossless fault tolerance of synchronous parallel training, this paper does not include model convergence results. For all OPT models, we use the Adam [44] optimization algorithm, whose additional optimizer states roughly triple the amount of parameter state to save. In each experiment, we choose a batch size to maximize GPU memory use in the training system. During testing, each experiment is executed for 1,000 iterations, excluding a preliminary 100-iteration warm-up. Specifically, when assessing the restarting and recomputation overheads in weak scaling, we deliberately disrupt the machine between two consecutive snapshots/checkpoints ten times to compute the average overheads. The experiments on strong scaling do not include RAIM5 due to GPU resource limitations. We provide a comprehensive evaluation in the next subsection.
### Preliminary Results
In this section, we show the evaluation results of REFT, separating the benefits of REFT-Sn and REFT-Ckpt.
**Micro-benchmarks**
Figure 9 delineates the advantages of REFT over CheckFreq and TorchSnapshot on a single node in DP. In our benchmarks involving four GPUs snapshotting synthetic parameters totaling 20GB, REFT-Sn, REFT-Ckpt, and TorchSnapshot demonstrate a snapshotting speed from device to host (_d2h_) that is more than \(3\times\) swifter than CheckFreq. The snapshotting speed of REFT is slower than that of TorchSnapshot because REFT deliberately minimizes its bandwidth utilization. In terms of the overall performance (_perf_), REFT-Sn outperforms TorchSnapshot and REFT-Ckpt by a significant margin. This is because parameters snapshotted from GPU to CPU undergo a delay while awaiting transfer into the CPU-shared memory or the storage. The _IO_ speeds including serialization and cloud storage I/O in TorchSnapshot and REFT-Ckpt lag behind the shared-memory communication (_sha-mem comm_) in REFT-Sn.
Figure 9: Micro-benchmark of REFT on a single node.
**Saving Speed and Overhead** In this study, we evaluate the saving efficiency through weak scaling (DP) and strong scaling (TP and PP) of OPT models in pretraining deployments.
_a. Weak Scaling:_ As Figure 1 illustrates, REFT-Sn demonstrates considerably superior scaling efficiency with an increasing number of DP paths, as compared to the CheckFreq and TorchSnapshot methods. Specifically, it delivers an \(18.74\times\) scaling efficiency when scaling from one DP path (DP-1) to 24 DP paths (DP-24) on OPT-350M and achieves a remarkable saving speed that is \(14.11\times\) of TorchSnapshot and \(106.02\times\) of CheckFreq. The lower saving speed of REFT-Ckpt relative to TorchSnapshot arises because REFT-Ckpt is deliberately designed to copy tensors in smaller buckets to avoid communication competition with training.
_b. Strong Scaling:_
**Restarting and Recomputation Overhead** During the DP-6 weak scaling experiment, we simulate single-node failures by killing an SMP and halting training on one node. The training process on the affected node restarts elastically, following the methods outlined in [37]. SMPs on the remaining nodes fail to connect to the downed SMP, triggering the restoration of parameters and saving to a checkpoint. The training process then reloads the checkpoint and resumes. Compared to loading from a checkpoint, REFT takes approximately \(3.21\times\) (over 100s in total) longer to restore in terms of parameter loading. This time investment is relatively modest when considering the amount of GPU computation time saved. In this case, while REFT takes 58 seconds to load parameters, it offsets over 10 minutes of recomputation from the previous checkpoint. This is especially evident when the snapshotting frequency is set high.
Figure 10: The saving speed of fault tolerance methods in strong scaling.

Figure 11: The saving overhead of fault tolerance methods in strong scaling.

**Discussions** The advantage of REFT's snapshotting over I/O-based checkpointing methods is comprehensive, not limited to saving performance: First, as demonstrated both theoretically and practically, REFT ensures the safety of parameters in CPU volatile memory under multiple node failure conditions. Additionally, there is a checkpointing procedure, REFT-Ckpt, to further protect the parameters against extensive failures, which does not impede training processes.
REFT exhibits strong performance in efficiency and reliability. With swift snapshotting, it can achieve optimal saving intervals and minimal fault-tolerance overhead. It is worth mentioning that the reduced recomputation overhead on restarting is as important as the saving efficiency, a point neglected by some prior work. Still, there are some limitations to the work.
**Memory Usage** REFT utilizes at most \(3\times\) the storage of the optimizer and model parameters in the CPU memory, allocated for 1) the snapshotting buffer, and 2) SMP protection (comprising both clean and dirty snapshots). As an illustration, in our testbed with OPT-2.7B during 6-way data parallelism (DP) pretraining, the peak CPU memory usage is only 20.45GB, encompassing the data loader cache. The incorporation of parameter sharding effectively mitigates the CPU memory footprint on individual nodes.
**Limitations** Due to restricted GPU resources, we were unable to perform large-scale pretraining. Despite this, we have managed to demonstrate the efficiency and elasticity of REFT with limited GPU resources. Also, the interference of asynchronous fault tolerance with training could be further mitigated on NVIDIA DGX servers with NVLink. Since 3D parallel training requires intensive intra-node and inter-node GPU communications, this interference could be significantly reduced on training systems that leverage optimized GPU-to-GPU communication topologies.
## 7 Related Work
### Synchronous and Asynchronous Pipeline Parallelism
During pretraining, pipeline parallelism may update parameters either synchronously or asynchronously. While asynchronous pipeline parallelism diminishes the bubble size in the pipeline, it does so at the expense of accuracy. For instance, PipeDream [30] updates parameters using gradients from various iterations within the pipeline. In contrast, synchronous pipeline parallelisms, as seen in Megatron [5] and OPT [8], perform synchronous parameter updates across DP paths. REFT facilitates consistent parameter snapshotting in synchronous pipeline parallelism during both forward and backward passes within the same iteration.
### Pretraining on Heterogeneous Accelerators
Frameworks such as DeepSpeed [11] offload calculations and parameters to the CPU, alleviating the load on the GPU and its memory. Its memory optimization technology, ZeRO (Zero Redundancy Optimizer), greatly expands large-model training capability by increasing scale, improving speed, controlling costs, and improving availability. However, such heterogeneous optimizations cater to pretraining deployments with constrained GPU resources, particularly when users are less concerned about training duration or carbon footprint. Conversely, REFT is tailored for GPU-based LLM pretraining, optimized for scenarios with ample GPU nodes, ensuring optimal training speeds for extensive model sizes.
### Directions of Fault Tolerance
There are many works using hierarchical or asynchronous methods to accelerate checkpointing [35, 36, 19].
Previous work on recommendation model training [45] explores the possibility of snapshot-based fault tolerance with promising results. Researchers are also working on optimizing distributed checkpointing performance [44, 38, 46, 47, 17]. One orthogonal direction to REFT is lossy checkpointing [48, 13]. [48] formulates faults with a concept of perturbation; the partial recovery is based on the fact that a part of the parameters is located in one server. Prioritized checkpoints save the parameters that have changed the most since they were previously saved.
## 8 Conclusion and Future Work
Training Large Language Models (LLMs) is a resource-intensive task on GPU clusters, necessitating substantial computational and storage resources. However, the probability of failure increases with the size of the cluster,
introducing progressive overheads for large-scale training.
This paper introduces REFT, the pioneering fault-tolerance framework that utilizes volatile CPU memory and parallel communication to facilitate efficient fault tolerance of LLM pretraining. Thorough evaluation reveals that REFT greatly reduces parameter saving overheads. On failures, it preferentially rebuilds parameters from redundant parities on healthy nodes, thereby minimizing GPU work loss.
Future work will involve scaling REFT to more advanced and larger clusters. Additionally, handling Byzantine faults during LLM training presents an interesting challenge [8]. As the rapid development of LLMs continues, we anticipate additional challenges in optimizing their reliability and efficiency. Our hope is that REFT will inspire more researchers to contribute to the development of reliable and sustainable systems for LLMs.
2303.15027 | A Survey on Causal Discovery Methods for I.I.D. and Time Series Data | The ability to understand causality from data is one of the major milestones
of human-level intelligence. Causal Discovery (CD) algorithms can identify the
cause-effect relationships among the variables of a system from related
observational data with certain assumptions. Over the years, several methods
have been developed primarily based on the statistical properties of data to
uncover the underlying causal mechanism. In this study, we present an extensive
discussion on the methods designed to perform causal discovery from both
independent and identically distributed (I.I.D.) data and time series data. For
this purpose, we first introduce the common terminologies used in causal
discovery literature and then provide a comprehensive discussion of the
algorithms designed to identify causal relations in different settings. We
further discuss some of the benchmark datasets available for evaluating the
algorithmic performance, off-the-shelf tools or software packages to perform
causal discovery readily, and the common metrics used to evaluate these
methods. We also evaluate some widely used causal discovery algorithms on
multiple benchmark datasets and compare their performances. Finally, we
conclude by discussing the research challenges and the applications of causal
discovery algorithms in multiple areas of interest. | Uzma Hasan, Emam Hossain, Md Osman Gani | 2023-03-27T09:21:41Z | http://arxiv.org/abs/2303.15027v4 | # A Survey on Causal Discovery Methods for Temporal and Non-Temporal Data
###### Abstract
Causal Discovery (CD) is the process of identifying the cause-effect relationships among the variables of a system from data. Over the years, several methods have been developed primarily based on the statistical properties of data to uncover the underlying causal mechanism. In this study, we present an extensive discussion on the methods designed to perform causal discovery from both independent and identically distributed (i.i.d.) data and time series data. For this purpose, we first introduce the common terminologies in causal discovery, and then provide a comprehensive discussion of the algorithms designed to identify the causal edges in different settings. We further discuss some of the benchmark datasets available for evaluating the performance of the causal discovery methods, available tools or software packages to perform causal discovery readily, and the common metrics used to evaluate these methods. We also test some common causal discovery algorithms on different benchmark datasets, and compare their performances. Finally, we conclude by presenting the common challenges involved in causal discovery, and also, discuss the applications of causal discovery in multiple areas of interest.
## 1 Introduction
The identification of the cause-effect relationships among the variables of a system from the corresponding data is called Causal Discovery (CD). A major part of causal analysis involves unfolding the _cause and effect relationships_ among the entities in complex systems that can help us build better solutions in health care, earth science, politics, business, education, and many other diverse areas (Peyrot (1996), Nogueira et al. (2021)). The _causal explanations_, precisely the causal factors obtained from a causal analysis, play an important role in decision-making and policy formulation, as well as in foreseeing the consequences of interventions without actually performing them. Causal discovery algorithms enable the _discovery of the underlying causal structure_ given a set of observations. The underlying causal structure, also known as a causal graph (CG), is a representation of the cause-effect relationships between the variables in the data (Pearl (2009)). Causal graphs represent the causal relationships with directed arrows from the cause to the effect. Discovering the
causal relations, and thereby, the estimation of their effects would enable us to understand the underlying _data generating mechanism_ (DGM) better, and take necessary interventional actions. However, traditional Artificial Intelligence (AI) applications rely solely on predictive models, and often ignore causal knowledge. Systems without the knowledge of causal relationships often cannot make rational and informed decisions (Marwala (2015)). The result may be devastating when correlations are mistaken for causation, because two variables can be highly correlated and yet not have any causal influence on each other. There may be a third variable, often called a latent confounder or hidden factor, that may be causing both of them (see Figure 2 (a)). Thus, _embedding the knowledge of causal relationships_ in black-box AI systems is important to improve their explainability and reliability (Dubois & Prade (2020), Ganguly et al. (2023)). In multiple fields such as healthcare, politics, economics, climate science, business, and education, the ability to understand causal relations can facilitate the formulation of better policies with a greater understanding of the data.
The gold standard to discover the cause-effect relationships is to perform randomized control trials (RCTs) (Hariton & Locascio (2018)). However, RCTs are often infeasible to conduct due to high costs and some ethical reasons (Chen et al. (2021b), Hasan & Gani (2022)). As a result, over the last few decades, researchers have developed a variety of methods to unravel causal relations from purely observational data (Glymour et al. (2019), Vowels et al. (2021)). These methods are often based on some assumptions about the data and the underlying mechanism. The _outcome_ of any causal discovery method is a causal graph or a causal adjacency matrix where the cause and effect relations among the entities or variables are represented. The structure of a causal graph is often similar to a _directed acyclic graph (DAG)_ where directed edges from one variable to another represent the cause-effect relationship between them. Figure 2 (b) represents a causal graph showing the factors that are responsible for causing Cancer. This type of structural representation of the underlying data generating mechanism is beneficial for understanding how the system entities interact with each other.
There exist different approaches for performing causal discovery from data under different settings or assumptions. Some approaches are designed particularly for _independent and identically distributed (i.i.d) data_ (Spirtes et al. (2000b), Chickering (2002)) i.e. non-temporal data while others are focussed on _time series data_ (Runge et al. (2019), Hyvarinen et al. (2010)) or temporal data. There are also approaches that consider _prior knowledge incorporation_ for recovering the causal relationships (Mooij et al. (2020), Hasan & Gani (2022)). Although there exist some surveys (see Table 1) on causal discovery approaches (Heinze-Deml et al. (2018), Glymour et al. (2019), Guo et al. (2020), Vowels et al. (2021), Assaad et al. (2022b)), none of these present a comprehensive review of the different approaches designed for structure recovery from both
Figure 1: Causal Discovery: Identification of a causal graph from data.
Figure 2: (a) Latent confounder (\(L\)) influences both Smoking (\(S\)) and Cancer (\(C\)), and (b) a CG depicting the causes and effects of cancer (Korb & Nicholson (2010)).
i.i.d and time series data. Also, these surveys do not discuss the approaches that perform causal discovery in the presence of background knowledge. Hence, the goal of this survey is to provide an overview of the wide range of existing approaches for performing causal discovery under different settings. We discuss prominent methods based on the different approaches such as conditional independence testing, score functions usage, functional causal models, continuous optimization strategy, prior knowledge infusion, and miscellaneous ones also. These methods primarily differ from each other based on the core approach they follow. Apart from introducing the different causal discovery approaches and algorithms for i.i.d. and time series data, we also discuss the different tools, metrics and benchmark datasets used for performing CD, and the challenges and applications of CD in a wide range of areas.
To summarize, the structure of this paper is as follows: _First_, we provide a brief introduction to the common terminologies in the field of causal discovery (section 2). _Second_, we discuss the wide range of causal discovery approaches that exist for both i.i.d (section 3) and time-series data (section 4). _Third_, we briefly overview the common evaluation metrics (section 5) and datasets (section 6) used for evaluating the causal discovery approaches. _Fourth_, we list the different technologies and open-source software (section 8) available for performing causal discovery. _Fifth_, we discuss the challenges (section 9.1) and applications (section 9.2) of causal discovery in multiple areas such as healthcare, business, social science, economics, and so on. _Lastly_, we conclude by discussing the scopes of improvement in future causal discovery research, and the importance of causality in improving the existing predictive AI systems which can thereby impact informed and reliable decision making in different areas of interest (section 10).
## 2 Preliminaries of Causal Discovery
In this section, we briefly discuss the important terminologies and concepts that are widely used in causal discovery. Some common notations used to explain the terminologies are presented in Table 2.
### Graphical Models
A graph _G = (V, E)_ consists of a set of vertices (nodes) \(V\) and a set of edges \(E\) where the edges represent the relationships among the vertices. Figure 3 (a) represents a graph \(G\) with vertices \(V=[X,Y,Z]\) and edges \(E=[(X,Y),(X,Z),(Z,Y)]\). There can be different types of edges in a graph such as directed edges (\(\rightarrow\)), undirected edges (-), bi-directed edges (\(\leftrightarrow\)), etc. (Colombo et al. (2012)). A graph that consists of only undirected edges (-) between the nodes which represent their adjacencies is called a _skeleton graph_
| Survey | Focused Approaches | I.I.D. Data | Time Series Data |
|---|---|---|---|
| Heinze-Deml et al. (2018) | Constraint, Score, Hybrid & FCM-based approaches. | ✓ | ✗ |
| Glymour et al. (2019) | Traditional Constraint-based, Score-based, & FCM-based approaches. | ✓ | ✗ |
| Guo et al. (2020) | Constraint-based, Score-based, & FCM-based approaches. | ✓ | ✗ |
| Vowels et al. (2021) | Continuous Optimization-based. | ✓ | ✗ |
| Assaad et al. (2022b) | Constraint-based, Score-based, FCM-based, etc. approaches for time series data. | ✗ | ✓ |
| This study | Constraint-based, Score-based, FCM-based, Hybrid-based, Continuous-Optimization-based, Prior-Knowledge-based, and Miscellaneous. | ✓ | ✓ |

Table 1: Comparison among the existing surveys for causal discovery approaches. A discussion on the different approaches can be found in section 3.
\(S_{g}\). This type of graph is also known as an _undirected graph_ (Figure 3 (b)). A graph that has a mixture of different types of edges is known as a _mixed graph_\(M_{g}\) (Figure 3 (c)). A _path_\(p\) between two nodes \(X\) and \(Y\) is a sequence of edges beginning from \(X\) and ending at \(Y\). A _cycle_\(c\) is a path that begins and ends at the same vertex. A graph with no cycle \(c\) is called an _acyclic graph_. And, a graph in which the edges \(E\) are directed (\(\rightarrow\)), and no cycle among the edges is allowed is a _directed acyclic graph_ (DAG). In a DAG \(G\), a directed path from \(X\) to \(Y\) implies that \(X\) is an ancestor of \(Y\), and \(Y\) is a descendant of \(X\). The graph \(G\) in Figure 3 (a) is a DAG as it is acyclic, and consists of directed edges.
There can be different kinds of DAGs based on the type of edges they contain. A class of DAG known as _partially directed acyclic graph_ (PDAG) contains both directed (\(\rightarrow\)) and undirected (-) edges. The mixed graph of Figure 3 (c) is also a PDAG. A _completed PDAG_ (CPDAG) consists of directed (\(\rightarrow\)) edges that exist in every DAG \(G\) having the same conditional dependencies, and undirected (-) edges that are reversible in \(G\). An extension of DAGs that retain many of the significant properties that are associated with DAGs is known as _ancestral graphs_ (AGs). Two different DAGs may lead to the same ancestral graph (Richardson & Spirtes (2002a)). Often there are hidden confounders and selection biases in real-world data. Ancestral graphs can represent the data-generating mechanisms that may involve latent confounders and/or selection bias, without explicitly modeling the unobserved variables. There exist different types of ancestral graphs. A _maximal ancestral graph_ (MAG) is a mixed graph that can have both directed (\(\rightarrow\)) and bidirectional (\(\leftrightarrow\)) edges (Richardson & Spirtes (2002b)). A _partial ancestral graph_ (PAG) can have four types of edges such as directed (\(\rightarrow\)), bi-directed (\(\leftrightarrow\)), partially directed (o\(\rightarrow\)), and undirected (\(-\)) (Spirtes (2001)). That is, edges in a PAG can have three kinds of endpoints: \(-\), o, or \(>\). An ancestral graph without bi-directed edges (\(\leftrightarrow\)) is a DAG (Triantafillou & Tsamardinos (2016)).
### Causal Graphical Models
A _causal graphical model_ (CGM) or _causal graph_ (CG) is a DAG \(G\) that represents a joint probability distribution \(P\) over a set of random variables \(X=(X_{1},X_{2},\ldots,X_{d})\) where \(P\) is Markovian with respect to \(G\). In a CGM, the nodes represent variables X, and the arrows represent causal relationships between them. The joint distribution \(P\) can be factorized as follows where \(pa(x_{i},G)\) denotes the parents of \(x_{i}\) in \(G\).
\begin{table}
\begin{tabular}{|c|c|} \hline
**Notation** & **Description** \\ \hline \(G\) & A graph, DAG, or ground-truth graph \\ \hline \(G^{\prime}\) & An estimated graph \\ \hline \(X,Y,Z,W\) & Observational variables \\ \hline \(X-Y\) & An unoriented or undirected edge between \(X\) and \(Y\) \\ \hline \(X\rightarrow Y\) & A directed edge from \(X\) to \(Y\) where \(X\) is the cause and \(Y\) is the effect \\ \hline \(X\not\rightarrow Y\) & Absence of an edge or causal link between \(X\) and \(Y\) \\ \hline \(X\rightarrow Z\leftarrow Y\) & V-structure or collider where \(Z\) is the common child of \(X\) and \(Y\) \\ \hline \(X\perp\!\!\!\perp Y\mid Z\) & Independence or d-separation of \(X\) and \(Y\) given \(Z\) \\ \hline \end{tabular}
\end{table}
Table 2: Common notations.
Figure 3: (a) A graph \(G\), (b) its _skeleton_ graph \(S_{g}\), and (c) a _mixed graph_\(M_{g}\) with directed and undirected edges.
\[P(x_{1},\ldots,x_{d})=\prod_{i=1}^{d}P(x_{i}\mid pa(x_{i},G)) \tag{1}\]
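For instance, for the DAG in Figure 3 (a), whose edges are \(X\to Y\), \(X\to Z\), and \(Z\to Y\), Equation 1 factorizes the joint distribution as
\[P(x,y,z)=P(x)\,P(z\mid x)\,P(y\mid x,z).\]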
Causal graphs are often used to study the underlying data-generating mechanism in real-world problems. For a dataset \(D=(D_{1},D_{2},...,D_{n})\) where each data point \(D_{i}\) is a vector of values over variables \(X\), causal graphs encode the cause-effect relationships among the variables using directed edges (\(\rightarrow\)) from cause to effect. Most of the time causal graphs take the form of a DAG. In Figure 3 (a), \(X\) is the cause that affects both \(Y\) and \(Z\) (i.e. \(Y\gets X\to Z\)). Also, \(Z\) is a cause of \(Y\) (i.e. \(Z\to Y\)). The mechanism that enables the estimation of a causal graph from a dataset \(D\) is called _causal discovery (CD)_ (Figure 1). The outcome of any causal discovery algorithm is a causal graph \(G\) where the directed edges (\(\rightarrow\)) represent the cause-and-effect relationships between the variables \(X\) in \(D\). However, some causal discovery approaches produce different forms of graphs (PDAGs, CPDAGs, ancestral graphs, etc.) as the output causal graph. Table 3 presents the output causal graphs of some common CD approaches which are discussed in section 3.
#### 2.2.1 Key Structures in Causal Graphs
There are three fundamental _building blocks_ (key structures) commonly observed in the graphical models or causal graphs, namely, _Chain, Fork_, and _Collider_. Any graphical model consisting of at least three variables is composed of these key structures. We discuss these basic building blocks and their implications in dependency relationships below.
**Definition 1** (Chain): _A chain \(X\to Y\to Z\) is a graphical structure or a configuration of three variables \(X\), \(Y\), and \(Z\) in graph \(G\) where \(X\) has a directed edge to \(Y\) and \(Y\) has a directed edge to \(Z\) (see Figure 4 (a)). Here, \(X\) causes \(Y\) and \(Y\) causes \(Z\), and \(Y\) is called a mediator._
**Definition 2** (Fork): _A fork \(Y\gets X\to Z\) is a triple of variables \(X\), \(Y\), and \(Z\) where one variable is the common parent of the other two variables. In Figure 4 (b), the triple (\(X\), \(Y\), \(Z\)) is a fork where \(X\) is a common parent of \(Y\) and \(Z\)._
**Definition 3** (Collider/V-structure): _A v-structure or collider \(X\to Z\gets Y\) is a triple of variables \(X\), \(Y\), and \(Z\) where one variable is a common child of the other two variables which are non-adjacent. In Figure 4 (c), the triple (\(X\), \(Y\), \(Z\)) is a v-structure where \(Z\) is a common child of \(X\) and \(Y\), but \(X\) and \(Y\) are non-adjacent in the graph. Figure 4 (d) is also a collider with a descendant \(W\)._
#### 2.2.2 Conditional Independence in Causal Graphs
Testing for _conditional independence_ (CI) between the variables is one of the most important techniques to find the causal relationships among the variables. Conditional independence between two variables \(X\) and \(Y\) results when they are independent of each other given a third variable \(Z\) (i.e. \(X\perp\!\!\!\perp Y\mid Z\)). In the case of
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Algorithms** & **DAG** & **PDAG** & **CPDAG** & **MAG** & **PAG** \\ \hline PC & & & ✓ & & \\ \hline FCI & & & & & ✓ \\ \hline RFCI & & & & & ✓ \\ \hline GES & & & ✓ & & \\ \hline GIES & & ✓ & & & \\ \hline MMHC & ✓ & & & & \\ \hline LiNGAM & ✓ & & & & \\ \hline NOTEARS & ✓ & & & & \\ \hline GSMAG & & & & ✓ & \\ \hline \end{tabular}
\end{table}
Table 3: List of some CD algorithms with their output causal graphs. A detailed discussion of the algorithms is in section 3. The cells with ✓ represent the type of graph produced by the corresponding algorithm.
causal discovery, CI testing allows deciding if any two variables are causally connected or disconnected. An important criterion for CI testing is the _d-separation_ criterion which is formally defined below.
**Definition 4** (d-separation): _(Pearl (1988)) A path \(p\) in \(G\) is blocked by a set of nodes \(N\) if either_
1. \(p\) _contains a chain of nodes_ \(X\to Y\to Z\) _or a fork_ \(X\gets Y\to Z\) _such that the middle node_ \(Y\) _is in_ \(N\)_,_
2. \(p\) _contains a collider_ \(X\to Y\gets Z\) _such that the collision node_ \(Y\) _is not in_ \(N\)_, and no descendant of_ \(Y\) _is in_ \(N\)_._
_If \(N\) blocks every path between two sets of nodes \(X\) and \(Y\), then \(X\) and \(Y\) are d-separated, conditional on N, and thus are independent conditional on N, written \(X\perp\!\!\!\perp Y\mid N\)._
Here, \(d\) stands for _directional_. The d-separation criterion provides a set of rules to check if two variables are independent when conditioned on a set of variables. The conditioning variable can be a single variable or a set of variables. For example, in Figure 4 (b), using the d-separation criterion it can be checked whether \(X\) and \(Y\) are d-separated (independent) or not by conditioning on \(Z\) (i.e if \(X\perp\!\!\!\perp Y\mid Z\)). However, two variables with a directed edge (\(\rightarrow\)) between them are always dependent. The set of testable implications provided by _d-separation_ can be benchmarked with the available data \(D\). If a graph \(G\) might have been generated from a dataset \(D\), then _d-separation_ tells us which variables in \(G\) must be independent conditional on other variables. If every _d-separation_ condition matches a conditional independence in data, then no further test can refute the model (Pearl (1988)). If there is at least one path between \(X\) and \(Y\) that is unblocked, then they are _d-connected_. If two variables are _d-connected_, then they are most likely dependent (except intransitive cases) (Pearl (1988)). The d-separation or conditional independence between the variables in the **key structures** (Figure 4) or building blocks of causal graphs follow some rules which are discussed below:
1. _Conditional Independence in Chains:_ If there is only one unidirectional path between variables \(X\) and \(Z\) (Figure 4 (a)), and \(Y\) is any variable or set of variables that intercept that path, then \(X\) and \(Z\) are conditionally independent given \(Y\), i.e. \(X\perp\!\!\!\perp Z\mid Y\).
2. _Conditional Independence in Forks:_ If a variable \(X\) is a common cause of variables \(Y\) and \(Z\), and there is only one path between \(Y\) and \(Z\), then \(Y\) and \(Z\) are independent conditional on \(X\) (i.e. \(Y\perp\!\!\!\perp Z\mid X\)) (Figure 4(b)).
3. _Conditional Independence in Colliders:_ If a variable \(Z\) is the collision node between two variables \(X\) and \(Y\) (Figure 4(c)), and there is only one path between \(X\) and \(Y\), then \(X\) and \(Y\) are unconditionally independent (i.e. \(X\perp\!\!\!\perp Y\)). But, they become dependent when conditioned on \(Z\) or any descendants of \(Z\) (Figure 4(d)).
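These d-separation rules can also be verified programmatically. The following minimal sketch uses networkx (assuming a version in which `d_separated` is available; newer releases rename it to `is_d_separator`) to check the chain, fork, and collider cases of Figure 4 on a toy DAG:

```python
import networkx as nx

# DAG combining the three building blocks:
# chain A -> B -> C, fork E <- D -> F, collider G -> I <- H
dag = nx.DiGraph([("A", "B"), ("B", "C"),
                  ("D", "E"), ("D", "F"),
                  ("G", "I"), ("H", "I")])

# Chain: A and C become independent once the mediator B is conditioned on.
print(nx.d_separated(dag, {"A"}, {"C"}, {"B"}))   # True
# Fork: E and F become independent given their common cause D.
print(nx.d_separated(dag, {"E"}, {"F"}, {"D"}))   # True
# Collider: G and H are marginally independent ...
print(nx.d_separated(dag, {"G"}, {"H"}, set()))   # True
# ... but become d-connected once the collider I is conditioned on.
print(nx.d_separated(dag, {"G"}, {"H"}, {"I"}))   # False
```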
#### 2.2.3 Markov Equivalence in Causal Graphs
A set of causal graphs having the same set of conditional independencies is known as a _Markov equivalence class_ (MEC). Two DAGs that are Markov equivalent have the _(i) same skeleton_ (the underlying undirected graph) and (ii) _same v-structures (colliders)_ (Verma & Pearl (2022)). That is, all DAGs in a MEC share the same edges, regardless of the direction of those edges, and the same colliders whose parents are not adjacent. _Chain_ and _Fork_ share the same independencies, hence, they belong to the same MEC (see Figure 5).

Figure 4: Building blocks in causal graphical models.
**Definition 5** (Markov Blanket): _For any variable X, its Markov blanket (MB) is the set of variables such that X is independent of all other variables given MB. The **members** in the Markov blanket of any variable will include all of its **parents, children, and spouses**._
Markov equivalence in different types of DAGs may vary. A _partial_ DAG (PDAG), a.k.a. an essential graph (Perkovic et al. (2017)), can represent an equivalence class of DAGs. Each equivalence class of DAGs can be uniquely represented by a PDAG. A _completed_ PDAG or CPDAG represents the union (over the set of edges) of Markov equivalent DAGs, and can uniquely represent an MEC (Malinsky & Spirtes (2016b)). More specifically, in a CPDAG, an undirected edge between any two nodes \(X\) and \(Y\) indicates that some DAG in the equivalence class contains the edge \(X{\rightarrow}Y\) and some DAG may contain \(Y{\rightarrow}X\). Figure 6 shows a CPDAG and the DAGs (\(G\) and \(H\)) belonging to an equivalence class.
Markov equivalence in the case of ancestral graphs works as follows. A _maximal ancestral graph_ (MAG) represents a DAG where all hidden variables are marginalized out and preserve all conditional independence relations among the variables which are true in the underlying DAG. That is, MAGs can model causality and conditional independencies in causally insufficient systems (Triantafillou & Tsamardinos (2016)). _Partial ancestral graphs_ (PAGs) represent an equivalence class of MAGs where all common edge marks shared by all members in the class are displayed, and also, circles for those marks that are uncommon are presented. PAGs represent all of the observed d-separation relations in a DAG. Different PAGs that represent distinct equivalence classes of MAGs involve different sets of conditional independence constraints. An MEC of MAGs can be represented by a PAG (Malinsky & Spirtes (2016b)).
Figure 5: Markov Equivalence in Chains and Fork.

Figure 6: DAGs \(G\) and \(H\) belong to the same MEC. The leftmost graph is a CPDAG of \(G\) and \(H\) with an undirected edge \((-)\) between \(X\) and \(Z\), and the rest of the edges same as in \(G\) and \(H\).

### Structural Causal Models

Pearl (2009) defined a _class of models_ for formalizing structural knowledge about the _data-generating process_ known as the _structural causal models (SCMs)_. The SCMs are valuable tools for reasoning and decision making in the causal analysis since they are capable of representing the underlying causal story of data (Kaddour et al. (2022)).
**Definition 6** (Structural Causal Model): _Pearl (2009); A structural causal model is a 4-tuple \(M=\langle U,V,F,P(u)\rangle\), where_
* \(U\) _is a set of background variables (also called exogenous) that are determined by factors outside the model._
* \(V\) _is a set_ \(\{V_{1},V_{2},\ldots,V_{n}\}\) _of endogenous variables that are determined by variables in the model, viz. variables in_ \(U\cup V\)_._
* \(F\) _is a set of functions_ \(\{f_{1},f_{2},\ldots,f_{n}\}\) _such that each_ \(f_{i}\) _is a mapping from the respective domains of_ \(U_{i}\cup PA_{i}\) _to_ \(V_{i}\) _and the entire set_ \(F\) _forms a mapping from_ \(U\) _to_ \(V\)_. In other words, each_ \(f_{i}\) _assigns a value to the corresponding_ \(V_{i}\in V\)_,_ \(v_{i}\gets f_{i}(pa_{i},u_{i}),\) _for_ \(i=1,2,\ldots n\)_._
* \(P(u)\) _is a probability function defined over the domain of_ \(U\)_._
Each SCM \(M\) is associated with a _causal graphical model_ \(G\) that is a DAG, and a set of functions \(f_{i}\). _Causation in SCMs_ can be interpreted as follows: a variable \(Y\) is directly caused by \(X\) if \(X\) appears in the function \(f\) that assigns \(Y\)'s value. In other words, each \(f_{i}\) assigns a value to the corresponding \(V_{i}\in V\), \(v_{i}\gets f_{i}(pa_{i},u_{i})\), for \(i=1,2,\ldots n\). In the SCM of Figure 7, \(X\) is a direct cause of \(Y\) as \(X\) appears in the function that assigns \(Y\)'s value. That is, if a variable \(Y\) is the child of another variable \(X\), then \(X\) is a direct cause of \(Y\). In Figure 7, \(U_{X}\), \(U_{Y}\) and \(U_{Z}\) are the exogenous variables; \(X\), \(Y\) and \(Z\) are the endogenous variables, and \(f_{Y}\) \(\&\) \(f_{Z}\) are the functions that assign values to the variables in the system. A variable is _exogenous_ if \((a)\) it is an unobserved or unmeasured variable and \((b)\) it is not a descendant of any other variable, whereas every _endogenous variable_ is a descendant of at least one exogenous variable.
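As an illustration, the following sketch simulates data from a toy SCM with the same shape as Figure 7: exogenous noises \(U_X\), \(U_Y\), \(U_Z\) are drawn independently, and the functions \(f_Z\) and \(f_Y\) assign values to the endogenous variables. The particular functional forms and noise distributions are chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Exogenous (background) variables: independent noise terms.
u_x = rng.normal(size=n)
u_z = rng.normal(size=n)
u_y = rng.normal(size=n)

# Endogenous variables determined by the structural functions.
x = u_x                      # X has no observed parents
z = 2.0 * x + u_z            # f_Z: Z := 2X + U_Z  (X is a direct cause of Z)
y = x - 0.5 * z + u_y        # f_Y: Y := X - 0.5Z + U_Y  (X and Z are direct causes of Y)

data = np.column_stack([x, y, z])   # an "observational" dataset generated by the SCM
print(data.shape)
```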
### Causal Assumptions
Often, the available data provide only partial information about the underlying causal story. Hence, it is essential to make some assumptions about the world for performing causal discovery (Lee and Honavar (2020)). Following are the common assumptions usually made by causal discovery algorithms.
1. _Causal Markov Condition (CMC):_ The causal Markov assumption states that a variable \(X\) is independent of every other variable (except its descendants) conditional on all of its direct causes (Scheines (1997)). That is, the CMC requires that every variable in the causal graph is independent of its non-descendants conditional on its parents (Malinsky and Spirtes (2016a)). In Figure 8, \(W\) is the only descendant of \(X\). As per the CMC, \(X\) is independent of \(Z\) conditioned on its parent \(Y\) (\(X\perp\!\!\!\perp Z\mid Y\)).
2. _Causal Faithfulness Condition (CFC):_ The faithfulness assumption states that except for the variables that are d-separated in a DAG, all other variables are dependent. More specifically, for a set of variables \(V\) whose causal structure is represented by a DAG \(G\), no conditional independence holds unless entailed by the causal Markov condition (Ramsey et al. (2012)). That is, the CFC, a.k.a. the Stability condition, states: for every three disjoint sets of variables \(X\), \(Y\), and \(Z\), if \(X\) and \(Y\) are not d-separated by \(Z\) in the causal DAG, then \(X\) and \(Y\) are not independent conditioned on \(Z\) (Ramsey et al. (2012)). The faithfulness assumption may fail in certain scenarios. For example, it fails whenever there exist two paths with equal and opposite effects between variables. It also fails in systems with deterministic relationships among variables, and also, when there is a failure of transitivity along a single path (Weinberger (2018)).

Figure 7: A Structural Causal Model (SCM) with causal graph \(G\) and functions \(f_{Y}\) \(\&\) \(f_{Z}\).
3. _Causal Sufficiency:_ The causal sufficiency assumption states that there exist no latent/hidden/unobserved confounders, and all the common causes are measured. Thus, the assumption of causal sufficiency is satisfied only when all the common causes of the measured variables are measured. This is a strong assumption as it restricts the search space of all possible DAGs that may be inferred. However, real-world datasets may have hidden confounders, which frequently causes the assumption to be violated in such scenarios. Algorithms that rely on the causal sufficiency assumption may see their performance degrade when it is violated. The causal insufficiency in real-world datasets may be overcome by leveraging domain knowledge in the discovery pipeline. The CMC tends to fail for a causally insufficient set of variables.
4. _Acyclicity:_ It is the most common assumption which states that _there are no cycles in a causal graph_. That is, a graph needs to be acyclic in order to be a causal graph. As per the acyclicity condition, there can be no directed path starting from a node and ending back at itself. This resembles the structure of a directed acyclic graph (DAG). A recent approach (Zheng et al. (2018)) has formulated a function (Equation 2) to enforce the acyclicity constraint during causal discovery. The weighted adjacency matrix \(W\) in Equation 2 corresponds to a DAG if it satisfies the following condition, where \(\circ\) is the Hadamard product, \(e^{W\circ W}\) is the matrix exponential of \(W\circ W\), and \(d\) is the total number of vertices. \[h(W)=\operatorname{tr}(e^{W\circ W})-d=0\] (2)
5. _Data Assumptions:_ There can be different types of assumptions about the data. Data may have linear or nonlinear dependencies, and can be continuous or discrete valued in nature. Data can be independent and identically distributed (i.i.d.), or the data distribution may shift with time (e.g. time-series data). Also, the noise may follow different distributions such as Gaussian, Gumbel, or Exponential. Furthermore, the existence of selection bias, missing variables, and hidden confounders are some common assumptions about data.
## 3 Causal Discovery Algorithms for I.I.D. Data
Causal graphs are essential as they represent the underlying causal story embedded in the data. There are two very common approaches to recovering the causal structure from observational data, _i) Constraint-based_ (Spirtes et al. (2000b), Spirtes (2001), Colombo et al. (2012)) and _ii) Score-based_ (Chickering (2002)). Among the other types of approaches, _functional causal model (FCM)-based_ (Shimizu et al. (2006), Hoyer et al. (2008)) approaches and _hybrid_ approaches (Tsamardinos et al. (2006)) are noteworthy. Recently, some _gradient-based_ approaches have been proposed that build on neural networks (Abiodun et al. (2018)) and a modified definition of the acyclicity constraint (Zheng et al. (2018), Yu et al. (2019), Lachapelle et al. (2019), etc.). Other approaches include the ones that prioritize the use of _background knowledge_ and provide ways to incorporate prior knowledge and experts' opinions into the search process (Wang et al. (2020); Sinha and Ramsey (2021)). In this section, we provide an overview of the causal discovery algorithms for i.i.d. data based on the different types of approaches mentioned above. The algorithms primarily differ from each other based on the core approach they follow to perform causal discovery. We further discuss noteworthy similar approaches specialized for non-i.i.d. or time series data in section 4.
Figure 8: Illustration of the causal Markov condition (CMC) among four variables.
### Constraint-based
Constraint-based approaches conduct _conditional independence (CI)_ tests between the variables to check for the presence or absence of edges. These approaches infer the conditional independencies within the data using the _d-separation criterion_ to search for a DAG that entails these independencies (Triantafillou & Tsamardinos (2016)). Let us consider the graph in Figure 10 (a). After testing the conditional independencies, it is found that \(X\perp\!\!\!\perp W\mid Z\) and \(Y\perp\!\!\!\perp W\mid Z\). Hence, the edges \(X\)-\(W\) and \(Y\)-\(W\) are removed (Figure 10 (b)). Such CI tests are commonly used by constraint-based approaches to detect which variables are d-separated and which are d-connected. Different types of CI tests used by constraint-based causal discovery approaches are listed in Table 4.
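As a concrete illustration of such a CI test, the sketch below implements a partial-correlation (Fisher-z) test, one of the standard tests used by constraint-based methods for linear-Gaussian data. The variable names and the simulated data are illustrative only.

```python
import numpy as np
from scipy import stats

def fisher_z_ci_test(data, i, j, cond=()):
    """p-value for X_i independent of X_j given the columns in `cond`
    (partial-correlation / Fisher-z test, suitable for linear-Gaussian data)."""
    idx = [i, j] + list(cond)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.inv(corr)                           # precision matrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    n = data.shape[0]
    z = 0.5 * np.log((1 + r) / (1 - r))                  # Fisher z-transform
    stat = np.sqrt(n - len(cond) - 3) * abs(z)
    return 2 * (1 - stats.norm.cdf(stat))

# Illustrative data generated from X -> Z -> Y, so X ⟂ Y | Z should hold approximately.
rng = np.random.default_rng(1)
X = rng.normal(size=2000)
Z = X + 0.5 * rng.normal(size=2000)
Y = Z + 0.5 * rng.normal(size=2000)
D = np.column_stack([X, Y, Z])

print(fisher_z_ci_test(D, 0, 1))        # small p-value: X and Y are dependent
print(fisher_z_ci_test(D, 0, 1, (2,)))  # large p-value: X ⟂ Y given Z, edge X-Y is removed
```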
#### 3.1.1 PC
The Peter-Clark (PC) algorithm (Spirtes et al. (2000b)) is one of the oldest constraint-based algorithms for causal discovery. To learn the underlying causal structure, this approach depends largely on conditional independence (CI) tests. This is because it is based on the concept that two statistically independent variables are not causally linked. The outcome of a PC algorithm is a CPDAG. It learns the CPDAG of the underlying DAG in three steps: _Step 1 - Skeleton identification, Step 2 - V-structures determination, and Step 3 - Propagation of edge orientations._ It starts with a fully connected undirected graph using every variable in the dataset, then eliminates unconditionally and conditionally independent edges (skeleton detection), finds and orients the v-structures or colliders (i.e. \(\mathrm{X}\rightarrow\mathrm{Y}\leftarrow\mathrm{Z}\)) based on the d-separation set of node pairs, and finally orients the remaining edges based on two aspects: i) availability of no new v-structures, and ii) not allowing any cycle formation. The assumptions made by the PC algorithm include acyclicity, causal faithfulness, and causal sufficiency. It is computationally more feasible for sparse graphs. An implementation of this algorithm can be found in the CDT repository ([https://github.com/ElementAI/causal_discovery_toolbox](https://github.com/ElementAI/causal_discovery_toolbox)) and also in the gCastle toolbox (Zhang et al. (2021a)). A number of the constraint-based approaches namely FCI, RFCI, PCMCI, PC-stable, etc. use the PC algorithm as a backbone to perform the CI tests.

Figure 9: Taxonomy of some causal discovery approaches for i.i.d. data.

Figure 10: (a) Graph before CI testing, and (b) After CI testing.
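A minimal sketch of running PC through the gCastle toolbox referenced above is shown here; it assumes gCastle's usual simulators and its `learn`/`causal_matrix` interface, which may differ slightly across versions.

```python
from castle.datasets import DAG, IIDSimulation
from castle.algorithms import PC
from castle.metrics import MetricsDAG

# Simulate a random ground-truth DAG and i.i.d. linear-Gaussian data from it.
weighted_dag = DAG.erdos_renyi(n_nodes=10, n_edges=15, weight_range=(0.5, 2.0), seed=42)
dataset = IIDSimulation(W=weighted_dag, n=2000, method='linear', sem_type='gauss')

# Run the PC algorithm and compare the recovered graph with the ground truth.
pc = PC()
pc.learn(dataset.X)
print(pc.causal_matrix)                                 # estimated adjacency matrix (CPDAG)
print(MetricsDAG(pc.causal_matrix, dataset.B).metrics)  # SHD, FDR, TPR, etc.
```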
#### 3.1.2 FCI
The Fast Causal Inference (FCI) algorithm (Spirtes et al. (2000a)) is a variant of the PC algorithm which can infer conditional independencies and learn causal relations in the presence of arbitrarily many latent and selection variables. As a result, it is accurate in the large sample limit with a high probability even in the presence of hidden variables and selection bias. The first step of the FCI algorithm is similar to the PC algorithm where it starts with a complete undirected graph to perform the skeleton determination. After that, it requires additional tests to learn the correct skeleton and has additional orientation rules. In the worst case, the number of conditional independence tests performed by the algorithm grows exponentially with the number of variables in the dataset. This can affect both the speed and the accuracy of the algorithm in the case of small data samples. To improve the algorithm, particularly in terms of speed, there exist different variants such as the RFCI (Colombo et al. (2012)) and the Anytime FCI (Spirtes (2001)) algorithms.
#### 3.1.3 Anytime FCI
Anytime FCI (Spirtes (2001)) is a modified version of the FCI algorithm which takes into consideration _selection bias_ (Berk (1983)) in data. The number of CI tests required by FCI makes it infeasible if the model has a large number of variables. Moreover, when the FCI requires independence tests conditional on a large set of variables, the accuracy decreases for a small sample size. The outer loop of the FCI algorithm performs independence tests conditional on the increasing size of variables. In the anytime FCI algorithm, the authors showed that this outer loop can be stopped anytime during the execution for any smaller variable
\begin{table}
\begin{tabular}{|c|l|c|} \hline & **Conditional Independence Test** & **Ref.** \\ \hline
1. & Conditional Distance Correlation (CDC) test & Wang et al. (2015) \\ \hline
2. & Momentary Conditional Independence (MCI) & Runge et al. (2019) \\ \hline
3. & Kernel-based CI test (KCIT) & Zhang et al. (2012) \\ \hline
4. & Randomized Conditional Correlation Test (RCoT) & Strobl et al. (2019) \\ \hline
5. & Generative Conditional Independence Test (GCIT) & Bellot \& van der Schaar (2019) \\ \hline
6. & Model-Powered CI test & Sen et al. (2017) \\ \hline
7. & Randomized Conditional Independence Test (RCIT) & Strobl et al. (2019) \\ \hline
8. & Kernel Conditional Independence Permutation Test & Doran et al. (2014) \\ \hline
9. & Gaussian Processes and Distance Correlation-based (GPDC) & Rasmussen et al. (2006) \\ & CI Test & \\ \hline
10. & Conditional mutual information estimated with a k-nearest & Runge (2018) \\ & neighbor estimator (CMIKnn) & \\ \hline \end{tabular}
\end{table}
Table 4: Types of conditional independence (CI) tests. Please refer to the study Runge (2018) for a detailed discussion on CI tests.
Figure 11: Step-by-step workflow of the PC (Spirtes et al. (2000b)) algorithm.
size. As the number of variables in the conditioning set decreases, anytime FCI becomes much faster for large sample sizes. More importantly, it is also more reliable on limited samples since the statistical tests with the lowest power are discarded. To support the claim, the authors provided proof that the change to FCI guarantees good results despite the interruption. The result of the interrupted anytime FCI algorithm is still valid, but since it may not answer as many questions, the results could be less informative than if the algorithm were allowed to run uninterrupted.
#### 3.1.4 RFCI
Really Fast Causal Inference (RFCI) (Colombo et al. (2012)) is a much faster variant of the traditional FCI for learning PAGs that uses fewer CI tests than FCI. Like FCI, RFCI does not require causal sufficiency and allows for latent and selection variables. To ensure soundness, RFCI performs some additional tests before orienting v-structures and discriminating paths. It conditions only on subsets of the adjacency sets and, unlike FCI, avoids the CI tests given subsets of possible d-separation sets which can become very large even for sparse graphs. As a result, the number of these additional tests and the size of their conditioning sets are small for sparse graphs which makes RFCI much faster and computationally more feasible than FCI for high-dimensional sparse graphs. Also, the lower computational complexity of RFCI leads to high-dimensional consistency results under weaker conditions than FCI.
#### 3.1.5 PC-stable
The independence tests in the original PC method are prone to errors when only a few samples are available. Additionally, because the graph is updated dynamically, maintaining or deleting an edge incorrectly will affect the neighboring sets of other nodes. As a result, the order in which the CI tests are run will affect the output graph. Although this order dependency is not a significant issue in low-dimensional settings, it is a severe problem in high-dimensional settings. To solve this problem, Colombo et al. (2014) suggested changing the original PC technique to produce a stable output skeleton that is independent of the input dataset's variable ordering. This approach, known as the stable-PC algorithm, queries and maintains the neighbor (adjacent) sets of every node at each distinct level. Since the conditioning sets of the other nodes are unaffected by an edge deletion at one level, the outcome is independent of the variable ordering. They demonstrated that this updated version greatly outperforms the original algorithm in high-dimensional settings while maintaining the original algorithm's performance in low-dimensional settings. However, this modification lengthens the algorithm's runtime since additional CI tests need to be performed at each level. The R-package pcalg contains the source code for PC-stable.
#### 3.1.6 RRCD
To learn the causal structure from relational data, Lee and Honavar (2020) developed a reliable method called RRCD (Robust Relational Causal Discovery). For establishing and orienting causal linkages in such a situation, existing techniques rely on _relational conditional independence_ (RCI) oracle queries. However, using relational data to identify RCI creates numerous distinct difficulties for RCI testing. Existing CI tests are either not appropriate for RCI or have little power, making them unable to identify RCI. When used on small samples, even a well-designed RCI test might not be trustworthy enough. Early on during the algorithm's execution, incorrect RCI test results might misguide the algorithm. A generic RCI test may produce insufficient findings by not taking into consideration the unique properties of a certain relational dataset. In this study the authors demonstrated how a CI test created for i.i.d. data can be successfully used to test for RCI against relational data. The _Relational Causal Markov Condition_ (RCMC) (Maier (2014)), which states that a relational variable must be independent of its non-descendants given its direct causes, enables the test to correctly establish relational conditional independence. However, when independence does not hold, the relational data's non-i.i.d.-ness aids the test in rejecting independence. Python implementation of RRCD is available at [https://github.com/sanghack81/RRCD](https://github.com/sanghack81/RRCD).
### Score Function-based
Score-based causal discovery algorithms search over the space of all possible DAGs to find the graph that best explains the data. Typically, any score-based approach has two main components: _(i) a search strategy_ - to explore the possible search states or space of candidate graphs \(G\), and _(ii) a score function_ - to assess the candidate causal graphs. The search strategy along with a score function helps to optimize the search over the space of all possible DAGs. More specifically, a score function \(S(G,D)\) maps causal graphs \(G\) to a numerical score, based on how well \(G\) fits a given dataset \(D\). A commonly used score function to select causal models is the Bayesian Information Criterion (BIC) (Schwarz (1978a)) which is defined below:
\[S_{BIC}=-2\cdot\text{log-likelihood}+k\cdot\log(n), \tag{3}\]
where \(n\) is the sample size used for training and \(k\) is the total number of parameters. The lower the BIC score, the better the model. BDeu, BGe, MDL, etc. (see Table 5) are some of the other commonly used score functions. After evaluating the quality of the candidate causal graphs with a score function, the score-based methods output one or more causal graphs achieving the highest score (Huang et al. (2018b)). We discuss some of the well-known approaches in this category below.
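Before turning to the individual algorithms, the sketch below makes the scoring step concrete: under an assumed linear-Gaussian model, each variable is regressed on its candidate parents, the node-wise log-likelihoods are summed, and a complexity penalty is added as in Equation 3. This is an illustrative implementation, not the scoring code of any particular package.

```python
import numpy as np

def bic_score(data, parents):
    """BIC of a candidate DAG for linear-Gaussian data.
    `parents[i]` is the list of parent indices of variable i."""
    n, d = data.shape
    loglik, k = 0.0, 0
    for i in range(d):
        y = data[:, i]
        if parents[i]:
            X = np.column_stack([data[:, parents[i]], np.ones(n)])
        else:
            X = np.ones((n, 1))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = (y - X @ beta).var()
        # Gaussian log-likelihood contribution of this node given its parents.
        loglik += -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        k += X.shape[1] + 1            # regression coefficients + noise variance
    return -2 * loglik + k * np.log(n)  # lower is better

# Toy data generated from X0 -> X1 -> X2.
rng = np.random.default_rng(0)
x0 = rng.normal(size=1000)
x1 = x0 + 0.5 * rng.normal(size=1000)
x2 = x1 + 0.5 * rng.normal(size=1000)
D = np.column_stack([x0, x1, x2])

print(bic_score(D, {0: [], 1: [0], 2: [1]}))   # true structure: lowest score
print(bic_score(D, {0: [], 1: [], 2: []}))     # empty graph: higher score
```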
#### 3.2.1 GES
Greedy Equivalence Search (GES) (Chickering (2002)) is one of the oldest score-based causal discovery algorithms that perform a greedy search over the space of equivalence classes of DAGs. Each search state is represented by a CPDAG where some insert and delete operators allow for single-edge additions and deletions respectively. Primarily GES works in two phases: i) Forwards Equivalence Search (FES), and ii) Backward Equivalence Search (BES). In the first phase, FES starts with an empty CPDAG (no-edge model), and greedily adds edges by taking into account every single-edge addition that could be performed to every DAG in the current equivalence class. After an edge modification is done to the current CPDAG, a score function is used to score the model. If the new score is better than the current score, only then the modification is allowed. When the forward phase reaches a local maximum, the second phase, BES starts where at each step, it takes into account all single-edge deletions that might be allowed for all DAGs in the current equivalence class. The algorithm terminates once the local maximum is found in the second phase. Implementation of GES is available at the following Python packages: Causal Discovery Toolbox or CDT
Figure 12: General components of a score-based causal discovery approach.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Score Function/Criterion** & **Ref.** \\ \hline Minimum description length (MDL) & Schwarz (1978b) \\ \hline Bayesian information criterion (BIC) & Schwarz (1978a) \\ \hline Akaike information criterion (AIC) & Akaike (1998) \\ \hline Bayesian Dirichlet equivalence score (BDeu) & Buntine (1991) \\ \hline Bayesian metric for Gaussian networks (BGe) & Geiger \& Heckerman (1994) \\ \hline Factorized normalized maximum likelihood (fNML) & Silander et al. (2008) \\ \hline \end{tabular}
\end{table}
Table 5: Some commonly used score functions for causal discovery. Please refer to the study Huang et al. (2018a) for a detailed discussion of the score functions.
(Kalainathan & Goudet (2019)) and gCastle (Zhang et al. (2021a)). GES assumes that the score function is decomposable and can be expressed as a sum of the scores of individual nodes and their parents. A summary workflow of GES is shown in Figure 13.
\[S(G,D)=\sum_{i=1}^{d}s(x_{i},pa(x_{i},G)) \tag{4}\]
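For completeness, a hedged sketch of invoking GES through the gCastle toolbox mentioned above is given below; the `GES` estimator name and its `learn`/`causal_matrix` interface are assumed to mirror gCastle's other algorithms and may differ across versions.

```python
from castle.datasets import DAG, IIDSimulation
from castle.algorithms import GES

# Simulated linear-Gaussian data from a random ground-truth DAG.
weighted_dag = DAG.erdos_renyi(n_nodes=8, n_edges=12, weight_range=(0.5, 2.0), seed=7)
dataset = IIDSimulation(W=weighted_dag, n=2000, method='linear', sem_type='gauss')

ges = GES()                 # the default score (assumed BIC-type) evaluates candidate graphs
ges.learn(dataset.X)
print(ges.causal_matrix)    # adjacency matrix of the estimated CPDAG
```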
#### 3.2.2 FGS
Fast Greedy Search (FGS) (Ramsey (2015)) is another score-based method that is an optimized version of the GES algorithm (Chickering (2002)). This optimized algorithm is based on the faithfulness assumption and uses an alternative method to reduce scoring redundancy. An ascending list \(L\) is introduced which stores the score difference of arrows. After making a thorough search, the first edge \(X\to Y\) is inserted into the graph and the graph pattern is reverted. For variables that are adjacent to \(X\) or \(Y\) with positive score differences, new edges are added to \(L\). This process in the forward phase repeats until the \(L\) becomes empty. Then the reverse phase starts, filling the list \(L\) and continuing until \(L\) is empty. This study considered the experiment where GES was able to search over 1000 samples with 50,000 variables in 13 minutes using a 4-core processor and 16GB RAM computer. Following the new scoring method, FGS was able to complete the task with 1000 samples on 1,000,000 variables for sparse models in 18 hours using a supercomputer having 40 processors and 384GB RAM at the Pittsburgh Supercomputing Center. The code for FGS is available on GitHub as a part of the Tetrad project: [https://github.com/cmu-phil/tetrad](https://github.com/cmu-phil/tetrad).
#### 3.2.3 SGES
Selective Greedy Equivalence Search (SGES) (Chickering & Meek (2015)) is another score-based causal discovery algorithm that is a restrictive variant of the GES algorithm (Chickering (2002)). By assuming a perfect generative distribution, SGES provides a polynomial performance guarantee yet maintains the asymptotic accuracy of GES. It is possible to keep the algorithm's large-sample guarantees while ignoring all but a small fraction of the backward search operators that GES considers. In the forward phase, SGES uses a polynomial number of insert operation calls to the score function. In the backward phase, it considers only a subset of the delete operators of GES, including the consistent operators that preserve GES's consistency over large samples. The authors demonstrated that, for a given set of graph-theoretic complexity features, such as maximum-clique size, the maximum number of parents, and v-width, the number of score assessments by SGES can be polynomial in the number of nodes and exponential in these complexity measurements.
#### 3.2.4 RL-BIC
RL-BIC is a score-based approach that uses _Reinforcement Learning (RL)_ and a BIC score to search for the DAG with the best reward (Zhu et al. (2019)). For data-to-graph conversion, it uses an _encoder-decoder architecture_ that takes observational data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates a BIC score function and two penalty terms for enforcing acyclicity. The _actor-critic RL algorithm_ is used as a _search strategy_ and the final output is the causal graph that achieves the best reward among all the generated graphs. The approach is applicable to small and medium graphs of up to 30 nodes. However, dealing with large and very-large graphs is still a challenge for it. This study mentions that their future work involves developing a more efficient and effective score function since computing scores is much more time-consuming than training NNs. The original implementation of the approach is available at: [https://github.com/huawei-noah/trustworthyAI](https://github.com/huawei-noah/trustworthyAI).

Figure 13: Stages in the GES (Chickering (2002)) algorithm.
#### 3.2.5 Triplet A*
Lu et al. (2021) uses the _A* exhaustive search_ (Yuan and Malone (2013)) combined with an optimal BIC score that requires milder assumptions on data than conventional CD approaches to guarantee its asymptotic correctness. The optimal BIC score combined with the exhaustive search finds the MEC of the true DAG if and only if the true DAG satisfies the optimal BIC Condition. To gain scalability, they also developed an approximation algorithm for complex large systems based on the A* method. This extended approach is named Triplet A* which can scale up to more than 60 variables. This extended method is rather general and can be used to scale up other exhaustive search approaches as well. Triplet A* can particularly handle linear Gaussian and non-Gaussian networks. It works in the following way. Initially, it makes a guess about the parents and children of each variable. Then for each variable \(X\) and its neighbors \((Y,Z)\), it forms a cluster consisting of \(X,Y,Z\) with their direct neighbors and runs an exhaustive search on each cluster. Lastly, it combines the results from all clusters. The study shows that empirically Triplet A* outperforms GES for large dense networks.
### Functional Causal Model (FCM)-based
Functional Causal Model (FCM) based approaches describe the causal relationship between variables in a specific functional form. FCMs represent variables as a function of their parents (direct causes) together with an independent noise term \(E\) (see Equation 5) (Zhang et al. (2015)). FCM-based methods can distinguish among different DAGs in the same equivalence class by imposing additional assumptions on the data distributions and/or function classes (Zhang et al. (2021b)). Some of the noteworthy FCM-based causal discovery approaches are listed below.
\[X=f(PA_{X})+E \tag{5}\]
Figure 14: Components of the RL-BIC approach.
Figure 15: A functional causal model (FCM) with four variables.
#### 3.3.1 LiNGAM
Linear Non-Gaussian Acyclic Model (LiNGAM) aims to discover the causal structure from observational data under the assumptions that the data generating process is linear, there are no unobserved confounders, and the noises have non-Gaussian distributions with non-zero variances (Shimizu et al. (2006)). It uses the statistical method known as independent component analysis (ICA) (Comon (1994)), and states that when the assumption of **non-Gaussianity** is valid, the complete causal structure can be estimated. That is, the causal direction is identifiable if the variables have a linear relation and the noise (\(\varepsilon\)) distribution is non-Gaussian in nature. Figure 16 depicts three scenarios. When \(X\) and \(\varepsilon\) are Gaussian (case 1), the predictor and the regression residuals are independent of each other in both directions. For the other two cases, \(X\) and \(\varepsilon\) are non-Gaussian, and we see that for the regression in the anti-causal or backward direction (\(X\) given \(Y\)), the regression residual and the predictor are no longer independent. That is, for the non-Gaussian cases, independence between the regression residual and the predictor occurs only for the correct causal direction.
There are three properties of a LiNGAM. _First_, the variables \(x_{1},x_{2},\ldots,x_{n}\) are arranged in a causal order \(k(i)\) such that the cause always precedes the effect. _Second_, each variable \(x_{i}\) is assigned a value as per Equation 6, where \(e_{i}\) is the noise/disturbance term and \(b_{ij}\) denotes the causal strength between \(x_{i}\) and \(x_{j}\). _Third_, the exogenous noises \(e_{i}\) follow non-Gaussian distributions with zero mean and non-zero variance, and are independent of each other, which implies that there is no hidden confounder. Python implementation of the LiNGAM algorithm is available at [https://github.com/cdt15/lingam](https://github.com/cdt15/lingam) as well as in the gCastle package (Zhang et al. (2021)). Any standard ICA algorithm which can estimate independent components of many different distributions can be used in LiNGAM. However, the original implementation uses the FastICA (Hyvarinen (1999)) algorithm.
\[x_{i}=\sum_{k(j)<k(i)}b_{ij}x_{j}+e_{i} \tag{6}\]
Figure 16: Causal asymmetry between two variables having a linear relation (Glymour et al. (2019)). Here, the causal direction is from \(X\) to \(Y\). A total of three scenarios are depicted where both \(X\) and \(\varepsilon\) follow the i) Gaussian, ii) Uniform, or iii) Super-Gaussian distribution for each of the scenarios.
#### 3.3.2 ANM
ANM (Hoyer et al. (2008)) performs causal discovery with additive noise models and provides a generalization of the linear non-Gaussian causal discovery framework to deal with nonlinear functional dependencies where the variables have an additive noise. It mentions that nonlinear causal relationships typically help to break the symmetry between the observed variables and help in the identification of causal directions. ANM assumes that the data generating process of the observed variables is as per the Equation 7 where a variable \(x_{i}\) is a function of its parents and the noise term \(e_{i}\) which is an independent additive noise. An implementation of ANM is available in the gCastle package (Zhang et al. (2021)).
\[x_{i}=f(PA_{x_{i}})+e_{i} \tag{7}\]
#### 3.3.3 PNL
The post-nonlinear (PNL) acyclic causal model with additive noise (Zhang & Hyvarinen (2010)) is a highly realistic model where each observed continuous variable is generated as a nonlinear function of its parents plus additive noise, followed by a second nonlinear distortion. The nonlinearity of the second stage accounts for the influence of sensor distortions, which are frequently seen in practice. A two-step strategy is proposed to separate the cause from the effect in a two-variable situation, consisting of restricted nonlinear ICA followed by statistical independence tests. In the Pot-luck challenge, the proposed PNL model was able to effectively separate causes from effects on the "CauseEffectPairs" task (Mooij & Janzing (2010)).
#### 3.3.4 Direct-LiNGAM
Shimizu et al. (2011) proposed DirectLiNGAM, a direct method for learning a linear non-Gaussian structural equation model (SEM), which estimates the causal ordering and connection strengths based on non-Gaussianity. The approach estimates a causal order of variables by successively removing the effect of each identified exogenous variable from the data, and the procedure completes in a number of steps equal to the number of variables in the model. Once the causal order of variables is identified, their connection strengths are estimated using conventional covariance-based methods such as least squares and maximum likelihood approaches. If the data strictly follows the model, i.e. if all the model assumptions are met and the sample size is infinite, it converges to the right solution within a small number of steps. If some prior knowledge on a part of the structure is available, the authors suggest using it for more efficient learning, since doing so reduces the number of causal orders and connection strengths to be estimated. Its implementation can be found at: [https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle](https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle).
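A minimal sketch using the Python `lingam` package referenced in section 3.3.1 is shown here; the data-generating equations are illustrative, and the `DirectLiNGAM` estimator is assumed to expose the `causal_order_` and `adjacency_matrix_` attributes as in its documented examples.

```python
import numpy as np
import lingam

# Toy linear non-Gaussian data: x0 -> x1 -> x2 with uniform (non-Gaussian) noise.
rng = np.random.default_rng(0)
n = 2000
x0 = rng.uniform(size=n)
x1 = 1.5 * x0 + rng.uniform(size=n)
x2 = 0.8 * x1 + rng.uniform(size=n)
X = np.column_stack([x0, x1, x2])

model = lingam.DirectLiNGAM()
model.fit(X)
print(model.causal_order_)        # estimated causal ordering of the variables
print(model.adjacency_matrix_)    # estimated connection strengths b_ij
```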
#### 3.3.5 SAM
Kalainathan et al. (2018) proposed an algorithm known as _Structural Agnostic Modeling_ (SAM) that uses an _adversarial learning_ approach to find the causal graph. Particularly, it searches for an FCM using _Generative Adversarial Networks (GANs)_ and enforces the discovery of sparse causal graphs through adequate regularization terms. A learning criterion that combines distribution estimation, sparsity, and acyclicity constraints is used to enforce the end-to-end optimization of the graph structure and parameters through stochastic gradient descent. SAM leverages both conditional independencies and distributional asymmetries in the data to find the underlying causal mechanism. It aims to achieve an optimal complexity/fit trade-off while modeling the causal mechanisms. SAM enforces the acyclicity constraint of a DAG using the function in Equation 8, where \(A\) is the adjacency matrix of the ground-truth graph \(G\), and \(d\) denotes the total number of nodes in \(G\). The latest implementation of SAM is available in the CDT package (Kalainathan & Goudet (2019)). Also, an older version of SAM is available at [https://github.com/Diviyan-Kalainathan/SAM](https://github.com/Diviyan-Kalainathan/SAM).
\[\sum_{i=1}^{d}\frac{\operatorname{tr}A^{i}}{i!}=0 \tag{8}\]
#### 3.3.6 CGNN
Causal Generative Neural Networks (CGNN) is an FCM-based framework that uses _neural networks (NNs)_ to learn the joint distribution of the observed variables (Goudet et al. (2018)). Particularly, it uses a generative model that minimizes the _maximum mean discrepancy_ (MMD) between the generated and observed data. CGNN has a high computational cost. However, it proposes an approximate learning criterion to scale the computational cost to linear complexity in the number of observations. This framework can also be used to simulate interventions on multiple variables in the dataset. An implementation of CGNN in Pytorch is available at [https://github.com/FenTechSolutions/CausalDiscoveryToolbox](https://github.com/FenTechSolutions/CausalDiscoveryToolbox).
### Continuous Optimization-based
Some of the recent studies in causal discovery formulate the structure learning problem as a continuous optimization task using the least squares objective and an algebraic characterization of DAGs (Zheng et al. (2018), Ng et al. (2020)). Specifically, the combinatorial structure learning problem has been transformed into a continuous one and solved using gradient-based optimization methods (Ng et al. (2019)). These methods leverage gradients of an objective function with respect to a parametrization of a DAG matrix. Apart from the usage of well-studied gradient-based solvers, they also leverage GPU acceleration which has changed the nature of the task (Ng et al. (2020)). Furthermore, to accelerate the task they often employ deep learning models that are capable of capturing complex nonlinear mappings (Yu et al. (2019)). As a result, they usually have a faster training time as deep learning is known to be highly parallelizable on GPU, which gives a promising direction for causal discovery with gradient-based methods (Ng et al. (2019)). In general, these methods are more global than other approximate greedy methods. This is because they update all edges at each step based on the gradient of the score and as well as based on the acyclicity constraint.
#### 3.4.1 NOTEARS
DAGs with NO TEARS (Zheng et al. (2018)) is a recent breakthrough in the causal discovery that formulates the structure learning problem as a purely continuous constrained optimization task. It leverages an algebraic characterization of DAGs and provides a novel characterization of acyclicity that allows for a smooth global search, in contrast to a combinatorial local search. The full form of the acronym NOTEARS is Non-combinatorial Optimization via Trace Exponential and Augmented lagRangian for Structure learning which particularly handles linear DAGs. It assumes a linear dependence between random variables and thus models data \(D\) as a structural equation model (SEM). To discover the causal structure, it imposes the proposed acyclicity function as a constraint (Equation 10) combined with a weighted adjacency matrix \(W\) with least squares loss. The algorithm aims to convert the traditional combinatorial optimization problem into a continuous constrained optimization task by leveraging an algebraic characterization of DAGs via the trace exponential acyclicity function as follows:
\[\min_{\begin{subarray}{c}W\in\mathbb{R}^{d\times d}\\ \text{subject to }G(W)\in DAGs\end{subarray}}F(W)\quad\iff\min_{ \begin{subarray}{c}W\in\mathbb{R}^{d\times d}\\ \text{subject to }h(W)=0\end{subarray}}F(W)\, \tag{9}\]
where \(G(W)\) is a graph with \(d\) nodes induced by the weighted adjacency matrix \(W\), \(F:\mathbb{R}^{d\times d}\rightarrow\mathbb{R}\) is a regularized score function with a least-square loss \(\ell\), and \(h:\mathbb{R}^{d\times d}\rightarrow\mathbb{R}\) is a smooth function over real matrices that enforces acyclicity. Overall, the approach is simple and can be executed in about 50 lines of Python code. Its implementation in Python is publicly available at [https://github.com/xunzheng/notears](https://github.com/xunzheng/notears). The acyclicity function proposed in NOTEARS is as follows where \(\circ\) is the Hadamard product and \(e^{A}\) is the matrix exponential of A.
\[h(W)=\text{tr}(e^{W\circ W})-d=0 \tag{10}\]
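A few lines of NumPy/SciPy suffice to evaluate this acyclicity function for a candidate weighted adjacency matrix; the sketch below is illustrative only and omits the least-squares loss and the augmented-Lagrangian optimization used by the full NOTEARS algorithm.

```python
import numpy as np
from scipy.linalg import expm

def notears_h(W):
    """Acyclicity measure h(W) = tr(exp(W ∘ W)) - d; equals 0 iff W encodes a DAG."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d   # W * W is the Hadamard (elementwise) product

W_dag = np.array([[0.0, 1.2, 0.0],
                  [0.0, 0.0, -0.7],
                  [0.0, 0.0, 0.0]])    # strictly upper-triangular => acyclic
W_cyc = np.array([[0.0, 1.2, 0.0],
                  [0.0, 0.0, -0.7],
                  [0.5, 0.0, 0.0]])    # contains the cycle 0 -> 1 -> 2 -> 0

print(notears_h(W_dag))   # ~0.0: the constraint is satisfied
print(notears_h(W_cyc))   # > 0: the constraint is violated
```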
#### 3.4.2 GraN-DAG
Gradient-based Neural DAG Learning (GraN-DAG) is a score-based structure learning approach that uses _neural networks (NNs)_ to deal with non-linear causal relationships (Lachapelle et al. (2019)). It uses a
stochastic gradient method to train the NNs for improving scalability and allowing implicit regularization. It formulates a _novel characterization of acyclicity_ for NNs based on NOTEARS (Zheng et al. (2018)). To ensure acyclicity in non-linear models, it uses an argument similar to NOTEARS and applies it first at the level of neural network paths and then at the graph paths level. For regularization, GraN-DAG uses a procedure called _preliminary neighbors selection_ (PNS) to select a set of potential parents for each variable. It uses a final pruning step to remove the false edges. The algorithm works well mostly in the case of non-linear Gaussian additive noise models. An implementation of GraN-DAG can be found at [https://github.com/kurowasan/GraN-DAG](https://github.com/kurowasan/GraN-DAG).
#### 3.4.3 GAE
Graph Autoencoder Approach (GAE) is a gradient-based approach to causal structure learning that uses a _graph autoencoder framework_ to handle nonlinear structural equation models (SEMs) (Ng et al. (2019)). GAE is a special case of the causal additive model that provides an alternative generalization of NOTEARS for handling nonlinear causal relationships. GAE is easily applicable to vector-valued variables. The architecture of GAE consists of a variable-wise encoder and decoder which are basically multi-layer perceptrons (MLPs) with shared weights across all variables \(X_{i}\). The encoder-decoder framework allows the reconstruction of each variable \(X_{i}\) to handle the nonlinear relations. The final goal is to optimize the reconstruction error of the GAE with \(l_{1}\) penalty where the optimization problem is solved using the augmented Lagrangian method (Nemirovsky (1999)). The approach is competitive in terms of scalability as it has a near linear training time when scaling up the graph size up to 100 nodes. Also, in terms of time efficiency, GAE performs well with an average training time of fewer than 2 minutes even for graphs of 100 nodes. Its implementation can be found at [https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle](https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle).
#### 3.4.4 DAG-GNN
DAG Structure Learning with Graph Neural Networks (DAG-GNN) is a graph-based deep generative model that tries to capture the sampling distribution faithful to the ground-truth DAG (Yu et al. (2019)). It leverages variational inference and a parameterized pair of _encoder-decoders_ with specially designed _graph neural networks (GNN)_. Particularly, it uses _Variational Autoencoders (VAEs)_ to capture complex data distributions and sample from them. The weighted adjacency matrix \(W\) of the ground-truth DAG is a learnable parameter with other neural network parameters. The VAE model naturally handles various data types both continuous and discrete in nature. In this study, the authors also propose a _variant of the acyclicity function_ (Equation 11) which is more suitable and practically convenient for implementation with the existing deep learning methods. In the acyclicity function, \(d=\) the number of nodes, \(\alpha\) is a hyperparameter, and \(I\) is an identity matrix. An implementation of the DAG-GNN algorithm is available at [https://github.com/fishmoon1234/DAG-GNN](https://github.com/fishmoon1234/DAG-GNN).
\[\mathrm{tr}[(I+\alpha W\circ W)^{d}]-d=0 \tag{11}\]
#### 3.4.5 GOLEM
Gradient-based Optimization of DAG-penalized Likelihood for learning linear DAG Models (GOLEM) is a _likelihood-based_ causal structure learning approach with _continuous unconstrained optimization_ (Ng et al. (2020)). It studies the asymptotic role of the sparsity and DAG constraints for learning DAGs in both linear Gaussian and non-Gaussian cases. It shows that when the optimization problem is formulated using a likelihood-based objective instead of least squares (used by NOTEARS), then instead of a hard DAG constraint, applying only soft sparsity and DAG constraints is enough for learning the true DAG under mild assumptions. Particularly, GOLEM tries to optimize the score function in Equation 12 w.r.t. the weighted adjacency matrix \(B\) representing a directed graph. Here, \(L(B;x)\) is the maximum likelihood estimator (MLE), \(R_{sparse}(B)\) is a penalty to encourage sparsity (i.e. fewer edges), and \(R_{DAG}(B)\) is the penalty that enforces DAGness on \(B\).
\[S(B;x)=L(B;x)+R_{sparse}(B)+R_{DAG}(B) \tag{12}\]
In terms of denser graphs, GOLEM seems to outperform NOTEARS since it can reduce the number of optimization iterations which makes it robust in terms of scalability. With gradient-based optimization and GPU acceleration, it can easily handle thousands of nodes while retaining high accuracy. An implementation of GOLEM can be found at the gCastle (Zhang et al. (2021)) repository.
#### 3.4.6 CAREFL
Causal Autoregressive Flows (CAREFL) uses _autoregressive flow models_(Huang et al. (2018)) for causal discovery by interpreting the ordering of variables in an autoregressive flow based on structural equation models (SEMs) (Khemakhem et al. (2021)). In general, SEMs define a generative model for data based on causal relationships. CAREFL shows that particularly _affine flows_ define a new class of causal models where the noise is modulated by the cause. For such models, it proves a new causal identifiability result that generalizes additive noise models. To learn the causal structure efficiently, it selects the ordering with the highest test log-likelihood and reports a measure of causal direction based on the likelihood ratio for non-linear SEMs. Autoregressive flow models also enable CAREFL to evaluate interventional queries by fixing the interventional variable while sampling from the flow. Moreover, the invertible property of autoregressive flows facilitates counterfactual queries as well. Code implementation of CAREFL is available at [https://github.com/pimonti/carefl](https://github.com/pimonti/carefl).
#### 3.4.7 DAG-NoCurl
DAG-NoCurl also known as DAGs with No Curl uses a two-step procedure for the causal DAG search (Yu et al. (2021)). At first, it finds an initial cyclic solution to the optimization problem and then employs the _Hodge decomposition_(Bhatia et al. (2012)) of graphs to learn an acyclic graph by projecting the cyclic graph to the gradient of a potential function. The goal of this study is to investigate how the causal structure can be learned without any explicit DAG constraints by directly optimizing the DAG space. To do so, it proposes the method DAG-NoCurl based on the graph Hodge theory that implicitly enforces the acyclicity of the learned graph. As per the Hodge theory on graphs (Lim (2020)), a DAG is a sum of three components: _a curl-free, a divergence-free,_ and _a harmonic component_. The curl-free component is an acyclic graph that motivates the naming of this approach. An implementation of the method can be found here: [https://github.com/fishmoon1234/DAG-NoCurl](https://github.com/fishmoon1234/DAG-NoCurl).
#### 3.4.8 Enco
Efficient Neural Causal Discovery without Acyclicity Constraints (ENCO) _uses both observational and interventional data_ by modeling a probability for every possible directed edge between pairs of variables (Lippe et al. (2021)). It formulates the graph search as an optimization of independent edge likelihoods, with the edge orientation being modeled as a separate parameter. The approach guarantees convergence when interventions on all variables are available and does not require explicitly constraining the score function with respect to acyclicity. However, the algorithm works on partial intervention sets as well. Experimental results suggest that ENCO is robust in terms of scalability and is able to detect latent confounders. When applied to large networks having 1000 nodes, it is capable of recovering the underlying structure thanks to its low-variance gradient estimators. The source code of ENCO is available at this site: [https://github.com/phlippe/ENCO](https://github.com/phlippe/ENCO).
#### 3.4.9 Corl
Ordering-based Causal Discovery with Reinforcement Learning (CORL) formulates the ordering search problem as a _multi-step Markov decision process_ (MDP) to learn the causal graph (Wang et al. (2021)). It implements the ordering-generating process with an _encoder-decoder architecture_ and finally uses RL to optimize the proposed model based on the reward mechanisms designed for each ordering. A generated ordering is then processed using variable selection to obtain the final causal graph. According to the empirical results, CORL performs better than existing RL-based causal discovery approaches. This could be because, by using ordering search, CORL avoids computing the matrix exponential term with \(O(d^{3})\) cost. CORL also scales well and has been applied to graphs with up to 100 nodes. The gCastle package contains an implementation of CORL.
#### 3.4.10 Mcsl
Masked Gradient-based Causal Structure Learning (MCSL) (Ng et al. (2022)) utilizes a reformulated structural equation model (SEM) for causal discovery using gradient-based optimization that leverages the _Gumbel-Softmax approach_ (Jang et al. (2016)). This approach is used to approximate a binary adjacency matrix and is often used to approximate samples from a categorical distribution. MCSL reformulates the SEM with additive noises in a form parameterized by the binary graph adjacency matrix. It states that, if the original SEM is identifiable, then the adjacency matrix can be identified up to super-graphs of the true causal graph under some mild conditions. For experimentation, MCSL uses multi-layer perceptrons (MLPs), specifically a 4-layer MLP, as the model function; this variant is denoted as MCSL-MLP. An implementation of the approach can be found in the gCastle package.
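As an illustration of the Gumbel-Softmax idea that MCSL builds on, the sketch below relaxes a binary edge-indicator matrix with the binary-concrete (Gumbel-sigmoid) trick; the parameterization, temperature, and straight-through estimator are illustrative assumptions rather than MCSL's exact formulation.

```python
import torch


def gumbel_sigmoid_adjacency(logits, tau=1.0, hard=True):
    """Differentiable, approximately binary adjacency matrix from edge logits.

    logits: (d, d) tensor of learnable edge scores.
    """
    eps = 1e-8
    u1 = torch.rand_like(logits).clamp(eps, 1 - eps)
    u2 = torch.rand_like(logits).clamp(eps, 1 - eps)
    g1, g2 = -torch.log(-torch.log(u1)), -torch.log(-torch.log(u2))
    soft = torch.sigmoid((logits + g1 - g2) / tau)      # relaxed Bernoulli edge indicator
    if hard:
        hard_sample = (soft > 0.5).float()
        # Straight-through estimator: binary values forward, soft gradients backward
        return hard_sample + soft - soft.detach()
    return soft


# Usage: mask the functional SEM with the (approximately) binary adjacency matrix
d = 5
logits = torch.zeros(d, d, requires_grad=True)
A = gumbel_sigmoid_adjacency(logits) * (1 - torch.eye(d))  # zero out self-loops
```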
### Knowledge-based
Over the years, some causal discovery approaches have been developed to model causal relationships by incorporating background knowledge obtained from several sources, including experts' opinions, domain knowledge, prior evidence, and relevant literature. Infusing informative priors can enhance model efficiency and also compensate for the weaknesses of smaller datasets (Mooij et al. (2020)). Often some prior knowledge is available in most domains, and it may come from different sources. For example, in medicine, prior knowledge about symptoms, diseases, and treatments is often available from clinical literature or knowledge bases. Additional causal relationships may become identifiable with the incorporation of background knowledge (Hasan & Gani (2022)). Even specifying one variable as the cause of another can further refine the set of DAGs, thereby increasing the number of identifiable causal relationships (Wang et al. (2020)). Some studies (Adib et al. (2022), Gani et al. (2023)) highlight the importance of human-in-the-loop approaches and recommend taking domain experts' opinions into account to verify the graphs produced by different causal discovery algorithms. Below we list some notable knowledge-based approaches to causal discovery.
#### 3.5.1 C-Mcmc
Constrained MCMC (C-MCMC) introduces _prior knowledge_ into the _Markov chain Monte Carlo (MCMC)_ algorithm for structure learning (Xu et al. (2015)). C-MCMC uses the following _three types of prior knowledge_: the existence of parent nodes, the absence of parent nodes, and distribution knowledge, including the conditional probability distribution (CPD) of edges and the probability distribution (PD) of nodes. All prior knowledge should be given by domain experts. Existence knowledge means that for a node \(X_{i}\), a node-set \(pa(X_{i})\) includes all parent nodes of \(X_{i}\). Absence knowledge means that for a node \(X_{i}\), a node-set \(pa(X_{i})\) does not include any parent node of \(X_{i}\). PD/CPD knowledge means that the PD of a node and the CPD of an edge are known. Considering that prior knowledge may not be consistent and reliable, domain experts assign a confidence value _lambda_, ranging from 0 to 1, to each piece of prior knowledge.
Figure 17: Graph optimization mechanism of ENCO.
This denotes the certainty level of prior knowledge. A _lambda_ value of 1 indicates very high confidence in this knowledge.
#### 3.5.2 Jci
Joint Causal Inference (JCI) is a knowledge-based causal discovery approach that combines data from multiple datasets collected in different contexts (Mooij et al. (2020)). Particularly, JCI is a _causal modeling framework_ rather than a specific algorithm, and it can be implemented using any causal discovery algorithm that can take background knowledge into account. The main idea of JCI is to first consider auxiliary context variables that describe the context of each dataset, then pool all the data from the different contexts, including the values of the context variables, into a single dataset, and finally apply standard causal discovery methods to the pooled data, incorporating appropriate background knowledge on the causal relationships involving the context variables. The framework is simple and easily applicable as it deals with latent confounders, cycles (if the causal discovery method supports them), and various types of interventions in a unified way. The JCI framework also facilitates the analysis of data from almost arbitrary experimental designs, allowing researchers to trade off the number and complexity of the experiments against the reliability of the analysis for the purpose of causal discovery.
#### 3.5.3 Fci with Tiered Background Knowledge
Andrews et al. (2020) show that the Fast Causal Inference (FCI) algorithm (Spirtes et al. (2000a)) is sound and complete with tiered background knowledge (TBK). _Tiered background knowledge_ means any knowledge where the variables may be partitioned into two or more mutually exclusive and exhaustive subsets among which there is a known causal order. Tiered background knowledge may arise in many different situations, including but not limited to instrumental variables, data from multiple contexts and interventions, and temporal data with contemporaneous confounding. The proof that FCI is complete with TBK suggests that the algorithm is able to find all of the causal relationships that are identifiable from tiered background knowledge and observational data under the typical assumptions.
#### 3.5.4 Pkcl
Wang et al. (2020) propose an algorithm, **P**rior-**K**nowledge-driven Local **C**ausal Structure **L**earning (PKCL), to discover the underlying causal mechanism between _bone mineral density_ (BMD) and its factors from clinical data. It first discovers the neighbors of the target variables and then detects the MaskingPCs to eliminate their effect. After that, it finds the spouses of the target variables utilizing the neighbor sets. This way the skeleton of the causal network is constructed. In the global stage, PKCL leverages the _Markov blanket (MB)_ sets learned in the local stage to learn the global causal structure, in which prior knowledge is incorporated to guide the global learning phase. Specifically, it learns the causal direction between feature variables and target variables by combining constraint-based and score-based structure search methods. Also, in the learning phase, it automatically adds causal directions according to the available prior knowledge.
#### 3.5.5 Kg2Causal
Kg2Causal (Sinha and Ramsey (2021)) uses a large-scale general-purpose biomedical knowledge graph as a prior for data-driven causal discovery. With a set of observed nodes in a dataset and some relationship edges between the nodes derived from a knowledge graph, Kg2Causal uses the knowledge graph-derived edges
Figure 18: Workflow of the JCI framework.
to guide the data-driven discovery of a causal graph. The main ideas of this approach are first, mapping each variable in the dataset to a node in the knowledge graph, and querying relationships between them; next, extracting a subgraph containing the connected variables with edges between them; and then this edge set is used as prior knowledge to guide an optimizing scoring step for inferring the causal graph. An implementation of Kg2Causal is available at [https://github.com/meghasin/Kg2Causal](https://github.com/meghasin/Kg2Causal) in R programming language.
#### 3.5.6 Kcrl
Prior **K**nowledge-based **C**ausal Discovery Framework with **R**einforcement **L**earning a.k.a. KCRL (Hasan and Gani (2022)) is a framework for causal discovery that utilizes prior knowledge as constraints and penalizes the search process for violation of these constraints. This utilization of background knowledge significantly improves performance by reducing the search space, and also, enabling a faster convergence to the optimal causal structure. KCRL leverages reinforcement learning (RL) as the search strategy where the RL agent is penalized each time for the violation of any imposed knowledge constraints. In the KCRL framework (Figure 19), at first, the observational data is fed to an RL agent. Here, data-to-adjacency matrix conversion is done using an encoder-decoder architecture which is a part of the RL agent. At every iteration, the agent produces an equivalent adjacency matrix of the causal graph. A comparator compares the generated adjacency matrix with the true causal edges in the prior knowledge matrix \(P_{m}\), and thereby, computes a penalty \(p\) for the violation of any ground truth edges in the produced graph. Each generated graph is also scored using a standard scoring function such as BIC. A reward \(R\) is estimated as a sum of the BIC score \(S_{BIC}\), the penalty for acyclicity \(h(W)\), and \(\beta\) weighted prior knowledge penalty \(\beta p\). Finally, the entire process halts when the stopping criterion \(S_{c}\) is reached, and the best-rewarded graph is the final output causal graph. Although originally KCRL was designed for the healthcare domain, it can be used in any other domain for causal discovery where some prior knowledge is available. Code for KCRL is available at [https://github.com/UzmaHasan/KCRL](https://github.com/UzmaHasan/KCRL).
\[R=S_{BIC}+\beta p+h(W) \tag{13}\]
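A minimal sketch of computing this reward is given below; the binary prior-knowledge matrix, the way violations are counted, and the NOTEARS-style acyclicity penalty \(h(W)\) are assumptions based on the description above rather than the reference implementation.

```python
import numpy as np
from scipy.linalg import expm


def kcrl_style_reward(W, bic_score, prior_matrix, beta=1.0):
    """Sketch of the KCRL-style reward R = S_BIC + beta * p + h(W) (Equation 13).

    W            : (d, d) binary adjacency matrix produced by the RL agent.
    bic_score    : BIC score of the graph W, computed externally.
    prior_matrix : (d, d) binary matrix P_m marking edges asserted by prior knowledge.
    """
    d = W.shape[0]
    h = np.trace(expm(W * W)) - d                  # acyclicity penalty h(W)
    p = np.sum((prior_matrix == 1) & (W == 0))     # prior-knowledge edges missing from W
    return bic_score + beta * p + h
```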
### Hybrid Approaches
There are some approaches that are based on the combination of constraint-based and score-based causal discovery approaches. These approaches integrate conditional independence testing (CI) along with score functions to design a hybrid approach for causal discovery.
Figure 19: Workflow of the KCRL framework.
#### 3.6.1 Mmhc
Max-Min Hill Climbing (MMHC) is a hybrid causal discovery technique that incorporates concepts from both score-based and constraint-based algorithms (Tsamardinos et al. (2006)). A challenge in causal discovery is identifying causal relationships within a reasonable time in the presence of thousands of variables. MMHC can learn the causal structure reliably, in terms of both runtime and quality, in such high-dimensional settings. MMHC is a two-phase algorithm that assumes faithfulness. In the first phase, MMHC uses Max-Min Parents and Children (MMPC) (Tsamardinos et al. (2003)) to learn the skeleton of the network. In the second phase, the skeleton is oriented using a greedy Bayesian hill-climbing search. In the sample limit, MMHC's skeleton identification phase is reliable, but the orientation phase offers no theoretical guarantees. In the experiments conducted in this study, MMHC outperformed PC (Spirtes et al. (2000b)), Sparse Candidate (Friedman et al. (2013)), Optimal Reinsertion (Moore and Wong (2003)), and GES (Chickering (2002)) in terms of computational efficiency. Considering the quality of reconstruction, MMHC performs better than all the above-mentioned algorithms except GES when the sample size is 1000. The authors also proved the correctness of the results. The implementation of MMHC is available at [http://www.dsl-lab.org/supplements/mmhcpaper/mmhcindex.html](http://www.dsl-lab.org/supplements/mmhcpaper/mmhcindex.html) as part of Causal Explorer 1.3, a library of Bayesian network learning and local causal discovery methods.
#### 3.6.2 Fritl
To discover causal relationships in linear and non-Gaussian models, Chen et al. (2021a) proposed a hybrid method named FRITL. FRITL works in the presence or absence of latent confounders by combining independent-noise-based techniques with constraint-based techniques. FRITL makes the causal Markov assumption, the causal faithfulness assumption, a linear acyclic non-Gaussianity assumption, and a one-latent-confounder assumption. In the _first phase_ of FRITL, the FCI (Spirtes et al. (2000a)) algorithm is used to generate asymptotically correct results. Unfortunately, relatively few unconfounded direct causal relations are normally determined by the FCI since it always reveals the presence of confounding factors. In the _second phase_, FRITL identifies the unconfounded causal edges between observed variables within just those neighboring pairs that have been influenced by the FCI results. The _third stage_ can identify confounders and the relationships that cause them to affect other variables by using the Triad condition (Cai et al. (2019)). If further causal relationships remain, _Independent Component Analysis_ (ICA) is finally applied to a notably reduced group of graphs. The authors also theoretically proved that the results obtained from FRITL are efficient and accurate. When applied to real functional MRI (fMRI) data and the SACHS (Sachs et al. (2005)) dataset, FRITL produces results that are in close accord with neuropsychological opinion and in exact agreement with a causal link known from the experimental design.
#### 3.6.3 Hcm
Most causal discovery algorithms are applicable only to either discrete or continuous data. However, in reality, we often have to work with mixed-type data (e.g., the shopping behavior of people), which has received little attention in causal discovery. Li et al. (2022) proposed the approach _Hybrid Causal Discovery on Mixed-type Data (HCM)_ to identify causal relationships with mixed variables. HCM works under the causal faithfulness and causal Markov assumptions. HCM has three phases, where in the _first phase_, the skeleton graph is learned in order to limit the search space. To do this, the authors used the PC-stable (Colombo et al. (2014)) approach along with their proposed Mixed-type Randomized Causal Independence Test (MRCIT), which can handle mixed-type data. They also introduced a generalized score function called Cross-Validation based Mixed Information Criterion (CVMIC). In the _second phase_, starting with an empty DAG, they add edges to the DAG based on the highest CVMIC score. In order to reduce false positives, the learned causal structure is
Figure 20: Different stages of the FRITL model.
pruned using MRCIT once again in the _final phase_ with a slightly bigger conditioning set. They compared their approach with other causal discovery approaches for mixed data and showed HCM's superiority. However, they did not consider unobserved confounders, which leaves room for further improvement. The code is available on the following GitHub site: [https://github.com/DAMO-DI-ML/AAAI2022-HCM](https://github.com/DAMO-DI-ML/AAAI2022-HCM).
### Miscellaneous Approaches
Apart from the types of approaches mentioned before, there are some causal discovery approaches that use some specialized or unique techniques to search for the graph that best describes the data. Some of them are listed below.
#### 3.7.1 Sada
One of the biggest limitations of the traditional causal discovery methods is that these models cannot identify causal relations when the problem domain is large or there is a small number of samples available. To solve this problem, Cai et al. (2013) proposed a _Split-and-Merge_ causal discovery method named SADA which assumes causal faithfulness. Even in situations when the sample size is substantially less than the total number of variables, SADA can reliably identify the causal factors. SADA divides the main problem into two subproblems and works in three phases. Initially, SADA separates the variables of the causal model into two sets \(V_{1}\), and \(V_{2}\) using a causal cut set \(C\) where all paths between \(V_{1}\), and \(V_{2}\) are blocked by \(C\). This partitioning is continued until the variables in each subproblem are less than some threshold. In the next phase, any arbitrary causal algorithm is applied to both subproblems and the causal graphs are generated. Here, they used LiNGAM (Shimizu et al. (2006)) as the causal algorithm. Then these graphs are merged in the final step. But to handle the conflicts while merging, they only kept the most significant edge and eliminated the others whenever there existed multiple causal paths between two variables in the opposite direction. They compared the performance of SADA against baseline LiNGAM (without splitting and merging), and the results showed that SADA achieved better performance in terms of the metrics precision, recall, and F1 score. The authors also provided theoretical proof that the results generated by SADA are accurate, effective, and complete.
#### 3.7.2 Etio
ETIO is a versatile _logic-based_ causal discovery algorithm specialized for business applications (Borboudakis & Tsamardinos (2016)). Its features include i) the ability to utilize prior causal knowledge, ii) the ability to address selection bias, hidden confounders, and missing values in the data, and iii) the ability to analyze data from pre- and post-interventional distributions. ETIO follows a _query-based approach_, where the user queries the algorithm about the causal relations of interest. In the first step, ETIO performs several CI tests on the input dataset. Particularly, it performs non-Bayesian tests that return p-values for the null hypotheses of conditional independence. Then it employs an empirical Bayesian method that converts the p-values of dependencies and independencies into probabilities. Later, to resolve conflicts, it selects a consistent subset of the dependence and prior-knowledge constraints, ranked in order of confidence. Particularly, ETIO imposes an m-separation constraint if a given independence is more probable than the corresponding dependence. The imposed constraints are those that correspond to test results, in order of probability, while conflicting test results are removed. Finally, it identifies all invariant features based on the input queries using the well-known declarative programming language answer set programming, a.k.a. ASP (Gelfond & Lifschitz (1988)).
Figure 21: Different phases of the HCM algorithm.
#### 3.7.3 bQCD
Discovering causal relationships from observational data has been a challenging task, especially for the bivariate cases as it is difficult to determine whether there actually exists a cause-effect relationship or whether it is the effect of a hidden confounder. Tagasovska et al. (2020) proposed the approach bivariate **Q**uantile **C**ausal **D**iscovery (bQCD) to determine causal relationships in bivariate settings. Although they made no assumptions on the class of causal mechanisms, they did assume that there exists no confounder, feedback, or selection bias. They utilized _quantile scoring_ in place of Kolmogorov complexity (Kolmogorov (1963)), and used conditional quantiles, pinball loss instead of conditional mean, and squared loss. The approach bQCD performs almost similarly to the state-of-the-art techniques but it is much more computationally inexpensive. Also, the usage of quantile conditioning instead of mean conditioning makes bQCD more robust to heavy tails as the mean is more susceptible to outliers than the quantile. Moreover, not making any assumptions about the parametric class allows bQCD to be applied to a variety of processes where baseline methods perform significantly poorly when the assumptions do not hold. The source code of bQCD written in R is available on this site: [https://github.com/tagas/bQCD](https://github.com/tagas/bQCD).
#### 3.7.4 Lfcm
**Latent Factor Causal Models** (LFCMs) (Squires et al. (2022)) perform causal discovery in the _presence of unobserved confounders_. These models are motivated by gene regulatory networks. LFCMs work in three stages where it discovers: (i) clusters of observed nodes, (ii) a partial ordering over clusters, and (iii) finally, the entire structure over both observed and latent nodes. A graph \(G\) is called a latent factor causal model (LFCM) if it satisfies the following conditions: (a) Unique cluster assumption: Each observed node has exactly one latent parent, (b) Bipartite assumption: There are no edges between pairs of observed nodes or between pairs of latent nodes, (c) Triple-child assumption: Each latent node has at least 3 observed children, and (d) Double-parent assumption. The other assumption of LFCMs is that it allows non-exogenous latent variables. For cluster formation, LFCMs rely on t-separation (Sullivan et al. (2010)). When two ordered pairs of variables [e.g. \((X_{i}\), \(X_{j})\) and \((X_{u}\), \(X_{v})\)] are t-separated, then they belong to the same cluster. LFCMs are a biologically-motivated class of causal models with latent variables. The limitations of LFCMs include their applicability to only a linear Gaussian SEM, some major structural restrictions, and that it can fail when the true graph violates the double parent assumption.
#### 3.7.5 Meta-RL
Meta-RL is a _meta-learning algorithm_ in a reinforcement learning (RL) setting where the agent learns to _perform interventions_ to construct a causal graph (Sauter et al. (2022)). The goal is to be able to use previous learning experiences during training to generalize in unseen environments. This approach has some strong assumptions such as i) each environment is defined by an acyclic SCM, ii) every observable variable can be intervened on, iii) for each environment in the training set, the underlying SCM is given, and iv) intervention can be performed on at most one variable at a time. Meta-RL has two phases: i) Training, and
Figure 22: The graph \(G\) on left is a latent factor causal model (LFCM), and the graph on the right is the latent graph \(L(G)\) for \(G\).
ii) Application. The training phase starts by randomly choosing an SCM from a set of environments. There are mainly two sets of actions that an agent performs: _a) interventional actions_, and _b) structure actions_. In each step, any one action can be performed on the set of variables to generate a PDAG. The _agent policy is updated_ via the _interventional actions_ in each step. However, in case of the structural actions (e.g. add, delete, or reverse), the agent policy only gets updated at the end of the training procedure where a reward is sent to the agent. The reward is computed by comparing the hamming distance of the generated PDAG to the true causal structure when the training is completed. A _recurrent LSTM layer_ enables the policy to remember samples from the post-interventional distributions in the earlier steps. This should help to better identify causal relations since the results of sequential interventions can be used to estimate the distribution. Once trained, Meta-RL can then be applied to environments that have a structure unseen during training. For training, 24 SCMs with 3 observable variables, and 542 SCMs with 4 observable variables were created. Code to reproduce or run Meta-RL is available at [https://github.com/sa-and/interventional_RL](https://github.com/sa-and/interventional_RL). One limitation of this approach is that it needs modification in terms of scalability. Also, in real-world scenarios, every variable might not be accessible for intervention.
#### 3.7.6 CausalVAE
Yang et al. (2021) proposed a generative model named CausalVAE which learns disentangled and causally meaningful representation of the data by combining ideas of Variational Autoencoder (VAE) with the Structural Causal Model (SCM). They introduced a Causal Layer inside the vanilla VAE model which converts independent exogenous factors into causal endogenous ones. They didn't provide the causal graph to the model. Instead, they used the label of the causal variables as additional information, and the causal layer generated the causal graph from that. They evaluated the model on two synthetic datasets and the popular benchmark dataset CelebA. Once the true causal graph is identified, CausalVAE could generate counterfactual images and perform interventions effectively using the "do-operator". CausalVAE outperformed non-causal VAE models (\(\beta\)-VAE (Higgins et al. (2017)), LadderVAE (Sonderby et al. (2016)), ConditionVAE (Sohn et al. (2015))) in terms of the degree of information relevance between the learned representation and ground truth.
Figure 23: Training phase of the Meta-RL algorithm.
## 4 Causal Discovery Algorithms for Time Series Data
Time series data arise when observations are collected over a period of time. So far, the methods that we have discussed are specialized for causal discovery from i.i.d. or time-independent data. However, real-world data in different domains are often time series (non-i.i.d. data). For this type of data, there are specialized causal discovery approaches based on CI testing, SEMs/FCMs, Granger causality (Granger (1969)), or deep neural networks (Liu et al. (2017)). In this section, we first provide a brief introduction to some of the common terminologies related to time-series data and temporal causal discovery. Then, we discuss notable causal discovery approaches for time-series data.
**Definition 7** (Time Series Data): _Time series data is a collection of observations measured over consistent intervals of time. The observation of a time series variable \(X^{j}\) at time \(t\) is denoted by \(X^{j}_{t}\)._
Examples of time series data include retail sales, stock prices, climate data, heart rate of patients, brain activity recordings, temperature readings, etc. Any time series data may have the following _properties_:
* _Trend_: When the data show a long-term rise or fall, a trend is present. Such long-term increases or decreases in the data might not be always linear. The trend is also referred to as _changing direction_ when it might switch from an upward trend to a downward trend.
* _Seasonality_: It refers to the seasonal characteristics of time series data. Seasonality exists when the data regularly fluctuates based on different time spans (e.g. daily/weekly/ monthly/quarterly/yearly). An example is temperature data, where it is mostly observed that the temperature is higher in the summer, and lower in the winter. Any analysis related to time series usually takes the advantage of the seasonality in data to develop more robust models.
* _Autocorrelation_: Autocorrelation or self-correlation is the degree of similarity between a given time series and a lagged version of itself over successive time intervals. Time series data are usually autocorrelated, i.e., the past influences the present and future (Lawton et al. (2001)).
* _Stationarity & Non-stationarity_: Stationarity means that the joint probability distribution of the stochastic process does not change when shifted in time. A time series is stationary if it has causal links such that for variables \(i\) and \(j\), if \(X^{i}\to X^{j}\) at any timestamp \(t\), then \(X^{i}\to X^{j}\) also holds for all \(t^{\prime}\neq t\). This condition does not hold for a non-stationary time series where \(X^{i}\to X^{j}\) at a particular time \(t\) need not necessarily be true at any other time stamp \(t^{\prime}\).
Let \(X_{1:t}=\{X^{1}_{1:t}, X^{2}_{1:t},\ldots, X^{n}_{1:t}\}\) be a multivariate time series with \(n\) variables and \(t\) time steps. At any particular timestamp \(t\), the state of the \(n\) variables can be represented as \(X_{t}=\{X^{1}_{t}, X^{2}_{t},\ldots, X^{n}_{t}\}\). The past of a variable \(X^{j}_{t}\) is denoted by \(X^{j}_{1:t-1}\). The parent set (\(P_{A}\)) of a variable includes all the nodes with an edge towards it. The goal of any temporal causal discovery approach is to discover the causal relationships between the time series variables. Any time series causal graph may have the following _types of causal relationships/edges_: _(i) Instantaneous edges_, and _(ii) Lagged edges_.
**Definition 8** (Instantaneous Causal Effect): _When the delay between cause and effect is 0 timesteps, i.e. causal effects are of the form \(X^{i}_{t}\to X^{j}_{t}\) or \(X^{i}_{t}\to X^{i}_{t}\) (self-causation), then it is known as an instantaneous or contemporaneous causal relationship/effect (Nauta et al. (2019))._
**Definition 9** (Lagged Causal Effect): _When the delay between cause and effect is at least 1 or more timesteps (i.e. causal effects of the form \(X^{i}_{t-}\to X^{j}_{t}\) or \(X^{i}_{t-}\to X^{i}_{t}\)), then it is known as a lagged causal relationship/effect. That is, a lagged causal effect occurs when a variable causes another variable or itself with a time lag = 1 or more._
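As a toy illustration of these two kinds of effects, the snippet below generates a small synthetic time series with one instantaneous relationship and two lagged ones (including self-causation); the coefficients and noise levels are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
X1 = rng.normal(size=T)
X2 = np.zeros(T)
X3 = np.zeros(T)
for t in range(1, T):
    X2[t] = 0.8 * X1[t] + 0.1 * rng.normal()                         # instantaneous: X1_t -> X2_t
    X3[t] = 0.5 * X2[t - 1] + 0.3 * X3[t - 1] + 0.1 * rng.normal()   # lagged: X2_{t-1} -> X3_t and X3_{t-1} -> X3_t
```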
In Figure 24, the red-colored edges represent the instantaneous causal effect (relationships among the variables at the same time step), and the blue edges represent the lagged causal effect. The green edges represent a special form of temporal causal relationships known as the _changing modules_ (CM). The CMs
represent the direct effect of a time stamp on a variable (e.g \(t\to X_{t}^{1}\) in Figure 24). Details on CM are available in Ferdous et al. (2023).
The causal graphs produced by different temporal causal discovery algorithms vary based on the details of the relationships they represent. Any temporal causal discovery algorithm may produce any of the following two _types of temporal causal graph_ as its outcome: a _full-time causal graph_ or a _summary causal graph_ (see Figure 25).
**Definition 10** (Full-time Causal Graph): _A full-time causal graph represents both the instantaneous (\(X_{t}^{i}\to X_{t}^{j}\) or \(X_{t}^{i}\to X_{t}^{i}\)) and time-lagged (\(X_{t-}^{i}\to X_{t}^{j}\) or \(X_{t-}^{i}\to X_{t}^{i}\)) causal edges, where all the lags between a cause and effect are specified in the graph. A full-time causal graph may sometimes present the changing modules as well._
Figure 25 (left) represents a full-time causal graph where both instantaneous relations (e.g. \(X_{t}^{1}\)\(\to X_{t}^{2}\), \(X_{t-1}^{1}\)\(\to\)\(X_{t-1}^{2}\)), and lagged relations (e.g. \(X_{t-1}^{2}\)\(\to\)\(X_{t}^{3}\), \(X_{t-1}^{3}\)\(\to\)\(X_{t}^{3}\)) among the variables are depicted.
**Definition 11** (Summary Causal Graph): _A summary causal graph is a reduced version of a full-time causal graph where each lagged node represents the entire past (\(X_{t-}^{j}\)) of its corresponding instantaneous node (\(X_{t}^{j}\)), and the exact time lag between the cause and effect is not specified in the graph._
Figure 24: Types of causal relationships: Instantaneous edges (red), Lagged edges (blue), and Changing modules (green).
Figure 25: Full-time causal graph (Left) & Summary causal graph (Right)
In the following subsections, we describe briefly some of the notable causal discovery algorithms that focus on time series data. Figure 26 presents a taxonomy of some of the discussed approaches.
### Constraint-based
#### 4.1.1 tsFCI
The algorithm _time series_ FCI or tsFCI (Entner & Hoyer (2010)) adapts the Fast Causal Inference (Spirtes et al. (2000a)) algorithm (developed for the analysis of non-temporal variables) to infer causal relationships from time series data. It works in two phases: (i) an _adjacency phase_, and (ii) an _orientation phase_. It makes use of temporal priority and consistency throughout time to orient edges and restrict conditioning sets. It provides a window causal graph, and an advantage is that it _can detect lagged hidden confounders_. However, a disadvantage is that it cannot model cyclic contemporaneous causation, and also, instantaneous relationships. A code package that implements tsFCI is available here: [https://sites.google.com/site/dorisentner/publications/tsfci](https://sites.google.com/site/dorisentner/publications/tsfci).
#### 4.1.2 Pcmci
A problem with large-scale time series data is that although adding more variables makes causal analysis more interpretable, if the additional variables don't have a significant effect on the causal model, this, in turn, makes the analysis less powerful, and original causal relations may also be overlooked. Moreover, at large dimensions, certain nonlinear tests even lose their ability to limit false positive rates (FPRs). Runge et al. (2019) proposed a two-stage algorithm PCMCI that can overcome this problem. In _Step-1_, the model selects conditions using \(PC_{1}\) (a variant of the skeleton discovery part of the PC (Spirtes et al. (2000b)) algorithm) to remove irrelevant variables which solve the issue of low power in the causal discovery process. In _Step-2_, the momentary conditional independence (MCI) test is used which helps to reduce the FPR even when the data is highly correlated. The MCI test measures if two variables are independent or not given their parent sets (see Equation 14). PCMCI assumes that the data is stationary, has time-lagged dependencies, and also assumes causal sufficiency. Even when the stationary assumption is violated (probably by obvious confounders), PCMCI still provides a more robust performance than Lasso regression (Tibshirani (1996)) or the PC (Spirtes et al. (2000b)) algorithm. However, for highly predictable systems where little new information is produced at each time step, PCMCI is not a good fit. Python implementation of PCMCI is available in the _Tigramite_ package ([https://github.com/jakobrunge/tigramite](https://github.com/jakobrunge/tigramite)).
\[X^{i}_{t-\tau}\perp\!\!\!\perp X^{j}_{t}|P_{A}(X^{j}_{t}),P_{A}(X^{i}_{t-\tau}) \tag{14}\]
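For reference, a minimal sketch of running PCMCI via the Tigramite package mentioned above is shown below; the synthetic data and the `tau_max` and `pc_alpha` values are illustrative choices, and import paths can vary across Tigramite versions.

```python
import numpy as np
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr  # path differs in newer Tigramite versions

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 4))                  # replace with a real (T x n) time series
dataframe = pp.DataFrame(data, var_names=["X1", "X2", "X3", "X4"])

pcmci = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
# Step 1: PC_1 condition selection; Step 2: MCI tests (Equation 14)
results = pcmci.run_pcmci(tau_max=2, pc_alpha=0.05)
print(results["p_matrix"].shape)                  # (n, n, tau_max + 1) array of MCI p-values
```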
Figure 26: Taxonomy of some causal discovery approaches for time series data.
#### 4.1.3 Pcmci+
PCMCI+ (Runge (2020)) is an extension of the PCMCI (Runge et al. (2019)) algorithm to discover contemporary or instantaneous causal links. PCMCI+ also assumes causal sufficiency like the PCMCI algorithm. It is also a two-stage algorithm where in the first stage, irrelevant edges from the causal model are eliminated. Unlike PCMCI, the edges are removed separately for lagged and contemporary conditioning sets where the contemporary phase employs more CI tests than the lagged phase. In the second stage, PCMCI+ employs the notion of momentary conditional independence (MCI) to improve the selection of conditioning sets for the various CI tests, improving their autocorrelation calibration, and boosting their detection power. The results show that when there is high autocorrelation in the data, PCMCI+ can achieve better performance in terms of higher recall, lower false positives, and faster execution compared to the PC algorithm. For lower autocorrelation, PCMCI+ performs almost similarly to PC. Implementation of PCMCI+ is also available in the Tigramite package ([https://github.com/jakobrunge/tigramite](https://github.com/jakobrunge/tigramite)).
#### 4.1.4 Lpcmci
Latent PCMCI or LPCMCI is a constraint-based causal discovery algorithm to determine causal relationships from large-scale time series data (Gerhardus & Runge (2020)). This is another extension of the PCMCI (Runge et al. (2019)) algorithm, as it can discover causal relationships _even in the presence of latent confounders_. Moreover, it gives the flexibility to use the model when the data is linear or nonlinear, and also when the data has lagged or contemporary conditioning sets. The authors identified that when the conditional independence tests have a low effect size, existing techniques like FCI (Spirtes et al. (2000a)) suffer from low recall in the presence of autocorrelation. They demonstrated that this issue can be solved by including causal parents in the conditioning sets. By utilizing the orientation rules, these parents can be identified as early as in the edge removal stage. The results show that the proposed LPCMCI method can achieve higher recall than the baseline model SVAR-FCI. They also provide proof that LPCMCI is sound, complete, and order-independent. Still, LPCMCI cannot distinguish all members of the Markov equivalence class, and when the faithfulness assumption does not hold, it might lead to incorrect conclusions. Along with PCMCI and PCMCI+, the Python code of LPCMCI is also available in the Tigramite GitHub package.
#### 4.1.5 Cd-Nod
Many existing approaches assume that the causal model is static, and therefore, there will be a fixed joint distribution of the observed data. However, these methods fail when the underlying data changes over time, and causal parameters vary during the period. Huang et al. (2020) proposed a causal discovery method that assumes that the parameter of the causal model can change over time or different datasets, and they named the method Constraint-based Causal Discovery from Heterogeneous/Nonstationary Data (CD-NOD). The proposed method can determine causal direction by taking advantage of distribution shifts, and these distribution changes, in the presence of stationary confounders, are helpful for causal discovery. The distribution shifts can be either time or domain indexes and are denoted by a surrogate variable \(C\). Broadly, CD-NOD has two phases where in the _first phase_ it recovers the causal skeleton \(S_{G}\), and in the _second phase_ it orients the edges as per some orientation rules. Given that the causal model offers a concise summary of how the joint distribution changes, they demonstrated that distribution shift contains important information for causal discovery. Recently, researchers discovered that this idea could help solve machine
Figure 27: Steps involved in the PCMCI method for time series causal discovery.
learning problems of domain adaptation and forecasting in nonstationary situations (Scholkopf et al. (2012); Zhang et al. (2013)). The conducted experiments demonstrate changes in causal influence between different states of brain function, and the empirical results show that CD-NOD achieves improved precision and F1 score. However, the authors did not consider that the causal directions might flip, or that the power of conditional independence tests might be reduced by the distribution shifts. The algorithm's source code is available here: [https://github.com/Biwei-Huang/Causal-Discovery-from-Nonstationary-Heterogeneous-Data](https://github.com/Biwei-Huang/Causal-Discovery-from-Nonstationary-Heterogeneous-Data).
### Functional Causal Model (FCM)-based
#### 4.2.1 VarLiNGAM
VarLiNGAM (Hyvarinen et al. (2010)) combines the non-Gaussian instantaneous models with autoregressive models and shows that a non-Gaussian model is identifiable without prior knowledge of network structure. It estimates both instantaneous and lagged causal effects in models that are an example of structural vector autoregressive (SVAR) models. These models are a combination of structural equation models (SEM) and vector autoregressive (VAR) models. VarLiNGAM also shows that taking instantaneous influences into account can change the values of the time-lagged coefficients to a great extent. Thus, neglecting instantaneous influences can lead to misleading interpretations of causal effects. It also assesses the significance of the estimated causal relations. An implementation of this method is available at: [https://lingam.readthedocs.io/en/latest/tutorial/var.html](https://lingam.readthedocs.io/en/latest/tutorial/var.html).
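A minimal usage sketch with the `lingam` Python package linked above is shown below; the random data and the lag order are illustrative assumptions.

```python
import numpy as np
import lingam

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))        # replace with a real (T x n) time series

model = lingam.VarLiNGAM(lags=1)      # SVAR model with one time lag
model.fit(X)
print(model.adjacency_matrices_[0])   # instantaneous (lag-0) effects
print(model.adjacency_matrices_[1])   # lag-1 effects
```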
#### 4.2.2 TiMINo
Time-series Models with Independent Noise (TiMINo) (Peters et al. (2013)) studies a class of restricted structural equation models (SEMs) for time-series data that include nonlinear and instantaneous effects. It assumes \(X_{t}\) to be a function of all direct causes and some noise variable, the collection of which is supposed to be jointly independent. The algorithm is based on unconditional independence tests and is applicable to multivariate, linear, nonlinear, and instantaneous interactions. If the model assumptions are not satisfied by the data, TiMINo remains mostly undecided instead of making wrong causal decisions. While methods like Granger causality (Granger (1969)) are built on the asymmetry of time direction, TiMINo additionally takes into account identifiability emerging from restricted SEMs. This leads to a straightforward way of dealing with unknown time delays in different time series. An implementation of TiMINo is available in this repository: [https://github.com/ckassaad/causal_discovery_for_time_series](https://github.com/ckassaad/causal_discovery_for_time_series).
### Continuous Optimization-based
#### 4.3.1 Dynotears
Pamfil et al. (2020) proposed the Dynamic NOTEARS (DYNOTEARS) which is a structure learning approach for dynamic data that simultaneously estimates contemporaneous (intra-slice) and time-lagged (inter-slice) relationships between variables in a time-series. It is a score-based approach that revolves around minimizing a penalized loss subject to an acyclicity constraint. The optimization finds the conditional dependencies that are best supported by the data. It leverages insight from the approach NOTEARS (Zheng et al. (2018)) which uses an algebraic characterization of acyclicity in directed graphs for static data. The assumptions made by DYNOTEARS include that the structure of the network is fixed through time, and is identical for all time series in the data. This approach is scalable to high-dimensional datasets. An implementation of
Figure 28: Illustration of CD-NOD’s phase-1.
this approach is available in the CausalNex library ([https://github.com/quantumblacklabs/causalnex](https://github.com/quantumblacklabs/causalnex)), and also at [https://github.com/ckassaad/causal_discovery_for_time_series](https://github.com/ckassaad/causal_discovery_for_time_series).
#### 4.3.2 NTS-NOTEARS
NTS-NOTEARS (Sun et al. (2021)) is a score-based causal discovery method that uses 1-D convolutional neural networks (CNNs) for time-series data to capture linear, nonlinear, lagged, and instantaneous relations among variables while ensuring the acyclicity property of a DAG. It extends a recent continuous optimization-based approach, NOTEARS (Zheng et al. (2018)), for learning nonparametric instantaneous DAGs, and adapts the acyclicity constraint from that approach. It assumes that there are no latent confounders in the data and that the underlying data-generating process is fixed and stationary over time. NTS-NOTEARS is faster than constraint-based methods that rely on nonlinear conditional independence tests. Prior knowledge can be incorporated into the learning process as optimization constraints on the convolutional layers for better causal discovery. Its implementation is available at: [https://github.com/xiangyu-sun-789/NTS-NOTEARS/](https://github.com/xiangyu-sun-789/NTS-NOTEARS/).
### Granger Causality (GC)-based
Granger (1969) investigated the causal relationships between the variables in a time series data which is known as Granger Causality (GC). It is based on the basic assumption that _causes precede their effects_. The author defines GC as follows: _A time series variable \(X^{i}\) causes \(X^{j}\), if the probability of \(X^{j}\) conditional on its own past, and the past of \(X^{i}\) (besides the set of the available information) does not equal the probability of \(X^{j}\) conditional on its own past alone_. The GC test can't be performed directly on non-stationary data. The non-stationary data needs to be transformed into stationary data by differencing it, either using first-order or second-order differencing. Granger Causality can be used when there are no latent confounders, and also, no instantaneous effects exist, i.e., no variable causes another variable at the same time stamp.
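As a simple illustration, the snippet below tests Granger causality between two synthetic series using statsmodels; the data-generating coefficients and the maximum lag are arbitrary choices, and non-stationary series should be differenced first as noted above.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + rng.normal(scale=0.5)  # past of x drives y

# statsmodels expects a 2-column array and tests whether the SECOND column
# Granger-causes the FIRST one.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)
# Small p-values of the F-tests suggest that x Granger-causes y.
```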
#### 4.4.1 Gvar
Generalized Vector AutoRegression (GVAR) (Marcinkevics and Vogt (2021)) is a framework for inferring multivariate Granger causality under nonlinear dynamics based on autoregressive modeling with self-explaining neural networks. It allows the detection of signs of Granger-causal effects and inspection of their variability over time in addition to relational inference. It focuses on two aspects: first, inferring Granger-causal relationships in multivariate time series under nonlinear dynamics, and second, inferring signs of Granger-causal relationships. A reproducible code of the approach is available at: [https://github.com/i6092467/GVAR](https://github.com/i6092467/GVAR).
#### 4.4.2 Navar
Bussmann et al. (2021) proposed the approach Neural Additive Vector AutoRegression (NAVAR) which is a causal discovery approach for capturing nonlinear relationships using _neural networks_. It is particularly trained using deep neural networks that extract the (additive) Granger causal influences from the time evolution in multivariate time series. NAVAR assumes an additive structure where the predictions depend linearly on independent nonlinear functions of the individual input variables. These nonlinear functions are modeled using neural networks. The additive structure of NAVAR allows scoring and ranking the causal relationships. Currently, NAVAR is implemented with MLPs and LSTMs as the backbone using Python which is available at: [https://github.com/bartbussmann/NAVAR](https://github.com/bartbussmann/NAVAR). However, more complex architectures such as dilated CNNs and transformers can also be used to model NAVAR.
### Miscellaneous Approaches
#### 4.5.1 oCSE
Causal Network Inference by Optimal Causation Entropy (oCSE) (Sun et al. (2015a)) is based on the _optimal causation entropy principle_ which utilizes a two-step process (_aggregative discovery and progressive removal_) to jointly infer the _set of causal parents_ of each node. It proposes a theoretical development of
_causation entropy_, an information-theoretic statistic designed for causal inference. Particularly, it proves the optimal causation entropy principle for Markov processes which is as follows: _the set of nodes that directly cause a given node is the unique minimal set of nodes that maximizes causation entropy_. This principle transforms the problem of causal inference into the optimization of causation entropy. Causation entropy can be regarded as a type of conditional mutual information designed for causal structure inference which generalizes the traditional, unconditioned version of transfer entropy. Causation entropy when applied to Gaussian variables also generalizes Granger causality and conditional Granger causality. An advantage of the method oCSE is that it often requires a relatively smaller number of samples, and fewer computations to achieve high accuracy. Due to its aggregative nature, the conditioning set encountered in entropy estimation remains relatively low-dimensional for sparse networks. An implementation of the oCSE algorithm is available on this website: [https://github.com/ckassaad/causal_discovery_for_time_series](https://github.com/ckassaad/causal_discovery_for_time_series).
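Since causation entropy can be viewed as a form of conditional mutual information, the toy snippet below estimates \(I(X;Y\mid Z)\) with a simple plug-in histogram estimator; the binning scheme and estimator are illustrative choices only, not the estimator used by oCSE.

```python
import numpy as np


def conditional_mutual_information(x, y, z, bins=4):
    """Toy plug-in estimate of I(X; Y | Z) for 1-D continuous samples."""
    def discretize(v):
        edges = np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(v, edges)              # values in {0, ..., bins - 1}

    xd, yd, zd = discretize(x), discretize(y), discretize(z)
    cmi = 0.0
    for zv in np.unique(zd):
        mask = zd == zv
        pz = mask.mean()
        joint, _, _ = np.histogram2d(xd[mask], yd[mask], bins=np.arange(bins + 1))
        joint /= max(joint.sum(), 1.0)
        px = joint.sum(axis=1, keepdims=True)     # P(X | Z = zv)
        py = joint.sum(axis=0, keepdims=True)     # P(Y | Z = zv)
        nz = joint > 0
        cmi += pz * np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz]))
    return cmi


# Usage: a large I(X_{t-1}; Y_t | Y_{t-1}) suggests X is a causal parent of Y.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.7 * x[t - 1] + 0.3 * rng.normal()
print(conditional_mutual_information(x[:-1], y[1:], y[:-1]))
```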
#### 4.5.2 Tcdf
Temporal Causal Discovery Framework (TCDF) (Nauta et al. (2019)) is a _deep learning framework_ that discovers causal relationships in observational time series data. Broadly, TCDF has the following steps: (i) time series prediction, (ii) attention interpretation, (iii) (a) causal validation and (b) delay discovery, and (iv) temporal causal graph construction. TCDF consists of \(N\) independent _attention-based convolutional neural networks (CNNs)_, all with the same architecture but a different target time series. Each network receives all observed time series as input. The goal of each network is to predict one time series based on the past values of all time series in the dataset. A time series \(X_{i}\) is considered a potential cause of the target time series \(X_{j}\) if the attention score is beyond a certain threshold. By comparing all attention scores, a set of potential causes is formed for each time series. TCDF validates whether a potential cause (found by the attention mechanism) is an actual cause of the predicted time series by applying a causal validation step. TCDF uses _permutation importance (PI)_ as the causal validation method, which measures how much an error score increases when the values of a variable are randomly permuted. Finally, all validated causal relationships are included in a temporal causal graph. TCDF learns the time delay between cause and effect by interpreting the network's kernel weights. The framework was evaluated on simulated financial market data and fMRI data, and it discovered roughly 95-97% of the time delays correctly. However, it performs slightly worse on short time series in the fMRI data, since a deep learning method has many parameters to fit. An implementation of TCDF can be found at: [https://github.com/M-Nauta/TCDF](https://github.com/M-Nauta/TCDF).
#### 4.5.3 Nbcb
NBCB (Assaad et al. (2021)) or Noise-based/Constraint-based approach is a hybrid approach that learns a _summary causal graph_ from observational time series data without being restricted to the Markov equivalent class even in the case of instantaneous relations. A _summary causal graph_ is one that represents the causal relations between time series without including lags. That is, it only represents the cause-effect relations in a given time series without the time delay between the cause and the effect. To find the summary graph, NBCB uses a hybrid approach which is divided into two steps. First, it uses a noise-based procedure to find the potential causes of each time series under the assumption of additive noise models (ANMs). Then, it uses a constraint-based approach to prune all unnecessary causes and hence ends up with an oriented causal graph. The second step is based on a new temporal causation entropy measure proposed by this study that is an extension of the causation entropy to time series data for handling lags bigger than one time
Figure 29: Interpretation of how the TCDF method works.
step. Furthermore, this study relies on a lighter version of the faithfulness hypothesis, namely adjacency faithfulness. An implementation of NBCB is available in the site [https://github.com/ckassaad/causal_discovery_for_time_series](https://github.com/ckassaad/causal_discovery_for_time_series).
#### 4.5.4 PCTMI
PCTMI (Assaad et al. (2022a)) is an entropy-based approach that discovers the summary causal graph for time series data with potentially different sampling rates. To do so this study proposes a new _temporal mutual information measure_ defined on a window-based representation of time series. Then it shows how this measure relates to an entropy reduction principle that can be seen as a special case of the _probabilistic raising principle_. PCTMI combines these two concepts in a PC-like algorithm (Spirtes et al. (2000b)) to construct the summary causal graph. PCTMI focuses particularly on the summary graph, rather than the full-time graph. It has mainly two steps: _(i) Skeleton construction and (ii) Edge orientation_. The skeleton construction as well as the orientation of instantaneous relations is similar to the PC algorithm but adapted for time series data. To orient the lagged relations, it uses the rules of an _entropic reduction_ (ER) principle (Michalos (1972)). PCTMI assumes both the causal Markov condition and faithfulness of the data distribution, common assumptions for constraint-based CD approaches. An implementation of PCTMI is available on this website: [https://github.com/ckassaad/causal_discovery_for_time_series](https://github.com/ckassaad/causal_discovery_for_time_series).
#### 4.5.5 Acd
Most causal discovery algorithms applied for time-series analysis find a causal graph for the data, and then refit the model whenever new samples do not fit with the underlying causal graph. But in many cases, samples share connections among them, for example, the brain activity of different regions at different times. When the algorithms fit a new model, this dynamic nature between the samples is lost, and can no longer identify the actual causal relation. To solve this problem, Lowe et al. (2022) proposed the Amortized Causal Discovery (ACD) technique which can identify the causal relations when samples are from different causal graphs but share common dynamics. ACD consists of an encoder and a decoder. The encoder predicts the causal graph's edges by learning Granger causal relations, and under the assumed causal model, the decoder simulates the dynamics of the system for the next time-step. The results showed that ACD performs better than existing causal discovery models for fully observed models, and also in the presence of hidden confounders and noise. However, experimentation is done only with simulated data which gives no guarantee that it will perform in the same way as more complex realistic data. Furthermore, ACD assumes that there exists a function that can specify the dynamics shared by all samples, but this cannot be verified in practice. Implementation of the model is available at [https://github.com/loeweX/AmortizedCausalDiscovery](https://github.com/loeweX/AmortizedCausalDiscovery).
## 5 Evaluation Metrics for Causal Discovery
We discuss the common metrics used to evaluate the performance of causal discovery algorithms below; a small sketch computing several of them from adjacency matrices follows the list.
* **Structural Hamming Distance (SHD):** SHD is the total number of edge additions, deletions, or reversals that are needed to convert the estimated graph \(G^{\prime}\) into its ground-truth graph \(G\)(Zheng et al. (2018); Cheng et al. (2022)). It is estimated by determining the missing edges, extra edges, and edges with incorrect direction in the produced graph compared to its true graph. A lower hamming distance means the estimated graph is closer to the true graph, and vice versa. An estimated graph is fully accurate when its SHD = 0. We show the calculation of SHD for the graphs in Figure 30 using the formula in Equation 15 where \(A\) = total number of edge additions, \(D\) = total number of edge deletions, and \(R\) = total number of edge reversals. In the Figure 30, we need to _add_ the edge \(D\to C\), _delete_ the edges \(D\to B\) and \(D\to A\), and _reverse_ the edges \(C\to B\) and \(C\to A\) in the generated graph (graph b) to convert it into the true graph (graph a). Therefore, the \(SHD=1+2+2=5\) means a total of 5 actions are required to reach the true graph (graph a). \[SHD=A+D+R\] (15)
* **Structural Intervention Distance (SID):** SID is a distance metric for DAGs proposed by Peters and Buhlmann (2015). It measures the closeness between DAGs in terms of their capacities for causal effects. Specifically, it computes the number of falsely inferred intervention distributions (Cheng et al. (2022)) to reflect how false edges in the generated graph can influence the effects obtained.
* **False Discovery Rate (FDR):** FDR is the expected fraction of false discoveries among all the discoveries. In terms of causal discovery, FDR represents the ratio of the extra edges over the sum of the true edges and extra edges. Here, extra edges mean the edges that are present in the estimated graph but not present in the actual graph or the false positives (FP), and true edges mean edges that are present in both the graphs or the true positives (TP). The lower the FDR, the better the performance of causal discovery. \[FDR=\frac{FP}{TP+FP}\] (16)
* **True Positive Rate (TPR):** TPR denotes the proportion of the positives in the data correctly identified as positives. In terms of causal graphs, TPR is the ratio of the edges in the estimated graph that are also present in the true graph (TP) to the total number of true edges (true positives (TP) and false negatives (FN)). The higher the TPR of an estimated graph, the better the discovery. \[TPR=\frac{TP}{ActualPositive}=\frac{TP}{TP+FN}\] (17)
* **False Positive Rate (FPR):** In general terms, FPR is the proportion of negatives that are incorrectly identified as positives. In terms of causal graphs, FPR is the ratio of the false edges produced by the estimated graph which are absent in the true graph (false positives/extra edges) over the sum of true negatives (TN) and false positives (FP). The lower the FPR, the better the causal discovery performance. \[FPR=\frac{FP}{ActualNegative}=\frac{FP}{TN+FP}\] (18)
* **Precision:** Precision returns the proportion of true positives (TP) among all the values predicted as positive. That is, out of all the positives predicted, what percentage is truly positive. In terms of causal discovery, precision is the fraction of the correct or semi-correct edges over all the produced edges (Shen et al. (2020)). \[Precision=\frac{TP}{TP+FP}\] (19)
* **Recall:** Recall returns the proportion of the correctly predicted positive values. That is, out of the total positives, what percentage are predicted as positive. In causal discovery, recall is the fraction of edges in the ground-truth graph that are correctly or semi-correctly estimated (Shen et al. (2020)). The recall metric is the same as TPR. \[Recall=\frac{TP}{TP+FN}\] (20)
Figure 30: (a) Ground-truth graph \(G\), and (b) Estimated graph \(G^{\prime}\).
* **F1 Score:** The F1 score metric combines the precision and recall metrics into a single metric. It is the harmonic mean of precision and recall and is mostly used in case of imbalanced data. \[F1\ score=\frac{2TP}{2TP+FN+FP}\] (21)
* **Matthews Correlation Coefficient (MCC):** MCC is a single-value metric that summarizes the confusion matrix. It takes into account all four entries of the confusion matrix (TP, TN, FP, and FN). The value of MCC is 1 when the discovery of edges is fully accurate (FP = FN = 0), indicating perfect causal discovery. On the contrary, when the algorithm always misidentifies (TP = TN = 0), then the MCC is -1, representing the worst possible discovery. Thus, the MCC value lies between -1 and 1. (A short code sketch computing the metrics above from adjacency matrices is given after this list.) \[MCC=\frac{TP\times TN-FP\times FN}{\sqrt{(TN+FN)(FP+TP)(TN+FP)(FN+TP)}}\] (22)
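The following is a minimal, self-contained sketch (our own illustration, not taken from any particular toolbox) of how these metrics can be computed from binary adjacency matrices, where `adj[i, j] = 1` encodes a directed edge \(i\to j\). Conventions, for example how reversed edges are counted, differ slightly across software packages, so the exact numbers should be treated as illustrative. The example graphs at the end are a hypothetical reconstruction of the edge differences described for Figure 30; any edges shared by both graphs are omitted.

```python
import numpy as np

def causal_discovery_metrics(true_adj, est_adj):
    true_adj = np.asarray(true_adj, dtype=int)
    est_adj = np.asarray(est_adj, dtype=int)

    # Reversed edges: the estimate has i -> j while the truth has only j -> i.
    rev = (est_adj == 1) & (est_adj.T == 0) & (true_adj == 0) & (true_adj.T == 1)
    extra = (est_adj == 1) & (true_adj == 0) & ~rev      # edges to delete from the estimate
    missing = (true_adj == 1) & (est_adj == 0) & ~rev.T  # edges to add to the estimate
    shd = int(missing.sum() + extra.sum() + rev.sum())   # Equation 15: A + D + R

    # Directed-edge confusion counts over all ordered node pairs (i != j).
    off_diag = ~np.eye(true_adj.shape[0], dtype=bool)
    t, e = true_adj[off_diag].astype(bool), est_adj[off_diag].astype(bool)
    tp, fp = int((t & e).sum()), int((~t & e).sum())
    fn, tn = int((t & ~e).sum()), int((~t & ~e).sum())

    fdr = fp / max(tp + fp, 1)                           # Equation 16
    tpr = tp / max(tp + fn, 1)                           # Equation 17 (= recall)
    fpr = fp / max(tn + fp, 1)                           # Equation 18
    precision = tp / max(tp + fp, 1)                     # Equation 19
    f1 = 2 * tp / max(2 * tp + fn + fp, 1)               # Equation 21
    mcc_den = np.sqrt((tn + fn) * (fp + tp) * (tn + fp) * (fn + tp))
    mcc = 0.0 if mcc_den == 0 else (tp * tn - fp * fn) / mcc_den  # Equation 22

    return {"SHD": shd, "FDR": fdr, "TPR": tpr, "FPR": fpr,
            "Precision": precision, "Recall": tpr, "F1": f1, "MCC": mcc}

# Hypothetical matrices consistent with the edge differences described for Figure 30
# (nodes ordered A, B, C, D).
true_g = np.array([[0, 0, 1, 0],   # A -> C
                   [0, 0, 1, 0],   # B -> C
                   [0, 0, 0, 0],
                   [0, 0, 1, 0]])  # D -> C
est_g = np.array([[0, 0, 0, 0],
                  [0, 0, 0, 0],
                  [1, 1, 0, 0],    # C -> A, C -> B
                  [1, 1, 0, 0]])   # D -> A, D -> B
print(causal_discovery_metrics(true_g, est_g))  # SHD = 5, as in the worked example above
```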
## 6 Datasets for Causal Discovery
There are a number of benchmark datasets from different domains that are often used for the evaluation of causal discovery approaches. We list some common i.i.d. and time-series datasets below.
### I.I.D. datasets
* **ASIA:** ASIA is a synthetic dataset, also known as the Lung Cancer dataset (Lauritzen and Spiegelhalter (1988)). The associated graph (Figure 31 (a)) is a small toy network that models lung cancer in patients from Asia. Particularly, it is about different lung diseases (tuberculosis, lung cancer, or bronchitis), their relations to smoking, and patients' visits to Asia. This dataset is often used for benchmarking causal graphical models. The ground-truth graph has 8 nodes and 8 edges. Lippe et al. (2021), and Hasan and Gani (2022) have used this dataset for the evaluation of their approaches.
* **LUCAS**: The LUCAS (Lung Cancer Simple Set) is a synthetic dataset that contains toy data generated artificially by causal Bayesian networks with binary variables (Lucas et al. (2004)). Here, the target variable is _Lung Cancer_. The data-generating model of the LUCAS dataset is a Markov process, which means that the state of the children is entirely determined by the state of the parents. The ground-truth graph (Figure 31 (b)) is a small network with 12 variables and 12 edges. Hasan and Gani (2022) used this dataset to evaluate their framework.
* **SACHS**: SACHS (Sachs et al. (2005)) is a real dataset that measures the expression levels of multiple phosphorylated protein and phospholipid components in human cells. It is the most commonly used dataset for evaluating causal discovery approaches. It has a small network with 11 nodes and 17 edges (Figure 32). The dataset has both observational and interventional samples. Most of the CD approaches use the \(n=853\) observational samples to evaluate their method. This dataset has been used by many approaches such as Zheng et al. (2018), Zhu et al. (2019), Ng et al. (2020), Lachapelle et al. (2019), Ng et al. (2022), and Lippe et al. (2021) for evaluation purposes.
Figure 31: Ground-truth networks of (a) the ASIA (left) and (b) the LUCAS (right) datasets.
* **CHILD**: The CHILD (Spiegelhalter et al. (1993)) dataset is a medical Bayesian network for diagnosing congenital heart disease in a newborn "blue baby". The ground-truth network is a medium-sized graph that consists of 20 nodes and 25 edges (Figure 33). The dataset includes features such as patient demographics, physiological characteristics, and lab test reports (Chest X-Ray, CO2 reports, etc.). This dataset was used by Lippe et al. (2021) in their study.
* **ALARM**: A Logical Alarm Reduction Mechanism (ALARM) is a patient monitoring system (Beinlich et al. (1989)) designed to issue cautionary alarm messages for patient monitoring, and it has an associated synthetic dataset. The ground-truth graph is a medium-sized network with 37 nodes and 46 edges. This dataset was used by Yu et al. (2019) and Cai et al. (2013) to evaluate their approaches. The ground-truth network is available in this repository: [https://www.bnlearn.com/bnrepository/](https://www.bnlearn.com/bnrepository/).
* **HEPAR2**: It is a probabilistic causal model for the diagnosis of liver disorders (Onisko (2003)). This causal Bayesian network tries to capture the causal links among different risk factors, diseases, symptoms, and test results. The ground-truth graph is a large network with 70 nodes and 123 edges which is available in the bnlearn (Scutari (2009)) repository.
### Time Series datasets
* **fMRI** datasets: Functional Magnetic Resonance Imaging (fMRI) is a popular approach to investigating dynamic brain networks (Cao et al. (2019)). Different types of fMRI data are often used to evaluate time-series causal discovery approaches. Zhang et al. (2017) used the fMRI Hippocampus dataset (Laumann et al. (2015)) that contains signals from six separate brain regions. Nauta et al. (2019), among others, used a simulated blood oxygen level-dependent (BOLD) fMRI dataset that has 28 different underlying networks from 50 brain regions. It measures the neural activity of different brain regions based on changes in blood flow. Huang et al. (2020) tested their approach using task fMRI data to learn information flows between brain regions, and how causal influences change across resting and task states.
Figure 32: Ground-truth network of the SACHS dataset.
Figure 33: Ground-truth network of the CHILD dataset.
* **Earth Sciences** data: CauseMe (Munoz-Mari et al. (2020)) is a platform that contains benchmark causal discovery datasets to evaluate and compare the performance of different CD approaches. It contains datasets generated from both synthetic models mimicking real challenges and real-world datasets from the earth science domain where the ground-truth network is known with high confidence. Bussmann et al. (2021) used different datasets from the CauseMe platform in their study. Specifically, they used the synthetic nonlinear VAR dataset, the hybrid climate and weather dataset, and the real-world river run-off dataset to evaluate their algorithm. It was also used by Runge et al. (2019) in their experiments.
* **causaLens** datasets: Lawrence et al. (2021) from causaLens proposed a framework for generating synthetic time-series data with a known ground-truth causal structure for evaluating time-series causal discovery approaches. They have an open-source repository ([https://github.com/causalens/cdml-neurips2020](https://github.com/causalens/cdml-neurips2020)) that contains the source code and datasets of their proposed framework. Datasets can be generated under different assumptions (causal sufficiency, i.i.d. data, instantaneous effects, etc.) using an example script in the repository. This allows users to generate data as per their requirements. Located in England, causaLens is a leading software company focused on developing intelligent machines based on causal AI.
* **DREAM3 challenge** datasets: DREAM3 (Prill et al. (2010)) is a simulated gene expression dataset often used for evaluating time-series causal discovery algorithms. It has five different datasets of E.Coli and yeast gene networks (Ecoli1, Ecoli2, Yeast1, Yeast2, and Yeast3), each consisting of a maximum of 100 variables. Bussmann et al. (2021) used this dataset to evaluate their approach. Every dataset has 46 time series and every time series consists of only 21 timesteps.
* **Stock market** datasets: Stock market datasets contain multiple continuous time series data which are very useful to assess temporal causal discovery algorithms. Huang et al. (2020) used two different stock market datasets downloaded from Yahoo Finance to test their approach. It contains daily returns of stocks from Hong Kong and the United States.
## 7 Benchmarking Causal Discovery Algorithms
In this section, we report the performance of some common causal discovery approaches on i.i.d datasets. We compare the approaches in terms of three metrics: _SHD_, _TPR_, and _FDR_. For causal discovery on the i.i.d. datasets, we choose the following commonly used datasets with available ground-truth graphs: _ASIA_ (small network), _CHILD_ and _ALARM_ (medium networks), and _HEPAR2_ (large network). The causal discovery approaches that are benchmarked for the i.i.d. datasets are: _PC, GES, LiNGAM, Direct-LiNGAM, NOTEARS, DAG-GNN, GraN-DAG, GOLEM_, and _MCSL_. The implementations of the algorithms have been adopted from the gCastle (Zhang et al. (2021)) package.
From the results reported in Table 6, we see that for the ASIA dataset, both GES and Direct-LiNGAM approaches have the best (lowest) SHD. MCSL, on the other hand, has the worst (highest) SHD for ASIA. For the CHILD dataset, NOTEARS performs the best w.r.t. SHD, and once again MCSL has the worst SHD. DAG-GNN has the best (lowest) SHD for the ALARM dataset, and once again GES outperforms others with the lowest SHD in the case of the HEPAR2 dataset. In terms of TPR, GES and MCSL both outperform others twice. That is, GES has the best TPR for the ASIA and ALARM networks, and MCSL has the highest TPR for the CHILD and HEPAR2 networks. With respect to FDR, GraN-DAG outperforms the other algorithms with the lowest FDR in the case of all the datasets except the ALARM dataset. DAG-GNN has the best FDR in the case of ALARM. Overall, the metrics of all the approaches on the HEPAR2 dataset, which has a large ground-truth network, are quite poor. This signifies that the existing approaches are not fully sufficient to handle large or very large networks, and should focus on improving their scalability. The development of new approaches should consider the scalability factor of the algorithm so that they can handle real-world large networks having 100 to 1000 nodes.
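As an illustration of how such a benchmark run can be set up, the following sketch fits one of the benchmarked algorithms on simulated data and scores it with the metrics of Section 5. It is based on the publicly documented interface of the gCastle package (Zhang et al. (2021)); exact class and argument names may differ across versions, so it should be read as a sketch rather than a definitive recipe.

```python
# Sketch of a single benchmark run with gCastle: simulate data from a random DAG,
# fit the PC algorithm, and report SHD/TPR/FDR-style metrics.
from castle.algorithms import PC
from castle.datasets import DAG, IIDSimulation
from castle.metrics import MetricsDAG

# Simulate an Erdos-Renyi ground-truth DAG and i.i.d. linear-Gaussian data from it.
true_weighted_dag = DAG.erdos_renyi(n_nodes=10, n_edges=15, weight_range=(0.5, 2.0), seed=1)
dataset = IIDSimulation(W=true_weighted_dag, n=2000, method='linear', sem_type='gauss')
true_dag, X = dataset.B, dataset.X

# Structure learning with the PC algorithm (any other castle.algorithms class fits here).
pc = PC()
pc.learn(X)

# Compare the estimated graph against the ground truth.
metrics = MetricsDAG(pc.causal_matrix, true_dag)
print(metrics.metrics)  # dictionary including shd, tpr, fdr, precision, recall, ...
```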
## 8 Tools for Causal Discovery
We briefly introduce the tools and software publicly available for users to perform causal discovery. These tools include the implementations of some benchmark causal discovery approaches as well as famous datasets, and commonly used evaluation metrics. Please refer to the table in the following page for the details of the tools or software packages.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
 & \multicolumn{3}{c|}{**ASIA**} & \multicolumn{3}{c|}{**CHILD**} & \multicolumn{3}{c|}{**ALARM**} & \multicolumn{3}{c|}{**HEPAR2**} \\ \hline
**Methods** & SHD & TPR & FDR & SHD & TPR & FDR & SHD & TPR & FDR & SHD & TPR & FDR \\ \hline
PC & 5 & 0.6 & 0.3 & 43 & 0.24 & 0.86 & 55 & 0.67 & 0.6 & 172 & 0.35 & 0.75 \\ \hline
GES & **4** & **0.63** & 0.38 & 34 & 0.38 & 0.89 & 56 & **0.74** & 0.61 & **70** & 0.5 & 0.23 \\ \hline
LiNGAM & 7 & 0.25 & 0.6 & 23 & 0.28 & 0.63 & 43 & 0.43 & 0.55 & 111 & 0.1 & 0.32 \\ \hline
Direct-LiNGAM & **4** & 0.5 & **0** & 28 & 0.12 & 0.82 & 40 & 0.39 & 0.5 & 110 & 0.1 & 0.07 \\ \hline
NOTEARS & 12 & 0.13 & 0.83 & **22** & 0.16 & 0.64 & 41 & 0.17 & 0.38 & 123 & - & - \\ \hline
DAG-GNN & 7 & 0.25 & 0.5 & 24 & 0.24 & 0.7 & **39** & 0.196 & **0.31** & 123 & 0 & 1 \\ \hline
GraN-DAG & 7 & 0.13 & **0** & 24 & 0.04 & **0** & 44 & 0.044 & 0.75 & 122 & 0.008 & **0** \\ \hline
GOLEM & 11 & 0.25 & 0.75 & 49 & 0.2 & 0.88 & 60 & 0.26 & 0.71 & 157 & 0.05 & 0.89 \\ \hline
MCSL & 19 & 0.5 & 0.82 & 140 & **0.56** & 0.91 & 464 & 0.72 & 0.93 & 1743 & **0.45** & 0.97 \\ \hline
\end{tabular}
\end{table}
Table 6: The benchmarking of some common causal discovery algorithms for I.I.D. datasets. The best results w.r.t. each metric (SHD, TPR, and FDR) are boldfaced. A lower SHD and FDR are better, while a higher TPR signifies better performance.
Figure 34: SHD, TPR, and FDR plots of the different benchmarked approaches on some i.i.d. datasets. A lower SHD and FDR are better, while a higher TPR signifies better performance.
## 9 Challenges and Applications of Causal Discovery
### Challenges
Despite the years of progress made in developing different approaches for causal discovery, there exist some concerns and challenges that need to be addressed during the development of any causal discovery approach. One of the major concerns about causal discovery algorithms is the strong _assumptions_ they make to recover the underlying causal graph from data. The task becomes really challenging when _any of these assumptions is violated_. One such assumption is causal sufficiency, which considers that _there are no unobserved/latent variables_. Several methods estimate the causal relationships assuming there are no unobserved confounders. However, this might not be the case in real-world data. When real-world data violates this assumption and has hidden confounders, the estimation results could be distorted and lead to false conclusions. Often, _real datasets have hidden confounders_ that must be taken into account to obtain a true causal graph that represents the data-generating process well. Otherwise, this may lead to the _possibility of biases_ in the analysis. Therefore, observational data collected with latent confounders is insufficient to infer the true underlying causal structure (Squires et al. (2022)). Some studies, such as Jabbari et al. (2017) and Liu et al. (2021), address the presence of latent variables in causal discovery. Another assumption, the causal faithfulness condition, also fails in multiple cases (e.g., if some variables are completely determined by others).
Most of the CD algorithms are based on the assumption that the data samples are _independent and identically distributed (i.i.d.)_. However, in many real-world scenarios, the data may have been generated in a different way, and thus, the i.i.d. assumption is violated (Lee & Honavar (2020)). In such cases, using CD algorithms that assume the data is i.i.d. may produce spurious and misleading relationships. Apart from failures of the assumptions, some approaches may get stuck in a local optimum. In particular, greedy methods like GES (Chickering (2002)), SGES (Chickering & Meek (2015)), etc. can get trapped in a local optimum, even with large datasets. These methods may often produce sub-optimal graphs in the absence of infinite data. _Computational complexity_ is another challenge for causal discovery algorithms. The _search space grows super-exponentially_ due to the _combinatorial nature of the solution space_, which makes even simple methods computationally expensive (Chickering (1996)). In the case of the score-based approaches, the _large search space_ over all possible DAGs is a major drawback. Hence, score-based methods seem to work well when there is a small or moderate number of nodes. However, these methods suffer when the space of equivalence classes grows super-exponentially for dense networks. _Lack of abundant observational data_ is another major concern for many CD approaches. For constraint-based approaches such as PC (Spirtes et al. (2000b)), FCI (Spirtes et al. (2000a)), etc., accurate CI testing is possible only when an infinite amount of data is available. With a _finite amount of data, conditional independence (CI) tests become really challenging_. Another disadvantage of the constraint-based approaches is that with a _large sample size or high dimensionality_, the _number of CI tests grows exponentially_. In some cases, the algorithm might even take weeks to produce its output; that is, the run time of the algorithm becomes far too long.
Structure _identifiability_ of the underlying causal model (Shimizu et al. (2006)) is another issue in causal discovery. A causal graph \(G\) is typically not identifiable given observational data only, as a set of possible graphs could have generated the data. Also, the statistical issues stemming from high-dimensional datasets are of concern. Apart from these, a major challenge is the _lack of enough benchmark datasets_ with ground truth to train and evaluate the developed causal models. The lack of a comprehensive public data repository consisting of ground-truth graphs hinders the proper evaluation of CD approaches. This problem is severe for areas such as climate science where there is almost never any exact ground truth available (Melkas et al. (2021)). Hence, the only way to analyze the produced graphs in such scenarios is to let domain experts inspect those and see if they actually make sense (Ebert-Uphoff & Deng (2017), Gani et al. (2023)).
### Applications
Causal discovery is widely used in various fields, including healthcare, economics, earth science, education, machine learning, and natural language processing, among others. The challenges faced with correlation-based machine learning have facilitated the development of several causal discovery techniques and increased their applications in many domains.
In **biomedical and healthcare domains,** the key research questions revolve around identifying the underlying causal mechanism to find the risk factors that can be changed to cure a disease. To serve this purpose, researchers have been using causal discovery techniques for a long time. Mani & Cooper (1999) used a modified local causal discovery technique to identify the factors contributing to infant mortality in the USA. Wang et al. (2006) used a stepwise causal discovery method to identify active components or combinations of the components in herbal medicine. The _Fast Causal Inference (FCI)_ and _Fast Greedy Equivalence Search (FGES)_ methods were used by Shen et al. (2020) to see how accurately these techniques can generate the 'gold standard' graph of Alzheimer's Disease. They evaluated the performance of the algorithms on the dataset collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and found that the causal graphs generated by FCI and FGES are almost identical to the 'gold standard' graph created from the literature. They also suggested that using longitudinal data with as much prior knowledge as feasible will maximize the effectiveness of causal discovery algorithms. More recently, Shen et al. (2021) proposed a causal discovery technique that can be applied to large-scale Electronic Health Record (EHR) data and has been applied to identify the causal structure for type-2 diabetes mellitus. Before applying the algorithm, they utilized a data transformation method that converts longitudinal data to disease events. The algorithm uses a BIC score to find the causal graph, which overlaps 81% with the graph validated by the professionals. Some studies (Bikak et al. (2020), Gani et al. (2023)) have combined the outcomes from several causal discovery algorithms with the opinions of healthcare experts to develop more reliable and plausible causal graphs. Gani et al. (2023) study the effect of liberal versus conservative oxygen therapy on the mortality of ICU patients, where they present an expert-augmented causal estimation framework. The framework systematically combines results from a set of causal discovery algorithms with expert opinions to produce the final causal graph (Figure 35) that is used to answer some important clinical causal queries.
**Earth science and climate** related research is another domain where causality has been widely adopted. The well-known _PC algorithm_ was applied to find the causal links between Eastern Pacific Oscillation (EPO), Western Pacific Oscillation (WPO), North Atlantic Oscillation (NAO), and Pacific-North America (PNA) patterns, which are four important patterns of atmospheric low-frequency variability in boreal winter (Ebert-Uphoff & Deng (2012)). The results, which support earlier research on dynamical processes, suggested that WPO and EPO are almost identical from a cause-and-effect standpoint due to their high contemporaneous coupling. The _PC_ and _PC stable_ algorithms were applied to daily geopotential height data at 500 mb over the boreal winter (Ebert-Uphoff & Deng). The results showed that the atmospheric interactions become less strong on average over the whole Northern Hemisphere. Reduced interconnectedness across various geographic places is the result of this weakening, particularly in the tropics. Causal discovery methods were also applied to verify the results obtained from dynamic climate models. Hammerling et al. (2015) used the _PC algorithm_ to learn the causal signatures from the output of the dynamic model. These causal signatures can provide an additional layer of error checking and identify whether the results of dynamic models are accurate or not. Ombadi et al. (2020) applied Granger causality, PC, convergence cross-mapping, and transfer entropy to hydrological models. The authors used these causal discovery methods to identify and investigate the causes of evaporation and transpiration in shrubland areas throughout the course of the summer and winter.
Figure 35: Causal factors determining the influence of oxygen therapy on the mortality of critical care patients (Gani et al. (2023)). This causal graph was determined by the majority voting of 7 causal discovery algorithms combined with opinions from the domain experts.
The **education sector** has leveraged causal discovery techniques for decades now. Druzdzel & Glymour (1995) performed an experiment based on the _Tetrad II_ (Scheines et al. (1994)) causal discovery program to investigate why the retention rate of U.S. universities is low compared to their reputation. The causal discovery model identified that the retention rate mostly depends on the quality of incoming students. Fancsali (2014) used the _PC_ and _FCI_ algorithms to answer questions based on their causal effect. They specifically considered the following situation: given that a student who plays computer games scored poorly on an exam, can the algorithms tell whether reducing gaming time will improve the student's results? Quintana (2020) employed the _PC_ and _FGES_ algorithms to find which social and economic factors are directly related to academic achievement. The algorithms found earlier accomplishment, executive functions such as thinking skills, sustained attention focusing, and ambition to be the primary drivers of academic performance, which is in line with other studies.
Over the past few years, the intersection of causality with **Machine Learning (ML)** and **Artificial Intelligence (AI)** techniques has become a topic of considerable interest. Sun et al. (2015b) utilized _Granger causality_ for selecting machine learning features in two-dimensional space. This approach outperformed traditional feature selection techniques like Principal Component Analysis (PCA) (Abdi & Williams (2010)), Functional Connectome (FC) (Bishop et al. (1995)), and Recursive Feature Elimination (RFE) (Guyon et al. (2002)) due to the ability of _Granger_ causality to identify the causal connection between the input variable and the chosen time series. Nogueira et al. (2021) published a survey paper that mainly focused on the applications of causal discovery in machine learning. They discussed how constraint-based and score-based approaches, as well as causal neural networks and causal decision trees, were applied along with machine learning in various topics. Although researchers were previously not much interested in applying causal techniques in **Natural Language Processing (NLP)**, an important sub-field of AI, several causal discovery methods have recently been applied in this area. For a deeper treatment, one can read the survey by Feder et al. (2021) that discusses the applications of different causal discovery techniques in NLP and how these techniques can help to improve this domain further.
In addition to the abovementioned domains, causal discovery techniques are also being used in **business, macroeconomics, manufacturing, and software engineering**, to name a few. Hu et al. (2013) used a causal Bayesian network with some specialized constraints to analyze the risks associated with software development projects. Luo et al. (2021) used causal discovery models to identify the relationship between flight delays and service nodes. Hall-Hoffarth (2022) employed causal discovery in macroeconomic dynamic stochastic general equilibrium (DSGE) models to learn the underlying causal structure. Vukovic & Thalmann (2022) wrote a review paper identifying the applications of causal discovery in manufacturing, where root cause analysis, causality in a facilitator role, fault detection, analysis, and management have been highlighted as important areas of application. In business, the understanding of causal relations plays a vital role in designing effective interventions such as launching a new advertising campaign or a promotion (Borboudakis and Tsamardinos (2016)). Thus, causal discovery approaches and techniques have been widely adopted in several areas for understanding the underlying causal relationships, and thereby deriving actionable insights.
Figure 36: Causal influence of climate and environmental factors on the collapse of an Arctic ecosystem from a storm surge (Shepherd & Lloyd (2021)).
## 10 Discussion
Traditional AI applications that solely rely upon predictions lack explainability, and are often difficult to comprehend due to their black-box nature. _Causal analysis_ can overcome the lack of explainability in the existing AI models by embedding causal knowledge into them. Such models have greater transparency and thereby achieve greater reliability. A crucial part of causal analysis is _causal discovery_. It is the recovery of the underlying causal structure represented in a graphical form. Such visualizations of causal relationships are easy to comprehend as well as more appealing to a user. In this survey, we introduce a wide variety of existing approaches to perform causal discovery. We also provide a brief overview of the common terminologies used in the area of causal discovery, and summarize the different types of causal discovery algorithms available for both i.i.d. and time series data. Apart from discussing the approaches, we also discuss the commonly used datasets, metrics, and toolboxes for performing causal discovery.
With the growing number of approaches for performing causal discovery, it is essential to look into the common challenges or limitations faced during the process. Towards the end of this paper, we discuss some of the common challenges as well as a wide variety of applications of causal discovery in multiple fields. Future causality research should focus on the nature of real-world datasets and develop methods that take into account these practical constraints for better and more reliable causal discovery. It is often observed during experiments that different CD methods produce causal graphs that disagree with each other to a great extent. In fact, in the experiments (benchmarking) that we performed, we also observed a significant disagreement among the approaches w.r.t. their estimated causal graphs. Therefore, it is necessary to accurately quantify the uncertainty of the inferred structures. This is particularly important for areas such as healthcare, which directly concern human well-being. It is also important to consider any available background knowledge, such as domain expertise or literature evidence, during the causal discovery process, which may help to overcome the existing challenges. Once the causal community succeeds in addressing the existing challenges, we may hope to have better approaches with greater accuracy and reliability.
#### Acknowledgments
This work is partially supported by grants from the National Science Foundation (NSF Award # 2118285) and UMBC Strategic Awards for Research Transitions (START). The content of this work does not necessarily represent the policy of NSF or assume endorsement by the Federal Government.
|
2306.01854 | Reinforcement Learning with General Utilities: Simpler Variance
Reduction and Large State-Action Space | We consider the reinforcement learning (RL) problem with general utilities
which consists in maximizing a function of the state-action occupancy measure.
Beyond the standard cumulative reward RL setting, this problem includes as
particular cases constrained RL, pure exploration and learning from
demonstrations among others. For this problem, we propose a simpler single-loop
parameter-free normalized policy gradient algorithm. Implementing a recursive
momentum variance reduction mechanism, our algorithm achieves
$\tilde{\mathcal{O}}(\epsilon^{-3})$ and $\tilde{\mathcal{O}}(\epsilon^{-2})$
sample complexities for $\epsilon$-first-order stationarity and
$\epsilon$-global optimality respectively, under adequate assumptions. We
further address the setting of large finite state action spaces via linear
function approximation of the occupancy measure and show a
$\tilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity for a simple policy
gradient method with a linear regression subroutine. | Anas Barakat, Ilyas Fatkhullin, Niao He | 2023-06-02T18:16:35Z | http://arxiv.org/abs/2306.01854v1 | # Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space
###### Abstract
We consider the reinforcement learning (RL) problem with general utilities which consists in maximizing a function of the state-action occupancy measure. Beyond the standard cumulative reward RL setting, this problem includes as particular cases constrained RL, pure exploration and learning from demonstrations among others. For this problem, we propose a simpler single-loop parameter-free normalized policy gradient algorithm. Implementing a recursive momentum variance reduction mechanism, our algorithm achieves \(\tilde{\mathcal{O}}(\epsilon^{-3})\) and \(\tilde{\mathcal{O}}(\epsilon^{-2})\) sample complexities for \(\epsilon\)-first-order stationarity and \(\epsilon\)-global optimality respectively, under adequate assumptions. We further address the setting of large finite state action spaces via linear function approximation of the occupancy measure and show a \(\tilde{\mathcal{O}}(\epsilon^{-4})\) sample complexity for a simple policy gradient method with a linear regression subroutine.
truncated gradient step can be formulated as solving a trust-region subproblem at each iteration, which is reminiscent of trust-region based algorithms such as TRPO (Schulman et al., 2015) and PPO (Schulman et al., 2017). In particular, implementing TSIVR-PG requires tuning a gradient truncation radius depending on problem parameters while also choosing adequate large batches. Besides these algorithmic considerations, a major limitation of recent prior work (Zhang et al., 2021, 2020; Kumar et al., 2022) is the need to estimate the unknown occupancy measure at each state-action pair. In several problems of practical scale, the number of states and/or actions is prohibitively large and renders tabular methods intractable. For instance, the size of a state space grows exponentially with the number of state variables. This is commonly known as the curse of dimensionality.
In this paper, we consider the RL problem with general utilities. Our contributions are as follows:
* We propose a novel single-loop normalized PG algorithm called N-VR-PG using only a single trajectory per iteration. In particular, our algorithm does not require the knowledge of problem specific parameters, large batches nor checkpoints unlike TSIVR-PG in Zhang et al. (2021). Instead of gradient truncation, we propose to use a normalized update rule for which no additional gradient truncation hyperparameter is needed. At the heart of our algorithm design is a recursive double variance reduction mechanism implemented with momentum for both the stochastic policy gradient and the occupancy measure estimator (in the tabular setting), akin to STORM (Cutkosky and Orabona, 2019) in stochastic optimization.
* We show that using a normalized gradient update guarantees bounded IS weights for the softmax parametrization. Unlike in most prior works focusing on the particular case of the standard RL setting, variance of IS weights is automatically bounded and no further assumption is needed. We further demonstrate that IS weights can also be similarly controlled when using a gaussian policy for continuous state-action spaces under mild assumptions.
* In the general utilities setting with finite state-action spaces and softmax policy, we show that our algorithm requires \(\tilde{\mathcal{O}}(\varepsilon^{-3})\) samples to reach an \(\varepsilon\)-stationary point of the objective function and \(\tilde{\mathcal{O}}(\varepsilon^{-2})\) samples to reach an \(\varepsilon\)-globally optimal policy by exploiting the hidden concavity of the problem when the utility function is concave and the policy is overparametrized. In the standard RL setting, we further show that such sample complexity results also hold for continuous state-action spaces when using the gaussian policy under adequate assumptions.
* Beyond the tabular setting, we consider the case of large finite state and action spaces which has not been previously addressed in this general setting to the best of our knowledge. We consider approximating the unknown state-action occupancy measure itself by a linear combination of pre-selected basis functions via a least-mean-squares solver. This linear function approximation procedure combined with a stochastic policy gradient method results in an algorithm for solving the RL problem with general nonlinear utilities for large state and action spaces. Specifically, we show that our PG method requires \(\tilde{\mathcal{O}}(\varepsilon^{-4})\) samples to guarantee an \(\varepsilon\)-first-order stationary point of the objective function up to an error floor due to function approximation.
**Related works.** We briefly discuss standard RL before closely related works for RL with general utility.
**Variance-reduced PG for standard RL.** In the last few years, there has been a vast array of work around variance-reduced PG methods for solving the standard RL problem with a cumulative sum of rewards to reduce the high variance of the stochastic policy gradients (see, e.g., Papini et al. (2018); Xu et al. (2020); Pham et al. (2020); Gargiani et al. (2022)). Yuan et al. (2020); Huang et al. (2020) proposed momentum-based policy gradient methods. All the aforementioned works use IS and make an unverifiable assumption stipulating that the IS weights variance is bounded. To relax this unrealistic assumption, Zhang et al. (2021) provide a gradient truncation mechanism complementing IS for the specific case of the softmax parameterization whereas Shen et al. (2019); Salehkaleybar et al. (2022) incorporate second-order information for which IS is not needed. Even in the special case of standard cumulative reward, our algorithm differs from prior work in that it combines the following features: it is single-loop, runs with a single trajectory per iteration and uses a normalized update rule to control the IS weights without further assumption. In particular, our algorithm does not make use of second order information and thus our analysis does not require second-order smoothness conditions. Typically, variance-reduced PG methods guarantee a \(\tilde{\mathcal{O}}(\varepsilon^{-3})\) sample complexity to reach a first-order stationary policy, improving over its \(\tilde{\mathcal{O}}(\varepsilon^{-4})\) counterpart for vanilla PG. Following the recent work of Agarwal et al. (2021) which provided global optimality guarantees for PG methods despite the non-concavity of the problem, several works (Liu et al., 2020; Zhang et al., 2021; Ding et al., 2021; Yuan et al., 2022; Masiha et al., 2022; Yuan et al., 2023) established global optimality guarantees for stochastic PG methods with or without variance reduction under policy parametrization. The best known sample complexity to reach an \(\epsilon\)-globally optimal policy is \(\tilde{\mathcal{O}}(\epsilon^{-2})\) and was achieved via policy mirror descent without parametrization (Lan, 2022; Xiao, 2022), with log-linear policies recently (Yuan et al., 2023) and via variance-reduced PG for softmax parametrization by exploiting hidden convexity (Zhang et al., 2021b). Very recently, Fatkhullin et al. (2023) obtained a \(\tilde{\mathcal{O}}(\epsilon^{-2})\) sample complexity for Fisher-non-degenerate parametrized policies.
**RL with General Utility.** There is a huge literature addressing control problems with nonstandard utilities that we cannot hope to give justice to. Let us mention though some early examples in Operations Research such as inventory problems with constraints on the probability of shortage (Derman and Klein, 1965) and variance-penalized MDPs (Filar et al., 1989; Kallenberg, 1994) where the problem is formulated as a nonlinear program in the space of state-action frequencies. In the rest of this section, we briefly discuss the most relevant research to the present paper. Zhang et al. (2020) study the policy optimization problem where the objective function is a concave function of the state-action occupancy measure to include several known problems such as constrained MDPs, exploration and learning from demonstrations. To solve this problem for which dynamic programming cannot be employed, Zhang et al. (2020) investigate policy search methods and first define a variational policy gradient for RL with general utilities as the solution to a stochastic saddle point problem. Exploiting the hidden convexity structure of the problem, they further show global optimality guarantees when having access to exact policy gradients. However, the procedure to estimate even a single policy gradient via the proposed primal-dual stochastic approximation method from sample paths turns out to be complex. Leveraging the formulation of the RL problem as a stochastic composite optimization problem, Zhang et al. (2021b) later proposed a (variance-reduced) stochastic PG approach for solving general utility RL ensuring a \(\tilde{\mathcal{O}}(\epsilon^{-3})\) sample complexity to find an \(\epsilon\)-stationary policy under smoothness of the utility function and the policy parametrization and a \(\tilde{\mathcal{O}}(\epsilon^{-2})\) global optimality sample complexity for a concave utility with an overparametrized policy. When the utility is concave as a function of the occupancy measure, the corresponding RL problem is known as Convex RL or Convex MDPs. Using Fenchel duality, Zahavy et al. (2021) casted the convex MDP problem as a min-max game between a policy player and a cost player producing rewards that the policy player must maximize. An insightful consequence of this viewpoint is that any algorithm solving the standard RL problem can be used for solving the more general convex MDP problem. In the present paper, we adopt the direct policy search approach with policy parametrization proposed in Zhang et al. (2021b) instead of the dual viewpoint. Geist et al. (2022) show that Convex RL is a subclass of Mean-Field games. Zhang et al. (2022) consider a decentralized version of the problem with general utilities with a network of agents.
## 2 Preliminaries
**Notations.** For a given finite set \(\mathcal{X}\), we use the notation \(|\mathcal{X}|\) for its cardinality and \(\Delta(\mathcal{X})\) for the space of probability distributions over \(\mathcal{X}\). We equip any Euclidean space with its standard inner product denoted by \(\left\langle\cdot,\cdot\right\rangle.\) The notation \(\|\cdot\|\) refers to both the standard \(2\)-norm and the spectral norm for vectors and matrices respectively.
**Markov Decision Process with General Utility.** Consider a discrete-time discounted Markov Decision Process (MDP) with a general utility function \(\mathbb{M}(\mathcal{S},\mathcal{A},\mathcal{P},F,\rho,\gamma)\), where \(\mathcal{S}\) and \(\mathcal{A}\) are finite state and action spaces respectively, \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\) is the state transition probability kernel, \(F:\mathcal{M}(\mathcal{S}\times\mathcal{A})\rightarrow\mathbb{R}\) is a general utility function defined over the space of measures \(\mathcal{M}(\mathcal{S}\times\mathcal{A})\) on the product space \(\mathcal{S}\times\mathcal{A}\), \(\rho\) is the initial state distribution and \(\gamma\in(0,1)\) is the discount factor. A stationary policy \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\) maps each state \(s\in\mathcal{S}\) to a distribution \(\pi(\cdot|s)\) over the action space \(\mathcal{A}\). The set of all stationary policies is denoted by \(\Pi\). At each time step \(t\in\mathbb{N}\) in a state \(s_{t}\in\mathcal{S}\), the RL agent chooses an action \(a_{t}\in\mathcal{A}\) with probability \(\pi(a_{t}|s_{t})\) and the environment transitions to a state \(s_{t+1}\) with probability \(\mathcal{P}(s_{t+1}|s_{t},a_{t})\). We denote by \(\mathbb{P}_{\rho,\pi}\) the probability distribution of the Markov chain \((s_{t},a_{t})_{t\in\mathbb{N}}\) induced by the policy \(\pi\) with initial state distribution \(\rho\). We use the notation \(\mathbb{E}_{\rho,\pi}\) (or often simply \(\mathbb{E}\) instead) for the associated expectation. We define for any policy \(\pi\in\Pi\) the state-action occupancy measure \(\lambda^{\pi}\in\mathcal{M}(\mathcal{S}\times\mathcal{A})\) as:
\[\lambda^{\pi}(s,a)\stackrel{{\text{def}}}{{=}}\sum_{t=0}^{+\infty }\gamma^{t}\mathbb{P}_{\rho,\pi}(s_{t}=s,a_{t}=a)\,. \tag{1}\]
We denote by \(\Lambda\) the set of such occupancy measures, i.e., \(\Lambda\stackrel{{\text{def}}}{{=}}\left\{\lambda^{\pi}:\pi\in \Pi\right\}.\) Then, the general utility function \(F\) assigns a real to each occupancy measure \(\lambda^{\pi}\) induced by a policy \(\pi\in\Pi\). A state-action occupancy measure \(\lambda^{\pi}\) will also be seen as a vector of the Euclidean space \(\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\).
**Policy parametrization.** In this paper, we will consider the common softmax policy parametrization defined for every \(\theta\in\mathbb{R}^{d},s\in\mathcal{S},a\in\mathcal{A}\) by:
\[\pi_{\theta}(a|s)=\frac{\exp(\psi(s,a;\theta))}{\sum_{a^{\prime}\in\mathcal{A} }\exp(\psi(s,a^{\prime};\theta))}\,, \tag{2}\]
where \(\psi:\mathcal{S}\times\mathcal{A}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) is a smooth function. The softmax parametrization will be important for controlling IS weights for variance reduction. However, some of our results will not require this specific parameterization and we will explicitly indicate it when appropriate.
**Problem formulation.** The goal of the RL agent is to find a policy \(\pi_{\theta}\) (determined by the vector \(\theta\)) solving the problem:
\[\max_{\theta\in\mathbb{R}^{d}}F(\lambda^{\pi_{\theta}})\,, \tag{3}\]
where \(F\) is a smooth function supposed to be upper bounded and \(F^{\star}\) is used in the remainder of this paper to denote the maximum in (3). The agent has only access to (a) trajectories of finite length \(H\) generated from the MDP under
the initial distribution \(\rho\) and the policy \(\pi_{\theta}\) and (b) the gradient of the utility function \(F\) with respect to (w.r.t.) its variable \(\lambda\). In particular, provided a time horizon \(H\) and a policy \(\pi_{\theta}\) with \(\theta\in\mathbb{R}^{d}\), the learning agent can simulate a trajectory \(\tau=(s_{0},a_{0},\cdots,s_{H-1},a_{H-1})\) from the MDP whereas the state transition kernel \(\mathcal{P}\) is unknown. This general utility problem was described, for instance, in Zhang et al. (2021) (see also Kumar et al. (2022)). Recall that the standard RL problem corresponds to the particular case where the general utility function is a linear function, i.e., \(F(\lambda^{\pi_{\theta}})=\langle r,\lambda^{\pi_{\theta}}\rangle\) for some vector \(r\in\mathbb{R}^{\mathcal{S}\times\mathcal{A}}\) in which case we recover the expected return function as an objective:
\[V^{\pi_{\theta}}(r)\stackrel{{\text{def}}}{{=}}\mathbb{E}_{\rho, \pi_{\theta}}\left[\sum_{t=0}^{+\infty}\gamma^{t}r(s_{t},a_{t})\right]\,. \tag{4}\]
In the standard RL case, we shall use the notation \(J(\theta)\stackrel{{\text{def}}}{{=}}V^{\pi_{\theta}}(r)\) where \(r\) is the corresponding reward function.
**Policy Gradient for General Utilities.** Following the exposition in (Zhang et al., 2021) (see also more recently (Kumar et al., 2022)), we derive the policy gradient for the general utility objective. For convenience, we use the notation \(\lambda(\theta)\) for \(\lambda^{\pi_{\theta}}\). Since the cumulative reward can be rewritten more compactly \(V^{\pi_{\theta}}(r)=\langle\lambda^{\pi_{\theta}},r\rangle\), it follows from the policy gradient theorem that:
\[[\nabla_{\theta}\lambda(\theta)]^{T}r=\nabla_{\theta}V^{\pi_{ \theta}}(r)\] \[=\mathbb{E}_{\rho,\pi_{\theta}}\left[\sum_{t=0}^{+\infty}\gamma^ {t}r(s_{t},a_{t})\sum_{t^{\prime}=0}^{t}\nabla\log\pi_{\theta}(a_{t^{\prime}} |s_{t^{\prime}})\right]\,, \tag{5}\]
where \(\nabla_{\theta}\lambda(\theta)\) is the Jacobian matrix of the vector mapping \(\lambda(\theta)\). Using the chain rule, we have
\[\nabla_{\theta}F(\lambda(\theta)) =[\nabla_{\theta}\lambda(\theta)]^{T}\nabla_{\lambda}F(\lambda( \theta))\] \[=\nabla_{\theta}V^{\pi_{\theta}}(r)|_{r=\nabla_{\lambda}F( \lambda(\theta))}\,. \tag{6}\]
**Stochastic Policy Gradient.** In light of (6), in order to estimate the policy gradient \(\nabla_{\theta}F(\lambda(\theta))\) for general utilities, we can use the standard reinforce estimator suggested by Eq. (5) but we also need to estimate the state-action occupancy measure \(\lambda(\theta)\) (when \(F\) is nonlinear)1. Define for every reward function \(r\) (which is also seen as a vector in \(\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\)), every \(\theta\in\mathbb{R}^{d}\) and every \(H\)-length trajectory \(\tau\) simulated from the MDP with policy \(\pi_{\theta}\) and initial distribution \(\rho\) the (truncated) policy gradient estimate:
Footnote 1: In the cumulative reward setting, notice that the general utility function \(F\) is linear and \(\nabla_{\lambda}F(\lambda(\theta))\) is independent of \(\lambda(\theta)\).
\[g(\tau,\theta,r)=\sum_{t=0}^{H-1}\left(\sum_{h=t}^{H-1}\gamma^{h}r(s_{h},a_{h} )\right)\nabla\log\pi_{\theta}(a_{t}|s_{t})\,. \tag{7}\]
We also define an estimator for the state-action occupancy measure \(\lambda^{\pi_{\theta}}=\lambda(\theta)\) (see (1)) truncated at the horizon \(H\) by:
\[\lambda(\tau)=\sum_{h=0}^{H-1}\gamma^{h}\delta_{s_{h},a_{h}}\,, \tag{8}\]
where for every \((s,a)\in\mathcal{S}\times\mathcal{A}\), \(\delta_{s,a}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\) is a vector of the canonical basis of \(\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\), i.e., the vector whose only non-zero entry is the \((s,a)\)-th entry which is equal to \(1\).
**Importance Sampling.** Given a trajectory \(\tau=(s_{0},a_{0},s_{1},a_{1},\cdots,s_{H-1},a_{H-1})\) of length \(H\) generated under the initial distribution \(\rho\) and the policy \(\pi_{\theta}\) for some \(\theta\in\mathbb{R}^{d}\), we define for every \(\theta^{\prime}\in\mathbb{R}^{d}\) the IS weight:
\[w(\tau|\theta^{\prime},\theta)\stackrel{{\text{def}}}{{=}}\prod_{h =0}^{H-1}\frac{\pi_{\theta^{\prime}}(a_{h}|s_{h})}{\pi_{\theta}(a_{h}|s_{h})}\,. \tag{9}\]
Since the problem is nonstationary in the sense that updating the parameter \(\theta\) shifts the distribution over trajectories, it follows that for any \(r\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\), \(\mathbb{E}_{\rho,\pi_{\theta}}[g(\tau,\theta,r)-g(\tau,\theta^{\prime},r)] \neq\nabla_{\theta}V^{\pi_{\theta}}(r)-\nabla_{\theta}V^{\pi_{\theta^{\prime}}} (r)\). Using the IS weights, we correct this bias to obtain
\[\mathbb{E}_{\rho,\pi_{\theta}}[g(\tau,\theta,r)-w(\tau|\theta^{ \prime},\theta)g(\tau,\theta^{\prime},r)]\\ =\nabla_{\theta}V^{\pi_{\theta}}(r)-\nabla_{\theta}V^{\pi_{\theta^{ \prime}}}(r)\,.\]
The use of IS weights is standard in variance-reduced PG.
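As a small illustration, and under the same tabular softmax convention as in the previous sketch, the IS weight of (9) for a stored trajectory can be computed as follows (our own minimal example, not the authors' code).

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def is_weight(trajectory, theta_prime, theta):
    """w(tau | theta_prime, theta) of Eq. (9); tau is assumed to be sampled under pi_theta."""
    w = 1.0
    for s, a in trajectory:
        w *= softmax(theta_prime[s])[a] / softmax(theta[s])[a]
    return w
```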
## 3 Normalized Variance-Reduced Policy Gradient Algorithm
In this section, we present our N-VR-PG algorithm (see Algorithm 1) to solve the RL problem with general utilities. This algorithm has two main distinctive features compared to vanilla PG and existing algorithms (Zhang et al., 2021): (i) recursive variance reduction: instead of using the stochastic PG and occupancy measure estimators respectively reported in (7) and (8), we use recursive variance-reduced estimators for both the PG and the state-action occupancy measure akin to STORM in stochastic optimization (Cutkosky and Orabona, 2019). This leads to a simple single-loop algorithm using a single trajectory per iteration and for which no checkpoints nor any second order information are needed; (ii) normalized PG update rule: normalization will be crucial to control the IS weights used in the estimators. We elaborate more on the motivation for using it in Section 4.1.
_Remark 3.1_.: In Algorithm 1, note that \(g(\tau_{t},\theta_{t},r_{t-1})\) and \(g(\tau_{t},\theta_{t-1},r_{t-2})\) are used in \(v_{t}\) instead of \(g(\tau_{t},\theta_{t},r_{t})\) and \(g(\tau_{t},\theta_{t-1},r_{t-1})\) respectively to address measurability and independence issues in the analysis.
_Remark 3.2_.: (Standard RL).: In the cumulative reward setting, estimating the occupancy measure is not needed. Hence, Algorithm 1 simplifies (see Algorithm 4 in Appendix A).
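For intuition, here is a schematic sketch of one possible implementation of such a normalized, STORM-style variance-reduced PG loop in the standard cumulative-reward setting of Remark 3.2. It is our own illustration and not the authors' Algorithm 4, whose exact index conventions and initialization may differ; the helpers `sample_trajectory`, `pg_estimate` (an estimator as in Eq. (7)) and `is_weight` (Eq. (9)) are assumed to be provided, for instance adapted from the earlier sketches.

```python
import numpy as np

def nvrpg_standard_rl(theta0, alpha, T, H, gamma,
                      sample_trajectory, pg_estimate, is_weight, reward_table):
    """Schematic normalized variance-reduced PG loop for the cumulative-reward setting."""
    theta, theta_prev, d = theta0.copy(), None, None
    for t in range(T):
        tau = sample_trajectory(theta, H)                  # a single trajectory per iteration
        g_curr = pg_estimate(theta, tau, reward_table, gamma)
        if d is None:
            d = g_curr                                     # plain REINFORCE estimate at t = 0
        else:
            eta = (2.0 / (t + 1)) ** (2.0 / 3.0)           # momentum schedule as in Theorem 4.4
            w = is_weight(tau, theta_prev, theta)          # bounded thanks to the normalized step
            g_prev = pg_estimate(theta_prev, tau, reward_table, gamma)
            d = g_curr + (1.0 - eta) * (d - w * g_prev)    # STORM-style recursive estimate
        theta_prev = theta
        theta = theta + alpha * d / (np.linalg.norm(d) + 1e-12)   # normalized ascent step
    return theta
```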
## 4 Convergence Analysis of N-Vr-Pg
We first introduce our assumptions regarding the regularity of the policy parametrization and the utility function \(F\).
**Assumption 4.1**.: In the softmax parametrization (2), the map \(\psi(s,a;\theta)\) is twice continuously differentiable and there exist \(l_{\psi},L_{\psi}>0\) s.t. (i) \(\max_{s\in\mathcal{S},a\in\mathcal{A}}\sup_{\theta}\|\nabla\psi(s,a;\theta)\|\leq l_{\psi}\) and (ii) \(\max_{s\in\mathcal{S},a\in\mathcal{A}}\sup_{\theta}\|\nabla^{2}\psi(s,a;\theta)\|\leq L_{\psi}\,.\)
**Assumption 4.2**.: There exist constants \(l_{\lambda},L_{\lambda},L_{\lambda,\infty}>0\) s.t. for all \(\lambda,\lambda^{\prime}\in\Lambda\), \(\|\nabla_{\lambda}F(\lambda)\|_{\infty}\leq l_{\lambda}\) and
\[\|\nabla_{\lambda}F(\lambda)-\nabla_{\lambda}F(\lambda^{\prime})\|_{\infty} \leq L_{\lambda}\|\lambda-\lambda^{\prime}\|_{2}\,,\] \[\|\nabla_{\lambda}F(\lambda)-\nabla_{\lambda}F(\lambda^{\prime} )\|_{\infty} \leq L_{\lambda,\infty}\|\lambda-\lambda^{\prime}\|_{1}\,.\]
Assumptions 4.1 and 4.2 were previously considered in Zhang et al. (2021b, 2020) and guarantee together that the objective function \(\theta\mapsto F(\lambda^{\pi_{a}})\) is smooth. Assumption 4.2 is automatically satisfied for the cumulative reward setting (i.e., \(F\) linear) if the reward function is bounded.
### Normalization ensures boundedness of IS weights
Most prior works suppose that the variance of the IS weights is bounded. Such an assumption cannot be verified. In this section, we provide an alternative algorithmic way based on the softmax policy to control the IS weights without the aforementioned assumption. Since our algorithm only uses IS weights for two consecutive iterates, our key observation is that a normalized gradient update rule automatically guarantees bounded IS weights. In particular, compared to Zhang et al. (2021b), we do not use a gradient truncation mechanism which requires an additional truncation hyperparameter depending on the problem parameters and dictates a non-standard stationarity measure (see Remark 4.6). This simple algorithmic modification requires several adjustments in the convergence analysis (see Appendices E and F). We formalize the result in the following lemma.
**Lemma 4.3**.: _Let Assumption 4.1 hold true. Suppose that the sequence \((\theta_{t})\) is updated via \(\theta_{t+1}=\theta_{t}+\alpha\frac{d_{t}}{\|d_{t}\|}\) where \(d_{t}\in\mathbb{R}^{d}\) is any non-zero update direction and \(\alpha_{t}\) is a positive stepsize. Then, for every integer \(t\) and any trajectory \(\tau\) of length \(H\), we have \(w(\tau|\theta_{t},\theta_{t+1})\leq\exp\{2Hl_{\psi}\alpha_{t}\}\,.\) If, in addition, \(H=\mathcal{O}(\frac{\log T}{1-\gamma})\) and \(\alpha_{t}=\alpha=T^{-\frac{3}{3}}\), then there exists a constant \(W>0\) s.t. \(w(\tau|\theta_{t},\theta_{t+1})\leq W\,.\) Moreover, we have \(\mathrm{Var}\left[w(\tau_{t+1}|\theta_{t},\theta_{t+1})\right]\leq C_{w} \alpha^{2}\) where \(\tau_{t+1}\) is a trajectory of length \(H\) sampled from \(\pi_{\theta_{t+1}}\) and \(C_{w}\stackrel{{\text{\tiny def}}}{{=}}H((8H+2)l_{\psi}^{2}+2L_{ \psi})(W+1)\,.\)_
In this lemma, the variance of the IS weights decreases over time at a rate controlled by \(\alpha^{2}\) and this result will be crucial for our convergence analysis of N-VR-PG. We show in Lemma E.19 in the Appendix that such a result also holds for Gaussian policies for continuous state action spaces.
### First-order stationarity
In this section, we show that N-VR-PG requires \(\tilde{\mathcal{O}}(\varepsilon^{-3})\) samples to reach an \(\varepsilon\)-first-order stationary (FOS) point of the objective function for RL with general utilities.2
Footnote 2: All the proofs of our results are provided in the Appendix.
**Theorem 4.4**.: _Let Assumptions 4.1 and 4.2 hold. Let \(\alpha_{0}>0\) and let \(T\geq 1\) be an integer. Set \(\alpha_{t}=\frac{\alpha_{0}}{T^{\nicefrac{{2}}{{3}}}},\eta_{t}=\left(\frac{2 }{t+1}\right)^{\nicefrac{{2}}{{3}}}\) and \(H=(1-\gamma)^{-1}\log(T+1)\). Then,_
\[\mathbb{E}\left[\left\|\nabla_{\theta}F(\lambda(\bar{\theta}_{T}))\right\|\right] \leq \mathcal{O}\left(\frac{1+(1-\gamma)^{3}\Delta\alpha_{0}^{-1}+(1- \gamma)^{-1}\alpha_{0}}{(1-\gamma)^{3}T^{\nicefrac{{1}}{{3}}}}\right),\]
_where \(\Delta\stackrel{{\text{\tiny def}}}{{=}}F^{\star}-\mathbb{E}[F(\lambda(\theta_{1}))]\) and \(\bar{\theta}_{T}\) is sampled uniformly at random from \(\{\theta_{1},\cdots,\theta_{T}\}\) of Algorithm 1._
_Remark 4.5_.: In terms of dependence on \((1-\gamma)^{-1}\), we significantly improve over the result of Zhang et al. (2021b) which does not make it explicit. We defer a detailed comparison regarding this dependence to Appendix B.
_Remark 4.6_.: Unlike Zhang et al. (2021b) which utilizes a gradient truncation radius, our sample complexity does not depend on the inverse of this gradient truncation hyperparameter which might be small. Indeed, to translate their guarantee from the non-standard gradient mapping dictated by gradient truncation to the standard stationarity measure (used in our result), one has to incur an additional multiplicative constant \(\delta^{-1}\) where \(\delta\) is the gradient truncation radius (see Lemma 5.4 in (Zhang et al., 2021b)).
Recalling the notation \(J(\theta)=V^{\pi_{\theta}}(r)\) (see (4)) for the standard RL setting, we can state the following corollary.
**Corollary 4.7**.: _Under the setting of Theorem 4.4, if we set \(\alpha_{0}=1-\gamma\), then \(\mathbb{E}\left[\left\|\nabla J(\bar{\theta}_{T})\right\|\right]\leq\mathcal{O} \left((1-\gamma)^{-2}T^{-\nicefrac{{1}}{{3}}}\right)\,.\)_
The next result addresses the case of continuous state-action spaces in the standard RL setting using a Gaussian policy.
Notably, we rely on similar considerations as for the softmax policy to control the variance of IS weights. We defer a precise statement of this result to Appendix E.4.
**Theorem 4.8** (informal).: _Using the Gaussian policy under some regularity conditions, \(\mathsf{N}\)-\(\mathsf{VR}\)-\(\mathsf{PG}\) (see Algorithm 4) requires \(\tilde{\mathcal{O}}(\varepsilon^{-3})\) to reach an \(\varepsilon\)-first-order stationary point of the expected return \(J\)._
### Global optimality
In this section, we show that \(\mathsf{N}\)-\(\mathsf{VR}\)-\(\mathsf{PG}\) only requires \(\tilde{\mathcal{O}}(\varepsilon^{-2})\) samples to reach an \(\varepsilon\)-globally optimal policy under a concave reparametrization of the RL problem with concave utilities and an additional overparametrization assumption. Our results and assumptions match the recent results in Zhang et al. (2021) for finite state-action spaces.
**Assumption 4.9**.: The utility function \(F\) is concave.
**Assumption 4.10**.: For the softmax policy parametrization in (2), the following three requirements hold: (i) For any \(\theta\in\mathbb{R}^{d}\), there exist relative neighborhoods \(\mathcal{U}_{\theta}\subset\mathbb{R}^{d}\) and \(\mathcal{V}_{\lambda(\theta)}\subset\Lambda\) respectively containing \(\theta\) and \(\lambda(\theta)\) s.t. the restriction \(\lambda|_{\mathcal{U}_{\theta}}\) forms a bijection between \(\mathcal{U}_{\theta}\) and \(\mathcal{V}_{\lambda(\theta)}\) ; (ii) There exists \(l>0\) s.t. for every \(\theta\in\mathbb{R}^{d}\), the inverse \((\lambda|_{\mathcal{U}_{\theta}})^{-1}\) is \(l\)-Lipschitz continuous; (iii) There exists \(\bar{\epsilon}>0\) s.t. for every positive real \(\epsilon\leq\bar{\epsilon}\), \((1-\epsilon)\lambda(\theta)+\epsilon\lambda(\theta^{*})\in\mathcal{V}_{ \lambda(\theta)}\) where \(\pi_{\theta^{*}}\) is the optimal policy.
For the tabular softmax parametrization (i.e., \(\psi(s,a;\theta)=\theta_{s,a},d=|\mathcal{S}||\mathcal{A}|\)), a continuous local inverse can be defined whereas computing the Lipschitz constant \(l\) is more involved as reported in Zhang et al. (2021) (see Appendix C for a discussion of Assumption 4.10). Relaxing this strong assumption is left for future work.
_Remark 4.11_.: Compared to Assumption 5.11 in Zhang et al. (2021), Assumption 4.10 is quasi-identical with the slight difference that it does not depend on the gradient truncation hyperparameter \(\delta\) used in Zhang et al. (2021).
Our global optimality convergence result is as follows.
**Theorem 4.12**.: _Let Assumptions 4.1, 4.2 and 4.9 hold. Additionally, let Assumption 4.10 be satisfied with \(\bar{\epsilon}\geq\frac{\alpha_{0}(1-\gamma)}{2\ell_{\theta}(T+1)^{a}}\) for some integer \(T\geq 1\) and reals \(\alpha_{0}>0\), \(a\in(0,1)\). Set \(\alpha_{t}=\frac{\alpha_{0}}{(T+1)^{a}}\), \(\eta_{t}=\frac{2}{t+1}\) and \(H=(1-\gamma)^{-1}\log(T+1)\). Then the output \(\theta_{T}\) of \(\mathsf{N}\)-\(\mathsf{VR}\)-\(\mathsf{PG}\) (see Algorithm 1) satisfies_
\[F^{\star}-\mathbb{E}\left[F(\lambda(\theta_{T}))\right]\leq\mathcal{O}\left( \frac{\alpha_{0}^{2}}{(1-\gamma)^{3}(T+1)^{2a-\frac{3}{2}}}\right),\]
_Thus, setting \(\alpha_{0}=(1-\gamma)^{\nicefrac{{3}}{{2}}}\), the sample complexity to achieve \(F^{\star}-\mathbb{E}\left[F(\lambda(\theta_{T}))\right]\leq\varepsilon\) is \(\mathcal{O}\left(\varepsilon^{\frac{-2}{4a-3}}\right)\)._
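Unpacking where this rate comes from: with \(\alpha_{0}=(1-\gamma)^{3/2}\) and, as in Algorithm 1, one trajectory of length \(H\) per iteration, requiring the error bound to be at most \(\varepsilon\) gives

\[(T+1)^{2a-\frac{3}{2}}\geq\varepsilon^{-1}\;\Longrightarrow\;T=\mathcal{O}\left(\varepsilon^{-\frac{2}{4a-3}}\right),\qquad T\cdot H=\tilde{\mathcal{O}}\left(\varepsilon^{-\frac{2}{4a-3}}\right),\]

which approaches the \(\tilde{\mathcal{O}}(\varepsilon^{-2})\) rate announced at the beginning of this subsection as \(a\to 1\).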
**Corollary 4.13**.: _In the setting of Theorem 4.12, \(\mathsf{N}\)-\(\mathsf{VR}\)-\(\mathsf{PG}\) (see Algorithm 4) requires \(\tilde{\mathcal{O}}\left(\varepsilon^{\frac{-2}{2a-1}}\right)\) samples to achieve \(J^{\star}-\mathbb{E}\left[J(\theta_{T})\right]\leq\varepsilon\) where \(J^{\star}\) is the optimal expected return._
_Remark 4.14_.: We refer the reader to Appendix F.2 for a precise statement of Corollary 4.13. If we know problem parameters and choose time varying step-sizes \(\alpha_{t}=\frac{\alpha_{0}}{t}\), then we can obtain exactly \(\tilde{\mathcal{O}}(\varepsilon^{-2})\) sample complexity.
We can state a similar global optimality result to Corollary 4.13 for continuous state-action spaces (see Appendix F.3).
## 5 Large State-Action Space Setting
An important limitation of Algorithm 1 and the prior work (Zhang et al., 2021) is the need to estimate the occupancy measure for each state-action pair in the case of general nonlinear utilities. This procedure is intractable when the state and/or action spaces are prohibitively large (though finite), and even more so when they are infinite or continuous. In the case of infinite or continuous state-action spaces, the occupancy measure \(\lambda^{\pi_{\theta}}\) induced by a policy \(\pi_{\theta}\) cannot be represented by a finite-dimensional vector. Thus, the derivative of the utility function \(F\) w.r.t. its variable \(\lambda\) is not well defined in the chain rule in (6) for the policy gradient. Adequate notions of derivative for optimization over spaces of measures would therefore be needed, requiring different methodological and algorithmic tools which go beyond the scope of this work. In this paper, we take a first step by considering the setting of large _finite_ state and action spaces, which is already of practical interest.
### PG for RL with General Utilities via linear function approximation of the occupancy measure
Similarly to the classical linear function approximation of the (action-)value function in standard RL, we propose to approximate the (truncated) state-action occupancy measure by a linear combination of pre-selected basis functions in order to break the so-called curse of dimensionality. Our exposition is similar in spirit to the compatible function approximation framework (Sutton et al., 1999) which was recently extended in Agarwal et al. (2021) (see also Yuan et al. (2023) for a recent example). However, we are not concerned here by the approximation of the action-value function nor are we considering the NPG (or Q-NPG) method but we are rather interested in approximating the discounted occupancy measure. Recall that we are considering the more general problem of RL with general utilities. Beyond this connection with existing work, we shall precise that our approach mostly shares the use of standard least squares regression for estimating an unknown function which is the state-action occupancy measure in our case.
Let \(m\) be a positive integer and let \(\phi:\mathcal{S}\times\mathcal{A}\to\mathbb{R}^{m}\) be
a feature map. We shall approximate the truncated3 state-action occupancy measure for a given policy \(\pi_{\theta}\) (\(\theta\in\mathbb{R}^{d}\) fixed) by a linear combination of feature vectors from the feature map, i.e., for every state-action pair \((s,a)\in\mathcal{S}\times\mathcal{A}\),
Footnote 3: We could use the non-truncated occupancy measure (see Appendix G). For simplicity of exposition, we use the truncated version, the difference between both quantities is of the order of \(\gamma^{H}\).
\[\lambda_{H}^{\pi_{\theta}}(s,a)\approx\left\langle\phi(s,a),\omega_{\theta} \right\rangle, \tag{10}\]
for some \(\omega_{\theta}\in\mathbb{R}^{m}\) that we shall compute. Typically, the dimension \(m\) is much smaller than \(\left|\mathcal{S}\right|\times\left|\mathcal{A}\right|.\) The feature map summarizes the most important characteristics of state-action pairs. Typically, this map is designed based on experience and domain-specific knowledge or intuition regarding the MDP. Standard examples of basis functions for the feature map include radial basis functions, wavelet networks or polynomials. Nevertheless, designing such a feature map is an important practical question that is often problem-specific and we will not address it in this work.
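As an illustration of what such a feature map might look like, here is a hypothetical radial-basis construction for a gridworld whose states are points in \([0,1]^{2}\); both the environment and this particular construction are our own illustrative choices, not prescribed by the text.

```python
# A hypothetical feature map phi: S x A -> R^m (illustrative only).
import numpy as np

def rbf_features(state_xy, action, n_actions=4, centers=None, bandwidth=0.25):
    """Radial-basis features over the state, replicated per action (m = n_actions * len(centers))."""
    if centers is None:
        g = np.linspace(0.0, 1.0, 5)
        centers = np.array([(x, y) for x in g for y in g])        # 25 centers on a regular grid
    rbf = np.exp(-np.sum((centers - np.asarray(state_xy)) ** 2, axis=1) / (2 * bandwidth ** 2))
    phi = np.zeros(n_actions * len(centers))
    phi[action * len(centers): (action + 1) * len(centers)] = rbf  # one block per action
    return phi

print(rbf_features((0.2, 0.7), action=1).shape)                   # (100,)
```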
In order to compute such a vector \(\omega_{\theta}\), we will use linear regression. Accordingly, we define the expected regression loss measuring the estimation quality of any parameter \(\omega\) for every \(\theta\in\mathbb{R}^{d},\omega\in\mathbb{R}^{m}\) by:
\[L_{\theta}(\omega)\stackrel{{\text{def}}}{{=}}\mathbb{E}_{s \sim\rho,a\sim\mathcal{U}(\mathcal{A})}[(\lambda_{H}^{\pi_{\theta}}(s,a)- \left\langle\phi(s,a),\omega\right\rangle)^{2}]\,, \tag{11}\]
where \(\rho\) is the initial distribution in the MDP and \(\mathcal{U}(\mathcal{A})\) is the uniform distribution over the action space \(\mathcal{A}\).4 In practice, we cannot minimize \(L_{\theta}\) exactly since this would require having access to the true state-action occupancy measure and averaging over all state-action pairs \(s\sim\rho,a\sim\mathcal{U}(\mathcal{A})\). Therefore, we compute an approximate solution \(\hat{\omega}_{\theta}\approx\arg\min_{\omega}L_{\theta}(\omega)\). For this procedure, we need: (i) unbiased estimates of the true truncated state-action occupancy measure \(\lambda_{H}^{\pi_{\theta}}(s,a)\) (or the non-truncated one \(\lambda^{\pi_{\theta}}(s,a)\)) for \(s\sim\rho,a\sim\mathcal{U}(\mathcal{A})\) and (ii) a regression solver based on samples to minimize \(L_{\theta}\) as defined in (11). As for item (i), we use a Monte-Carlo estimate \(\hat{\lambda}_{H}^{\pi_{\theta}}(s,a)\) of the truncated occupancy measure computed from a single rollout (see Algorithm 5 for details).5 An unbiased stochastic gradient of the function \(L_{\theta}\) in (11) is then given by
Footnote 4: Other exploratory sampling distributions for \(s\) and \(a\) can be considered, we choose \(\rho\) and \(\mathcal{U}(\mathcal{A})\) for simplicity.
\[\hat{\nabla}_{\omega}L_{\theta}(\omega)\stackrel{{\text{def}}}{{= }}2(\left\langle\phi(s,a),\omega\right\rangle-\hat{\lambda}_{H}^{\pi_{\theta}}(s,a))\,\phi(s,a)\,. \tag{12}\]
We can then solve the regression problem of minimizing \(L_{\theta}\) in (11) via the averaged SGD algorithm (see Algorithm 2), as proposed in Bach & Moulines (2013).
```
Input: \(\omega_{0}\in\mathbb{R}^{m},K\geq 1,\beta>0,\rho,\pi_{\theta}\).
for \(k=0,\dots,K-1\) do
  Sample \(s\sim\rho;a\sim\mathcal{U}(\mathcal{A})\)
  Compute an estimator \(\hat{\lambda}_{H}^{\pi_{\theta}}(s,a)\) via Algorithm 5
  \(\hat{\nabla}_{\omega}L_{\theta}(\omega_{k})\stackrel{{\text{def}}}{{=}}2(\left\langle\phi(s,a),\omega_{k}\right\rangle-\hat{\lambda}_{H}^{\pi_{\theta}}(s,a))\,\phi(s,a)\)
  \(\omega_{k+1}=\omega_{k}-\beta\,\hat{\nabla}_{\omega}L_{\theta}(\omega_{k})\)
end for
Return: \(\hat{\omega}_{\theta}=\frac{1}{K}\sum_{k=1}^{K}\omega_{k}\)
```
**Algorithm 2** (averaged) SGD for Occupancy Measure Estimation via Linear Function Approximation
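A minimal Python sketch of Algorithm 2 follows. The interface is illustrative: the MDP is represented by an explicit initial distribution `rho` and transition tensor `P`, the policy by a table of action probabilities, and Algorithm 5 (not reproduced in this excerpt) is taken to be the standard single-rollout estimator \(\hat{\lambda}_{H}(s,a)=\sum_{h<H}\gamma^{h}\mathbf{1}\{(s_{h},a_{h})=(s,a)\}\); these are our assumptions, not choices fixed by the text.

```python
# Sketch of Algorithm 2 (averaged SGD for occupancy-measure regression); illustrative interface.
import numpy as np

def mc_occupancy_estimate(rho, P, policy, s_query, a_query, H, gamma, rng):
    """Single-rollout Monte-Carlo estimate of the truncated occupancy measure at (s_query, a_query)."""
    est = 0.0
    s = rng.choice(len(rho), p=rho)                  # rollout starts from the initial distribution rho
    for h in range(H):
        a = rng.choice(policy.shape[1], p=policy[s])
        if (s, a) == (s_query, a_query):
            est += gamma ** h
        s = rng.choice(len(rho), p=P[s, a])
    return est

def averaged_sgd_occupancy(rho, P, policy, phi, K, beta, H, gamma, seed=0):
    """Algorithm 2: averaged SGD on the regression loss (11); returns omega_hat."""
    rng = np.random.default_rng(seed)
    n_actions = policy.shape[1]
    omega = np.zeros_like(phi(0, 0), dtype=float)
    omega_sum = np.zeros_like(omega)
    for _ in range(K):
        s = rng.choice(len(rho), p=rho)              # s ~ rho
        a = int(rng.integers(n_actions))             # a ~ Uniform(A)
        lam_hat = mc_occupancy_estimate(rho, P, policy, s, a, H, gamma, rng)
        grad = 2.0 * (phi(s, a) @ omega - lam_hat) * phi(s, a)    # stochastic gradient, Eq. (12)
        omega = omega - beta * grad
        omega_sum += omega
    return omega_sum / K
```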
**Algorithm 3** Stochastic PG for RL with General Utilities via Linear Function Approximation of the Occupancy Measure
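To make the outer loop of Algorithm 3 concrete, the following is a hedged sketch reusing `averaged_sgd_occupancy` from the snippet above. The exact gradient estimator \(g(\tau,\theta,r)\) of Eq. (7) is not reproduced in this excerpt, so a generic REINFORCE-style estimator for a tabular softmax policy is used as a stand-in; `grad_F(lam_hat, s, a)`, the partial derivative of the utility \(F\) with respect to \(\lambda_{s,a}\) evaluated at the linear estimate, is assumed to be supplied by the user.

```python
# Hedged sketch of the outer loop of Algorithm 3 (stand-in gradient estimator, tabular softmax).
import numpy as np

def softmax_policy(theta):
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def sample_trajectory(rho, P, policy, H, rng):
    s, traj = rng.choice(len(rho), p=rho), []
    for _ in range(H):
        a = rng.choice(policy.shape[1], p=policy[s])
        traj.append((s, a))
        s = rng.choice(len(rho), p=P[s, a])
    return traj

def reinforce_gradient(traj, policy, reward, gamma):
    """Stand-in for g(tau, theta, r): REINFORCE with reward-to-go for a tabular softmax policy."""
    g, G = np.zeros_like(policy), 0.0
    returns = np.zeros(len(traj))
    for h in reversed(range(len(traj))):
        s, a = traj[h]
        G = reward(s, a) + gamma * G
        returns[h] = G
    for h, (s, a) in enumerate(traj):
        grad_log = -policy[s].copy()
        grad_log[a] += 1.0                       # gradient of log pi(a|s) w.r.t. theta[s, :]
        g[s] += (gamma ** h) * returns[h] * grad_log
    return g

def pg_general_utilities(theta0, rho, P, phi, grad_F, T, N, K, alpha, beta, H, gamma, seed=0):
    rng, theta = np.random.default_rng(seed), theta0.copy()
    for t in range(T):
        policy = softmax_policy(theta)
        omega_t = averaged_sgd_occupancy(rho, P, policy, phi, K, beta, H, gamma, seed + t)
        lam_hat = lambda s, a: float(phi(s, a) @ omega_t)          # linear estimate (10)
        r_t = lambda s, a: grad_F(lam_hat, s, a)                   # surrogate reward along trajectories
        batch = [reinforce_gradient(sample_trajectory(rho, P, policy, H, rng), policy, r_t, gamma)
                 for _ in range(N)]                                # batch of N trajectories
        theta = theta + alpha * np.mean(batch, axis=0)
    return theta
```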
**Remark 5.1**.: When running Algorithm 3, notice that the vector \(\hat{\lambda}_{t}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\) (and hence the vector \(r_{t}\)) does not need to be computed for all state-action pairs as this would be unrealistic and even impossible in the large state-action setting we are considering. Indeed, at each iteration, one does only need to compute \((r_{t}(s_{h}^{(t)},a_{h}^{(t)}))_{0\leq h\leq H-1}\) where \(\tau_{t}=(s_{h}^{(t)},a_{h}^{(t)})_{0\leq h\leq H-1}\) to obtain the stochastic policy gradient \(g(\tau_{t},\theta_{t},r_{t-1})\) as defined in (7).
### Convergence and sample complexity analysis
In this section, we provide a convergence analysis of Algorithm 3. For every integer \(t\), let \(\omega_{*}(\theta_{t})\in\arg\min_{\omega}L_{\theta_{t}}(\omega)\). We decompose the regression loss into the statistical error measuring the accuracy of our approximate solution and
the approximation error measuring the distance between the true occupancy measure and its best linear approximation using the feature map \(\phi\):
\[L_{\theta_{t}}(\hat{\omega}_{t})=\underbrace{L_{\theta_{t}}(\hat{\omega}_{t})-L_{ \theta_{t}}(\omega_{*}(\theta_{t}))}_{\text{statistical error}}+\underbrace{L_{ \theta_{t}}(\omega_{*}(\theta_{t}))}_{\text{approximation error}}\,,\]
where we use the shorthand notation \(\hat{\omega}_{t}=\hat{\omega}_{\theta_{t}}\) and \(\hat{\omega}_{\theta_{t}}\) is the output of Algorithm 2 after \(K\) iterations. We assume that both the statistical and approximation errors are uniformly bounded along the iterates of our algorithm. Such assumptions have been considered for instance in a different context in the compatible function approximation framework (see Assumptions 6.1.1 and Corollary 21 in Agarwal et al. (2021), also Assumptions 1 and 5 in Yuan et al. (2023)).
**Assumption 5.2** (Bounded statistical error).: There exists \(\epsilon_{\text{stat}}>0\) s.t. for all iterations \(t\geq 0\) of Algorithm 3, we have \(\mathbb{E}[L_{\theta_{t}}(\hat{\omega}_{\theta_{t}})-L_{\theta_{t}}(\omega_{*} (\theta_{t}))]\leq\epsilon_{\text{stat}}\,.\)
We will see in the next section that we can guarantee \(\epsilon_{\text{stat}}=\mathcal{O}(1/K)\) where \(K\) is the number of iterations of SGD (Algorithm 2) to find the approximate solution \(\hat{\omega}_{t}\) at each iteration \(t\) of Algorithm 3.
**Assumption 5.3** (Bounded approximation error).: There exists \(\epsilon_{\text{approx}}>0\) s.t. for all iterations \(t\geq 0\) of Algorithm 3, we have \(\mathbb{E}[L_{\theta_{t}}(\omega_{*}(\theta_{t}))]\leq\epsilon_{\text{approx}}\,.\)
This error is due to function approximation and depends on the expressiveness of the approximating function class. The true state-action occupancy measure to be estimated may not lie in the function approximation class under consideration.
**Theorem 5.4**.: _Let Assumptions 4.1, 4.2, 5.2 and 5.3 hold true. In addition, suppose that there exists \(\rho_{\min}>0\) s.t. \(\rho(s)\geq\rho_{\min}\) for all \(s\in\mathcal{S}\,.\) Let \(T\geq 1\) be an integer and let \((\theta_{t})\) be the sequence generated by Algorithm 3 with a positive step size \(\alpha=\mathcal{O}(1)\) and batch size \(N\geq 1\). Then,_
\[\mathbb{E}[\|\nabla_{\theta}F(\lambda(\bar{\theta}_{T}))\|^{2}] \leq\mathcal{O}\left(\frac{1}{T}\right)+\mathcal{O}\left(\frac{1}{N}\right)+ \mathcal{O}(\gamma^{2H})\\ +\mathcal{O}(\epsilon_{\text{stat}}+\epsilon_{\text{approx}})\,, \tag{13}\]
_where \(\bar{\theta}_{T}\) is sampled uniformly at random from \(\{\theta_{1},\cdots,\theta_{T}\}\)._
A few comments are in order regarding Theorem 5.4 : (1) The specific structure of the softmax parametrization is not needed for Theorem 5.4. Indeed, this softmax parametrization is only useful to control IS weights used for variance reduction in Algorithm 1. Assumption 4.1 can be replaced by any smooth policy parametrization satisfying the same standard conditions with \(\nabla\log\pi_{\theta}\) instead of \(\psi\,;\) (2) If the true (truncated) occupancy measure does not lie in the class of linear functions described, a positive function approximation error \(\epsilon_{\text{approx}}\) is incurred due to the bias induced by the limited expressiveness of the linear function approximation. A possible natural alternative is to consider richer classes such as neural networks to approximate the state-action occupancy measure and reduce the approximation bias. In this more involved case, the expected least squares (or other metrics) regression loss would likely become nonconvex and introduce further complications in our analysis. Such an extension would require other technical tools that are beyond the scope of the present paper and we leave it for future work.
In order to establish the total sample complexity of our algorithm, we need to compute the number of samples needed in the occupancy measure estimation subroutine of Algorithm 2. To do so, we now specify the number of SGD iterations required in Algorithm 2 to approximately solve our regression problem. In particular, we will show that we can achieve \(\epsilon_{\text{stat}}=\mathcal{O}(1/K)\) where \(K\) is the number of iterations of the SGD subroutine using Theorem 1 in Bach & Moulines (2013). Before stating our result, we make an additional standard assumption on the feature map \(\phi\,.\)
**Assumption 5.5**.: The feature map \(\phi:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}^{m}\) satisfies: (i) There exists \(B>0\) s.t. for all \(s\in\mathcal{S},a\in\mathcal{A}\), \(\|\phi(s,a)\|\leq B\) and (ii) There exists \(\mu>0\) s.t. \(\mathbb{E}_{s\sim\rho,a\sim\mathcal{U}(\mathcal{A})}[\phi(s,a)\phi(s,a)^{T}] \succcurlyeq\mu I_{m}\) where \(I_{m}\in\mathbb{R}^{m\times m}\) is the identity matrix.
Assumption 5.5 guarantees that the covariance matrix of the feature map is invertible. Similar standard assumptions have been commonly considered for linear function approximation settings (Tsitsiklis & Van Roy, 1997).
We are now ready to state a corollary of Theorem 5.4 establishing the total sample complexity of Algorithm 3 to achieve an \(\epsilon\)-stationary point of the objective function.
**Corollary 5.6**.: _Let Assumptions 4.1, 4.2, 5.3 and 5.5 hold in the setting of Theorem 5.4 where we run the SGD subroutine of Algorithm 2 with step size \(\beta=1/8B^{2}\) and \(\omega_{0}=0\) for \(K\) iterations at each timestep \(t\) of Algorithm 3. Then, for every \(\epsilon>0\), setting \(T=\mathcal{O}(\epsilon^{-2})\), \(N=\mathcal{O}(\epsilon^{-2})\), \(K=\mathcal{O}(\epsilon^{-2})\) and \(H=\mathcal{O}(\log(\frac{1}{\epsilon}))\) guarantees \(\mathbb{E}[\|\nabla_{\theta}F(\lambda(\bar{\theta}_{T}))\|]\leq\mathcal{O}(\epsilon)+\mathcal{O}(\sqrt{\epsilon_{\text{approx}}})\) where \(\bar{\theta}_{T}\) is sampled uniformly at random from \(\{\theta_{1},\cdots,\theta_{T}\}\). The total sample complexity to reach an \(\epsilon\)-stationary point (up to the \(\mathcal{O}(\sqrt{\epsilon_{\text{approx}}})\) error floor) is given by \(T\times(K+N)\times H=\tilde{\mathcal{O}}(\epsilon^{-4})\,.\)_
In terms of the target accuracy \(\epsilon\), this result matches the optimal sample complexity to obtain an \(\epsilon\)-FOSP for nonconvex smooth stochastic optimization via SGD (without variance reduction) up to a log factor.
## 6 Numerical Simulations
In this section, we present two simple numerical experiments to illustrate the performance of our algorithm compared to prior work and complement our theoretical contributions. Our implementation is based on the code provided in Zhang et al. (2021).6 Our goal is to show that our algorithm can be competitive compared to existing algorithms while gaining simplicity. We leave further experimental investigations in larger-scale problems for future work.
Footnote 6: Available in OpenReview ([https://openreview.net/forum?id=Re_VXFOyyO](https://openreview.net/forum?id=Re_VXFOyyO)).
**(a) Nonlinear objective function maximization.** We consider a general utility RL problem where the objective function \(F:\mathbb{R}_{+}^{|\mathcal{S}|\times|\mathcal{A}|}\rightarrow\mathbb{R}\) is a nonlinear function of the occupancy measure defined for every \(\lambda\in\mathbb{R}_{+}^{|\mathcal{S}|\times|\mathcal{A}|}\) by:
\[F(\lambda)\stackrel{{\text{def}}}{{=}}\sum_{s\in\mathcal{S}} \log\left(\sum_{a\in\mathcal{A}}\lambda_{s,a}+\sigma\right)\,,\]
where \(\sigma\) is a small constant which we set to \(\sigma=0.125\). We test our algorithm in the FrozenLake8x8 benchmark environment available in OpenAI gym (Brockman et al., 2016). The result of the experiment is illustrated in Figure 1 (right). The performance curves show that our N-VR-PG algorithm converges somewhat faster than the TSIVR-PG algorithm (Zhang et al., 2021) and the MaxEnt algorithm of Hazan et al. (2019), which is specific to the maximum entropy exploration problem, while the final performances are comparable (see also the overlapping shaded areas). We refer the reader to Section 6.3 in Zhang et al. (2021) for further details regarding our setting.
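For concreteness, this utility and its gradient with respect to the occupancy measure (which, through the chain rule in (6), plays the role of the state-action reward fed to the policy-gradient update) can be written as follows; this small helper is our own illustration.

```python
# The FrozenLake utility F(lambda) = sum_s log(sum_a lambda_{s,a} + sigma) and its gradient.
import numpy as np

def F(lmbda, sigma=0.125):
    # lmbda has shape (|S|, |A|)
    return np.sum(np.log(lmbda.sum(axis=1) + sigma))

def grad_F(lmbda, sigma=0.125):
    # dF/dlambda_{s,a} = 1 / (sum_{a'} lambda_{s,a'} + sigma), which does not depend on a
    per_state = 1.0 / (lmbda.sum(axis=1) + sigma)
    return np.repeat(per_state[:, None], lmbda.shape[1], axis=1)
```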
**(b) Standard RL.** While the focus of our work is on the general utility case beyond the standard RL setting, we also perform simulations for the particular case where the objective is a linear function of the state action occupancy measure (i.e., the standard cumulative reward setting) in the CartPole benchmark environment (Brockman et al., 2016). Figure 1 (left) shows that our algorithm is competitive with TSIVR-PG (actually even slightly faster, see between 250-500 episodes and see also the shaded areas) and all other algorithms which are not designed for the general utility case (REINFORCE (Williams, 1992), SVRPG (Xu et al., 2020), SRVR-PG (Xu et al., 2020), HSPGA (Pham et al., 2020)) while gaining simplicity compared to existing variance-reduced methods. Indeed, our algorithm is single-loop and does not require two distinct batch sizes and checkpoints nor does it require bounded importance sampling weights. Hyperparameters of the algorithms are tuned.
## 7 Perspectives
Compared to the standard RL setting, the general utilities setting is much less studied. A better understanding of the hidden convexity structure of the problem and its interplay with general policy parametrization would be interesting to derive global optimality guarantees under milder assumptions which would accommodate more practical and expressive policy parametrizations such as neural networks. Regarding the case of large state action spaces, future avenues of research include designing more efficient procedures and guarantees for approximating and estimating the occupancy measure to better address the curse of dimensionality as well as investigating the dual point of view for designing more efficient algorithms. Addressing the case of continuous state-action spaces is also an interesting research direction.
## Acknowledgements
This work was supported by ETH AI Center doctoral fellowship, ETH Foundations of Data Science (ETH-FDS), and ETH Research Grant funded via ETH Zurich Foundation.
Figure 1: (right) Nonlinear objective maximization in the FrozenLake environment and (left) Standard RL in the CartPole environment. In both cases, the performance curves represent the median return over 20 runs of the algorithms (with 20 seeds) and the shaded colored areas are computed with the 1/4 and 3/4 quantiles of the outcomes. |
2302.02672 | Identifiability of latent-variable and structural-equation models: from
linear to nonlinear | An old problem in multivariate statistics is that linear Gaussian models are
often unidentifiable, i.e. some parameters cannot be uniquely estimated. In
factor (component) analysis, an orthogonal rotation of the factors is
unidentifiable, while in linear regression, the direction of effect cannot be
identified. For such linear models, non-Gaussianity of the (latent) variables
has been shown to provide identifiability. In the case of factor analysis, this
leads to independent component analysis, while in the case of the direction of
effect, non-Gaussian versions of structural equation modelling solve the
problem. More recently, we have shown how even general nonparametric nonlinear
versions of such models can be estimated. Non-Gaussianity is not enough in this
case, but assuming we have time series, or that the distributions are suitably
modulated by some observed auxiliary variables, the models are identifiable.
This paper reviews the identifiability theory for the linear and nonlinear
cases, considering both factor analytic models and structural equation models. | Aapo Hyvärinen, Ilyes Khemakhem, Ricardo Monti | 2023-02-06T10:21:21Z | http://arxiv.org/abs/2302.02672v2 | # Identifiability of latent-variable and structural-equation models:
###### Abstract
An old problem in multivariate statistics is that linear Gaussian models are often unidentifiable, i.e. the parameters cannot be uniquely estimated. In factor (component) analysis, an orthogonal rotation of the factors is unidentifiable, while in linear regression, the direction of effect cannot be identified. For such linear models, non-Gaussianity of the (latent) variables has been shown to provide identifiability. In the case of factor analysis, this leads to independent component analysis, while in the case of the direction of effect, non-Gaussian versions of structural equation modelling solve the problem. More recently, we have shown how even general nonparametric nonlinear versions of such models can be estimated. Non-Gaussianity is not enough in this case, but assuming we have time series, or that the distributions are suitably modulated by some observed auxiliary variables, the models are identifiable. This paper reviews the identifiability theory for the linear and nonlinear cases, considering both factor analytic models and structural equation models.
**Keywords:** Identifiability ; independent component analysis ; structural equation model ; factor analysis ; disentanglement ; non-Gaussianity
## 1 Introduction
The goal of this paper is to provide a succinct and relatively self-contained exposition of the identifiability theory of a class of latent-variable models called independent component analysis, as well as of a class of structural-equation models. The theory has both linear and nonlinear versions, where "nonlinear" is to be taken in the sense of general (non-parametric) nonlinearities. The latent-variable models and structural-equation model are intimately related, and the identifiability theory of the former can be used to construct an identifiability theory of the latter. We focus on identifiability theory, and aim to explain the
basic results in an approachable manner. Estimation methods and algorithms are given very little attention in this paper.
We start by motivating these different models in the rest of this section. The following sections are structured as follows. The notion of identifiability is defined in Section 2. The model of (linear) independent component analysis (ICA) is considered in Section 3. Linear structural equation models (SEM) are considered in Section 4. Moving to nonlinear (non-parametric) models, Section 5 treats the identifiability of nonlinear ICA, and Section 6, the identifiability of nonlinear SEM. Section 7 provides further discussion on the utility of identifiability, topics for future research, and algorithms. Section 8 concludes the paper.
### Linear representation learning and factor analysis
The problem of identifiability of latent variables was encountered already decades ago in the case of classical factor analysis, which forms the basis for all our developments. The basic model is as follows. Assume \(s_{i},i=1,\ldots,n\) are \(n\) standardized uncorrelated Gaussian latent random variables. The covariance of \(\mathbf{s}\) is thus identity, a property which is termed "whiteness". Assume \(\mathbf{A}\) is an \(m\times n\) matrix, and denote by \(\mathbf{n}\) an \(m\)-dimensional vector of uncorrelated noise variables. We observe the \(m\)-dimensional random vector \(\mathbf{x}\) which is a noisy linear mixture given by
\[\mathbf{x}=\mathbf{A}\mathbf{s}+\mathbf{n} \tag{1}\]
The goal would be to recover the components in the vector \(\mathbf{s}\), or at least the matrix \(\mathbf{A}\). Many methods have been proposed (Harman, 1967), but fundamentally such factor analysis suffers from the indeterminacy of a "factor rotation", which is another way of saying that the factor analytic model is not identifiable for Gaussian factors. This is because any orthogonal transformation of the Gaussian latent vector, due to its whiteness, gives exactly the same distribution for the data, while giving quite different values of the latent variables, i.e. the features. This is a fundamental problem to which we will return more than once below.
Thus, the parameter matrix \(\mathbf{A}\) and the components \(s_{i}\) cannot be uniquely recovered. Such unidentifiability is a problem since one important goal of fitting such models is to find the underlying structure of the data, or a useful _representation_. If the representation a model gives is not unique or even well-defined, it is not possible to find the underlying structure of the data. It is therefore crucial to find models which are identifiable. The problem is particularly pertinent in the case where \(m=n\), i.e. dimension reduction is not performed, which is our focus in this paper.1
One practical motivation here is that we might want to separate signals from linear mixtures, and this should be done "blindly", i.e. using minimum information. This is illustrated in Fig. 1. The four observed signals (to be taken as entries of a four-dimensional time series) are apparently unstructured or noisy. Can we find their underlying, hidden structure?
Fortunately, this problem of unidentifiability can be solved by independent component analysis (ICA) which assumes independent latent variables that are non-Gaussian. For example, applying ICA on the signal in Fig. 1, we "separate" them into four original source signals which were seriously mixed in the observed data, as shown in the Figure. This kind of application of ICA is often called "blind source separation". If there were a factor rotation that remains undetermined, no such separation would be possible.
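For readers who want to reproduce the flavour of Fig. 1, a toy blind-source-separation demo using the FastICA implementation in scikit-learn might look as follows; the signals and the mixing matrix are our own illustrative choices.

```python
# Toy blind source separation in the spirit of Fig. 1 (illustrative demo, not the authors' code).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t),                        # sinusoid
          np.sign(np.sin(3 * t)),               # square wave
          2 * (t % 1) - 1,                      # sawtooth
          rng.laplace(size=t.size)]             # super-Gaussian noise source
S /= S.std(axis=0)

A = rng.normal(size=(4, 4))                     # unknown mixing matrix
X = S @ A.T                                     # observed mixtures, x = A s

ica = FastICA(n_components=4, random_state=0)
S_hat = ica.fit_transform(X)                    # recovered sources (up to order, sign and scale)
print(np.round(np.corrcoef(S.T, S_hat.T)[:4, 4:], 2))   # close to a signed permutation matrix
```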
ICA will be reviewed in section 3 below and forms the basis of most of the theory reviewed in this paper.
### Nonlinear representation learning and disentanglement
Machine learning has recently been profoundly transformed by deep learning, which essentially means learning (estimating) arbitrary non-parametric nonlinear functions from data. While the theory is quite well developed in the supervised case, consisting essentially of regression, the unsupervised case is much less developed. Unsupervised means that we only observe a potentially high-dimensional random vector \(\mathbf{x}\), and there is no "output" or "label" or "regressand" defined (just like in factor analysis above). It is in fact widely appreciated that unsupervised, nonlinear learning of representations or features is one of the biggest challenges facing machine learning at the moment. It is often referred to as "disentanglement", although this term is vague and not very well-defined.
It is often assumed that the most satisfactory solution for unsupervised deep learning is based on the estimation of probabilistic generative models, because probabilistic theory often gives optimal objectives for learning, and immediately enables probabilistic inference of various quantities, such as the latent variables.
Figure 1: The basic idea of ICA. From the four measured signals shown in the upper row, ICA is able to recover the original source signals which were mixed together in the measurements, as shown in the bottom row.
This brings the machine learning theory in close connection with statistical estimation. Finding a good representation can then be defined as recovering the original latent variables that were assumed to generate the data. Some of the most popular probabilistic methods for unsupervised deep learning are based on variational autoencoders (VAE; Kingma and Welling (2014)) and generative adversarial networks (GAN; Goodfellow et al. (2014)). These methods have been successful in approximating the probability density function (pdf) of the data and generating new data points.
Unfortunately, in most models used in unsupervised deep learning, there has not been any proof that the original latent variables can be recovered. In fact, they tend to rely on transformations of latent variables that are Gaussian and even white. We thus recover the same serious problem of unidentifiability (factor rotation) as in the case of linear models already mentioned.
Inspired by the theory of linear ICA, we might try to directly extend it to the nonlinear case. Such a basic framework would assume that the observed data is generated by an invertible nonlinear (non-parametric) transformation of non-Gaussian independent components, and we wish to recover them from observed data alone.
However, the extension of ICA to general nonlinear mixtures has proven very problematic. In particular, if the observed data \(\mathbf{x}\) are obtained as i.i.d. samples, i.e. there is no temporal or similar structure in the data, the model is seriously unidentifiable (Hyvarinen and Pajunen, 1999). This is due to a further kind of unidentifiability that is specific to nonlinear models.
Fortunately, a solution to non-identifiability in nonlinear ICA can be found by utilizing temporal structure in time series (Harmeling et al., 2003; Sprekeler et al., 2014; Hyvarinen and Morioka, 2016, 2017) or similar "auxiliary" information (Hyvarinen et al., 2019; Khemakhem et al., 2020a). Section 5 reviews the identifiability theory for such nonlinear latent variable model, in particular nonlinear ICA.
### Causal discovery and structural equation models
Causal discovery is another goal of statistical analysis. It may appear to be unrelated to latent variable models, but we will see below that there is actually a deep connection. Causal models play a fundamental role in modern scientific endeavor (Spirtes et al., 2000; Pearl, 2009; Peters et al., 2017). While randomized control studies are the gold standard to study the effect of one variable on another, such an approach is unfeasible or unethical in many scenarios (Spirtes and Zhang, 2016). Furthermore, big data sets publicly available on the internet often try to be generic and thus cannot be strongly based on specific interventions. As such, it is both necessary and important to develop _causal discovery_ methods through which to uncover causal structure from (potentially large-scale) passively observed data. Data collected without the explicit manipulation of certain variables is often termed _observational data_, in contrast to experimental data where certain variables are intervened upon, as in randomized controlled trials.
Before going to the general case, it is very useful to consider the linear bivariate case to understand the basic idea. The problem of causal discovery then essentially means finding the "direction of effect" by choosing between two regression models; either
\[x_{2}=bx_{1}+e_{2} \tag{2}\]
which we might denote by \(x_{1}\to x_{2}\), or
\[x_{1}=bx_{2}+e_{1} \tag{3}\]
which we might denote by \(x_{2}\to x_{1}\). It is well-known that if the variables are standardized _Gaussian_, the situation is completely symmetric: The likelihood is equal for both models, and the variance explained is equal for both models (and so is the regression coefficient as already imposed above). Thus, we see again that for Gaussian variables we have a problem: the direction of effect is not identifiable. However, as will be seen below, it is identifiable for non-Gaussian variables.
In the general case, we use the framework of structural equation models2 (SEMs) (Bollen, 1989). Fundamentally, SEMs define a statistical model that describes the interactions of a set of observed variables \(\mathbf{x}=(x_{1},\ldots,x_{n})\) using a set of mutually independent disturbances or noise variables \(\mathbf{e}=(e_{1},\ldots,e_{n})\). However, SEM is not only a statistical model of a probability distribution, but also a mathematical tool that can be used to encapsulate causal knowledge (Pearl, 2009). SEMs are in this sense more powerful than latent variable models: not only do they describe the set of all distributions, they can be used to perform interventions and answer counterfactual queries by changing the noise distribution or the causal mechanism in one or more of the equations (23). We leave a formal definition for later, and just illustrate the SEM by Fig. 2 which shows how the influences of the observed variables can be described by a graph, where the arcs show which \(x_{i}\) causally influences which \(x_{j}\).
Footnote 2: Structural Equation Models (SEM) are sometimes referred to as Structural Causal Models (SCM) or Functional Causal Models (FCM) in recent machine-learning literature.
SEMs are only useful for causal discovery if they define an _identifiable_ causal model. In the case of a causal model, identifiability typically means that we can distinguish between cause and effect, or find the direction of effect as in the example above. In general, it means we can find the right "causal ordering" over the variables \((x_{1},\ldots,x_{n})\) such that, loosely speaking, the earlier variables are the causes of the later variables. Without identifiability, different causal orders would give rise to the same data, which would prevent any real scientific analysis of causality.
Although the problem of causal discovery may seem, at first sight, very different from estimating latent variable models, the deep result here is that we can in some cases reduce the estimation of a SEM to the estimation of a latent-variable model such as ICA. Thus, if we develop an identifiable latent-variable model, we may be able to develop a corresponding SEM that is also identifiable. In fact, even the estimation methods can be largely shared. Section 4 below
will consider identifiability in the linear case, while Section 6 will consider the nonlinear case.
## 2 Definition of identifiability
Now we shall proceed to a formal definition of identifiability. Identifiability is an important property of probabilistic models. For example, it seems futile to interpret the quantities estimated in the model, whether parameters or some further latent variables, if the model is not identifiable. We defer a more detailed discussion of the utility of identifiability to Section 7, and simply give the definition and some examples here.
Consider a probabilistic model \(\mathcal{P}\), defined as the set of distributions \(\{P_{\boldsymbol{\theta}}:\boldsymbol{\theta}\in\Theta\}\) with parameter \(\boldsymbol{\theta}\) taking values in some set \(\Theta\), on a set of possible observations \(\mathcal{X}\) (typically \(\mathbb{R}^{n}\) in our case). A model \(\mathcal{P}\) is said to be _identifiable_ if the mapping \(\boldsymbol{\theta}\in\Theta\mapsto P_{\boldsymbol{\theta}}(\mathbf{x})\) is injective (i.e. one-to-one):
\[(P_{\boldsymbol{\theta}_{1}}(\mathbf{x})=P_{\boldsymbol{\theta}_{2}}(\mathbf{x }),\,\forall\mathbf{x}\in\mathcal{X})\implies\boldsymbol{\theta}_{1}= \boldsymbol{\theta}_{2}. \tag{4}\]
In other words, if two parameters \(\boldsymbol{\theta}_{1}\) and \(\boldsymbol{\theta}_{2}\) generate the same distribution over the set of observations \(\mathcal{X}\), then they are necessarily equal. It is important to note that identifiability is a property of the probabilistic model and not of any particular estimation method.
Figure 2: A SEM can be expressed by a directed graph (typically acyclic), where the arcs express causal influences, as well as statistical dependencies. Here, the nodes have been ordered so that the influences all go from top to bottom. The disturbance or noise variables are not plotted here, since each observed variable simply has its own disturbance variable.
As a simple theoretical example, consider that we flipped a possibly biased coin \(N\) times. A coin toss only has two outcomes: heads with probability \(\theta\in[0,1]\), or tails with probability \(1-\theta\). Let us define two different models for this data. In the first case, we use a Bernoulli model with parameter \(\theta\) for the outcome of a coin flip; denote by \(\mathrm{Ber}_{\theta}(x)\) the probability of outcome \(x\in\{H,T\}\), where \(H\) and \(T\) code heads and tails. This model is identifiable. To see this, let \((\theta_{1},\theta_{2})\) be such that \(\mathrm{Ber}_{\theta_{1}}(x)=\mathrm{Ber}_{\theta_{2}}(x)\) for \(x\in\{H,T\}\). Since \(\mathrm{Ber}_{\theta}(H)=\theta\), and this probability is directly given by the data in the limit of infinite coin flips, we conclude that \(\theta_{1}=\theta_{2}\) and that the model is identifiable. On the other hand, as a counterexample, we can imagine something like a latent variable model: we do not observe a real coin, but rather the output of a computer simulation. Tossing a coin in this case proceeds in two steps: we first draw a sample \(z\sim\mathcal{N}(\mu,\sigma^{2})\), i.e. a latent variable with a Gaussian distribution with mean \(\mu\) and variance \(\sigma^{2}\); then we assign heads to \(x\) if \(z\geq 0\) and tails otherwise. This is effectively a latent variable model with parameters \((\mu,\sigma)\). Crucially, elementary calculations show that the observed probabilities depend only on the ratio \(\frac{\mu}{\sigma}\), which takes the same value for infinitely many pairs \((\mu,\sigma)\), which means that this model is not identifiable.
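A quick numerical check of this non-identifiability, using the fact that the observed probability of heads is \(\Phi(\mu/\sigma)\) (a small illustration of ours):

```python
# The observed distribution depends on (mu, sigma) only through mu/sigma.
from scipy.stats import norm

for mu, sigma in [(0.5, 1.0), (1.0, 2.0), (5.0, 10.0)]:   # all have mu/sigma = 0.5
    print(mu, sigma, "P(heads) =", 1 - norm.cdf(0, loc=mu, scale=sigma))   # identical values
```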
As a most fundamental example for our purposes, consider the _Gaussian factor analysis_ in Eq. (1), where the components or factors \(s_{i}\) are Gaussian, uncorrelated, and have unit variance. This model is well-known not to be identifiable. For simplicity, consider the matrix \(\mathbf{A}\) to be square and orthogonal--denote it by \(\mathbf{U}\) to highlight those assumptions--and ignore the noise \(\mathbf{n}\). An intuitive justification for unidentifiability would be that a Gaussian distribution is completely determined by covariances (and means). Now, the number of covariances is \(\approx n^{2}/2\) due to symmetry, so we cannot solve for the \(n^{2}\) parameters in the mixing matrix as we have "more variables than equations". More rigorously, the Gaussian distribution exhibits a rotational symmetry when the covariance matrix is identity (as is the case for the components here). The pdf is given by the probability transformation formula (transforming \(\mathbf{x}\) back to \(\mathbf{s}\)):
\[p(\mathbf{x})=\frac{1}{(2\pi)^{n/2}}\exp(-\frac{1}{2}\|\mathbf{U}^{T}\mathbf{x}\|^{2})|\det\mathbf{U}^{T}|=\frac{1}{(2\pi)^{n/2}}\exp(-\frac{1}{2}\|\mathbf{x}\|^{2}) \tag{5}\]
where the last equality comes from the orthogonality of \(\mathbf{U}\). Now, we see that the pdf of \(\mathbf{x}\) does not depend on \(\mathbf{U}\) at all. Thus, \(\mathbf{U}\) cannot be identifiable. In practice, we could rotate the factors \(\mathbf{s}\) by any orthogonal matrix, and compensate for that by rotating the columns of \(\mathbf{U}\) by the inverse, and the data distribution would stay the same, which is why this is called the "factor rotation indeterminacy".
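The rotational symmetry in Eq. (5) is easy to verify numerically (an illustrative snippet of ours, not from the original exposition):

```python
# Factor rotation indeterminacy: white Gaussian factors mixed by any orthogonal U give the same data law.
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 200_000
s = rng.normal(size=(N, n))                                   # white Gaussian factors

Q, _ = np.linalg.qr(rng.normal(size=(n, n)))                  # a random orthogonal matrix
x1, x2 = s, s @ Q.T                                           # "mixed" with U = I and with U = Q

print(np.round(np.cov(x1.T), 2))                              # both empirical covariances are ~ identity,
print(np.round(np.cov(x2.T), 2))                              # i.e. both data sets follow N(0, I)
```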
The basic definition of identifiability talks thus about identifiability of parameters. Sometimes, by slight abuse of terminology, we also talk about _identifiability of the latent variables_, but it is not quite clear how that should be defined. If we consider the factor analysis model in Eq. (1) without noise, knowing the matrix \(\mathbf{A}\) will immediately give us the components \(\mathbf{s}\) by using the (pseudo)inverse of \(\mathbf{A}\) (assuming \(m\geq n\) as is typical). Thus, identifiability of \(\mathbf{A}\) implies, in a non-rigorous sense, identifiability of \(\mathbf{s}\). On the other hand, if
there actually is noise in the model, knowing \(\mathbf{A}\) will not give us the components \(\mathbf{s}\) since the noise cannot be completely removed. In this case, identifiability could be defined in the sense that the posterior \(p(\mathbf{s}|\mathbf{x},\mathbf{A})\) can be recovered in a suitable sense (Khemakhem et al., 2020a). However, we leave such a definition aside here, and focus on identifiability of the parameters.
It should also be noted that the strict definition of identifiability in Eq. (4) may be limiting in some cases. In practical scenarios, we may want to introduce identifiability with slightly relaxed definitions, such as identifiability of parameters up to an equivalence class. For example, independent components are only identifiable up to arbitrary scaling of the components, but this is usually not considered a problem.
## 3 Linear Independent Component Analysis
ICA is a statistical latent variable model which is very closely related to the classic factor analysis model reviewed above. The basic idea is that assuming the components to be non-Gaussian breaks the rotational symmetry just described and leads to an identifiable model. In particular, for non-Gaussian data, higher-order moments give more information than that contained in the covariances (Hyvarinen and Oja, 2000; Hyvarinen et al., 2001). The identifiability of the basic ICA model is well-known since Comon (1994), and in fact was proved in the 1950's by Darmois and Skitovich.
### Definition and identifiability
The basic model is as follows (Comon, 1994; Jutten and Herault, 1991; Hyvarinen and Oja, 2000; Hyvarinen et al., 2001). Assume \(s_{i},i=1,\ldots,n\) are \(n\) independent, non-Gaussian latent random variables. Assume \(\mathbf{A}\) is an invertible \(n\times n\) matrix. We observe the random vector \(\mathbf{x}\) which is a linear mixture given by
\[\mathbf{x}=\mathbf{A}\mathbf{s} \tag{6}\]
We assume that the means of the \(s_{i}\) are centered to zero and that their variances are finite and normalized to unity. Note the differences to the factor analysis model:
1. The components are assumed to be _non-Gaussian_. This is the fundamental difference which sets the model apart from classical factor analysis. (While the first letter in "ICA" emphasizes independence, the factors in classical Gaussian factor analysis are independent as well; the terminology is slightly misleading regarding this difference).
2. The number of components (factors) is equal to the number of observed variables. This is basically assumed for the sake of mathematical simplicity: the matrix \(\mathbf{A}\) is then uniquely invertible, but the assumption could be relaxed. At the same time, it emphasizes the fact that in the context
of ICA we are not so interested in dimension reduction but on finding the original components. In practice, the dimension is often reduced by principal component analysis (PCA) before application of ICA. In terms of the classical terminology of factor analysis, ICA can then be seen as a "factor rotation", considering the principal components as estimates of factors.
3. There is no noise term. This is partly justified by the large number of components: some of them can be noise whereas others are more interesting components. Since we do not reduce the dimension of the data, all variance of the data is "explained" by the components anyway.
It is really the non-Gaussianity that fundamentally distinguishes the ICA model from classical factor analysis, and enables identifiability. The importance of non-Gaussianity is further emphasized by the theory of ICA estimation, which provides also an intuitive proof of identifiability as follows. One of the most fundamental theorems in ICA says that we can estimate ICA by finding an invertible transformation that maximizes the non-Gaussianity of the components. In fact, each component corresponds to a maximum (over some parameters \(w_{i}\)) of non-Gaussianity of a linear combination \(\sum_{i}w_{i}x_{i}\). This is because, loosely speaking, by the Central Limit Theorem, a sum of independent random variables is more Gaussian than any of the original random variables (strictly speaking this holds if the random variables have the same distribution). The linear combination \(\sum_{i}w_{i}x_{i}\) is a linear combination of the \(s_{i}\), and thus its non-Gaussianity is maximized when it is actually equal to one of the \(s_{i}\). See Hyvarinen and Oja (2000); Hyvarinen et al. (2001) for details.
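The estimation principle can be illustrated with a two-dimensional toy example: after whitening a mixture of two independent uniform (hence sub-Gaussian) sources, scanning over rotation angles and keeping the direction with extremal kurtosis recovers one of the components. This is only a didactic sketch of the non-Gaussianity-maximization idea, not any of the practical algorithms cited above.

```python
# Projection pursuit by kurtosis on a whitened 2-D mixture (didactic sketch).
import numpy as np

rng = np.random.default_rng(1)
s = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(100_000, 2))   # independent, non-Gaussian, unit variance
A = np.array([[1.0, 0.6], [0.4, 1.0]])
x = s @ A.T

d, E = np.linalg.eigh(np.cov(x.T))
z = x @ (E / np.sqrt(d)) @ E.T                                 # whitened data

def kurt(u):                                                   # excess kurtosis of a projection
    return np.mean(u ** 4) - 3 * np.mean(u ** 2) ** 2

angles = np.linspace(0, np.pi / 2, 200)
scores = [abs(kurt(z @ np.array([np.cos(a), np.sin(a)]))) for a in angles]
best = angles[int(np.argmax(scores))]
w = np.array([np.cos(best), np.sin(best)])
print(np.round(np.corrcoef(z @ w, s.T)[0, 1:], 2))             # ~ +/-1 with one source, ~ 0 with the other
```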
Note two rather trivial indeterminacies, i.e. unidentifiable aspects of the ICA model. First, the ordering of the components is not identifiable, or even defined by the model. Second, each component can only be estimated up to linear scaling (and sign), since if a component is multiplied by a scalar constant and the corresponding column of \(\mathbf{A}\) is divided by that constant, the data distribution stays the same. This indeterminacy is partly removed when the variances of the components are conventionally defined to be equal to unity, which also means that \(\mathbf{s}\) is white, but this is a mere convention and the real variance of the components is unidentifiable.
### A simple identifiability proof
Next, we provide a simple rigorous identifiability proof for ICA. The price to pay for its simplicity is some reduction in generality compared to the celebrated Darmois-Skitovich theory. In our simple case, the identifiability theorem takes then the following form:
**Theorem 1**: _Assume the independent components have finite variance, and the log-pdf's of the independent components have continuous second derivatives. If the variables \(x_{i},i=1,\ldots,n\) in Eq. (6) are mutually independent, \(\mathbf{A}\) has exactly one non-zero entry in each row and each column._
_Proof:3_ As typical in ICA theory, we can assume without restriction of generality that \(\mathbf{A}\) is orthogonal. This is because \(\mathbf{s}\) is white and we also can whiten \(\mathbf{x}\) as preprocessing, which implies the orthogonality of \(\mathbf{A}\) since the covariance of \(\mathbf{x}\) is equal to \(\mathbf{A}\mathbf{A}^{T}\).
Footnote 3: Alternatively, we could do the same proof in the Fourier domain, i.e. using characteristic functions \(\hat{p}\). Then, \(p(\mathbf{x})\) and \(p_{i}(s_{i})\) will be replaced by the characteristic functions, and the Jacobian disappears in the first equations. Thus, we would replace the assumption of smooth pdf’s by the assumption of continuous second derivatives of the characteristic functions of the \(p_{i}\), denoted by \(\hat{p}_{i}\). Such an assumption is related to the moment structure of the components: it is just slightly more restrictive than assuming finite variances for the components. The whole proof is valid for characteristic functions with minimal changes. Thus we get a more general, if slightly more complicated, proof.
The pdf of \(\mathbf{x}\) is obtained by the formula for transformation of probability density as
\[p(\mathbf{x})=\prod_{i=1}^{n}p_{i}(\sum_{j=1}^{n}a_{ji}x_{j})|\det\mathbf{A}^ {T}| \tag{7}\]
where \(p_{i}\) denotes the pdf of the \(i\)-th component. Taking logarithms, and noting that the determinant of an orthogonal matrix is equal to \(\pm 1\), we get
\[\log p(\mathbf{x})=\sum_{i=1}^{n}\log p_{i}(\sum_{j=1}^{n}a_{ji}x_{j}) \tag{8}\]
Next, since the \(x_{i}\) are assumed mutually independent, \(\log p(\mathbf{x})=\sum_{i=1}^{n}f_{i}(x_{i})\) for some functions \(f_{i}\). This implies that any second-order cross-derivative must be zero:
\[\frac{\partial^{2}\log p(\mathbf{x})}{\partial x_{k}\partial x_{l}}=\sum_{i=1 }^{n}a_{ki}a_{li}(\log p_{i})^{\prime\prime}(\sum_{j=1}^{n}a_{ji}x_{j})=0,\ \ \ \mbox{for all $k\neq l$} \tag{9}\]
This set of equations can be collected together and expressed in matrix form as
\[\mathbf{A}^{T}\mbox{diag}_{i}[(\log p_{i})^{\prime\prime}(\sum_{j=1}^{n}a_{ji }x_{j})]\mathbf{A}=\mbox{diag}_{i}[c_{i}(\mathbf{x};\mathbf{A})] \tag{10}\]
where the \(c_{i}(\mathbf{x};\mathbf{A})\) are some unknown scalar-valued functions of \(\mathbf{x}\) and \(\mathbf{A}\) corresponding to the case \(k=l\). This equation must hold for all \(\mathbf{x}\).
For Gaussian densities (and no others), \(\log p_{i}\) is quadratic, and its second derivative is constant. We have assumed non-Gaussianity, so \((\log p_{i})^{\prime\prime}\) is not constant. By continuity, \((\log p_{i})^{\prime\prime}\) takes in fact an infinity of different values, since it takes all the values in some interval of the real line. This implies we can find for each \(i\) a point \(y_{i}\) such that the entries \(d_{i}=(\log p_{i})^{\prime\prime}(y_{i})\) are all distinct, i.e. \(d_{j}\neq d_{k}\) for \(j\neq k\). (This is possible even if one of the components is Gaussian.) By the invertibility of \(\mathbf{A}\), we can find corresponding \(\mathbf{x}\) such that \(y_{i}=\sum_{j=1}^{n}a_{ji}x_{j}\) for all \(i\). In the following, we consider (10) with \(\mathbf{x}\) fixed to such value, so the diagonal entries \(d_{i}\) and \(c_{i}\) are fixed.
Now, the equation on the LHS of (10) is actually an eigen-value decomposition (EVD), since \(\mathbf{A}\) is orthogonal. Importantly, the diagonal entries (eigenvalues) are distinct, which implies by a well-known result in linear algebra that the EVD is unique up to ordering of the eigenvalues and the signs of the eigenvectors. The RHS can be interpreted as an EVD as well, with an eigenvector matrix such that each row and column has exactly one non-zero entry. Since the "eigenvectors" of the EVD on both sides must match up to permutation, the eigenvector matrix \(\mathbf{A}\) must have the same property for non-zero entries. \(\square\)
Note that if the components were (standardized) Gaussian, the diagonal matrix on the left-hand-side of (10) would be equal to minus the identity, and thus the whole LHS is equal to minus the identity for any orthogonal \(\mathbf{A}\). Then, any orthogonal \(\mathbf{A}\) fulfills (10), since it is easy to show that the right-hand-side must necessarily be minus the identity as well. This shows how Gaussian variables are not allowed. In fact, the uniqueness of the eigen-value decomposition only holds for distinct eigenvalues.
Another point to note is that the matrix \(\mathbf{A}\) is not shown to be identity. The form that \(\mathbf{A}\) takes enables a permutation of the components, as well as multiplying each by a scalar constant. These are fundamental indeterminacies that cannot be solved in any model similar to factor analysis, as already pointed out above.
### Alternative approaches
The literature of blind source separation actually considers non-Gaussianity as only one, if the most important, principle that enables estimation of a linear mixing model. In the case of time series data, identifiability is enabled by two very different kinds of statistical structure: autocorrelations of the components (Tong et al., 1991; Belouchrani et al., 1997) or their non-stationarity (Matsuoka et al., 1995; Pham and Cardoso, 2001). Together with non-Gaussianity, these constitute what Cardoso (2001) called the "three easy routes to [linear] ICA". We shall consider these principles in more detail in the case of nonlinear ICA below.
Yet another approach is possible in the case where the data is non-negative, and in particular so that the data has a concentration to small positive values. This leads to the model originally called positive matrix factorization (Paatero and Tapper, 1994) and popularized under the heading of non-negative matrix factorization (NMF) by Lee and Seung (1999). Its identifiability has been analyzed by Donoho and Stodden (2004). Combinations of NMF and ICA are considered by Plumbley (2003); Hoyer (2004). Related to this, Hyttinen et al. (2022) consider the case where the observations are binary.
We further note the great similarity of the ICA model with dictionary learning and sparse coding. The similarity was well-known in early work on those methods (Olshausen and Field, 1997) but seems to have been forgotten recently. The only real differences between ICA and a probabilistic formulation of dictionary learning are that in dictionary learning, the number of components is large (\(n>m\)), and there is noise. Some identifiability results for such cases are considered by Eriksson and Koivunen (2004).
## 4 Linear Structural Equation Model
Next, we consider a linear SEM, which we use here as a fundamental framework for causal discovery. We will see how its identifiability can be proven based on the theory of linear ICA.
### Definition of model
A linear SEM consists of a collection of equations of the form
\[x_{i}=\sum_{j=1}^{n}b_{ij}x_{j}+e_{i},\quad i=1,\ldots,n, \tag{11}\]
where the \(x_{i}\) are the observed variables, and the \(e_{i}\) are latent variables called the disturbances, external influences, or simply noise variables. The idea is that the variable \(x_{i}\) is caused by those \(x_{j}\) for which the coefficient \(b_{ij}\) is non-zero.
An SEM is often associated with a directed acyclic graph (DAG) \(\mathcal{G}\) called the _causal graph_. Each node of \(\mathcal{G}\) corresponds to an observed variable \(x_{i}\), and there is an edge from \(x_{j}\) to \(x_{i}\) iff \(b_{ij}\) is non-zero. We have here actually made the assumption of acyclicity, which is very typical in this context. It means that when the matrix \(\mathbf{B}\) is interpreted as a connection matrix of a (weighted) graph, the ensuing graph has no cycles: one cannot start at a node and follow the edges so that one comes back to the starting node. An interesting property of a DAG is that the nodes (variables) can be ordered so that all the connections go "forward" in that ordering; this is called a "causal order", but it is not necessarily unique. An illustration of such a DAG expressing a SEM, visually ordered according to the causal ordering, was given in Figure 2.
The problem is now to estimate the parameters \(b_{ij}\) based on observations of \(\mathbf{x}\). It is well-known that the problem is in general ill-posed. In particular, for Gaussian data the model is unidentifiable, as shown above for the case of two variables. One wide-spread solution is to use prior knowledge to fix most of the \(b_{ij}\) to zero; for example, randomized controlled trials might be available to provide that knowledge. However, using such prior knowledge, not to mention interventions, is in strong contradiction with the aim of _discovery_ which is central to us.
### Identifiability
The key to identifiability of the SEM is its close relation to ICA. This intimate relation of latent-variable models and SEMs is a central theme of this paper, and we will later see how it applies even in the nonlinear case.
Denote by \(\mathbf{B}\) a matrix which collects all the coefficients \(b_{ij}\). We can express the SEM as
\[\mathbf{x}=\mathbf{B}\mathbf{x}+\mathbf{e} \tag{12}\]
where the vector \(\mathbf{e}\) collects the external influences. Now, by elementary linear algebra this implies
\[\mathbf{x}=(\mathbf{I}-\mathbf{B})^{-1}\mathbf{e} \tag{13}\]
In other words, the SEM implies that the data follows a latent-variable model. In fact, this is nothing else than an ICA model with mixing matrix \(\mathbf{A}=(\mathbf{I}-\mathbf{B})^{-1}\), under suitable assumptions. In particular, assume that the \(e_{i}\) are _mutually independent and non-Gaussian_. Then, the model in Eq. (13) is exactly an ICA model, and has by definition the same number of components (corresponding to disturbances) as observed variables.
Thus it would seem that we can estimate an ICA model of \(\mathbf{x}\), and transform the obtained \(\mathbf{A}\) back to \(\mathbf{B}\) by simply \(\mathbf{B}=\mathbf{I}-\mathbf{A}^{-1}\). However, there is one serious complication: ICA does not estimate the order of the components \(e_{i}\). As pointed out in Section 3, there is no ordering inherently defined between the components. This is in stark contrast to the SEM, where we know that \(e_{i}\) is the external influence of the variable \(x_{i}\) (for the same index \(i\)). Thus, we need to find the right ordering of the independent components, which implies an ordering of the columns of the mixing matrix \(\mathbf{A}\), before we can transform back to \(\mathbf{B}\).
The recovery of the correct ordering is not possible in general. However, as mentioned above, in the theory of SEM and causal discovery the assumption of a directed _acyclic_ graph (DAG) is often made. Based on the acyclicity assumption, the right ordering of the components can be found. To see how this is possible, consider the 2D case. Assume we have estimated the inverse of the mixing matrix \(\mathbf{W}=\mathbf{A}^{-1}\), for some arbitrary ordering of the components. Based on Eq. (13), we see that we should have \(\mathbf{W}=\mathbf{I}-\mathbf{B}\). The acyclicity of \(\mathbf{B}\) means in this 2D case that it has exactly one non-zero entry, by definition in the off-diagonal (unless the graph is degenerate and \(\mathbf{B}\) is all zeros). Thus, \(\mathbf{W}\) has exactly one zero entry. Denoting an arbitrary non-zero entry by \(*\), the real \(\mathbf{W}\) could, for example, be of the form
\[\mathbf{W}=\begin{pmatrix}1&*\\ 0&1\end{pmatrix} \tag{14}\]
Now, if the rows of \(\mathbf{W}\) are switched to the wrong order in the estimation due to the indeterminacy of the order of components, it is easy to see that the zero goes to the diagonal. This is a contradiction since the diagonal entries are all one by definition (even allowing for any rescaling). Thus, among the two different orderings, it is possible to find the right one based on acyclicity. The general case is explained by Shimizu et al. (2006); note that acyclicity is sufficient but not necessary (Lacerda et al., 2008).
A smaller detail is that the normalization of the mixing matrix in ICA and SEM is different: In ICA, the variances of the components are typically defined to be unity, while in SEM, a related normalization is obtained by the fact that \(\mathbf{I}-\mathbf{B}\) has all ones in the diagonal. However, this is just a normalization convention that has little implication for identifiability.
Thus, we see that the SEM is identifiable under the assumptions of independence and non-Gaussianity (like in ICA), together with the new assumption of acyclicity. The resulting model is called LiNGAM for Linear Non-Gaussian Acyclic Model. We refer the reader to Shimizu et al. (2006) for details on this basic model, and Shimizu (2014) for a longer treatment with some more recent developments.
Regarding estimation of LiNGAM, it is also possible to develop a very explicit interpretation of the resulting SEM estimation in terms of maximization of non-Gaussianity (Hyvarinen and Smith, 2013), just like in the case of ICA estimation.
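To make the estimation logic concrete, the following sketch simulates data from a known two-variable LiNGAM and recovers \(\mathbf{B}\) by the three steps just discussed: ICA, row permutation, and rescaling. It is a minimal illustration under assumed settings (scikit-learn's FastICA, Laplacian disturbances, no pruning of small coefficients), not the full algorithm of Shimizu et al. (2006).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples = 50_000

# Data from the acyclic SEM x = Bx + e with non-Gaussian (Laplacian) disturbances.
B_true = np.array([[0.0, 0.0],
                   [0.8, 0.0]])                      # x1 -> x2
e = rng.laplace(size=(n_samples, 2))
X = e @ np.linalg.inv(np.eye(2) - B_true).T          # x = (I - B)^{-1} e

# Step 1: ICA estimates an unmixing matrix W ~ P D (I - B) with unknown
# row permutation P and row scaling D.
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
ica.fit(X)
W = ica.components_

# Step 2: permute the rows so that the diagonal has no (near-)zero entries;
# under acyclicity this permutation is the unique admissible one.
cost = 1.0 / (np.abs(W) + 1e-12)
_, col = linear_sum_assignment(cost)
W = W[np.argsort(col)]

# Step 3: rescale each row so its diagonal entry is one (the SEM normalization),
# then read off B = I - W.
W = W / np.diag(W)[:, None]
B_hat = np.eye(2) - W
print(np.round(B_hat, 2))    # approximately [[0, 0], [0.8, 0]]
```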
### Alternative approaches
Alternative identifiable SEM frameworks have been proposed, e.g. by (Peters and Buhlmann, 2014; Jakobsen et al., 2022). These remove the non-Gaussianity assumption but introduce alternative restrictions more inspired by causal inference literature. One particularly important problem with causal discovery is that we may not observe all the relevant variables; discovery in the presence of such hidden "confounders" is considered, e.g. by Hoyer et al. (2008); Tashiro et al. (2014). A combination of source separation and LiNGAM was proposed by Monti and Hyvarinen (2018). Finally, we note that the alternative methods for estimating a linear mixing model, reviewed in Subsection 3.3 above, could also be used to estimate a linear SEM. In fact, Zhang and Hyvarinen (2010) use a related idea to develop another combination of source separation and causal discovery.
## 5 Nonlinear Independent Component Analysis
Nonlinear ICA is a fundamental problem in unsupervised learning which has attracted a considerable amount of attention recently. It promises a principled approach to representation learning and "disentanglement", in particular using deep neural networks. Nonlinear ICA attempts to find nonlinear components in multidimensional data by generalizing the linear ICA framework in Section 3. The essential difference to most methods for unsupervised representation learning is that the approach starts by defining a generative model in which the original latent variables can be recovered, i.e. the model is identifiable by design.
### Problems in identifiability
We start by introducing a simple model for nonlinear ICA which turns out to be unidentifiable. Denote, as above, an observed \(n\)-dimensional random vector by \(\mathbf{x}=(x_{1},\ldots,x_{n})\). We assume it is generated using \(n\) independent latent variables called independent components, \(s_{i}\). A straightforward definition of the nonlinear ICA problem is to assume that the observed data is an arbitrary (but smooth and invertible) transformation \(\mathbf{f}\) of the latent variables \(\mathbf{s}=(s_{1},\ldots,s_{n})\) as
\[\mathbf{x}=\mathbf{f}(\mathbf{s}) \tag{15}\]
The goal is then to recover the inverse function \(\mathbf{f}^{-1}\) as well as the independent components \(s_{i}\) based on observations of \(\mathbf{x}\) alone.
Research in nonlinear ICA has been hampered by the fact that such simple approaches to nonlinear ICA are not identifiable, in stark contrast to the linear ICA case. To put it simply, for any \(x_{1},x_{2}\), one can always find a function \(g(x_{1},x_{2})\) which is independent of \(x_{1}\). Darmois provided such a construction back in 1952, thus showing the "impossibility" of nonlinear ICA. He simply used the conditional cumulative distribution function
\[g(\xi_{1},\xi_{2})=P(x_{2}<\xi_{2}|x_{1}=\xi_{1}) \tag{16}\]
to construct a new variable \(z=g(x_{1},x_{2})\) which turns out to be independent of \(x_{1}\). (A slight generalization was provided by Hyvarinen and Pajunen (1999), and a slight variation by Locatello et al. (2019).) The problem is that we can apply this equally well after making an initial transformation \(\tilde{x}_{1}=h(x_{1},x_{2})\), so we see that _any_ such transformation \(\tilde{x}_{1}\) could be considered an independent component, since we can always find a decomposition of the data as \(\tilde{x}_{1}\) and the variable defined by the construction above. The marginal distributions of the two variables can further be easily transformed to anything we like, so we can, in particular, reproduce the _distributions_ of the original independent components exactly without recovering the _components_ themselves. (This logic is a bit different from the definition of identifiability given above, since here we consider the distributions of the latent variables, but it is equivalent.) Thus, it is clear that independence is not a strong enough assumption to enable identifiability, whatever assumptions (e.g. non-Gaussianity) we may make on the distributions of the independent components \(s_{i}\).
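To see the construction in action, here is a small numerical illustration; the Gaussian example is chosen purely so that the conditional CDF has a closed form, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 100_000

# Two strongly dependent variables (nothing like independent components).
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)

# Darmois construction g(x1, x2) = P(X2 < x2 | X1 = x1); for this example the
# conditional CDF has the closed form Phi((x2 - 0.8*x1) / 0.6).
z = norm.cdf((x2 - 0.8 * x1) / 0.6)

# z is uniform on [0, 1] and statistically independent of x1, so the pair
# (x1, z) "passes" as independent components even though it tells us nothing
# about how the data was actually generated.
print(np.corrcoef(x1, z)[0, 1])                    # ~ 0
print(np.corrcoef(x1 ** 2, (z - 0.5) ** 2)[0, 1])  # ~ 0 (also for nonlinear correlations)
```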
The unidentifiability is of course a major problem in practice since most of the utility of ICA rests on the fact that the model is identifiable, or in alternative terminology, the "sources can be separated". We should also note that, curiously, many unsupervised deep learning models, such as VAE and GAN, assume a latent-variable model where the \(s_{i}\) are Gaussian, thus recreating the classical Gaussian factor rotation problem and making the unidentifiability even worse.
### Identifiable model definition
However, it should be noted that Darmois's counterexample above assumes that the data is sampled i.i.d. (independently and with identical distributions) from a random vector; in particular, the observations (data points) are statistically independent of each other (which is not to be confused with the independence of the components). A promising direction is to relax the assumption of i.i.d. sampling. A fundamental case is to consider time series, and the _temporal structure_ of independent components. Thus, we assume
\[\mathbf{x}(t)=\mathbf{f}(\mathbf{s}(t)) \tag{17}\]
where \(t\) is the time index. As a first attempt, we can assume that the sources \(s_{i}(t)\) have non-zero _autocorrelations_, which has a long history in the linear case (Tong et al., 1991; Belouchrani et al., 1997). In the machine learning literature, such models have been widely used under the heading of "temporal coherence", "temporal stability", or "slowness" of the features. Harmeling et al. (2003) proposed that we could try to find nonlinear transformations which are maximally uncorrelated even over time lags and after nonlinear scalar transformations. Sprekeler et al. (2014) showed that a closely related method enables separation of sources if they all have distinct autocorrelation functions. Sprekeler et al. (2014) thus constitutes probably the first identifiability proof for nonlinear ICA with general nonlinearities; however, it suffers from the restrictive condition that the sources must have different statistical properties, which is rather unrealistic in many cases. Note that just like the linear scaling is unidentifiable in linear ICA, a nonlinear scaling by a monotonic function is fundamentally unidentifiable in nonlinear ICA; in both cases, the global sign of each component as well as their ordering are unidentifiable as well.
Hyvarinen and Morioka (2017) proposed a rigorous and general treatment of identifiability for components which are stationary time series, the components being independent from each other, but each component having _temporal dependencies_. Assuming the components \(s_{i}(t)\) have sufficient temporal dependencies and sufficient non-Gaussianity according to certain technical definitions, the model was proven to be identifiable, up to monotonic pointwise transformations of the components. The proof was refined by Halva et al. (2021), partly based on Schell and Oberhauser (2023); an alternative approach was proposed by Klindt et al. (2020).
Intuitively speaking, temporal dependencies can provide much more information since now the components are independent over any lags, i.e. \(s_{i}(t)\) and \(s_{j}(t-\tau)\) are independent for any lag \(\tau\). Such independence can then be imposed on any estimates of the components. This provides many more constraints compared to the i.i.d. case considered in the Darmois construction given above (Schell and Oberhauser, 2023). Therefore, it is intuitively plausible that the model becomes identifiable.
Another form of temporal structure that has been previously used in the case of linear blind source separation is _nonstationarity_ (Matsuoka et al., 1995; Pham and Cardoso, 2001), in particular nonstationarity of variance. Such nonstationarity of variances seems to be prominent in many kinds of real data, for example in EEG and MEG (Brookes et al., 2011), natural video (Hyvarinen et al., 2009), and closely related to changes in volatility in financial time series. This principle was extended to the nonlinear case by Hyvarinen and Morioka (2016); Khemakhem et al. (2020a) gives the strongest identifiability results based on this principle so far. They assumed a piecewise nonstationary model where the components in each segment follow an exponential family, while the parameters of the exponential family change from one segment to another. It was further assumed that the segmentation is known. Then the identifiability of the model was proven under technical assumptions that guarantee the nonstationarity is strong enough. While the assumptions here look quite restrictive, this result can be extended in various ways as will be seen below. Note that in this theory, the components can usually only be recovered up to a nonlinear, pointwise but not necessarily monotonic, function of the components; for example, only the squares of the components can be recovered in the simplest models.
Again, we can give the identifiability results a simple intuitive justification. By the basic assumption of independence, the components are independent at any time point, i.e. \(s_{i}(t)\) and \(s_{j}(t)\) must be independent for any \(t\). Crucially, nonstationarity implies that the distributions inside the segments are different; thus, the number of independence constraints is basically multiplied by the number of segments. This way we get many more constraints compared to the i.i.d. case, which was considered in the Darmois construction given above.
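The following sketch shows the kind of synthetic data typically used to study this setting: independent components that are nonstationary in variance, mixed by a smooth invertible nonlinearity, with the segment index observed as the auxiliary variable. The dimensions, the Laplacian base distribution, and the toy leaky-ReLU mixing are assumptions of the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_comp, n_seg, seg_len = 3, 20, 500

# Independent components that are nonstationary in variance: within each
# segment they are i.i.d. Laplacian, but the scale changes across segments.
scales = rng.uniform(0.3, 3.0, size=(n_seg, n_comp))
s = np.concatenate([rng.laplace(scale=sc, size=(seg_len, n_comp)) for sc in scales])
segment = np.repeat(np.arange(n_seg), seg_len)   # observed auxiliary variable

# A smooth, invertible nonlinear mixing: two random linear maps with a
# leaky-ReLU in between (invertible since the matrices are generically full
# rank and the leaky-ReLU is strictly monotonic).
def leaky_relu(a):
    return np.where(a > 0, a, 0.2 * a)

A1 = rng.standard_normal((n_comp, n_comp))
A2 = rng.standard_normal((n_comp, n_comp))
x = leaky_relu(s @ A1.T) @ A2.T                  # observed data, shape (n_seg*seg_len, n_comp)

# A nonstationarity-based nonlinear ICA method would now be trained on the
# pair (x, segment) to recover s up to pointwise transformations.
```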
The two kinds of temporal structure used here are illustrated in their basic forms in Fig. 3. Such a dichotomy of temporal structure is useful for intuition about the temporal dependencies. However, nonstationarity can be subsumed as a case of temporal dependencies by assuming that the apparent nonstationarity stems from a hidden Markov model (Halva and Hyvarinen, 2020). This leads to the general theory of Halva et al. (2021) which shows identifiability for very general temporal dependencies, including those coming from HMM's. A related model combining the two kinds of temporal structure was proposed by Morioka et al. (2021).
Another way of generalizing identifiability is to realize that the proofs based on nonstationarity simply assume that conditionally on the segment index, the distributions of the components change, while they impose no temporal structure inside the segments. Thus, we can replace this segment index by any observed _auxiliary variable_ that conditions the distributions of the components. It could be the index of a subject in a biomedical setting, the index of a country in an international study, or anything that changes the underlying statistics in a suitable way. Thus, we get another set of identifiability results, based on observation of such additional auxiliary information (Hyvarinen et al., 2019). Khemakhem et al. (2020a,b) generalize those results even further; see also Zimmermann et al. (2021). We will see below how this is particularly useful in the case of causal discovery.

Figure 3: The two fundamental kinds of temporal structure used in nonlinear ICA. Left: Autocorrelated sources, which show the basic form of temporal dependencies. Right: Nonstationary sources, in particular exhibiting nonstationarity of variances. If the independent components have either of such structures, the model can be identifiable. Furthermore, such structures can be combined or generalized in various ways.
Another generalization of the results is to consider the noisy mixing model, where noise is added to the mixing as in the classical factor analysis in Eq. 1. Such additive noise has some algorithmic advantages and has actually been widely used in related unsupervised models as well as by Khemakhem et al. (2020a). While its distribution is typically considered known, Halva et al. (2021) prove the identifiability of the noise distribution under quite general conditions.
### Sketch of an identifiability proof
Next we provide a sketch of a simple identifiability proof following Hyvarinen and Morioka (2016). This is only a special case of the currently available theory, but serves to illustrate the basic principles. For the most sophisticated currently available proofs based on Hyvarinen and Morioka (2016), the reader is referred to Khemakhem et al. (2020a,b), whose proofs are thus generalizations of the following. Note also that Halva et al. (2021) develop a proof using a very different method, based on Hyvarinen and Morioka (2017), and reminiscent of the linear ICA identifiability proof given above. Another very different approach to the proofs was proposed by Hyvarinen et al. (2019).
We consider a nonstationary model where at each segment \(\tau=0,\ldots T\), the \(i\)-th component follows an exponential family of order one:
\[\log p_{\tau}(s_{i})=b_{i}(s_{i})+\lambda_{\tau,i}q_{i}(s_{i}) \tag{18}\]
with the \(q_{i},b_{i}\) being the sufficient statistic and the base measure of component \(i\), and \(\lambda_{\tau,i}\) the parameter for component \(i\) and segment \(\tau\). (Instead of the segment \(\tau\), the proof could also be developed for a time index \(t\) with no changes.) Now, compute the log-pdf of a data point \(\mathbf{x}\) in the segment \(\tau\) under the nonlinear ICA model. Using the probability transformation formula, the log-pdf is given by
\[\log p_{\tau}(\mathbf{x})=\sum_{i=1}^{n}b_{i}(g_{i}(\mathbf{x}))+\lambda_{\tau,i}q_{i}(g_{i}(\mathbf{x}))+\log|\det\mathbf{J}\mathbf{g}(\mathbf{x})| \tag{19}\]
where we drop the index \(t\) from \(\mathbf{x}\) for simplicity, and \(\mathbf{g}(\mathbf{x})=(g_{1}(\mathbf{x}),\ldots,g_{n}(\mathbf{x}))^{T}\) is the inverse function of (the true) mixing function \(\mathbf{f}\); thus, \(s_{i}=g_{i}(\mathbf{x})\) by definition. \(\mathbf{J}\) denotes the Jacobian matrix.
Assume we now have two different models which give the same data distribution, thus violating identifiability. That is, there is another inverse function \(\tilde{\mathbf{g}}\), sufficient statistics and base measure \(\tilde{q},\tilde{b}\) and parameters \(\tilde{\lambda}\) such that for all \(\tau\),
\[\sum_{i=1}^{n}b_{i}(g_{i}(\mathbf{x}))+\lambda_{\tau,i}q_{i}(g_{ i}(\mathbf{x}))+\log|\det\mathbf{J}\mathbf{g}(\mathbf{x})|\\ =\sum_{i=1}^{n}\tilde{b}_{i}(\tilde{g}_{i}(\mathbf{x}))+\tilde{\lambda}_ {\tau,i}\tilde{q}_{i}(\tilde{g}_{i}(\mathbf{x}))+\log|\det\mathbf{J}\tilde{ \mathbf{g}}(\mathbf{x})| \tag{20}\]
Now, subtract both sides of this equation for the corresponding terms obtained for \(\tau=0\). We get
\[\sum_{i=1}^{n}\alpha_{\tau,i}q_{i}(g_{i}(\mathbf{x}))=\sum_{i=1}^{n}\tilde{\alpha }_{\tau,i}\tilde{q}_{i}(\tilde{g}_{i}(\mathbf{x})) \tag{21}\]
with \(\alpha_{\tau,i}=\lambda_{\tau,i}-\lambda_{0,i}\). Remarkably, we got rid of the Jacobian terms.
Now it is enough to collect Eq. (21) for all \(\tau=1,\ldots,T\). Assume that the matrix of the \(\alpha\) is full rank, which means that we have enough segments (their number is greater than the dimension of the data) and there is enough variability in their distributions. Then, we can solve for the \(\tilde{q}_{i}(\tilde{g}_{i}(\mathbf{x}))\) which are then given by a linear transformation of the \(q_{i}(g_{i}(\mathbf{x}))\). Such a linear indeterminacy can be resolved by linear ICA assuming that the components are marginally independent (in addition to conditionally independent in each segment). Thus, we see that \(\tilde{q}_{i}(\tilde{g}_{i}(\mathbf{x}))=q_{j}(g_{j}(\mathbf{x}))=q_{j}(s_{j})\) for some permutation of the indices \(i,j\), and we obtain the components up to the pointwise nonlinear transformation given by the sufficient statistics. (The pointwise transformation is characterized in detail by Khemakhem et al. (2020), who actually don't need a condition of marginal independence.) \(\square\)
### Alternative approach by constraining nonlinearity
An alternative approach to making nonlinear ICA identifiable consists of constraining the mixing function. A natural approach would be to assume that the mixing function is close to linear (Zhang and Chan, 2008). Recent research has focused on imposing constraints on the Jacobian matrix \(\mathbf{Jf}\) of the mixing function to achieve something to that effect.
Liouville's classic theorem on conformal mappings is highly relevant here. It considers the case where the Jacobian is almost orthogonal in the sense that
\[\mathbf{Jf}(\mathbf{x})^{T}\mathbf{Jf}(\mathbf{x})=\alpha(\mathbf{x})\mathbf{I} \tag{22}\]
for a scalar-valued function \(\alpha\). The theorem states that any such sufficiently smooth function \(\mathbf{f}\) has to belong to a very restricted class of functions (called Mobius transformations). A simple corollary of the theorem, stated and proven in Appendix A, says that if \(\alpha\equiv 1\), i.e. the Jacobian is orthogonal, the function \(\mathbf{f}\) is actually necessarily affine: \(\mathbf{f}(\mathbf{x})=\mathbf{U}\mathbf{x}+\mathbf{b}\) for some orthogonal matrix \(\mathbf{U}\). This theory thus provides a strong result on what kind of constraints on the Jacobian are meaningful. Constraining the Jacobian to be orthogonal is not meaningful since it does not allow for any nonlinear mixing functions. An intriguing question is, therefore, to explore some relaxations of the constraint in Eq. (22). Hopefully, some such relaxations would allow for a sufficiently large class of nonlinear functions, while still providing identifiability (Gresele et al., 2021; Zimmermann et al., 2021; Buchholz et al., 2022).
A number of further approaches constraining the mixing have recently been proposed. Kivva et al. (2022) show identifiability in the case of a piecewise affine mixing function. Moran et al. (2021) impose a restriction on how many observed variables are influenced by a single independent component, thus leading to a special kind of "sparsity" of the mixing function. In a slightly different context, Donoho and Grimes (2003) consider a constraint of orthogonality of the Jacobian (but for non-invertible \(\mathbf{f}\)) in the case of dimension reduction, which Horan et al. (2021) apply to nonlinear ICA combined with dimension reduction. Furthermore, Willetts and Paige (2021) automatically learn the auxiliary variables from observations by solving a secondary task, even without temporal structure; while Gresele et al. (2020) use a different "view" of the data instead of an auxiliary variable.
## 6 Nonlinear Structural Equation Model
Just like the theory of linear ICA helped in solving the linear SEM problem in Section 4, it turns out that the theory of nonlinear ICA helps in solving the nonlinear SEM problem. In what follows we focus on the bivariate causal discovery problem. This corresponds to recovering the causal structure using observations from two variables, which we denote by \(x_{1}\) and \(x_{2}\). While bivariate causal discovery is a simplified special case of the more general causal discovery problem, it remains a challenging task.
### Definition of problem
We start by defining a fully nonlinear SEM of arbitrary dimension as
\[x_{j}=f_{j}(\mathbf{PA}_{j},e_{j}),\quad j=1,\ldots,n, \tag{23}\]
for arbitrary nonlinear functions \(f_{j}\). The variables \(\mathbf{PA}_{j}\subseteq\{x_{1},\ldots,x_{n}\}\setminus\{x_{j}\}\) are the parents of the variable \(x_{j}\) in the associated graph. The point is that the "parents" cause the variable \(x_{j}\), and in the general case the problem of causal discovery can be framed as finding which variables are parents of which.
In order to accomplish identifiability, causal discovery algorithms generally adopt one of two techniques. The first approach is to impose constraints on the functions \(f_{j}\) that define the SEM (23). In the extreme case, we obtain linear models, possibly with non-Gaussian external influences as reviewed above. In contrast to the ICA case, however, some quite meaningful restricted non-linear models have also been proposed for SEM and will be reviewed below. Nevertheless, arguably the most interesting case is where the nonlinearities are unrestricted while we impose some statistical assumptions on the external influences, which is our main topic here.
### Identifiable weakly nonlinear causal models
In this subsection, we will review some of the most notable identifiable nonlinear causal models based on strong restrictions on the nonlinearity.
Additive noise model (ANM).Hoyer et al. (2009) introduced the additive noise model, in which the SEM has the form
\[x_{j}=f_{j}(\mathbf{PA}_{j})+e_{j},\]
and the noise variables \(e_{j}\) are mutually independent, each being independent of the respective causes \(\mathbf{PA}_{j}\). Their theoretical identifiability result focuses on the case of two variables \(x_{1}\) and \(x_{2}\). It stipulates that if \(x_{1}\) causes \(x_{2}\), which we denote by \(x_{1}\to x_{2}\), then we cannot write \(x_{1}=g(x_{2})+\tilde{e}\) for some function \(g\) and noise \(\tilde{e}\) that is independent of \(x_{2}\). Essentially, this SEM is asymmetrical with respect to \(x_{1}\) and \(x_{2}\) and can only describe the natural cause-effect relationship. In other words, it is identifiable. Peters et al. (2014) generalized the identifiability result to the case of more than two variables.
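A minimal sketch of how the ANM identifiability result is typically turned into a bivariate discovery procedure: regress each variable on the other with a nonparametric regressor and compare how independent the residual is of the putative cause. The choice of regressor (kernel ridge), the simple biased HSIC statistic with median-heuristic bandwidths, and the use of raw scores instead of a proper independence test are assumptions of this illustration, not a specification of the original method.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def _gram(z):
    """Gaussian Gram matrix for 1-D data with median-heuristic bandwidth."""
    d = np.abs(z.reshape(-1, 1) - z.reshape(1, -1))
    sigma = np.median(d[d > 0])
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def hsic(a, b):
    """Biased HSIC estimate; larger values indicate stronger dependence."""
    n = len(a)
    K, L = _gram(a), _gram(b)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def anm_direction(x, y):
    """Return the preferred causal direction under the additive noise model."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    def residual(cause, effect):
        reg = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5)
        reg.fit(cause.reshape(-1, 1), effect)
        return effect - reg.predict(cause.reshape(-1, 1))
    score_xy = hsic(x, residual(x, y))   # small if x -> y fits an ANM
    score_yx = hsic(y, residual(y, x))   # small if y -> x fits an ANM
    return "x -> y" if score_xy < score_yx else "y -> x"

# Example: data generated as x -> y with additive noise.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)
y = np.tanh(2 * x) + 0.2 * rng.standard_normal(500)
print(anm_direction(x, y))   # expected: "x -> y"
```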
Post-nonlinear model (PNL).Zhang and Hyvarinen (2009) introduced the post-nonlinear model, which generalizes ANM by adding a subsequent invertible mapping \(g_{j}\):
\[x_{j}=g_{j}(f_{j}(\mathbf{PA}_{j})+e_{j}).\]
The noise variables \(\mathbf{e}\) are still assumed to be mutually independent and independent of the causes. The authors show that the bivariate PNL model is identifiable in most cases and enumerate five special situations in which the model is not identifiable. This identifiability theory generalizes that of ANM, which is a special case when \(g_{j}\) is the identity mapping. Note that if we knew \(g_{j}\), we could reduce the PNL model to an ANM by transforming the effect through the inverse of the mapping \(g_{j}\), the transformed variable \(g_{j}^{-1}(x_{j})\) being a deterministic function of the original effect \(x_{j}\).
Causal autoregressive flow (CAREFL).Khemakhem et al. (2021) note that SEMs are closely related to a class of model called (affine autoregressive) normalizing flows (Rezende and Mohamed, 2015; Huang et al., 2018) in machine learning. That theory leads to the following definition of a SEM on the observations \(\mathbf{x}\):
\[x_{j}=e^{\alpha_{j}(\mathbf{PA}_{j})}z_{j}+\beta_{j}(\mathbf{PA}_{j}),\quad j=1,2 \tag{24}\]
where \(z_{1},z_{2}\) are statistically independent latent noise variables, and \(\alpha_{j}(\mathbf{PA}_{j})\) and \(\beta_{j}(\mathbf{PA}_{j})\) are scalar-valued functions, defined constant (with respect to \(\mathbf{x}\)) when there are no parents. This affine causal model generalizes the additive noise model (Hoyer et al., 2009) by adding a cause-dependent coefficient to the noise variable \(z_{j}\) in the SEM; thus, the disturbance \(e^{\alpha_{j}(\mathbf{PA}_{j})}z_{j}\) is not independent of the cause \(\beta_{j}(\mathbf{PA}_{j})\), in contrast to most models. (We use here the notation by Khemakhem et al. (2021) where \(z\) is used instead of \(e\), the two terms on the RHS are switched, and Greek letters are used for the functions; but the real difference to ANM is in the modulation of the noise.) Identifiability of the model is shown by Khemakhem et al. (2021) under very general conditions. Recent work by Immer et al. (2022); Strobl and Lasko (2022) has modified and generalized the identifiability conditions.
### Identifiability theory for general nonlinearities
Next we describe a method for estimating a nonlinear SEM for general nonlinearities; this leads to an identifiability result as well. We assume we observe bivariate data \(\mathbf{x}(t)\in\mathbb{R}^{2}\) where \(t\) provides an index over all observations (\(t\) may be, e.g., a time index but this is not necessary).
Importantly, we further assume data is available over a set of distinct environmental conditions or segments, \(C\in\mathcal{C}\). As such, each \(\mathbf{x}(t)\) is allocated to one such environmental condition \(C(t)\in\mathcal{C}\). In the case of causal discovery, such environmental conditions could be due to different interventions or due to some changes in the internal dynamics of the system being observed. To clearly align the proposed method with the terminology of nonlinear ICA reviewed in Section 5, we note that we may consider each environmental condition as a distinct segment in nonstationarity-based nonlinear ICA. As noted above, the segment should be interpreted in a very general sense, as in the theory of nonlinear ICA by auxiliary variables (Hyvarinen et al., 2019; Khemakhem et al., 2020).
Next we outline the method for causal discovery over bivariate data by Monti et al. (2019), which they termed Non-linear SEM Estimation using NonStationarity (NonSENS). Without loss of generality, we explain the basic logic assuming that \(x_{1}\to x_{2}\), such that the associated SEM is of the form:
\[x_{1}(t) =f_{1}(e_{1}(t)), \tag{25}\] \[x_{2}(t) =f_{2}(x_{1}(t),e_{2}(t)), \tag{26}\]
where \(e_{1},e_{2}\) are latent disturbances whose distributions are assumed to vary across environmental conditions, thus creating the non-stationarity. The DAG associated with Equations (25) and (26) is shown in Figure 4. The key idea here is that the latent disturbances in a bivariate SEM, \(\mathbf{e}\), can be interpreted as corresponding to the independent sources in a non-linear ICA model, \(\mathbf{s}\), not unlike in the original LiNGAM theory reviewed above based on (Shimizu et al., 2006).
The proposed NonSENS algorithm consists of a two-step procedure. First, it seeks to recover latent disturbances via non-stationarity-based nonlinear ICA.
Figure 4: Visualization of DAG, \(\mathcal{G}\), associated with the SEM in equations (25) and (26). The associated structural equations are provided on the right. Note that in contrast to Fig. 2, the disturbance variables are shown as well.
We note that the non-stationarity introduced by the various environmental conditions, \(C\in\mathcal{C}\), implies that the method by Hyvarinen and Morioka (2016) is well suited to recover the source variables \(e_{1}\) and \(e_{2}\). Given estimated components, we may employ knowledge regarding the statistical independences between observed data and estimated sources in order to infer the causal structure, based on an interesting independence property of the components pointed out by Monti et al. (2019). Denote by \(x\perp\!\!\!\perp y\) statistical independence of \(x\) and \(y\), and by \(\not\!\!\perp\) lack of independence. We have:
**Proposition 1**: _Assume the true causal structure follows equations (25) and (26), as depicted in Figure 4. Then it follows that \(x_{1}\perp\!\!\!\perp e_{2}\) while \(x_{1}\not\!\!\perp e_{1}\) and \(x_{2}\not\!\!\perp e_{1}\) as well as \(x_{2}\not\!\!\perp e_{2}\)._
This proposition highlights the relationship between observations \(\mathbf{x}\) and true latent sources, \(\mathbf{e}\): The cause is independent of one of the disturbances, while the effect is not independent of any disturbance. In practice, nonlinear ICA returns estimated latent sources whose ordering is random. As a result, we must test for independence between each of the variables, \(x_{1}\) and \(x_{2}\), and each of the estimated latent sources, \(\hat{e}_{1}\) and \(\hat{e}_{2}\). This results in a total of four distinct independence tests.
The proposed method is thus recapitulated as follows: After estimating (point-wise transformations of) latent variables by nonlinear ICA, we make the four statistical independence tests, and conclude a causal effect in the case where, for one of the directions, there is evidence to reject the null hypothesis of independence in three of the tests and only one of the tests fails to reject the null. In such a case, the observed variable regarding which one null hypothesis was not rejected is identified as the cause variable.
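The decision rule in the last step can be written down compactly. The sketch below assumes the four independence tests have already been run (by whatever test one prefers) and only encodes the counting logic just described; the function name and example p-values are illustrative.

```python
import numpy as np

def nonsens_decision(p_values, alpha=0.05):
    """Decide the causal direction from a 2x2 array of p-values, where
    p_values[i, j] is the p-value of the test 'x_{i+1} independent of
    estimated source e_hat_{j+1}' (H0 = independence)."""
    p = np.asarray(p_values)
    rejected = p < alpha                   # independence rejected
    if rejected.sum() != 3:
        return "no decision"               # evidence inconclusive
    i, j = np.argwhere(~rejected)[0]       # the single non-rejected pair
    return f"x{i + 1} is the cause (independent of estimated source {j + 1})"

# Example: only the test between x1 and the second estimated source fails to
# reject independence, so x1 is identified as the cause.
print(nonsens_decision([[0.001, 0.62],
                        [0.003, 0.004]]))
```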
We finally note that the proposition and method just described based on Monti et al. (2019) constitute, in fact, a constructive identifiability proof: The direction of effect in arbitrary nonlinear SEM is identifiable, essentially assuming the data is divided into some conditions that fulfill the assumptions of nonlinear ICA methods based on nonstationarity.
## 7 Discussion
In this section, we discuss the actual utility of identifiability in applications, some questions for future research, as well as the question of algorithm development.
### Utility of identifiability
#### 7.1.1 Interpretability; finding causal direction
The utility of identifiability is obvious in the case where the parameters (or latent variables) of the model give directly some useful information about the phenomenon being analyzed. This is very clear in the case of causal analysis, where the whole point is to find out which variable causes which, and if the model is not identifiable, the analysis is typically not possible at all. Likewise, in the case of linear ICA, the components obtained often correspond to meaningful phenomena, as in the case of blind source separation (Fig. 1). This has been extensively used in neuroscience, for example, where the sources may correspond to sources of activity in the brain (Hyvarinen et al., 2010).
#### 7.1.2 Feature extraction; semi-supervised and transfer learning
What is less understood is the utility of identifiability in nonlinear ICA. In deep learning, the components are often very difficult to interpret, since the nonlinearities implemented by the neural networks are difficult to understand or visualize. Therefore, the main utility of identifiability of nonlinear ICA might be found elsewhere. In fact, a representation learned by unsupervised deep learning is often used for some further purpose; for example, the features are used in a further classification task. So, the question is whether the identifiability of the components is useful for such a task.
To begin with, we note that even if the computation of the components is not possible to interpret, single components might still be useful for classification and decision-making. For example, single components might work as biomarkers in a biomedical setting, and finding such biomarkers clearly requires identifiability. In a more general setting, Zhu et al. (2023) show that nonlinear ICA is very useful for analysing neuroscience data in a _semi-supervised setting_, which means that first the components are learned from a big generic data set, and then used on a smaller data set to solve a specific classification task using a simple linear classifier. In a similar vein, Khemakhem et al. (2020) propose some theory on how identifiable representations would be particularly useful for _transfer learning_, which is a related task where we have data from many different data sets (e.g. subjects in a biomedical setting) and then want to generalize to new subjects. While these developments point out the empirical utility of nonlinear ICA, they don't quite conclusively prove the utility of identifiability of nonlinear ICA; more developments on that topic would be warranted.
### Questions for future research
#### 7.2.1 Combining causal analysis with feature learning
In this paper, the methods are either finding some hidden factors or doing causal discovery. A very interesting topic would be to combine the two, so that the model learns hidden factors and causal connections _between_ the hidden factors. This is what some would call "causal representation learning". In the linear case, such methods can be found in Zhang and Hyvarinen (2009); Monti and Hyvarinen (2018). In the nonlinear case, this is a topic of great current interest in deep learning, see e.g. Lachapelle et al. (2022); Morioka and Hyvarinen (2023) for some developments.
#### 7.2.2 Dependent components; causal discovery with confounders
In fact, models with causal dependencies between the latent variables are a special case of models where the latent variables are not independent. A general framework for this case was proposed by Khemakhem et al. (2020), who developed an extension of nonlinear ICA, called Independently Modulated Component Analysis (IMCA), where the components are allowed to be dependent. The key idea is that the dependencies are assumed to be stationary, or independent of the auxiliary variable. Allowing such dependencies is likely to be useful in many contexts, and is another highly promising topic for future research.
Another potential application of the IMCA framework in causal discovery is to allow for the presence of confounding. As already mentioned, a confounder is a hidden variable that affects both dependent and independent variables, resulting in spurious associations in the causal graph. We can establish a relationship between IMCA and confounded structural equation models (SEM) since the dependencies of the disturbances can be considered to be induced by the unobserved confounders. The identifiability of IMCA implies that the causal direction of such an SEM is likewise identifiable. However, because it is based on independence tests, the estimation technique of NonSENS based on testing described above cannot be used here. Possibly, a non-constraint-based method, such as likelihood ratio measures (Hyvarinen and Smith, 2013; Monti et al., 2019), might still be pursued.
#### 7.2.3 Identifiability of intermediate layers
If the nonlinear function in nonlinear ICA is modelled by a neural network, we are estimating much more than only a single nonlinearity. In fact, the intermediate layers in neural networks are frequently used as useful features for a later classification task. They may even be preferable in some applications over the representations learned by the final layer (Alain and Bengio, 2018; Chen et al., 2020; Mikolov et al., 2013). An intriguing question arises: can the identifiability results of the representations learnt by nonlinear ICA be generalized to earlier layers? Khemakhem et al. (2020) showed that some such neural network architectures are, in fact, identifiable, using a form of induction to "propagate identifiability" forward through the network. Thus, a potential avenue of research is to prove that the intermediate layers preceding the final layer in a neural network can inherit the identifiability guarantees of the final layer. Extending such proofs to convolutional networks would be particularly interesting since they are frequently utilized in image learning and have a strong mathematical theory (Wiatowski and Bolcskei, 2017).
### Estimation algorithms
Finally, let us very briefly discuss what kind of estimation methods are available. Basically, we only need methods to estimate ICA, including nonlinear ICA, since the estimation of SEMs can be reduced to ICA. In machine learning, we typically distinguish between two parts of an estimation method: 1) an estimation principle (such as maximum likelihood), typically leading to an objective function, and 2) a computational algorithm, typically optimizing the objective function. For linear ICA, we proposed FastICA (Hyvarinen, 1999) which is widely used and was originally also used to solve the SEM estimation by Shimizu et al. (2006), although further methods for SEM were subsequently developed e.g. by Shimizu et al. (2011); Hyvarinen and Smith (2013).
For nonlinear ICA, we have made several proposals; for an in-depth exposition, see Hyvarinen et al. (2023). First, we proposed "self-supervised" methods (Hyvarinen and Morioka, 2016, 2017; Hyvarinen et al., 2019), a class of methods of great current interest in machine learning. Subsequently, we proposed maximum likelihood estimation, starting with models with additive noise (Khemakhem et al., 2020; Halva et al., 2021) that enable variational approximation. The case of maximum likelihood estimation in the basic noise-free model used in this paper initially looks easy, but presents serious computational problems, which we have largely solved in Gresele et al. (2020a). In any case, developing better algorithms is definitely an interesting topic for future research as well.
Once we have an estimating algorithm, the question of its finite-sample performance can be considered. For linear ICA, the asymptotic variance (statistical efficiency) has been analyzed by several authors, including Cardoso and Laheld (1996); Pham and Garrat (1997); Hyvarinen (1997); Tichavsky et al. (2006), and these results may be more or less directly applicable to linear SEM estimation as well. Some analysis of the robustness against outliers has also been performed, in both linear and nonlinear cases (Hyvarinen, 1997; Sasaki et al., 2020). For the nonlinear case, we are not aware of any analysis of statistical efficiency except in terms of simulations. These are clearly interesting points for future research.
## 8 Conclusion
We started this review from the linear, Gaussian factor analysis model which is known to be unidentifiable since the 1950's if not earlier. The identifiability problem was solved by independent component analysis, a _non_-Gaussian factor analysis model which is identifiable in the precise sense that it can recover components that actually created the data, as in the case of blind source separation. Linear ICA further enables estimation of linear structural equation models, thus leading to identifiable causal discovery. ICA can be made nonlinear, but this is not at all straightforward: new assumptions are needed for identifiability. Here, we considered mainly temporal dependencies and nonstationarity (the latter being taken in a very general sense). Restricting the nonlinear mixing is also likely to work but few successful models have been proposed so far. As in the linear case, we finally saw that nonlinear ICA enables identifiability and estimation of nonlinear SEM as well. |
2305.18855 | STT4SG-350: A Speech Corpus for All Swiss German Dialect Regions | We present STT4SG-350 (Speech-to-Text for Swiss German), a corpus of Swiss
German speech, annotated with Standard German text at the sentence level. The
data is collected using a web app in which the speakers are shown Standard
German sentences, which they translate to Swiss German and record. We make the
corpus publicly available. It contains 343 hours of speech from all dialect
regions and is the largest public speech corpus for Swiss German to date.
Application areas include automatic speech recognition (ASR), text-to-speech,
dialect identification, and speaker recognition. Dialect information, age
group, and gender of the 316 speakers are provided. Genders are equally
represented and the corpus includes speakers of all ages. Roughly the same
amount of speech is provided per dialect region, which makes the corpus ideally
suited for experiments with speech technology for different dialects. We
provide training, validation, and test splits of the data. The test set
consists of the same spoken sentences for each dialect region and allows a fair
evaluation of the quality of speech technologies in different dialects. We
train an ASR model on the training set and achieve an average BLEU score of
74.7 on the test set. The model beats the best published BLEU scores on 2 other
Swiss German ASR test sets, demonstrating the quality of the corpus. | Michel Plüss, Jan Deriu, Yanick Schraner, Claudio Paonessa, Julia Hartmann, Larissa Schmidt, Christian Scheller, Manuela Hürlimann, Tanja Samardžić, Manfred Vogel, Mark Cieliebak | 2023-05-30T08:49:38Z | http://arxiv.org/abs/2305.18855v1 | # STT4SG-350: A Speech Corpus for All Swiss German Dialect Regions
###### Abstract
We present STT4SG-350 (Speech-to-Text for Swiss German), a corpus of Swiss German speech, annotated with Standard German text at the sentence level. The data is collected using a web app in which the speakers are shown Standard German sentences, which they translate to Swiss German and record. We make the corpus publicly available. It contains 343 hours of speech from all dialect regions and is the largest public speech corpus for Swiss German to date. Application areas include automatic speech recognition (ASR), text-to-speech, dialect identification, and speaker recognition. Dialect information, age group, and gender of the 316 speakers are provided. Genders are equally represented and the corpus includes speakers of all ages. Roughly the same amount of speech is provided per dialect region, which makes the corpus ideally suited for experiments with speech technology for different dialects. We provide training, validation, and test splits of the data. The test set consists of the same spoken sentences for each dialect region and allows a fair evaluation of the quality of speech technologies in different dialects. We train an ASR model on the training set and achieve an average BLEU score of \(74.7\) on the test set. The model beats the best published BLEU scores on 2 other Swiss German ASR test sets, demonstrating the quality of the corpus.
## 1 Introduction
We present STT4SG-350, a corpus of Swiss German speech, annotated with Standard German text at the sentence level. The corpus represents all Swiss German dialect regions and contains 343 hours of speech.
Swiss German is a family of German dialects spoken by around 5 million people in Switzerland. It differs from Standard German regarding phonology, vocabulary, morphology, and syntax. There are significant differences among the Swiss German dialects as well, particularly regarding phonology and vocabulary. Swiss German is primarily a spoken language. It is also used in writing, but mainly in informal text messages. In most other contexts, including formal letters, laws, and newspapers, Standard German is used instead. One important reason for this is Swiss German's lack of a standardized orthography.
The diversity among dialects, exacerbated by the lack of a standardized orthography, leads to a large number of written variants for each word. This, together with the small amount of text resources compared to Standard German, makes automated processing of Swiss German text challenging.
STT4SG-350 is, to the best of our knowledge, the largest public speech corpus for Swiss German. While the primary use case is automatic speech recognition (ASR), it is also a useful resource for text-to-speech (TTS), dialect identification, and speaker recognition. By providing roughly the same amount of data per dialect region, irrespective of its population size, the corpus contributes to improving speech technology for underrepresented dialects. In addition, the test set, which contains the same spoken sentences in each dialect, allows a fair evaluation of the quality of speech technologies in different dialects. Furthermore, it contributes to more inclusive speech technology by keeping a balanced gender ratio and featuring speakers of all ages.
## 2 Related Work
The SDS-200 corpus [22] contains 200 hours of speech by around 4,000 speakers with Standard German transcripts. The recordings cover a large part of the Swiss German dialect landscape. The number of recordings per speaker follows a long-tail distribution. For example, the top 3 speakers account for 23% of recordings. The Swiss Parliaments Corpus or SPC (Pluss et al., 2021) contains 299 hours of speech in the Bernese dialect. The text is Standard German, taken from parliament minutes, and is not a fully accurate transcription. Text and audio are automatically aligned. The SwissDial corpus (Dogan-Schonberger et al., 2021) contains 26 hours of studio-quality recordings by 8 speakers, each speaking a different dialect, with both Standard German and Swiss German transcripts. The Radio Rottu Oberwallis corpus (Garner et al., 2014) contains 8 hours of speech transcribed in Swiss German, of which 2 are also transcribed in Standard German. The ArchiMob corpus (Samardzic et al., 2016) contains 69 hours of speech with Swiss German transcripts.
For Swiss German ASR, the desired output text language is Standard German for the vast majority of use cases. Tackling speech-to-text translation with an end-to-end approach is feasible as shown by Weiss et al. (2017). Applying a similar approach to Swiss German ASR, and therefore avoiding Swiss German text and its challenges altogether, has led to promising results in recent years, see (Pluss et al., 2023; Khosravani et al., 2021; Pluss et al., 2022, 2021).
Dogan-Schonberger et al. (2021) experiment with TTS for Swiss German. Their models achieve a 5-scale mean opinion score of 2.9 to 4.1. Importantly, their approach requires Swiss German input text.
## 3 Data Collection
Data for STT4SG-350 was collected in two phases: 1) the test set with 76 participants from December 2021 until March 2022, and 2) the train and validation sets with 240 participants from May until November 2022.
### Recording
Speech was recorded using a web app based on the code1 by Pluss et al. (2022). Recordings are made sentence by sentence. The app displays a Standard German sentence, which the participant is asked to translate to Swiss German and speak aloud. A screenshot of the recording functionality can be found in Appendix A. The goal of the translation step is to get a correct, natural-sounding Swiss German sentence in the participant's dialect. We display a popup with examples before the first recording to explain this to participants. We also display a short explanation below the sentence to be recorded. We manually validated the correctness of at least 10 randomly sampled recordings per participant at collection time. In contrast to Pluss et al. (2022), for phase 2, we recorded 44.1 kHz lossless FLAC audio rather than 32 kHz lossy MP3 audio. The recording quality depends on the microphones used by participants, which range from studio microphones to headsets and laptop microphones. Depending on the microphone, mouse clicks can be audible in recordings.
Footnote 1: MPL-2.0 license
### Dialect Regions
For this work, we divided the Swiss German dialect continuum into 7 dialect regions, listed in Table 1, based on the clustering method by Scherrer and Stoeckle (2016)2. The cluster analysis was carried out on 350 phonological, lexical, morphological, and syntactic phenomena. We slightly adjusted the resulting clusters to match the dialect regions commonly used in public discourse more closely. The goal of these adjustments was to make it more intuitive for participants to choose their dialect region. The borders are intentionally fuzzy to give participants the freedom to choose the region that fits their dialect best.
Footnote 2: Population statistics from [https://www.bfs.admin.ch](https://www.bfs.admin.ch)
### Sentence Selection
Sentences were randomly selected from Swiss newspapers and from parliament minutes of 2 Swiss parliaments. Sentence filtering for newspapers follows Pluss et al. (2022). The goal of the filtering is to limit sentence complexity to reduce errors in the translation task. For example, only sentences of 5 to 12 words are kept. The newspaper sentences cover a broad range of topics, including culture, finance, science, sports, and technology. They also cover content and named entities particularly relevant for Switzerland. Parliament sentences are not filtered. They bring additional diversity to the corpus with longer sentences on average and a distinct vocabulary. For the test set, 3,515 sentences were selected (67% newspapers, and 33% parliaments). To allow a fair comparison among the dialects, each sentence was recorded in each of the 7 dialects. For the training and validation data, 94% news and 6% parliament sentences were selected, and we dropped the requirement to record each sentence in all dialect regions to increase vocabulary and phrase diversity.
### Metadata
Participants self-reported the following metadata:
* The dialect region that best fits the participant's dialect.
* The zip code of the place where the participant grew up or went to school.
* Age group (< 19, 19-29, 30-39, 40-49, 50-59, 60-69, 70-79, 80-89, > 89)
* Gender (female, male, non-binary)
We manually checked the correspondence of reported metadata and recordings for each participant. Collecting the dialect provenance as a zip code allows us to investigate dialects and the performance of speech technologies for them at different granularity levels. Collecting age group and gender helps to make sure that speech technology is inclusive and works across different demographic groups.
### Recruitment
For the test set, all participants were recruited via the crowdsourcing platform TestingTime3. For the train set, half the participants were recruited via TestingTime, whereas the other half were recruited via universities, high schools, newspaper ads, personal contacts, and the crowdsourcing platform seniors@work4 (for details refer to Appendix F and 6). Only native Swiss German speakers able to correctly translate Standard German to Swiss German were recruited. The goal was to collect the same amount of recordings in each dialect region and we recruited accordingly. The number of recordings per participant was limited to 368 for the test set5 and 1,112 for the train data. Recruiting the 316 participants required a considerable effort, especially in the low-population regions GR and VS.
Footnote 3: [https://www.testingtime.com](https://www.testingtime.com)
Footnote 4: [https://www.seniorsatwork.ch](https://www.seniorsatwork.ch)
Footnote 5: Due to a lack of suitable participants in some dialect regions, 6 participants were allowed to contribute up to 722 recordings.
## 4 Corpus
The corpus is publicly available6 under the META-SHARE NonCommercial NoRedistribution license7. The distribution format and the included metadata is described in Appendix B. Potential risks are described in Appendix D. The handling of offensive content and personal data is discussed in Appendix E.
Footnote 6: [https://swissnlp.org/datasets/](https://swissnlp.org/datasets/)
Footnote 7: [http://www.meta-net.eu/meta-share/meta-share-licenses/META-SHARE2NonCommercial%20NoRedistribution-vX201.0.pdf](http://www.meta-net.eu/meta-share/meta-share-licenses/META-SHARE2NonCommercial%20NoRedistribution-vX201.0.pdf)
### Data Cleaning
**Filtering.** Recordings with a duration of less than 2 seconds were removed. Silent recordings were also removed. For the test set, we applied heuristics to flag incomplete sentences, which were removed after double-checking them. We only kept sentences with a recording in all dialect regions in the test set. In total, we filtered out 1.5% of recordings. **Validation.** We validated each speaker manually. For this, we randomly sampled 10 recordings from each speaker, and checked whether the dialect is correct, the recording is in Swiss German, the translation is correct, and whether the sound quality is high enough. All of the participants passed the manual check.
### Statistics
The corpus contains 343 hours of Swiss German speech in 247,527 separate recordings, each annotated with the Standard German text translation. The mean recording length is \(5.0\pm 1.5\) seconds. 217,687 unique sentences were recorded and the vocabulary size is 42,980. Speech recordings were provided by 316 different speakers, of which 51% identified as female and 49% as male. No speaker identified as non-binary. Figure 1 shows the distribution of the recordings over the age groups, as well as the gender distributions per age group. The age groups from the thirties to the sixties are well represented, while the twenties are overrepresented and the teens as well as seventies are underrepresented. The age groups eighties and above are not represented at all.

| **Region** | **Pop.** | **Hours** | **Rec.** | **Speakers** |
| --- | --- | --- | --- | --- |
| Basel (BS) | 0.4M | 47.5 | 34,169 | 44 |
| Bern (BE) | 1.2M | 48.7 | 35,683 | 46 |
| Grisons (GR) | 0.2M | 44.3 | 30,931 | 46 |
| Central (CS) | 0.8M | 49.1 | 36,402 | 43 |
| Eastern (ES) | 0.9M | 52.6 | 38,182 | 47 |
| Valais (VS) | 0.1M | 51.8 | 36,457 | 44 |
| Zurich (ZH) | 1.6M | 49.3 | 35,703 | 46 |

Table 1: Corpus statistics per dialect region. Population is an approximation and only includes German-speaking people.

Figure 1: Percentage of recordings by age group and gender
Table 1 shows the corpus statistics per dialect region. While the German-speaking population differs by a factor of up to 16 between regions, the number of recordings per region is a lot more balanced, differing by a factor of not more than 1.2.
### Splits
Table 2 shows the different corpus splits. We provide training, validation, and test splits. There is no speaker overlap between training, validation, and test. There are no common sentences between test and either training or validation. There is, however, an intersection of 835 sentences between training and validation. There are 2 different training splits. train_all contains all training data, 276 hours of speech. train_balanced is a subset of train_all with 239 hours of speech that is balanced in the number of recordings per dialect region. For GR, the region with the fewest recordings, the recordings of all speakers are included in train_balanced. For the other regions, we randomly chose speakers and added their recordings until the number of GR recordings was reached. train_balanced includes 33-35 hours of speech, 24,088-25,183 recordings, and 25-32 speakers per region.
Like train_balanced, the validation split, with 34 hours of speech, is balanced in the number of recordings per dialect region. We randomly chose 3 speakers per region with at least 1,000 recordings. The test set comprises 34 hours of speech. Importantly, the same 3,515 sentences were recorded in all 7 dialect regions to allow a fair comparison between different dialects. The test split contains at least 8 different speakers per region to provide adequate speaker diversity in each region. For this reason, the mean number of recordings per speaker is markedly lower than in the other splits.
## 5 Automatic Speech Recognition Baseline
We train a baseline model to demonstrate the use of the STT4SG-350 corpus for Swiss German ASR. We fine-tune XLS-R (1B)8 Babu et al. (2021) on the train_balanced split. XLS-R is a model based on wav2vec 2.0 Baevski et al. (2020) with 965 million parameters pretrained on 436K hours of unlabeled speech data covering more than 128 languages. Swiss German was not part of the training data. We provide the fine-tuning details and experimental setup in appendix C.
Footnote 8: Apache-2.0 license
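A sketch of the fine-tuning setup using the Hugging Face transformers library; the checkpoint name facebook/wav2vec2-xls-r-1b is the public XLS-R (1B) release, while the processor path, the prepared train_balanced/valid datasets, the data collator, and all training hyperparameters shown here are placeholders rather than the settings of Appendix C.

```python
from transformers import Trainer, TrainingArguments, Wav2Vec2ForCTC, Wav2Vec2Processor

# Processor (feature extractor + character tokenizer) built from the corpus vocabulary.
processor = Wav2Vec2Processor.from_pretrained("path/to/stt4sg-processor")  # assumed local path

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-1b",                   # XLS-R (1B) pretrained checkpoint
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_encoder()                      # keep the convolutional feature encoder frozen

training_args = TrainingArguments(
    output_dir="xls-r-1b-stt4sg-350",
    per_device_train_batch_size=8,                  # placeholder hyperparameters
    gradient_accumulation_steps=4,
    learning_rate=1e-4,
    num_train_epochs=10,
    fp16=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_balanced,                   # preprocessed train_balanced split (assumed)
    eval_dataset=valid,                             # preprocessed validation split (assumed)
    data_collator=data_collator,                    # CTC padding collator (not shown)
)
trainer.train()
```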
We report the results of our fine-tuned model on three publicly available Swiss German datasets and the STT4SG-350 validation and test sets in Table 3. The model achieves state-of-the-art results on the All Swiss German Dialects Test Set (ASGDTS) Pluss et al. (2021) and SDS-200 Pluss et al. (2022), and improves the best reported BLEU scores on the test sets by 43% and 9%, respectively. Our model is 6% behind the best reported BLEU score on the SPC test set Pluss et al. (2021). These results highlight the benefit of the STT4SG-350 dataset on test data from different domains.
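The WER and BLEU numbers in Table 3 can be computed with standard tooling; a short sketch using the jiwer and sacrebleu packages (our choice of libraries, not prescribed by the paper), given parallel lists of model transcripts and Standard German references.

```python
import jiwer
import sacrebleu

def score(hypotheses, references):
    """Corpus-level WER (in %) and BLEU for parallel lists of hypothesis/reference sentences."""
    wer = 100.0 * jiwer.wer(references, hypotheses)
    bleu = sacrebleu.corpus_bleu(hypotheses, [references]).score
    return wer, bleu

# Example: wer, bleu = score(model_outputs, standard_german_references)
```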
## 6 Conclusion
We have described STT4SG-350, which is, to the best of our knowledge, the largest public speech corpus for Swiss German with 343 hours of speech. Our ASR baseline model trained on the corpus achieves a BLEU score of 74.7 on the test set. In addition, it beats the best published BLEU scores
\begin{table}
\begin{tabular}{l|l l l|l} & train\_all (bal) & valid & test & full \\ \hline
**Hours** & 276 (239) & 34 & 34 & 343 \\
**Rec.** & 200K (173K) & 23K & 25K & 248K \\
**Unique sent.** & 192K (167K) & 23K & 4K & 218K \\
**Speakers** & 219 (192) & 21 & 76 & 316 \\
**Avg. Rec./speaker** & \(912\) (\(902\)) & \(1106\) & \(324\) & \(783\) \\ \end{tabular}
\end{table}
Table 2: Corpus statistics per split. For the train set, the balanced (bal) version is in parentheses.
\begin{table}
\begin{tabular}{l l l|l l}
**Dataset** & \multicolumn{2}{c|}{WER} & \multicolumn{2}{c}{BLEU} \\ & \multicolumn{1}{c|}{**validation**} & \multicolumn{1}{c|}{**test**} & \multicolumn{1}{c}{**validation**} & \multicolumn{1}{c}{**test**} \\ \hline ASGDTS & \(19.9\pm.1\) & \(20.7\pm.3\) & \(67.0\pm.2\) & \(66.0\pm.4\) \\ ASGDTS SOTA & \(38.7\) & - & \(41.9\) & \(46.0\) \\ \hline SDS-200 & \(18.4\pm.1\) & \(18.2\pm.1\) & \(69.9\pm.1\) & \(69.6\pm.1\) \\ SDS-200 SOTA & \(21.7\) & \(21.6\) & \(63.9\) & \(64.0\) \\ \hline SPC & - & \(30.2\pm.1\) & - & \(54.9\pm.2\) \\ SPC SOTA & - & \(23.7\) & - & \(60.7\) \\ \hline STT4SG-350 & \(13.6\pm.1\) & \(14.0\pm.1\) & \(75.0\pm.1\) & \(74.7\pm.1\) \\ \end{tabular}
\end{table}
Table 3: Performance of the XLS-R Wav2Vec 1B model fine-tuned on the STT4SG-350 train_balanced split. We report the mean and standard deviation over five different random seeds. ASGDTS: validation = public split, test = private split. We compare each dataset to the state-of-the-art, i.e., ASGDTS SOTA Arabsky et al. (2021), SDS-200 SOTA Pluss et al. (2022), and SPC SOTA Schraner et al. (2022).
on 2 other test sets, demonstrating the quality of the corpus.
STT4SG-350 is balanced across the 7 dialect regions, and the test set allows a fair comparison of ASR performance on different dialects. We intend to take advantage of these properties in future work and conduct in-depth experiments to explore differences in ASR quality between dialects. Subsequently, we want to find ways to improve performance for underrepresented dialects.
## Acknowledgements
This work was supported by the Swiss National Science Foundation within the project "End-to-End Low-Resource Speech Translation for Swiss German Dialects (E2E_SG)" [205121_200729/1].
## Limitations
The corpus and therefore also the ASR baseline model only cover read speech. We have not tested the model on spontaneous speech, but we expect it to perform significantly worse on this type of data.
Our data collection process for Swiss German speech with Standard German transcripts is designed to collect large amounts of data in a cost-efficient manner. We estimate costs to be 4 to 6 times lower compared to the transcription of existing recordings. However, there is a downside to our approach. Because it is based on a given Standard German sentence, it can lead to Swiss German speech that's closer to Standard German than the Swiss German encountered in everyday conversations. The severity of the shift towards Standard German depends on the individual speakers and their ability and effort to produce Swiss German representations that are close to how they would speak in everyday conversations.
While we made every effort to include as many different dialects as possible in the corpus, there are still strong dialects with a comparatively low German-speaking population that are insufficiently or not at all represented, e.g. some dialects from the canton of Fribourg. This is due to the huge dialect diversity in Switzerland.
The gender ratio is not balanced for some dialect regions in the test set, especially not for VS, where the test set is female-only because we did not succeed in recruiting any male speakers from this region during phase 1 of the data collection. However, preliminary experiments do not show a significant difference between genders in Swiss German ASR performance, so we do not expect this to lead to skewed results.
Our ASR baseline model and other models trained on the corpus may perform below average for children and people above seventy due to the lack of training data for these age groups.
## Ethical Considerations
Participants were specifically recruited to record Swiss German speech for this corpus. The purpose of the recordings was made clear at recruiting time: a training corpus for Swiss German ASR models. Participants were also informed at recruiting time that information about their dialect, age, and gender will be collected. Furthermore, to be able to participate, they had to read and accept our data privacy policy which further detailed the future use of collected data.
|
2308.03537 | Work extractability from energy eigenstates under optimized local
operations | We examine the relationship between the second law of thermodynamics and the
energy eigenstates of quantum many-body systems that undergo cyclic unitary
evolution. Using a numerically optimized control protocol, we analyze how the
work extractability is affected by the integrability of the system. Our
findings reveal that, in nonintegrable systems the number of work-extractable
energy eigenstates converges to zero, even when the local control operations
are optimized. In contrast, in integrable systems, there are exponentially many
eigenstates from which positive work can be extracted, regardless of the
locality of the control operations. We numerically demonstrate that such a
strikingly different behavior can be attributed to the number of athermal
energy eigenstates. Our results provide insights into the foundations of the
second law of thermodynamics in isolated quantum many-body systems, which are
expected to contribute to the development of quantum many-body heat engines. | Shotaro Z. Baba, Nobuyuki Yoshioka, Takahiro Sagawa | 2023-08-07T12:34:09Z | http://arxiv.org/abs/2308.03537v1 | # Work extractability from energy eigenstates under optimized local operations
###### Abstract
We examine the relationship between the second law of thermodynamics and the energy eigenstates of quantum many-body systems that undergo cyclic unitary evolution. Using a numerically optimized control protocol, we analyze how the work extractability is affected by the integrability of the system. Our findings reveal that, in nonintegrable systems the number of work-extractable energy eigenstates converges to zero, even when the local control operations are optimized. In contrast, in integrable systems, there are exponentially many eigenstates from which positive work can be extracted, regardless of the locality of the control operations. We numerically demonstrate that such a strikingly different behavior can be attributed to the number of athermal energy eigenstates. Our results provide insights into the foundations of the second law of thermodynamics in isolated quantum many-body systems, which are expected to contribute to the development of quantum many-body heat engines.
## I Introduction
The second law of thermodynamics in isolated quantum many-body systems has drawn significant attention in recent years, driven by a crucial question in statistical physics: how does macroscopic irreversibility originate from microscopically reversible dynamics [1; 2; 3; 4; 5; 6]? In the context of work extraction, in particular, the exploration of the relationship between the second law of thermodynamics and thermal pure quantum states, which are indistinguishable from the Gibbs ensemble at a macroscopic level, constitutes a fundamental problem in statistical mechanics [7; 8; 9; 10; 11; 12].
One of the most well-known expressions of the second law of thermodynamics is embodied in Planck's principle, which claims that it is impossible to extract work from the Gibbs state via adiabatic cycles. Correspondingly, it is understood to be unattainable to extract work from the canonical ensemble through any cyclic operations, which is called the passivity of the Gibbs ensemble [13; 14; 15]. In contrast, there does not exist any no-go principle that prevents work extraction from pure quantum states; the second law can be violated if one is able to perform arbitrary unitary operations with arbitrary precision. Note that this does not contradict the eigenstate thermalization hypothesis [16-25], since the latter only states the macroscopic indistinguishability from thermal equilibrium.
Despite the great amount of effort in existing works, our understanding of the relationship between the second law of thermodynamics and pure quantum many-body states is limited, in particular when all control operations are subject to locality constraints (see Fig. 1). A prior study, Ref. [7], investigated the work extractability from single energy eigenstates that undergo simple quench dynamics. Ref. [7] reveals that, in a nonintegrable system, it is impossible with quench dynamics to extract work from any energy eigenstate corresponding to a positive temperature, while one can extract work from an exponentially large number of eigenstates if both the initial and the quenched Hamiltonians are integrable. While this previous work shows that the integrability of the model yields a qualitative difference in the work extractability, the control operation is limited to a simple class of quench dynamics. In order to address more general situations, one needs to carefully examine the full degrees of freedom in the control operations.
In the present work, we analyze the work extractability of energy eigenstates subject to numerically optimized cyclic control operations. As we summarized in Table 1, for nonintegrable systems, the number of work
Figure 1: Graphical description of the object of this study: We consider the work extraction from energy eigenstates of the quantum many-body systems. We investigate the dependence of the extraction on the locality of the control operators and integrability of the system.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Control locality} & \multicolumn{2}{c}{Initial Hamiltonian \(\hat{H}(0)\)} \\ \cline{2-3} & Nonintegrable & Integrable \\ \hline Local & - & \(O(\exp(cL))\) \\ Non-local & \(O(\exp(cL))\) & \(O(\exp(cL))\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The size scaling of \(D_{\text{pos}}\), the count of work-extractable energy eigenstates within a fixed energy shell that corresponds to a positive temperature. The count \(D_{\text{pos}}\) converges to zero under large system size in nonintegrable systems if the control operation is local, while it grows exponentially in other cases.
extractable states corresponding to positive temperatures converges to zero even with the optimized protocol if the control operations are local, while we find that there are exponentially many work-extractable states when control operations are global. In sharp contrast, integrable systems allow work extraction from exponentially many energy eigenstates, whether the control operations are local or global. We further numerically demonstrate that such a difference in work extractability can be attributed to the distribution of the entanglement entropy (EE); integrable systems have exponentially many athermal states so that we may reduce the energy of the state without increasing the EE, which is a prohibited scenario in nonintegrable systems since almost all eigenstates are expected to be thermal.
The remainder of the paper is organized as follows. In Sec. II, we describe the setup and the algorithms to optimize the protocols extracting work. In Sec. III, we present our main numerical results. Finally, we give the conclusion and discussion in Sec. IV.
## II Setup
### Work extraction from energy eigenstates
We present the definition of work extractable eigenstates assuming cyclic control operations. Let \(\hat{H}\) be a Hamiltonian that satisfies \(\hat{H}|E_{\alpha}\rangle=E_{\alpha}|E_{\alpha}\rangle\) where the \(\alpha\)-th eigenstate with energy \(E_{\alpha}\) is denoted as \(|E_{\alpha}\rangle\). Under some unitary time evolution \(U(t)\), we define the work extraction from the \(\alpha\)-th eigenstate at time \(t\) as
\[W_{\alpha}(t): = \mathrm{Tr}\left[\hat{H}\rho_{\alpha}(0)\right]-\mathrm{Tr}\left[ \hat{H}\rho_{\alpha}(t)\right] \tag{1}\] \[= E_{\alpha}-\mathrm{Tr}\left[\hat{H}U(t)|E_{\alpha}\rangle\langle E _{\alpha}|U^{\dagger}(t)\right], \tag{2}\]
where we have denoted the time evolved eigenstate as \(\rho_{\alpha}(t):=U(t)\left|E_{\alpha}\right\rangle\left\langle E_{\alpha} \right|U^{\dagger}(t)\). In particular, we assume that the unitary \(U\) is discretized as
\[U(t)=\prod_{n=1}^{N_{t}}\exp(-i\delta t\hat{H}(n\delta t)), \tag{3}\]
where \(\hat{H}(t)\) is a time-dependent Hamiltonian that controls the dynamics of the system. Here, the time step is homogeneously given as \(\delta t\), and \(N_{t}=t/\delta t\) is the number of discrete steps. In practice, we consider an implementable set of local operations \(\mathcal{B}=\{\hat{O}_{i}\}\) and take their linear combination to constitute the time-dependent Hamiltonian as
\[\hat{H}(t)=\sum_{\hat{O}_{i}\in\mathcal{B}}\gamma_{i}(t)\hat{O}_{i}, \tag{4}\]
where \(\gamma_{i}(t)\) denotes the coefficient of the \(i\)-th operator at time \(t\).
We also introduce a metric to measure the work extractability. We indicate the number of eigenstates in some energy shell from which positive work is extracted as
\[D_{\mathrm{pos}}(t):=\left|\{|E_{\alpha}\rangle\,\mid\,W_{\alpha}(t)\geq \varepsilon L,\ E_{\alpha}\in\mathrm{shell}\}\right|, \tag{5}\]
where the threshold \(\varepsilon\) is introduced for the purpose of numerical stability.
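For the small chains considered below, Eqs. (1)-(5) can be evaluated directly by exact diagonalization; a dense-matrix sketch, where H is the target Hamiltonian and U the full control unitary (both numpy arrays), and the energy-shell bounds and threshold are passed in as parameters.

```python
import numpy as np

def work_and_dpos(H, U, L, shell=(-0.25, -0.10), eps=0.15):
    """W_alpha = E_alpha - <E_alpha| U^dag H U |E_alpha> for eigenstates in the energy shell,
    and D_pos = #{alpha : W_alpha >= eps * L}."""
    energies, vecs = np.linalg.eigh(H)                       # H |E_alpha> = E_alpha |E_alpha>
    in_shell = (energies / L >= shell[0]) & (energies / L <= shell[1])
    evolved = U @ vecs[:, in_shell]                          # U |E_alpha> for the shell states
    final_energies = np.einsum("ia,ij,ja->a", evolved.conj(), H, evolved).real
    work = energies[in_shell] - final_energies
    return work, int(np.count_nonzero(work >= eps * L))
```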
### Optimization of control operations
In order to quantify the performance of control protocols, we introduce a reward function. As an example of a smooth function that explicitly rewards \(W_{\alpha}>\varepsilon L\), we define the following:
\[r(t):=\sum_{E_{\alpha}\in\mathrm{shell}}\sigma_{a}(w_{\alpha}(t)- \varepsilon)+c(w_{\alpha}(t)-\delta)\theta(\delta-w_{\alpha}(t)), \tag{6}\]
where \(w_{\alpha}(t):=W_{\alpha}(t)/L\) is the work density with \(L\) being the system size, \(\sigma_{a}(x):=1/(1+\exp(-ax))\) is a sigmoid function, \(\theta(x)\) is the unit step function, and \(c,\varepsilon,\delta\) are hyperparameters that determine the behavior of the reward function. Concretely, \(a\) in the sigmoid function controls the width of its ascending segment, while \(c\) regulates the slope of the linear function so that work below a certain threshold \(\delta\) is penalized. In the following, we fix the hyperparameters as \(a=30,c=0.1,\varepsilon=0.15,\delta=0.3\). While we expect that the main result is not affected significantly by the choice of the reward function, we leave this for future work.
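The reward of Eq. (6) acts elementwise on the work densities; a direct sketch with the hyperparameter values quoted above.

```python
import numpy as np

def reward(w, a=30.0, c=0.1, eps=0.15, delta=0.3):
    """Eq. (6): sum over eigenstates of sigma_a(w - eps) + c (w - delta) theta(delta - w),
    where w is the array of work densities w_alpha = W_alpha / L."""
    sigmoid = 1.0 / (1.0 + np.exp(-a * (w - eps)))
    penalty = c * (w - delta) * (w < delta)        # active only below the threshold delta
    return float(np.sum(sigmoid + penalty))
```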
To reveal the work extractability under optimal control techniques, we consider two optimization methodologies: the gradient-based algorithm and the deep reinforcement learning (RL) algorithm. In the gradient-based algorithm we compute the gradient \(\partial r/\partial\gamma_{i}\) with a constraint such that the Frobenius norm of the control Hamiltonian is bounded as \(\|\hat{H}(t)\|_{2}\leq C\), where the norm upper bound is fixed as \(C=\sqrt{2Ld}\) with \(d=2^{L}\) being the full Hilbert space dimension. As an implementable operation set, we consider \(\mathcal{B}_{k}\) as a set of translationally invariant and spatial inversion-symmetric operators that act on at most \(k\) contiguous sites. On the other hand, the RL algorithm constructs a strategy to choose a unitary from a discrete set of unitaries \(\{U_{m}\}\) at each time step, so that we can maximize the discounted reward expectation value that is estimated using a deep neural network. In this work, \(\{U_{m}\}\) is generated from \(\mathcal{B}_{2}\). See Appendix B for details of the optimization methods.
## III Main results
In this study, we adopt as the target Hamiltonian the one-dimensional quantum Ising model under periodic
boundary condition:
\[\hat{H}=\sum_{l=1}^{L}\hat{\sigma}_{l}^{z}\hat{\sigma}_{l+1}^{z}+h\hat{\sigma}_{l} ^{z}+g\hat{\sigma}_{l}^{x}, \tag{7}\]
where \(\hat{\sigma}_{l}^{x,z}\) is the Pauli operator acting on the \(l\)-th site, \(h\) and \(g\) are the strength of the longitudinal and transverse magnetic fields, respectively. In the subsequent analysis, we exclusively consider \((h,g)=(0.9045,0.809)\) as the nonintegrable case and \((h,g)=(0,0.5)\) as the integrable case. Furthermore, we focus on the energy shell specified from energy density \(E/L\in[-0.25,-0.1]\), and we limit our discussion to the zero-momentum sector and the inversion-symmetric sector.
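The Hamiltonian of Eq. (7) can be built as a dense matrix for the system sizes used here; a sketch with periodic boundary conditions (the projection onto the zero-momentum, inversion-symmetric sector, which the paper additionally applies, is omitted).

```python
import numpy as np

I2 = np.eye(2)
SX = np.array([[0., 1.], [1., 0.]])
SZ = np.array([[1., 0.], [0., -1.]])

def site_op(op, site, L):
    """Embed a single-site operator at the given site of an L-site chain."""
    out = np.array([[1.0]])
    for l in range(L):
        out = np.kron(out, op if l == site else I2)
    return out

def ising_hamiltonian(L, h, g):
    """Eq. (7): sum_l [ sz_l sz_{l+1} + h sz_l + g sx_l ] with periodic boundaries."""
    H = np.zeros((2**L, 2**L))
    for l in range(L):
        H += site_op(SZ, l, L) @ site_op(SZ, (l + 1) % L, L)
        H += h * site_op(SZ, l, L) + g * site_op(SX, l, L)
    return H

H_nonintegrable = ising_hamiltonian(10, h=0.9045, g=0.809)   # nonintegrable point
H_integrable = ising_hamiltonian(10, h=0.0, g=0.5)           # integrable point
```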
Figure 3: The system-size scaling of \(D_{\text{pos}}\) generated by the control protocol optimized by the gradient-based method. (a)–(d) The dynamics of \(D_{\text{pos}}\) during the protocol for \(L=10,12,14,16\). The integrability of the initial/final Hamiltonians and the locality of the operation are (a) nonintegrable and local (\(k=4\)), (b) integrable and local (\(k=4\)), (c) nonintegrable and global (\(k=L/2\)), and (d) integrable and global (\(k=L/2\)). The absence of the line for \(L=16\) in panel (a) represents that there is no energy eigenstate from which positive work is extracted. The scaling of \(D_{\text{pos}}(t)\) at \(t=1\) is summarized in (e). The inset of panel (e) shows that the \(D_{\text{pos}}(t)\) increase exponentially with the system size except for the nonintegrable system under local control.
Figure 2: The time evolution of \(D_{\text{pos}}\) generated by control protocols optimized by gradient-based method, RL, and simple quench. The initial and final Hamiltonians are (a) nonintegrable and (b) integrable. The results by the gradient-based method with various operation set \(\mathcal{B}_{k}\) are shown by real lines, while the results from the RL-optimized protocol and simple quench dynamics are shown by blue dashed and orange dotted lines, respectively. Note that the RL algorithm performs optimization of the protocol corresponding to \(k=2\). The black dash-dotted lines represent the number of eigenstates in the given energy shell. The system size is \(L=12\). The threshold is taken as \(\varepsilon=0.15\), which is identical to the width of the energy shell (see Appendix A for the dependence on the choice of \(\varepsilon\)). The quench dynamics is performed under the Hamiltonian (7) with \((h,g)=(0,1.5)\), which is also employed in Ref. [7].
### System-size scaling of the optimized work extraction
Firstly, we show in Fig. 2 the time evolution of \(D_{\text{pos}}(t)\) under various control protocols. It is consistent with Ref. [7] that the simple quench dynamics in nonintegrable systems does not extract work at all. Meanwhile, as we can see from Fig. 2(a), the optimization of the control protocol allows us to extract work even in nonintegrable systems. It is noteworthy that \(D_{\text{pos}}(t)\) increases as we increase the locality \(k\) of the control protocol. This is somewhat expected behavior, since with larger \(k\) we have higher expressibility in the time evolution unitary. As long as the optimization method is successfully performed, we expect \(D_{\text{pos}}\) to grow with \(k\), which is indeed observed in our numerical results. In particular, with \(k=8\) we can extract work from the entire energy shell in \(L=12\).
While the gradient-based method and the RL method differ in the sense that the latter only performs discrete optimization, we find that the more elaborate procedure of the RL algorithm allows one to avoid local minima, so that the protocol optimized for \(k=2\) achieves a higher \(D_{\text{pos}}\) with the RL than with the gradient-based method (see Fig. 2(b)). This is a remarkable benefit of utilizing the RL algorithm, considering the fact that the expressibility of the control unitary is limited to a discrete operation set. Note that it is also in agreement with previous works that the RL method is expected to provide a powerful way to determine the quantum control for many-body systems [26; 27]. However, here we focus on small or medium-size systems to elaborate on the qualitative difference in the work extractability, and therefore mainly employ the gradient-based method, since it is numerically less demanding for such system sizes.
Next, we further investigate how the work extractability is affected by the integrability of the Hamiltonian and the locality of control Hamiltonian. As is shown in Fig. 3(a) and (b), the integrability strongly impacts the size scaling when the control operations are local. In the nonintegrable case, we find that \(D_{\text{pos}}\) becomes zero for \(L=16\), and further expect that this holds for larger system size \(L\) as we further elaborate in the next section. In contrast, under global control, \(D_{\text{pos}}\) increases along with the system size \(L\) regardless of the integrability, as shown in Fig. 3(c), (d). We summarize such behaviors in Fig. 3(e) by plotting the size scaling of \(D_{\text{pos}}\) at \(t=1\), which increases exponentially except for the nonintegrable system under local operation.
We remark that the qualitative difference originating from the integrability is exhibited not only under local control operations, but also under global operations as well. Namely, we observe that the control time \(t\) to achieve some fixed value of \(D_{\text{pos}}\) remains constant in integrable systems, while it takes longer in nonintegrable systems. It is an interesting open question to seek how the required time \(t\) scales with the system size.
Figure 4: The relationship between the work extractability and the EE. The vertical axis represents the half-chain EE \(S(\rho_{\alpha}(t))\), where the horizontal axis shows the energy density. In panel (a), we show the results for eigenstates with positive work extraction in the integrable case with local control (\(k=4\)). We observe that the values of EE are all increased. Other panels (b)-(e) show results for both positive and negative work extraction. Here each panel represents (b) nonintegrable case under local control (\(k=4\)), (c) integrable case under local control (\(k=4\)), (d) nonintegrable case under global control (\(k=L/2\)), and (e) integrable case under global control (\(k=L/2\)), respectively. All data in this figure correspond to the data at \(L=16\) and \(t=1\) in Fig. 3.
### Work extractability and athermal entanglement entropy
To understand the striking difference in the work extractability, we focus on the number of athermal states, namely the energy eigenstates whose EE is significantly lower than those of thermal pure quantum states. In nonintegrable systems, it is known that almost all eigenstates are thermal, i.e., the EE converges to that of the canonical ensemble in the thermodynamic limit [28, 29]. Since the thermal EE increases monotonically with energy if one focuses on the energy corresponding to a positive temperature, one must reduce the EE to extract work from the system. However, it has been pointed out that, in general it requires exponentially long time under local operation to decrease the EE of the state [30, 31], and thus there is no efficient way to extract work from any energy eigenstate. In contrast, such a property is not generally present in integrable models; there can be exponentially many eigenstates whose EE is lower than the value of thermal EE [29, 32]. This means that there is no principle that prohibits one from extracting work from the system even with local operations.
To examine our conjecture, we analyze the change in the EE before and after the control operation, \(\Delta S=S(\rho_{\alpha}(0))-S(\rho_{\alpha}(t))\), where \(S\) is the half-chain EE. In Fig. 4(a), we illustrate that all work-extractable energy eigenstates in the integrable system undergo an increase in the EE under local control (Fig. 4(c) also shows data for non-work-extractable states). In other words, we are allowed to increase the EE with the control unitary. This does not necessarily require an exponentially long control time, and is thus expected to be achievable.
We emphasize that such a scenario is not allowed in nonintegrable systems. Figure 4(b) shows that, in nonintegrable systems, the EEs of most eigenstates within the energy shell are distributed near the thermal EEs, and the fluctuation is suppressed to be exponentially small [33]. This means that, if one desires to extract work, one must reduce the EE of the state by either employing global control, as shown in Fig. 4(d) and (e), or exponentially long unitaries with local terms.
Figure 5 shows the distribution of work and entropy change before and after the control operation of \(t=1\). We can again see from Fig. 5(a) and (b) that the EE increases under local operation, and therefore work can be extracted from many eigenstates only in integrable systems. In contrast, Fig. 5(c) and (d) reveal the EE can decrease under global operations, and therefore work extraction can be realized even in nonintegrable systems.
We summarize that these findings are consistent with the proposed scenario that the qualitative difference in the scaling of athermal eigenstates is directly related to the work extractability in integrable and nonintegrable systems under local operations. Furthermore, these results also imply that we expect \(D_{\text{pos}}\) to be zero even for \(L>16\) in nonintegrable systems under local control, because the number of athermal eigenstates converges to zero. Meanwhile, this is not the case when one introduces demanding operations such as global operations or exponentially long circuits; we can reduce the EE so that it is possible to extract work.
## IV Conclusion
In this work, we have utilized numerically optimized quantum control protocols to analyze the work extractability from energy eigenstates of isolated quantum many-body systems. Under local control, we find that integrability is crucial to allow work extraction from exponentially many eigenstates. Conversely, under global control, such a qualitative difference is not observed when the evolution time is sufficiently long. By performing further analysis on the EE, we find a convincing argument that the work extractability is related to the number of athermal eigenstates. Namely, the large fluctuation of the EE from the thermal EE in integrable systems is crucial to allow work extraction, while such a mechanism is not present in nonintegrable systems.
We envision two intriguing future directions. First, it is interesting to explore what control time is required for positive work extraction. As pointed out in Sec. III.1,
Figure 5: The relationship between the EE and the extracted work of the energy eigenstates by the optimized protocols. The vertical and horizontal axes represent the changes of the EE and the extracted work density, respectively. Each panel shows the result for (a) nonintegrable and local, (b) integrable and local, (c) nonintegrable and global, and (d) integrable and global control protocol. The red markers represent the eigenstates counted as \(D_{\text{pos}}\). The histograms on the left of the panels only show the distribution of the eigenstates corresponding to \(D_{\text{pos}}\). All data in this figure correspond to the data at \(L=16\) and \(t=1\) in Fig. 3.
our fixed-duration analysis indicates that positive work extraction requires a longer time at larger system sizes in nonintegrable systems. A more detailed understanding of the scaling of the control time is an important open problem. For instance, one may perform numerical investigations for large-scale systems using variational methods such as tensor networks [34; 35; 36] or artificial neural networks [37; 38; 39; 40; 41]. Another interesting future direction is to study how generally our findings hold among various quantum many-body systems. We envision that the discrepancy between integrable and nonintegrable systems persists under more general local Hamiltonians, which can be naturally expected especially for translationally invariant models. While the current study has focused on a one-dimensional model, it is intriguing to explore higher-dimensional systems.
_Acknowledgments.--_ The authors wish to thank fruitful discussion with Toshihiro Yada. S. B. is supported by Materials education program for the future leaders in research, industry, and Technology (MERIT) of The University of Tokyo. N.Y. wishes to thank JST PRESTO No. JPMJPR2119 and JST Grant Number JPMJPF2221. T.S. is supported by JSPS KAKENHI Grant Number JP19H05796, JST CREST Grant Number JPMJCR20C1, Japan, and JST ERATO-FS Grant Number JPMJER204, Japan. N.Y. and T.S. are also supported by Institute of AI and Beyond of The University of Tokyo. This work is supported by IBM Quantum. The RL is performed on AI Bridging Cloud Infrastructure (ABCI) of National Institute of Advanced Industrial Science and Technology (AIST).
## Appendix A The threshold dependence of the system-size scaling
We discuss the relationship between the threshold \(\varepsilon\) in Eq. (5) and the finite-size scaling of \(D_{\text{pos}}\) that is defined in the main text as
\[D_{\text{pos}}:=\left|\{|E_{\alpha}\rangle\in\text{shell}|W_{\alpha}(t)\geq \varepsilon L\}\right|. \tag{10}\]
Note that we fix the parameters in the reward function \(a,c,\delta\) (6) as the ones used in the main text.
In Fig. 11, we confirm that the main result as summarized in Table 1 in the main text is robust under the variation of \(\varepsilon.\) While we observe that \(D_{\text{pos}}\) seems to saturate for \(\varepsilon=0.175\), we argue that this is an artifact due to the vanishing gradient in the reward function in the gradient-based method, rather than the physical phenomena itself. Under the current choice of the reward function, the gradient-based method is valid when \(\varepsilon\) does not exceed that of the energy shell width, and therefore we have taken \(\varepsilon=0.15\) in the main text.
## Appendix B Optimization algorithms
### Gradient-based algorithm
We explain the details of the gradient-based algorithm employed in our study. We will first discuss how to calculate the coefficients \(\{\gamma_{i}(t)\}\) in the time-dependent Hamiltonian at each time step. Next, we introduce the set of operators \(\mathcal{B}=\{\hat{O}_{i}\}\) that constitutes the time-dependent Hamiltonian.
#### b.1.1 Optimizing time-dependent Hamiltonian
Given the reward function defined as in Eq. (6) in the main text, we determine the coefficients \(\{\gamma_{i}(t)\}_{i}\) at each time step \(t\) such that the following optimization problem is solved:
\[\text{maximize}\quad\frac{dr}{dt}\qquad\text{subject to}\quad\|\hat{H}(t)\|_{2}\leq C. \tag{11}\]
Recall that in the gradient-based algorithm, the time-evolving Hamiltonian \(\hat{H}(t)\) (Eq. (4) in the main text) consists of a linear combination of Hermitian operators:
\[\hat{H}(t)=\sum_{\hat{O}_{i}\in\mathcal{B}}\gamma_{i}(t)\hat{O}_{i}, \tag{12}\]
where \(\mathcal{B}=\{\hat{O}_{i}\}_{i}\) denotes a set of Hermitian operators. We optimize the coefficients \(\gamma_{i}(t)\) to maximize \(dr/dt\), the derivative of the reward function (6), and obtain the optimized Hamiltonian at each time step.
In the following, we derive the explicit representation of the Karush-Kuhn-Tucker (KKT) conditions, the generalization of the Lagrange conditions to problems subject to inequality constraints, for the problem defined in Eqs. (11). First, consider the derivative of the reward function (6):
\[\frac{dr}{dt}=\sum_{\alpha}\frac{\partial r}{\partial w_{\alpha}}\frac{dw_{ \alpha}}{dt}. \tag{13}\]
The derivative of \(w_{\alpha}\) in Eq. (13) is calculated as
\[\frac{dw_{\alpha}}{dt}= L^{-1}\frac{d}{dt}\left[E_{\alpha}-\operatorname{Tr}\left[\hat{H}( \tau)\rho_{\alpha}(t)\right]\right]\] \[= -L^{-1}\operatorname{Tr}\left[\hat{H}(\tau)\frac{dU(t)}{dt} \left|E_{\alpha}\right\rangle\left\langle E_{\alpha}\right|U^{\dagger}(t)\right]\] \[-L^{-1}\operatorname{Tr}\left[\hat{H}(\tau)U(t)\left|E_{\alpha} \right\rangle\left\langle E_{\alpha}\right|\frac{dU^{\dagger}(t)}{dt}\right]\] \[= iL^{-1}\operatorname{Tr}\left[\hat{H}(\tau)[\hat{H}(t),\rho_{ \alpha}(t)]\right]\] \[= iL^{-1}\sum_{i}\gamma_{i}(t)\operatorname{Tr}\left[\hat{H}( \tau)[\hat{O}_{i},\rho_{\alpha}(t)]\right]\] \[= iL^{-1}\sum_{i}\gamma_{i}(t)\operatorname{Tr}\left[[\hat{H}( \tau),\hat{O}_{i}]\rho_{\alpha}(t)\right], \tag{14}\]
where \([A,B]=AB-BA\) is the commutator, and the substitution of the time-dependent Hamiltonian (12) yields the fifth line. By substituting Eq. (14) into Eq. (13), the derivative of the reward function reads
\[\frac{dr}{dt}=\sum_{i}\gamma_{i}(t)Y_{i}(t), \tag{15}\]
where we have defined
\[Y_{i}(t):=iL^{-1}\sum_{\alpha}\frac{\partial r}{\partial w_{\alpha}} \operatorname{Tr}\left[[\hat{H}(\tau),\hat{O}_{i}]\rho_{\alpha}(t)\right]. \tag{16}\]
Note that the partial derivative of \(r\) in Eq. (16) is calculated as
\[\frac{\partial r}{\partial w_{\alpha}} =\frac{\partial}{\partial w_{\alpha}}\left[\sigma_{a}(w_{\alpha} -\varepsilon)+c(w_{\alpha}-\delta)\theta(\delta-w_{\alpha})\right]\] \[=\begin{cases}a\sigma_{a}(w_{\alpha}-\varepsilon)\left\{1-\sigma _{a}(w_{\alpha}-\varepsilon)\right\}+c&(w_{\alpha}<\delta)\\ a\sigma_{a}(w_{\alpha}-\varepsilon)\left\{1-\sigma_{a}(w_{\alpha}- \varepsilon)\right\}&(w_{\alpha}\geq\delta),\end{cases} \tag{17}\]
and Eq. (16) can be calculated using the expectation values regarding \(\{\rho_{\alpha}(t)\}_{\alpha}\) at each time step.
Substituting the time-dependent Hamiltonian (12)
Figure 12: The abstract deep NN architecture used for the RL in the present work. At each time step \(t_{i}\), the deep NN takes the time-step index \(i\) as input. After intermediate computations by fully-connected layers, LSTM [42], and the dueling network [43], the deep NN outputs the estimate of the Q function, from which we determine the next action.
\begin{table}
\begin{tabular}{l r} \hline \hline Reward discount \(\eta\) & 0.997 \\ Minibatch size & 608 \\ Sequence length & 40 \\ Optimizer & Adam [44] \\ Optimizer setting & learning rate & \(10^{-4}\) \\ & \(\varepsilon\) & \(10^{-3}\) \\ & \(\beta\) & \((0.9,0.999)\) \\ Replay ratio & & 1 \\ Gradient norms clip & 80 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The hyperparameters used in the NNs. The agent performs updates on batches of (minibatch size \(\times\) sequence length) observations. Replay ratio means the effective number of times each experienced observation is being replayed for the training. See Ref. [45] and its previous non-LSTM version, Ref. [46], for the details of the hyperparameters. The other parameters follow the ones in Ref. [47].
into the constraint in problem (11) yields
\[C^{2} \geq\|\hat{H}(t)\|_{2}^{2}\] \[=\operatorname{Tr}\left[\hat{H}^{\dagger}(t)\hat{H}(t)\right]\] \[=\sum_{i,j}\gamma_{i}^{*}(t)\gamma_{j}(t)\operatorname{Tr}\left[ \hat{O}_{i}^{\dagger}\hat{O}_{j}\right]\] \[=Ld\sum_{i}|\gamma_{i}(t)|^{2}, \tag{12}\]
where we assume that \(\operatorname{Tr}[\hat{O}_{i}^{\dagger}\hat{O}_{j}]=dL\delta_{ij}\). As we later see in the next subsection, this assumption is valid for the set \(\mathcal{B}\) considered in this work.
We define the Lagrange function as
\[\mathcal{L}:=-\sum_{i}\gamma_{i}(t)Y_{i}(t)+\lambda\left(Ld\sum_{i}|\gamma_{i} (t)|^{2}-C^{2}\right). \tag{13}\]
The KKT condition for the optimization problem (11) is expressed as
\[\frac{\partial\mathcal{L}}{\partial\gamma_{i}}=-Y_{i}(t)+2\lambda Ld \gamma_{i}=0\text{ for any }i, \tag{14a}\] \[Ld\sum_{i}|\gamma_{i}(t)|^{2}-C^{2}\leq 0,\] (14b) \[\lambda\left(Ld\sum_{i}|\gamma_{i}(t)|^{2}-C^{2}\right)=0,\] (14c) \[\lambda\geq 0. \tag{14d}\]
Regarding the constraint, we consider the two cases: (i) \(\|\hat{H}(t)\|_{2}=C\) and (ii) \(\|\hat{H}(t)\|_{2}<C\). In the case (i), the condition (14) becomes
\[\gamma_{i}=\frac{Y_{i}(t)}{2\lambda Ld}\text{ for any }i, \tag{15a}\] \[Ld\sum_{i}|\gamma_{i}(t)|^{2}=C^{2},\] (15b) \[\lambda\geq 0. \tag{15c}\]
Substituting Eq. (15a) into Eq. (15b), we obtain
\[\lambda=\frac{\|Y(t)\|}{2C\sqrt{Ld}}, \tag{16}\]
where \(\|Y(t)\|=\sqrt{\sum_{i}|Y_{i}(t)|^{2}}\). By combining Eq. (15a) with Eq. (16), the coefficients \(\gamma\) are expressed as
\[\gamma_{i}(t)=\frac{CY_{i}(t)}{\sqrt{Ld}\|Y(t)\|}. \tag{17}\]
At each time step, \(\gamma_{i}(t)\) is calculated using the expectation values of \(\hat{H}(\tau)\) and \([\hat{H}(\tau),\hat{O}_{i}]\).
In the case (ii), \(\lambda\) must be zero because of Eq. (14c). By substituting \(\lambda=0\) into Eq. (14a), we obtain the condition: \(Y_{i}(t)=0\) for any \(i\). This equality holds when \(\rho_{\alpha}(t)\) commutes with \(\hat{H}(\tau)\) for any \(\alpha\), which results in the vanishing of the gradient of the reward function (see Eq. (10)). Note that it is also natural to expect that the case (ii) is essentially not relevant, since the optimization is done such that the gradient is maximized.
In our analysis, the initial states are the energy eigenstates of \(\hat{H}(0)=\hat{H}(\tau)\). Therefore, \(\rho_{\alpha}(0)\) commutes with \(\hat{H}(\tau)\) for any \(\alpha\), leading to the vanishing of the gradient of the reward function at the start of the protocol. To perform the gradient-based algorithm, we add a perturbation to the dynamics at the beginning of the simulation. Specifically, we perform a time evolution generated by \(\sum_{l}\hat{\sigma}_{l}^{x}\) for a small duration \(\delta t=0.001\).
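Putting the expression for \(Y_{i}(t)\) and the closed-form coefficients derived above together, one control step reduces to evaluating the commutators \([\hat{H}(\tau),\hat{O}_{i}]\) against the current states; a sketch, where ops is the list of dense matrices in \(\mathcal{B}_{k}\), states holds the current \(\rho_{\alpha}(t)\), and dr_dw the partial derivatives \(\partial r/\partial w_{\alpha}\), all assumed to be precomputed.

```python
import numpy as np

def control_coefficients(H_target, ops, states, dr_dw, L):
    """Optimal coefficients gamma_i(t) = C Y_i(t) / (sqrt(L d) ||Y(t)||) with C = sqrt(2 L d),
    where Y_i(t) = (i/L) sum_alpha (dr/dw_alpha) Tr( [H_target, O_i] rho_alpha(t) )."""
    d = H_target.shape[0]
    C = np.sqrt(2 * L * d)
    Y = np.empty(len(ops))
    for i, O in enumerate(ops):
        comm = H_target @ O - O @ H_target
        Y[i] = sum(g * (1j / L * np.trace(comm @ rho)).real
                   for g, rho in zip(dr_dw, states))
    norm = np.linalg.norm(Y)
    return C * Y / (np.sqrt(L * d) * norm) if norm > 1e-12 else np.zeros_like(Y)
```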
#### b.2.2 The operator set for gradient-based algorithm
In this section, we construct \(\mathcal{B}_{k}\), which is a translationally invariant, spatial inversion-symmetric and orthogonal operator set that act on at most \(k\) contiguous sites.
The first step is to construct a translationally invariant operator set \(\mathcal{A}_{k}\). Consider the \(L\)-qubit Pauli group \(\mathcal{P}_{L}:=\{\pm 1,\pm i\}\cdot\{\hat{\sigma}^{0},\hat{\sigma}^{x},\hat{\sigma}^{y},\hat{\sigma}^{z}\}^{\otimes L}\), where \(\hat{\sigma}^{a}\) (\(a=0,x,y,z\)) are the Pauli operators, with \(\hat{\sigma}^{0}\) the identity. For a given Pauli operator \(\hat{P}_{a}\in\mathcal{P}_{L}\) that acts on at most \(k\) contiguous sites, we take the linear combination of translated operators \(\hat{Q}_{a}:=\sum_{l=0}^{L-1}\hat{T}^{l}\hat{P}_{a}\hat{T}^{-l}\), where \(\hat{T}\) is the one-site translation operator, e.g., \(\hat{T}\hat{\sigma}_{l}^{x}\hat{T}^{-1}=\hat{\sigma}_{l+1}^{x}\). Then, from the set of operators \(\{\hat{Q}_{a}\}_{a}\), we choose the elements of \(\mathcal{A}_{k}\) so that there is no duplication; operators that differ only by a global phase are regarded as identical.
Next, we create an inversion-symmetric operator set from \(\mathcal{A}_{k}\), where \(\hat{R}\) denotes the spatial inversion operator. Namely, we remove inversion-asymmetric elements from \(\mathcal{A}_{k}\) and adopt \(\hat{Q}_{a}^{\prime}=\left(\hat{Q}_{a}+\hat{R}\hat{Q}_{a}\hat{R}\right)/\sqrt{2}\) as the elements of \(\mathcal{B}_{k}\), again in such a way that there is no redundancy.
Finally, we confirm that the elements in \(\mathcal{B}_{k}\) satisfy the orthogonality. Concretely, we straightforwardly obtain the following:
\[\operatorname{Tr}\left[\hat{Q}_{a}^{\dagger}\hat{Q}_{b}\right]=\operatorname{Tr}\left[\sum_{l=0}^{L-1}\hat{T}^{l}\hat{P}_{a}^{\dagger}\hat{T}^{-l}\sum_{l^{\prime}=0}^{L-1}\hat{T}^{l^{\prime}}\hat{P}_{b}\hat{T}^{-l^{\prime}}\right]=\sum_{l,l^{\prime}=0}^{L-1}\operatorname{Tr}\left[\hat{P}_{a}^{\dagger}\hat{T}^{-(l-l^{\prime})}\hat{P}_{b}\hat{T}^{l-l^{\prime}}\right]=\sum_{l,l^{\prime}=0}^{L-1}d\,\delta_{a,b}\,\delta_{l,l^{\prime}}=\begin{cases}Ld&(a=b)\\ 0&(a\neq b).\end{cases} \tag{18}\]
Here, we used the fact that we have chosen \(\hat{Q}_{a}\in\mathcal{A}_{k}\) such that there is no redundancy, which implies that \(\hat{P}_{a}\) does not coincide with any \(\hat{P}_{b\neq a}\) under any translation operation. Following similar calculation, we also confirm that the inversion-symmetrized elements \(\hat{Q}_{a}^{{}^{\prime}}\) satisfy the orthogonality and the norm \(\|\hat{Q}_{a}^{{}^{\prime}}\|_{2}=\sqrt{Ld}\). As a result,
we have verified the orthogonality of \(\mathcal{B}_{k}\) and the norm of elements \(\|\hat{Q}_{a}\|_{2}=\|\hat{Q}^{\prime}_{a}\|_{2}=\sqrt{Ld}\).
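A sketch of this construction of \(\mathcal{B}_{k}\) for small \(k\) and \(L\) with dense matrices; placing the Pauli-string labels at shifted positions replaces explicit conjugation by \(\hat{T}\) and \(\hat{R}\), and redundancy is removed by the orthogonality test, which is only practical at these small sizes (no re-normalization of the symmetrized elements is performed here).

```python
import numpy as np
from itertools import product

PAULI = {"0": np.eye(2, dtype=complex),
         "x": np.array([[0, 1], [1, 0]], dtype=complex),
         "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "z": np.array([[1, 0], [0, -1]], dtype=complex)}

def placed_string(labels, start, L):
    """Dense matrix of the Pauli string `labels` placed on sites start, start+1, ... (mod L)."""
    full = ["0"] * L
    for offset, s in enumerate(labels):
        full[(start + offset) % L] = s
    out = np.array([[1.0 + 0j]])
    for s in full:
        out = np.kron(out, PAULI[s])
    return out

def build_Bk(k, L):
    """Translation-invariant, inversion-symmetric, mutually orthogonal operators on <= k sites."""
    ops = []
    for labels in product("0xyz", repeat=k):
        if all(s == "0" for s in labels):
            continue                                                   # skip the identity string
        Q = sum(placed_string(labels, l, L) for l in range(L))         # Q_a = sum_l T^l P_a T^-l
        Qr = sum(placed_string(labels[::-1], l, L) for l in range(L))  # R Q_a R
        Qs = (Q + Qr) / np.sqrt(2)                                     # inversion symmetrization
        if all(abs(np.trace(Qs.conj().T @ R)) < 1e-8 for R in ops):
            ops.append(Qs)                                             # keep only non-redundant elements
    return ops
```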
### Deep reinforcement learning
When adopting deep RL, we construct the protocol with a unitary sequence, in which each element corresponds to the time evolution at each time step, chosen from a fixed set of unitaries. The elements of the fixed set of unitaries are generated by the following Hermitian operator set \(\{\hat{H}_{m}\}_{m}\):
\[\sum_{l}J\hat{\sigma}_{l}^{z}\hat{\sigma}_{l+1}^{z}+h_{\mathrm{I}}\hat{\sigma}_{l}^{z},\quad\sum_{l}J\hat{\sigma}_{l}^{z}\hat{\sigma}_{l+1}^{z}+h_{\mathrm{N}}\hat{\sigma}_{l}^{z}, \tag{15}\] \[\sum_{l}\hat{\sigma}_{l}^{x}\hat{\sigma}_{l+1}^{y}+\hat{\sigma}_{l}^{y}\hat{\sigma}_{l+1}^{x},\quad\sum_{l}\hat{\sigma}_{l}^{y}\hat{\sigma}_{l+1}^{z}+\hat{\sigma}_{l}^{z}\hat{\sigma}_{l+1}^{y},\] \[\sum_{l}g_{\mathrm{I}}\hat{\sigma}_{l}^{x},\quad\sum_{l}g_{\mathrm{N}}\hat{\sigma}_{l}^{x},\quad\sum_{l}\hat{\sigma}_{l}^{y},\]
where \((J,\ h_{\mathrm{I}},\ h_{\mathrm{N}},\ g_{\mathrm{I}},\ g_{\mathrm{N}})=(1,\ 0,\ 0.9045,\ 0.5,\ 0.809)\). Note that the norm of these terms satisfies the upper bound \(\sqrt{2Ld}\) adopted in the main text. These terms, which are linear combinations of elements in \(\mathcal{B}_{2}\), are also used in Refs. [26; 27]. In Sec. III.1, we set the time-step length to \(0.04\) and the number of time steps to \(600\).
Deep reinforcement learning, specifically deep Q-learning, utilizes a NN to approximate the following optimal action-value function Q [48; 49]:
\[Q^{*}(t_{i},m)=\max_{\pi}\mathbb{E}_{\pi}\left[r_{t_{i}}+\sum_{n=1}^{\infty} \eta^{n}r_{t_{i+n}}\middle|\hat{H}(t_{i})=\hat{H}_{m},\ \pi\right], \tag{16}\]
which denotes the maximum expected sum of rewards \(r_{t}\) discounted by \(\eta\) (\(0<\eta<1\)), achieved by a stochastic policy \(\pi\left(m|t_{i}\right)=\Pr\left(m|t_{i}\right)\) that selects actions according to a probability distribution. A powerful variant of Q-learning uses deep NNs to represent the action-value function, and is hence referred to as a deep RL algorithm [50; 51].
In this paper, we direct our focus toward a non-distributed implementation [47] of a deep RL algorithm termed R2D2 [45]. R2D2 is a form of deep Q-learning algorithm, and assumes that the agent can obtain partial information about the state of the environment. Figure 12 shows the overall picture. The details regarding the network structure are shown in Table 2 and Table 3.
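A rough PyTorch sketch of the network in Fig. 12 (time-step index in, Q-value estimates out), with a fully-connected embedding, an LSTM, and a dueling head; the layer widths and the one-hot encoding of the time step are our own assumptions, since the actual sizes are those of Table 2 rather than the values used here.

```python
import torch
import torch.nn as nn

class DuelingLSTMQNet(nn.Module):
    """Sketch of the Q-network of Fig. 12: time-step index -> FC layers -> LSTM -> dueling head."""
    def __init__(self, n_steps, n_actions, hidden=128):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(n_steps, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.value = nn.Linear(hidden, 1)              # state-value stream
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream (one entry per unitary U_m)

    def forward(self, step_onehot, hidden_state=None):
        # step_onehot: (batch, sequence, n_steps) one-hot encoding of the time-step index i
        x = self.embed(step_onehot)
        x, hidden_state = self.lstm(x, hidden_state)
        v, adv = self.value(x), self.advantage(x)
        q = v + adv - adv.mean(dim=-1, keepdim=True)   # dueling combination of the two streams
        return q, hidden_state
```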
|
2306.13288 | Transport equations and flows with one-sided Lipschitz velocity fields | We study first- and second-order linear transport equations, as well as ODE
and SDE flows, with velocity fields satisfying a one-sided Lipschitz condition.
Depending on the time direction, the flows are either compressive or expansive.
In the compressive regime, we characterize the stable continuous distributional
solutions of both the first and second-order nonconservative transport
equations as the unique viscosity solution. Our results in the expansive regime
complement the theory of Bouchut, James, and Mancini, and we provide a complete
theory for both the conservative and nonconservative equations in Lebesgue
spaces, as well as proving the existence, uniqueness, and stability of the
regular Lagrangian ODE flow. We also provide analogous results in this context
for second order equations and SDEs with degenerate noise coefficients that are
constant in the spatial variable. | Pierre-Louis Lions, Benjamin Seeger | 2023-06-23T04:26:03Z | http://arxiv.org/abs/2306.13288v1 | # Transport equations and flows with one-sided Lipschitz velocity fields
###### Abstract
We study first- and second-order linear transport equations, as well as ODE and SDE flows, with velocity fields satisfying a one-sided Lipschitz condition. Depending on the time direction, the flows are either compressive or expansive. In the compressive regime, we characterize the stable continuous distributional solutions of both the first and second-order nonconservative transport equations as the unique viscosity solution. Our results in the expansive regime complement the theory of Bouchut, James, and Mancini [23], and we provide a complete theory for both the conservative and nonconservative equations in Lebesgue spaces, as well as proving the existence, uniqueness, and stability of the regular Lagrangian ODE flow. We also provide analogous results in this context for second order equations and SDEs with degenerate noise coefficients that are constant in the spatial variable.
###### Contents
* 1 Introduction
* 1.1 Main results
* 1.1.1 The compressive regime
* 1.1.2 The expansive regime
* 1.1.3 SDEs and second order equations
* 1.2 Applications and further study
* 1.3 Notation
* 2 The ODE flow
* 2.1 The backward flow
* 2.2 The Jacobian for the backward flow
* 2.3 The forward flow as the right-inverse of the backward flow
* 2.4 Compressive stochastic flows
* 2.5 Small noise approximations
* 3 The compressive regime
* 3.1 The nonconservative equation
* 3.1.1 Representation formula
* 3.1.2 Viscosity solutions
* 3.1.3 (Non)equivalence of distributional and viscosity solutions
* 3.2 The conservative equation
* 3.2.1 Duality solutions
* 3.2.2 On the failure of renormalization
* 3.2.3 Equivalence of duality and distributional solutions
* 4 The expansive regime
* 4.1 The conservative equation
* 4.1.1 Representation formula
* 4.1.2 Vanishing viscosity approximation
* 4.2 The nonconservative equation
* 4.2.1 \(L^{p}\) and \(BV\) estimates
* 4.2.2 Duality solutions
* 4.2.3 Renormalization
* 4.3 The forward ODE flow
* 4.3.1 Properties of the right inverse
* 4.3.2 The regular Lagrange property
* 4.4 Characterizations
* 4.4.1 The nonconservative equation: sup and inf convolutions
* 4.4.2 The conservative equation: uniqueness of nonnegative solutions
* 4.4.3 Uniqueness of regular Lagrangian flows
* 4.5 Some remarks for second order equations
* 4.5.1 The expansive stochastic flow with constant noise coefficient
* 4.5.2 A priori estimates for the second-order nonconservative equation
* 4.5.3 Representation formula for the Fokker-Planck equation
## 1 Introduction
For a fixed, finite time horizon \(T>0\) and a velocity field \(b:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d}\), we study the linear transport equation
\[\partial_{t}u+b(t,x)\cdot\nabla u=0\quad\text{in }[0,T]\times\mathbb{R}^{d}, \tag{1.1}\]
along with the dual, continuity equation
\[\partial_{t}f+\operatorname{div}(b(t,x)f)=0\quad\text{in }[0,T]\times\mathbb{R}^{d}, \tag{1.2}\]
and the associated ordinary differential equation (ODE) flow
\[\partial_{t}\phi_{t,s}(x)=b(t,\phi_{t,s}(x)),\quad(s,t,x)\in[0,T]\times[0,T] \times\mathbb{R}^{d},\quad\phi_{s,s}=\operatorname{Id}. \tag{1.3}\]
The goal of the paper is to analyze the three problems, and the relations between them, for vector fields \(b\) satisfying the one-sided Lipschitz condition
\[\begin{cases}(b(t,x)-b(t,y))\cdot(x-y)\geq-C(t)|x-y|^{2}\quad\text{for a.e. }(t,x,y)\in[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{d}\\ \text{for some nonnegative }C\in L^{1}([0,T]).\end{cases} \tag{1.4}\]
When \(b\) is Lipschitz continuous in the space variable, the ODE flow (1.3) admits a unique global solution, and, through the method of characteristics, (1.1) and (1.2) are uniquely solved for any given smooth initial
or terminal data. Moreover, the flow is a diffeomorphism, and therefore the solution operators for either the initial value problem (IVP) or terminal value problem (TVP) for (1.1) and (1.2) are continuous on \(L^{p}_{\rm loc}\) for any \(p\in[1,\infty]\).
Under the assumption (1.4), the time direction plays a nontrivial role, and there is a fundamental difference between the solvability of the flow (1.3) forward versus backward in time. Indeed, \(b\) need not even be continuous, and (1.4) is equivalent to
\[\frac{\nabla b(t,\cdot)+\nabla b(t,\cdot)^{T}}{2}\geq-C(t)\operatorname{Id} \quad\text{ in the sense of distributions}.\]
In particular, the distribution \(\operatorname{div}b\) is a signed measure that is bounded from below, but not in general absolutely continuous with respect to Lebesgue measure. Thus, when \(t<s\), the flow (1.3) is expected to concentrate at sets of Lebesgue measure zero, while the formation of vacuum is witnessed for \(t>s\).
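A classical one-dimensional example (our illustration, not taken from the paper) makes this dichotomy concrete:

```latex
% Take b(t,x) = \operatorname{sgn}(x) in d = 1. Condition (1.4) holds with C \equiv 0 because
\[
  \bigl(\operatorname{sgn}(x)-\operatorname{sgn}(y)\bigr)(x-y) \;\ge\; 0
  \qquad \text{for all } x, y \in \mathbb{R},
\]
% while b is discontinuous and its divergence is the singular measure
\[
  \operatorname{div} b \;=\; 2\,\delta_{0}.
\]
% The forward flow \phi_{t,s}(x) = x + (t-s)\operatorname{sgn}(x), t > s, leaves the set
% (-(t-s), t-s) \setminus \{0\} empty (vacuum), whereas the backward flow \phi_{t,s} with t < s
% sends every point x with |x| \le s - t to the origin, so mass concentrates on the null set \{0\}.
```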
A general study of transport equations and ODEs with irregular velocity fields was initiated by DiPerna and the first author [37], who introduced the notion of renormalized solutions to prove the well-posedness for (1.1) and (1.2) and the almost-everywhere solvability of the flow (1.3) for \(b\) with Sobolev regularity. The DiPerna-Lions theory was extended to equations where only \(\operatorname{Sym}(\nabla b)\in L^{1}\), to Vlasov equations with \(BV_{\rm loc}\) velocity fields [18], and to two-dimensional problems with a Hamiltonian structure [19, 2, 3, 42, 1, 1]. Using deep results from geometric measure theory, the renormalization property was extended to the very general case where \(b\in BV_{\rm loc}\) and \(\operatorname{div}b\in L^{1}\) by Ambrosio [4], who also provided a new, measure-theoretic viewpoint on the relationship between uniqueness of nonnegative solutions of (1.2) and the unique solvability of the flow (1.3) through the idea of superposition. Further developments include equations with velocity fields having a particular structure allowing for less regularity [28, 47] and velocity fields belonging to \(SBD\) (i.e. \(\operatorname{Sym}(\nabla b)\) is a signed measure with no singular Cantor-like part) [12]. Fine regularity properties of DiPerna-Lions flows were established in [15, 33], and the study of so-called "nearly incompressible flows" [14] led to the resolution by Bianchini and Bonicatto [17] of Bressan's compactness conjecture [24, 25]; see also [44] for related results. For many more details and references, we refer the reader to the surveys [16, 6, 11, 5].
In the majority of these works, the divergence \(\operatorname{div}b\) is assumed to be bounded, or at least absolutely continuous with respect to Lebesgue measure. This is not the case in general for velocity fields satisfying (1.4), and so the equations (1.1) and (1.2) do not even have a sense as distributions, because the products \((\operatorname{div}b)u\) and \(bf\) are ill-defined for general \(u\in L^{1}_{\rm loc}\) or measures \(f\). The DiPerna-Lions theory does not, therefore, cover this situation. Moreover, the choice of an appropriate function space of solutions is very sensitive to whether the equations are posed as initial or terminal value problems.
The problems (1.1)-(1.3) for velocity fields with a one-sided Lipschitz condition have been approached with a variety of methods [20, 23, 27, 55, 56, 57, 30]. Our main purpose is to complement these works, and in particular the theory of Bouchut, James, and Mancini [23], by providing complete characterizations of the stable solutions to all three problems in both the compressive and expansive regimes. We also provide some results on the corresponding parabolic equations with a degenerate, second-order term, as well as the SDE analogue of (1.3) for both the velocity field \(b\) and \(-b\).
### Main results
We relegate a full description of the results, discussions, and examples to the body of the paper. Here, we briefly outline the different sections and the types of results proved within them, and we compare them to the existing literature.
#### 1.1.1 The compressive regime
In Section 2, we record properties of the backward Filippov flow for (1.3), as well as for its Jacobian \(J_{t,s}(x):=\det(\nabla\phi_{t,s}(x))\), which is well-defined in \(L^{\infty}\) for a.e. \(t\leq s\) and \(x\in\mathbb{R}^{d}\). We employ measure-theoretic arguments to make sense of the right-inverse of the flow in an almost-everywhere sense, as a preliminary step to understanding the forward, regular Lagrangian flow, and prove several properties, the most important of which is its almost-everywhere continuity.
In Section 3, we turn to the study of the nonconservative equation1
Footnote 1: For a consistent presentation throughout the paper, and in order to emphasize the dual relationship between the two equations, the transport equation (1.1) will always be posed as a terminal value problem, and the continuity equation (1.2) as an initial value problem. The compressive and expansive regimes will be distinguished by the choice of sign in front of the velocity field \(b\).
\[\partial_{t}u-b(t,x)\cdot\nabla u=0\quad\text{in }(0,T)\times\mathbb{R}^{d}, \quad u(T,\cdot)=u_{T}, \tag{1.5}\]
for which the uniqueness of continuous distributional solutions fails in general. We introduce a new PDE characterization of the "good" (stable) solution of (1.5) as the unique viscosity solution, in the sense of Crandall, Ishii, and the first author [32]. This is done by proving a comparison principle for sub and supersolutions. The viscosity solution characterization coincides with the selection of "good" solutions by other authors in particular settings [20, 23, 55, 56, 57, 30], allows for robust stability statements, and, moreover, generalizes to the setting of degenerate parabolic problems (see the discussion below).
The "usual" viscosity solution theory must be modified due to the lack of global continuity of \(b\). In view of the evolution nature of the equations, the \(L^{1}\)-dependence in time does not present a problem, and the equations can be treated with the methods of [43, 51, 53, 54]. To deal with the discontinuity of \(b\) in space, sub and supersolutions must be defined with appropriate semicontinuous envelopes of \(b\) in the space variable. The direction of the one-sided Lipschitz assumption (1.4) accounts for the beneficial inequalities in the proof of the comparison principle.
We then introduce further conditions on the velocity field \(b\) and terminal data \(u_{T}\) that ensure uniqueness of arbitrary continuous distributional solutions. In particular, the interplay between the regularity of \(b\) and \(u_{T}\) plays an important role: if \(b\in C^{\alpha}\) and \(u_{T}\in C^{\beta}\), then distributional solutions are unique if \(\alpha+\beta>1\), while uniqueness may fail in general if \(\alpha+\beta\leq 1\), as we show by example.
The latter half of Section 3 deals with the study of the dual problem to (1.5), namely
\[\partial_{t}f-\operatorname{div}(b(t,x)f)=0\quad\text{in }(0,T)\times\mathbb{R}^{d} \quad f(0,\cdot)=f_{0}. \tag{1.6}\]
Even if \(f_{0}\in L^{1}_{\text{loc}}\), the concentrative nature of the flow causes the measure \(f(t,\cdot)\) to develop a singular part, and therefore we are led to seek measure-valued solutions. This prevents the duality solution of (1.6) from being understood in the distributional sense, due to the lack of continuity of \(b\). Nevertheless, we prove that, if \(b\) is continuous, or if it happens that \(f(t,\cdot)\) is absolutely continuous with respect to Lebesgue measure on the time interval \([0,T]\), then the notions of duality and distributional solutions are equivalent.
An important feature of the continuity equation (1.6) is the failure of renormalization; that is, if \(f\) is a duality solution, the measure \(|f|\) may fail to be a distributional solution, and may even violate conservation of mass. This is in contrast with the DiPerna-Lions theory, and is a direct consequence of the compressive nature of the backward flow, which can lead to cancellation of the positive and negative parts of \(f\). A related phenomenon is the nonuniqueness of distributional solutions of the continuity equation (1.2) with the reverse sign (see below).
#### 1.1.2 The expansive regime
In Section 4, we reverse the sign on the velocity field, and study the corresponding problems
\[\partial_{t}u+b(t,x)\cdot\nabla u=0\quad\text{in }(0,T)\times\mathbb{R}^{d}, \quad u(T,\cdot)=u_{T} \tag{1.7}\]
and
\[\partial_{t}f+\operatorname{div}(b(t,x)f)=0\quad\text{in }(0,T)\times\mathbb{R}^{d}, \quad f(0,\cdot)=f_{0}. \tag{1.8}\]
In view of the lower bound on the divergence of \(b\), we are motivated to seek an \(L^{p}\)-based theory for both equations, based on a priori estimates, or equivalently, on the fact that the characteristic flow (the forward ODE (1.3)) does not concentrate on sets of measure zero.
The initial value problem for the continuity equation (1.8) was studied in [20, 23], where a large part of the analysis is based on the fact that locally integrable distributional solutions are _not_ unique in general2.
The same setting is studied in [27], where the existence and uniqueness of the forward Filippov flow for (1.3) is established for a.e. \(x\in\mathbb{R}^{d}\).
In the first part of Section 4, we identify a unique "good" distributional solution, and prove that the resulting solution operator is continuous on \(L^{p}_{\mathrm{loc}}\) for all \(p\in[1,\infty]\), and stable with respect to regularizations. This coincides with the notion of reversible solution in [20, 23].
We then obtain strong stability results for the Bouchut-James-Mancini duality solutions of the nonconservative problem (1.7) in all \(L^{p}\)-spaces, which allow us to prove the renormalization property. Moreover, we introduce a PDE characterization of this duality solution in terms of regularization by \(\operatorname{ess\,inf}\)- and \(\operatorname{ess\,sup}\)-convolution. An important ingredient in establishing this characterization is the propagation of almost-everywhere continuity, which, in turn, follows from the renormalization property and the almost-everywhere continuity of the forward flow proved in Section 2.
As a consequence of this new characterization, we give a PDE-based proof of the fact that _nonnegative_ distributional \(L^{p}\)-solutions of (1.8) are unique, which was established in [27] using the superposition principle. This result, along with the renormalization property for (1.7), allows us to establish the existence, uniqueness, and stability of the forward regular Lagrangian flow for the ODE (1.3) identified in [27]. As a byproduct, this also provides a full characterization of the Bouchut-James-Mancini notion of "good" (reversible) solution as the pushforward of \(f_{0}\) by the forward flow. Moreover, a distributional solution \(f\) is a reversible solution if and only if \(|f|\) is also a distributional solution (cf. [23, Proposition 3.12], which operates under the criterion that \(f\) be a so-called "Jacobian" solution).
#### 1.1.3 SDEs and second order equations
This paper also contains various results regarding second order versions of (1.1) and (1.2), as well as stochastic differential equation (SDE) flows. SDEs and degenerate second-order Fokker-Planck equations have been studied from many perspectives, using both the DiPerna-Lions theory and adaptations of the superposition principle, by many authors, including Le Bris and Lions [48], Figalli [38], Trevisan [59], and Champagnat and Jabin [29]; see also the book [49]. Just as in the first-order setting, the fact that the measure \(\operatorname{div}b\) may contain a singular part prevents the application of these theories to the present situation.
In the compressive regime, we extend the viscosity solution theory of Section 3 to the second order equation
\[-\partial_{t}u+b(t,x)\cdot\nabla u-\operatorname{tr}[a(t,x)\nabla^{2}u]=0 \quad\text{in }(0,T)\times\mathbb{R}^{d},\quad u(T,\cdot)=u_{T}, \tag{1.9}\]
where \(b\) satisfies the one-sided Lipschitz condition (1.4) and \(a\) is a regular, but possibly degenerate, symmetric matrix. This equation, as well as the dual problem
\[\partial_{t}f-\operatorname{div}(b(t,x)f)-\nabla^{2}\cdot(a(t,x)f)=0\quad \text{in }(0,T)\times\mathbb{R}^{d},\quad f(0,\cdot)=f_{0}, \tag{1.10}\]
can be related to the SDE
\[d_{t}\Phi_{t,s}(x)=-b(t,\Phi_{t,s}(x))dt+\sigma(t,\Phi_{t,s}(x))dW_{t},\quad t >s,\quad\Phi_{s,s}(x)=x, \tag{1.11}\]
which is the SDE analogue of the backward flow for (1.3). Here \(W\) is a given Brownian motion and \(a=\frac{1}{2}\sigma\sigma^{T}\). We establish the existence and uniqueness, for every \(x\in\mathbb{R}^{d}\), of a strong solution in the Filippov sense, and we show that, with probability one, \(\Phi_{t,s}\) is Hölder continuous for any exponent less than \(1\).
The situation is more complicated in the expansive regime, namely, for the equations
\[-\partial_{t}u-b(t,x)\cdot\nabla u-\operatorname{tr}[a(t,x)\nabla^{2}u]=0 \quad\text{in }(0,T)\times\mathbb{R}^{d},\quad u(T,\cdot)=u_{T} \tag{1.12}\]
and
\[\partial_{t}f+\operatorname{div}(b(t,x)f)-\nabla^{2}\cdot(a(t,x)f)=0\quad \text{in }(0,T)\times\mathbb{R}^{d},\quad f(0,\cdot)=f_{0}. \tag{1.13}\]
In the first-order setting, the characterization of the "good" distributional solution of the continuity equation (1.8) relies on the Lipschitz continuity of the backward ODE flow. Adapting similar methods for the second order equation (1.13) involves establishing Lipschitz continuity of a stochastic flow like (1.11) with certain time-reversed coefficients (see (4.30) below). While flows of the form (1.11) are Hölder continuous for any exponent less than \(1\), it is an open question whether such a flow is Lipschitz continuous with probability one. We relegate a general study of (1.12) and (1.13), and of the stochastic regular Lagrangian flow for
\[d_{t}\Phi_{t,s}(x)=b(t,\Phi_{t,s}(x))dt+\sigma(t,\Phi_{t,s}(x))dW_{t},\quad t>s, \quad\Phi_{s,s}(x)=x, \tag{1.14}\]
to future work. The exception3 is when \(\sigma\) is constant in the \(\mathbb{R}^{d}\)-variable. In this case, we prove that a suitable stochastic flow of the form (1.11) can be inverted, leading, as in the deterministic case, to the existence and uniqueness of a strong solution to (1.14) for a.e. \(x\in\mathbb{R}^{d}\), and a corresponding solution theory for the PDEs (1.12) and (1.13).
Footnote 3: Another case of interest is when the diffusion matrix \(a\) is nondegenerate in which case very general results can be obtained even for locally bounded \(b\); see [38].
### Applications and further study
While interesting in their own right, linear transport equations and ODEs with nonregular velocity fields arise naturally in several equations in fluid dynamics, in which the velocity fields depend nonlinearly on various other physical quantities that are coupled with the transported quantity. Since these equations must be posed a priori in a weak sense, this leads to velocity fields with limited regularity. The DiPerna-Lions and Ambrosio theories have been successfully applied to a number of such situations; see [7, 8, 9, 13, 34, 45, 50, 62]. The one-dimensional Bouchut-James theory of reversible solutions for transport equations with semi-Lipschitz velocity fields has been successfully applied in applications to conservation laws and pressureless gasses; see [21, 22, 40, 41].
Nonlinear transport equations also arise in certain models for large population dynamics, specifically mean field games (MFG). In [46], the first author and Lasry introduced a forward-backward system of PDEs modeling a large population of agents in a state of Nash equilibrium. The evolution of the density \(f\) of players is described by a continuity equation (1.8) (or Fokker-Planck equation (1.13)), where the velocity field \(b\) is given by
\[b(t,x)=-\nabla_{p}H(t,x,\nabla u(t,x)). \tag{1.15}\]
Here, \(H\) is a convex Hamiltonian, and \(u\) is the solution of the terminal value problem
\[-\partial_{t}u-\operatorname{tr}[a(t,x)\nabla^{2}u]+H(t,x,\nabla u(t,x))=F[f(t,\cdot)]\quad\text{in }(0,T)\times\mathbb{R}^{d},\quad u(T,\cdot)=G[f(T,\cdot)], \tag{1.16}\]
which is a Hamilton-Jacobi-Bellman equation encoding the optimization problem for a typical agent, and whose influence by the population of agents is described by the coupling functions \(F\) and \(G\). The velocity field (1.15) is the consensus optimal feedback policy of the population of agents at a Nash equilibrium.
When \(a\) is degenerate, or even zero, the function \(u\) has limited regularity, and is no better than semiconcave in the spatial variable in general. Therefore, even if \(H\) is smooth, the velocity field (1.15) may satisfy at most
\[b\in BV_{\text{loc}}\quad\text{and}\quad(\operatorname{div}b)_{-}\in L^{ \infty}. \tag{1.17}\]
This falls just outside the DiPerna-Lions-Ambrosio regime, since the measure \((\operatorname{div}b)_{+}\) may still fail to be absolutely continuous in general. In fact, the well-posedness of a suitable notion of solution for the transport and ODE problems under the general assumptions (1.17) remains an open problem.
Many simple but useful MFG models involve a linear-quadratic Hamiltonian of the form
\[H(t,x,p)=A(t,x)|p|^{2}+B(t,x)\cdot p+C(t,x)\]
for smooth, real-valued \(A,B,C\) with \(A>0\). In this case, it is easy to see that (1.15) satisfies the half-Lipschitz condition (1.4). This situation was studied by Cardaliaguet and Souganidis [27] for first-order, stochastic mean field games systems with common noise. In particular, it is proved there that the uniqueness of probability density solutions of (1.7) gives rise, through the superposition principle, to the uniqueness of optimal trajectories for the probabilistic formulation of the MFG problem, and, moreover, the solution of the stochastic forward-backward system can be used to construct approximate Nash equilibria for the \(N\)-player
game. Our analysis for the Fokker-Planck equation (1.13) may therefore be expected to yield similar results for stochastic MFG systems with common noise and degenerate, spatially homogeneous, idiosyncratic noise, a special case of the equations considered by Cardaliaguet, Souganidis, and the second author in [26].
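To indicate why the half-Lipschitz condition holds for (1.15) in this linear-quadratic case, here is a sketch under additional simplifying assumptions, introduced only for this illustration: suppose that \(u(t,\cdot)\) is semiconcave with \(\nabla^{2}u(t,\cdot)\leq K\operatorname{Id}\), so that \((\nabla u(x)-\nabla u(y))\cdot(x-y)\leq K|x-y|^{2}\), that \(\nabla u\) is bounded, and that \(A\) and \(B\) are Lipschitz in \(x\). Suppressing the time variable, \(b=-\nabla_{p}H(\cdot,\nabla u)=-2A\nabla u-B\), and so

\[\big(b(x)-b(y)\big)\cdot(x-y)=-2A(x)\big(\nabla u(x)-\nabla u(y)\big)\cdot(x-y)-2\big(A(x)-A(y)\big)\nabla u(y)\cdot(x-y)-\big(B(x)-B(y)\big)\cdot(x-y)\geq-\big(2K\|A\|_{\infty}+2[A]_{\operatorname{Lip}}\|\nabla u\|_{\infty}+[B]_{\operatorname{Lip}}\big)|x-y|^{2},\]

which is (1.4) with a constant determined by the semiconcavity of \(u\) and the regularity of \(A\) and \(B\); here \(K\), \([A]_{\operatorname{Lip}}\), and \([B]_{\operatorname{Lip}}\) are notation used only in this computation.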
The second application of nonlinear transport equations in mean field games involves the master equation for an MFG with a finite state space. These equations generally take the form
\[\partial_{t}u+b(t,x,u)\cdot\nabla u=c(t,x,u)\quad\text{in }(0,T)\times\mathbb{R}^{d}, \tag{1.18}\]
where \(u\), \(b\), and \(c\) all take values in \(\mathbb{R}^{d}\); coordinate-by-coordinate, (1.18) is written as
\[\partial_{t}u^{i}+b^{j}(t,x,u)\partial_{x_{j}}u^{i}=c^{i}(t,x,u),\quad i=1,2, \ldots,d.\]
Therefore, (1.18) is a nonconservative hyperbolic system, whose well-posedness is a difficult question in general; note that, when \(d=1\), (1.18) becomes a scalar conservation law.
We do not discuss (1.18) here, but, in a forthcoming paper [52], we study a particular regime of equations taking the form (1.18), using a new theory for linear transport equations with velocity fields \(b\) that are increasing _coordinate by coordinate_, that is, \(\partial_{x_{j}}b^{i}\geq 0\) for \(i\neq j\).
The extension to infinite dimensions, of both the linear problems (1.1)-(1.2), as well as the nonlinear equation (1.18), remains an interesting question, with numerous applications, including the study of mean field game master equations on the Hilbert space of square-integrable random variables. We aim to study these situations in future work.
### Notation
Given a function space \(X(\mathbb{R}^{d})\), or \(X(\Omega)\) for an appropriate subdomain of \(\mathbb{R}^{d}\), \(X_{\text{loc}}\) denotes the space of functions (or distributions) \(f\) such that \(\phi f\in X\) for all \(\phi\in C_{c}^{\infty}(\mathbb{R}^{d})\). If \(X\) is a normed space, the same is not necessarily true for \(X_{\text{loc}}\), but it inherits the topology of local \(X\)-convergence. For example, \(\lim_{n\to\infty}f_{n}=f\) in \(L^{p}_{\text{loc}}(\mathbb{R}^{d})\) means that \(\lim_{n\to\infty}\left\|f_{n}-f\right\|_{L^{p}(B_{R})}=0\) for all \(R>0\). We denote by \(L^{p}_{+}([0,T])\) the subset of \(L^{p}([0,T])\) consisting of nonnegative functions.
Unless otherwise specified, Banach or Fréchet spaces of functions are endowed with the strong topology. For a function space \(X\), the subscripts in \(X_{\mathrm{w}}\) and \(X_{\mathrm{w}\text{-}\star}\) indicate the weak (resp. weak-\(\star\)) topology.
For \(1\leq p<\infty\), \(\mathcal{P}_{p}\) is the space of probability measures \(\mu\), with \(\int|x|^{p}\mu(dx)<\infty\), which becomes a complete metric space for the \(p\)-Wasserstein distance \(\mathcal{W}_{p}\).
The transpose of a matrix \(\sigma\) is denoted by \(\sigma^{T}\), and, if \(\sigma\) is a square matrix, its symmetric part is denoted by \(\operatorname{Sym}(\sigma):=\frac{1}{2}(\sigma+\sigma^{T})\). The symbol Id stands for either the identity map or the identity matrix, the precise meaning being clear from context.
## 2 The ODE flow
This section is focused on the solvability and properties of the flow associated to a velocity field \(b\) satisfying4
Footnote 4: The linear growth assumption is a standard way to ensure that the a priori estimates for solutions do not blow up. Otherwise, the results of the paper would need a corresponding local theory, as for example in [10].
\[\begin{cases}\text{for some }C_{0},C_{1}\in L^{1}_{+}([0,T])\text{ and for all }t\in[0,T]\text{ and }x,y\in\mathbb{R}^{d},\\ |b(t,x)|\leq C_{0}(t)(1+|x|)\quad\text{and}\\ (b(t,x)-b(t,y))\cdot(x-y)\geq-C_{1}(t)|x-y|^{2}.\end{cases} \tag{2.1}\]
Because \(b(t,\cdot)\) is not necessarily continuous, the ODE must be interpreted in the Filippov sense [39], that is, abusing notation, we denote by \(b(t,x)\) the convex hull of all limit points of \(b(t,y)\) as \(y\to x\). For \(s\in[0,T]\), we seek absolutely continuous solutions \(t\mapsto\phi_{t,s}(x)\) of the problem
\[\begin{cases}\partial_{t}\phi_{t,s}(x)\in b(t,\phi_{t,s}(x)),\quad t\in[0,T], \\ \phi_{s,s}(x)=x.\end{cases} \tag{2.2}\]
**Remark 2.1**.: _If \(\dot{X}(t)\in b(t,X(t))\) and_
\[\tilde{X}(t):=\exp\left(\int_{0}^{t}C_{1}(s)ds\right)X(t)\quad\text{and}\quad \tilde{b}(t,x):=C_{1}(t)x+\exp\left(\int_{0}^{t}C_{1}(s)ds\right)b\left(t,\exp\left(-\int_{0}^{t}C_{1}(s)ds\right)x\right),\]
_so that \(\dot{\tilde{X}}\in\tilde{b}(t,\tilde{X}(t))\), then \(\tilde{b}\) satisfies (2.1) with \(C_{1}\equiv 0\) and a possibly different \(C_{0}\). In other words, with a change of variables, one may always assume \(b\) is monotone without loss of generality._
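For the reader's convenience, the monotonicity of \(\tilde{b}\) can be checked directly from (2.1): writing \(e(t):=\exp\big(\int_{0}^{t}C_{1}(s)ds\big)\) (a notation used only here), for all \(x,y\in\mathbb{R}^{d}\),

\[\big(\tilde{b}(t,x)-\tilde{b}(t,y)\big)\cdot(x-y)=C_{1}(t)|x-y|^{2}+e(t)^{2}\big(b(t,e(t)^{-1}x)-b(t,e(t)^{-1}y)\big)\cdot\big(e(t)^{-1}x-e(t)^{-1}y\big)\geq C_{1}(t)|x-y|^{2}-C_{1}(t)e(t)^{2}\big|e(t)^{-1}x-e(t)^{-1}y\big|^{2}=0,\]

while the linear growth bound for \(\tilde{b}\) holds with \(C_{0}\) replaced by a multiple of \(C_{0}+C_{1}\).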
We sometimes use the following characterization and properties of half-Lipschitz maps; see [23, Lemma 2.2].
**Lemma 2.1**.: _A vector field \(B:\mathbb{R}^{d}\to\mathbb{R}^{d}\) satisfies_
\[(B(x)-B(y))\cdot(x-y)\geq-C|x-y|^{2}\quad\text{for some $C\geq 0$ and all $x,y\in\mathbb{R}^{d}$}\]
_if and only if \(\operatorname{Sym}(\nabla B)\geq-C\operatorname{Id}\) in the sense of distributions. We then also have \(B\in BD_{\operatorname{loc}}(\mathbb{R}^{d})\), and_
\[\operatorname{Sym}(\nabla B)-(\operatorname{tr}\nabla B)\operatorname{Id} \leq(d-1)C\operatorname{Id}.\]
The space \(BD(\mathbb{R}^{d})\) of bounded deformations is the space of vector fields \(B:\mathbb{R}^{d}\to\mathbb{R}^{d}\) such that the symmetric part of the distribution \(\nabla B\) is a locally bounded Radon measure, and, as such, is a strictly larger space than \(BV(\mathbb{R}^{d})\). For more details, see [58].
We fix a family of regularizations such that
\[\begin{cases}(b^{\varepsilon})_{\varepsilon>0}\subset L^{1}([0,T],C^{0,1}( \mathbb{R}^{d})),\quad\lim_{\varepsilon\to 0}b^{\varepsilon}=b\text{ a.e. in }[0,T]\times\mathbb{R}^{d},\text{ and}\\ b^{\varepsilon}\text{ satisfies (\ref{eq:B}) uniformly in }\varepsilon>0.\end{cases} \tag{2.3}\]
For example, we may take \(b^{\varepsilon}(t,\cdot)=b(t,\cdot)*\rho_{\varepsilon}\) for \(\rho_{\varepsilon}=\varepsilon^{-d}\rho(\cdot/\varepsilon)\), with \(\rho\in C^{\infty}_{+}(\mathbb{R}^{d})\), \(\operatorname{supp}\rho\subset B_{1}\), and \(\int\rho=1\).
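That this choice satisfies (2.3) is a one-line computation, which we record for convenience: for \(t\in[0,T]\) and \(x,y\in\mathbb{R}^{d}\),

\[\big(b^{\varepsilon}(t,x)-b^{\varepsilon}(t,y)\big)\cdot(x-y)=\int_{\mathbb{R}^{d}}\big(b(t,x-z)-b(t,y-z)\big)\cdot\big((x-z)-(y-z)\big)\rho_{\varepsilon}(z)\,dz\geq-C_{1}(t)|x-y|^{2},\]

and \(|b^{\varepsilon}(t,x)|\leq\int_{\mathbb{R}^{d}}C_{0}(t)(1+|x-z|)\rho_{\varepsilon}(z)\,dz\leq C_{0}(t)(1+\varepsilon+|x|)\), so (2.1) holds with constants independent of \(\varepsilon\in(0,1]\), while the a.e. convergence follows from the Lebesgue differentiation theorem.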
### The backward flow
We begin the analysis with the backward flow, that is, (2.2) for \(t<s\). This is the time-direction for which the one-sided Lipschitz condition (2.1) yields a unique, Lipschitz flow. We record its properties here and refer to [30, 55, 56, 39] for the proofs; see also the work of Dafermos [35] for the connection to generalized characteristics of conservation laws.
**Lemma 2.2**.: _For every \((s,x)\in[0,T]\times\mathbb{R}^{d}\), there exists a unique solution \(\phi_{t,s}(x)\) of (2.2) defined for \((t,x)\in[0,s]\times\mathbb{R}^{d}\), satisfying the Lipschitz bound_
\[|\phi_{t,s}(x)-\phi_{t,s}(y)|\leq\exp\left(\int_{t}^{s}C_{1}(r)dr\right)|x-y| \quad\text{for all $0\leq t\leq s\leq T$ and $x,y\in\mathbb{R}^{d}$}. \tag{2.4}\]
_Moreover, there exists a constant \(C>0\) depending only on \(T\) and \(C_{0}\) from (2.1) such that_
\[|\phi_{t,s}(x)|\leq C(|x|+1)\quad\text{for all $0\leq t\leq s\leq T$ and $x\in\mathbb{R}^{d}$}, \tag{2.5}\]
_and_
\[\begin{cases}|\phi_{t_{1},s}(x)-\phi_{t_{2},s}(x)|\leq C(1+|x|)|t_{1}-t_{2}| \quad\text{and}\\ |\phi_{t,s_{1}}(x)-\phi_{t,s_{2}}(x)|\leq C(1+|x|)|s_{1}-s_{2}|\\ \text{for all $t_{1},t_{2}\in[0,s],\;s_{1},s_{2}\in[t,T],\text{ and $x\in\mathbb{R}^{d}$}$}. \end{cases} \tag{2.6}\]
_For all \(0\leq r\leq s\leq t\leq T\), \(\phi_{r,s}\circ\phi_{s,t}=\phi_{r,t}\). If \((b^{\varepsilon})_{\varepsilon>0}\) are regularizations satisfying (2.3), then the corresponding backward flows \(\phi^{\varepsilon}\) converge locally uniformly as \(\varepsilon\to 0\) to \(\phi\)._
**Remark 2.2**.: _The a priori local boundedness and time-regularity estimates (2.5) and (2.6), depending only on \(C_{0}\) and not \(C_{1}\), do not require the half-Lipschitz assumption on \(b(t,\cdot)\), and are therefore satisfied for any limiting solutions of the ODE when \(b\) satisfies the first condition in (2.1). On the other hand, the half-Lipschitz assumption is crucial for the Lipschitz continuity of the flow (2.4), as well as the uniqueness of the solution._
**Remark 2.3**.: _Consider the backward flow in \(\mathbb{R}\) corresponding to \(b(t,x)=b(x)=\operatorname{sgn}x\), which is given, for \(x\in\mathbb{R}\) and \(s<t\), by_
\[\phi_{s,t}(x)=\begin{cases}x+(t-s)&\text{if }x<-(t-s),\\ 0&\text{if }|x|\leq t-s,\text{ and }\\ x-(t-s)&\text{if }x>t-s.\end{cases} \tag{2.7}\]
_This demonstrates that, in general, the trajectories of the backward flow may concentrate on sets of measure zero, in particular, where \(b\) has jump discontinuities._
_We will often return to the example \(b(x)=\operatorname{sgn}x\) in subsequent parts of the paper in order to illustrate certain general phenomena and to present counterexamples. Note that, by Remark 2.1, one can consider similar examples for arbitrary \(C_{1}\in L^{1}_{+}([0,T])\)._
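As a sanity check, and to illustrate the role of the Filippov convexification, write \(s\mapsto\phi_{s,t}(x)\), \(s\leq t\), for the trajectory in (2.7). On the outer regions the equation is satisfied classically,

\[\partial_{s}\big(x-(t-s)\big)=1=\operatorname{sgn}\big(x-(t-s)\big)\ \text{ for }x>t-s,\qquad\partial_{s}\big(x+(t-s)\big)=-1=\operatorname{sgn}\big(x+(t-s)\big)\ \text{ for }x<-(t-s),\]

while, on \(\{|x|\leq t-s\}\), the constant trajectory is admissible because \(0\in b(0)=[-1,1]\), the convex hull of the limit points of \(\operatorname{sgn}\) at the origin.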
### The Jacobian for the backward flow
In view of the Lipschitz regularity (2.4), \(\nabla_{x}\phi_{t,s}\in L^{\infty}\) for \(t\leq s\), and so we can define the Jacobian
\[J_{t,s}(x):=\det(\nabla_{x}\phi_{t,s}(x))\quad\text{for }0\leq t\leq s\leq T \quad\text{ and a.e. }x\in\mathbb{R}^{d}. \tag{2.8}\]
**Lemma 2.3**.: _Let \(J\) be defined as in (2.8). Then \(J\geq 0\),_
\[\begin{cases}J_{\cdot,s}\in L^{\infty}([0,s]\times\mathbb{R}^{d})\cap C([0,s],L^{1}_{\rm loc}(\mathbb{R}^{d}))\quad\forall s\in[0,T]\quad\text{and}\\ J_{t,\cdot}\in L^{\infty}([t,T]\times\mathbb{R}^{d})\cap C([t,T],L^{1}_{\rm loc }(\mathbb{R}^{d}))\quad\forall t\in[0,T],\end{cases} \tag{2.9}\]
\[\|J_{t,s}\|_{L^{\infty}}\leq\exp\left(d\int_{t}^{s}C_{1}(r)dr\right)\quad \text{for all }0\leq t\leq s\leq T, \tag{2.10}\]
_and, for all \(R>0\), there exists a modulus of continuity \(\omega_{R}\), which depends on \(b\) only through the constants \(C_{0}\) and \(C_{1}\) in (2.1), such that_
\[\begin{cases}\|J_{t_{1},s}-J_{t_{2},s}\|_{L^{1}(B_{R})}\leq\omega_{R}(|t_{1}- t_{2}|)\quad\text{for all }t_{1},t_{2}\in[0,s]\quad\text{and}\\ \|J_{t,s_{1}}-J_{t,s_{2}}\|_{L^{1}(B_{R})}\leq\omega_{R}(|s_{1}-s_{2}|)\quad \text{for all }s_{1},s_{2}\in[t,T].\end{cases} \tag{2.11}\]
_If \((b^{\varepsilon})_{\varepsilon>0}\) are as in (2.3), \((\phi^{\varepsilon})_{\varepsilon>0}\) are the corresponding solutions of (2.2), and, for \(\varepsilon>0\), \(J^{\varepsilon}=\det(\nabla_{x}\phi^{\varepsilon})\), then_
\[\lim_{\varepsilon\to 0}J^{\varepsilon}_{\cdot,s}=J_{\cdot,s}\quad\text{ weak-$\star$ in }L^{\infty}([0,s]\times\mathbb{R}^{d})\quad\text{and}\quad\lim_{ \varepsilon\to 0}J^{\varepsilon}_{t,\cdot}=J_{t,\cdot}\quad\text{weak-$\star$ in }L^{\infty}([t,T]\times\mathbb{R}^{d}). \tag{2.12}\]
Proof.: It suffices to prove all statements about \(J_{\cdot,s}\) on \([0,s]\). The arguments are exactly the same for the other halves using the fact that \(s\mapsto\phi_{t,s}\) is the forward flow corresponding to the velocity \(-b\).
The convergence (2.12) goes through by compensated compactness arguments for determinants; see the Appendix of [23]. The nonnegativity of \(J\) now follows, because \(J^{\varepsilon}\geq 0\) for all \(\varepsilon\).
For fixed \(\varepsilon>0\) and \((s,x)\in[0,T]\times\mathbb{R}^{d}\), we have
\[\partial_{t}J^{\varepsilon}_{t,s}(x)=\operatorname{div}_{x}b^{\varepsilon}(t, \phi^{\varepsilon}_{t,s}(x))J^{\varepsilon}_{t,s}(x)\quad\text{for }t\in[0,s].\]
Then (2.3) implies \(\partial_{t}J^{\varepsilon}_{t,s}(x)\geq-dC_{1}(t)J^{\varepsilon}_{t,s}(x)\), and so
\[\frac{\partial}{\partial t}\left(J^{\varepsilon}_{t,s}(x)e^{-d\int_{t}^{s}C_{ 1}(r)dr}\right)\geq 0.\]
In particular, for \(t_{1}<t_{2}\leq s\) and \(R>0\),
\[\int_{B_{R}}|J^{\varepsilon}_{t_{2},s}-J^{\varepsilon}_{t_{1},s}|\leq e^{d\int_{t_{1}}^{t_{2}}C_{1}(r)dr}\int_{B_{R}}J^{\varepsilon}_{t_{2},s}-\int_{B_{R}}J^{\varepsilon}_{t_{1},s}+\left(e^{d\int_{t_{1}}^{t_{2}}C_{1}(r)dr}-1\right)\int_{B_{R}}J^{\varepsilon}_{t_{2},s}.\]
Identifying the modulus of continuity \(\omega_{R}\) in the statement of the Lemma then reduces to proving the uniform-in-\(\varepsilon\) continuity of
\[[0,s]\ni t\mapsto\int_{B_{R}}J^{\varepsilon}_{t,s}(x)dx;\]
note that \(\int_{B_{R}}J^{\varepsilon}_{s,s}(x)dx=|B_{R}|\), so this will also imply that \(\int_{B_{R}}J^{\varepsilon}_{t,s}(x)dx\) is bounded uniformly in \(\varepsilon\).
In view of the \(L^{\infty}\)-boundedness of \(J^{\varepsilon}\), it suffices to prove the uniform-in-\(\varepsilon\) continuity in \(t\) of \(\int f(x)J^{\varepsilon}_{t,s}(x)dx\) for any \(f\in C_{c}(\mathbb{R}^{d})\). The change of variables formula gives
\[\int f(x)J^{\varepsilon}_{t,s}(x)dx=\int f(\phi^{\varepsilon}_{s,t}(x))dx.\]
Note that \(\partial_{t}\phi^{\varepsilon}_{s,t}(x)=-b^{\varepsilon}(t,\phi^{\varepsilon }_{s,t}(x))\), and the Lipschitz constant in \(t\) of \(\phi^{\varepsilon}_{s,t}(x)\) depends only on an upper bound for \(|x|\) and the constant \(C_{0}\) in (2.1), and, therefore, is independent of \(\varepsilon\).
When \(d=1\), the \(L^{\infty}\)-weak-\(\star\) convergence of \(J^{\varepsilon}=\partial_{x}\phi^{\varepsilon}\) to \(J\) can be strengthened via an Aubin-Lions type compactness result.
**Proposition 2.1**.: _Assume \(d=1\), and let \(J^{\varepsilon}\) and \(J\) be as in Lemma 2.3. Then_
\[\lim_{\varepsilon\to 0}J^{\varepsilon}_{\cdot,s}=J_{\cdot,s}\quad\text{strongly in }L^{1}_{\mathrm{loc}}([0,s]\times\mathbb{R})\quad\text{and}\quad\lim_{ \varepsilon\to 0}J^{\varepsilon}_{t,\cdot}=J_{t,\cdot}\quad\text{strongly in }L^{1}_{ \mathrm{loc}}([t,T]\times\mathbb{R}).\]
Proof.: Fix \(t\in[0,T]\) and \(R>0\). Lemma 2.2 implies that there exists \(M\) independent of \(\varepsilon\) such that \(|\phi^{\varepsilon}_{t,s}(x)|\leq M\) for all \(s\in[t,T]\) and \(x\in[-R,R]\). Upon redefining \(b\) outside of \([0,T]\times[-2R,2R]\), we find that \(\phi_{t,s}(x)\), and therefore \(J_{t,s}(x)\), is unchanged, and therefore, in order to prove the \(L^{1}\)-convergence in \([t,T]\times[-R,R]\), we may assume without loss of generality that \(b\) is bounded uniformly. Applying the transformation \(\tilde{\phi}_{t,s}(x)=\phi_{t,s}(x)-\int_{t}^{s}C(r)dr\) for an appropriate \(C\in L^{1}_{+}([0,T])\) depending on \(C_{0}\) from (2.1), we may also assume \(b\geq 1\).
For \((s,x)\in[t,T]\times\mathbb{R}\), set \(f^{\varepsilon}(s,x)=J^{\varepsilon}_{t,s}(x)\). Then \(f^{\varepsilon}\) solves the continuity equation
\[\partial_{s}f^{\varepsilon}+\partial_{x}\left(b^{\varepsilon}(s,x)f^{ \varepsilon}\right)=0\quad\text{in }[t,T]\times\mathbb{R}\quad\text{and}\quad f^{ \varepsilon}(t,\cdot)=1.\]
For a standard mollifier \(\rho\in C^{\infty}_{c}([-1,1])\), let \(\rho_{n}=n\rho(n\,\cdot)\) and \(f^{\varepsilon,n}=\rho_{n}*_{t}f^{\varepsilon}\) be the mollification of \(f^{\varepsilon}\) only in the time variable. We then have
\[\partial_{s}f^{\varepsilon,n}+\partial_{x}\left[\rho_{n}*_{t}(b^{\varepsilon} f^{\varepsilon})\right]=0\quad\text{in }\left[t+\frac{1}{n},T\right]\times\mathbb{R}\]
and, for any \(R>0\),
\[\sup_{s\in[t+1/n,T]}\|\partial_{x}\left[\rho_{n}*_{t}(b^{\varepsilon}f^{ \varepsilon})\right](s,\cdot)\|_{L^{1}([-R,R])}\leq\sup_{s\in[t+1/n,T]}\| \partial_{s}f^{\varepsilon,n}(s,\cdot)\|_{L^{1}([-R,R])}\leq n\left\|\rho^{ \prime}\right\|_{L^{1}(\mathbb{R})}\omega_{R}\left(\frac{1}{n}\right),\]
where \(\omega_{R}\) is as in (2.11). It follows that, for fixed \(n\in\mathbb{N}\), \((\rho_{n}*_{t}(b^{\varepsilon}f^{\varepsilon}))_{\varepsilon>0}\) is precompact in \(L^{1}([t,T]\times[-R,R])\), and so, because
\[\lim_{n\to\infty}\rho_{n}*_{t}(b^{\varepsilon}f^{\varepsilon})=b^{\varepsilon}f^{\varepsilon}\]
in \(L^{1}([t,T]\times[-R,R])\), uniformly in \(\varepsilon\), we conclude that \((b^{\varepsilon}f^{\varepsilon})_{\varepsilon>0}\) is precompact in \(L^{1}([t,T]\times[-R,R])\). Since, in view of Lemma 2.3 and the a.e. convergence of \(b^{\varepsilon}\) to \(b\), the only possible limit is \(bf\), it follows that, as \(\varepsilon\to 0\), \(b^{\varepsilon}f^{\varepsilon}\) converges strongly in \(L^{1}([t,T]\times[-R,R])\) to \(bf\).
Fix any subsequence \((\varepsilon_{n})_{n\geq 0}\) approaching zero as \(n\to\infty\). Then there exists a further subsequence such that \(f^{\varepsilon_{n_{k}}}b^{\varepsilon_{n_{k}}}\xrightarrow{k\to\infty}fb\) almost everywhere, and therefore \(f^{\varepsilon_{n_{k}}}\xrightarrow{k\to\infty}f\) a.e. in \([t,T]\times[-R,R]\) because \(b\geq 1\) and
\[f^{\varepsilon}(s,x)-f(s,x) =\frac{b(s,x)f^{\varepsilon}(s,x)-b(s,x)f(s,x)}{b(s,x)}\] \[=\frac{b^{\varepsilon}(s,x)f^{\varepsilon}(s,x)-b(s,x)f(s,x)}{b(s,x )}+\frac{\left(b(s,x)-b^{\varepsilon}(s,x)\right)f^{\varepsilon}(s,x)}{b(s,x)}.\]
The convergence of \(f^{\varepsilon_{n_{k}}}\) to \(f\) in \(L^{1}([t,T]\times[-R,R])\) then follows from the Lebesgue dominated convergence theorem and the uniform bound (2.10); since every subsequence of \((f^{\varepsilon})_{\varepsilon>0}\) admits a further subsequence converging to the same limit \(f\), the full family converges as well.
**Remark 2.4**.: _The one-dimensional structure is important in the proof of Proposition 2.1, in particular, in deducing from the equicontinuity of \(J^{\varepsilon}\) in time that \((b^{\varepsilon}J^{\varepsilon})_{\varepsilon>0}\) belongs to a precompact subset of \(L^{1}\). It is not immediately clear whether this argument can be extended to multiple dimensions._
### The forward flow as the right-inverse of the backward flow
We next investigate the solvability of (2.2) forward in time. This is done by analyzing the Jacobian \(J\) from the previous subsection in order to invert the backward flow. Similar methods are used in [27], and, by including the Jacobian in the analysis, we obtain additionally the almost-everywhere continuity of the inverse.
We will revisit this topic in Section 4 when we analyze the forward flow, which will arise from the theory of renormalized solutions of the appropriate transport equation.
**Proposition 2.2**.: _For \(t\leq s\), there exists a set \(A_{t,s}\subset\mathbb{R}^{d}\) of full measure such that, for all \(y\in A_{t,s}\), \(\phi_{t,s}^{-1}(\{y\})\) is a singleton, which we denote by \(\{\phi_{s,t}(y)\}\). Moreover, there exists a version of the map \(\phi_{s,t}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) such that \(\phi_{s,t}\) is continuous a.e._
As an intermediate step, we first prove the following.
**Lemma 2.4**.: _Assume \(0\leq t\leq s\leq T\) and \(K\subset\mathbb{R}^{d}\) is nonempty, compact, and connected. Then \(\phi_{t,s}^{-1}(K)\) is nonempty, compact, and connected._
Proof.: For \(r>0\), denote \(K_{r}:=\bigcup_{y\in K}B_{r}(y)\). Fix a sequence \((b^{n})_{n\in\mathbb{N}}\) satisfying (2.3)5, and let \(\phi_{t,s}^{n}\) denote the corresponding backward flow from the previous subsections.
Footnote 5: That is, we abuse notation and suppose that \(b^{n}=b^{\varepsilon_{n}}\) for \((b^{\varepsilon})_{\varepsilon>0}\) satisfying (2.3) and some \((\varepsilon_{n})_{n\in\mathbb{N}}\) satisfying \(\lim_{n\to\infty}\varepsilon_{n}=0\).
We first show that
\[\phi_{t,s}^{-1}(K)=\bigcap_{r>0}\bigcup_{n\in\mathbb{N}}\bigcap_{k\geq n}( \phi_{t,s}^{k})^{-1}(K_{r}). \tag{2.13}\]
Suppose \(x\in\phi_{t,s}^{-1}(K)\). Then \(y=\phi_{t,s}(x)\in K\). Setting \(y_{n}:=\phi_{t,s}^{n}(x)\), we have \(\lim_{n\to\infty}y_{n}=y\) by Lemma 2.2, which means that, for all \(r>0\), there exists \(n\in\mathbb{N}\) such that, for all \(k\geq n\), \(\phi_{t,s}^{k}(x)\in B_{r}(y)\subset K_{r}\). This proves the \(\subset\) direction of (2.13).
Now suppose \(x\) belongs to the right-hand side of (2.13). Then, for all \(r>0\), there exists \(n\in\mathbb{N}\) such that \(x\in(\phi_{t,s}^{k})^{-1}(K_{r})\) for all \(k\geq n\). Set \(y_{k}:=\phi_{t,s}^{k}(x)\), so that \(y_{k}\in K_{r}\) for all \(k\geq n\). We have \(y:=\lim_{k\to\infty}y_{k}=\lim_{k\to\infty}\phi_{t,s}^{k}(x)=\phi_{t,s}(x)\) by Lemma 2.2. On the other hand, we also have \(y\in\overline{K_{r}}\), and so
\[\phi_{t,s}(x)\in\bigcap_{r>0}\overline{K_{r}}=K.\]
Thus, the \(\supset\) direction of (2.13) is established.
The continuity of \(\phi_{t,s}\) and the compactness of \(K\) imply that \(\phi_{t,s}^{-1}(K)\) is closed. We note also that \((\phi_{t,s}^{k})^{-1}=\phi_{s,t}^{k}\) satisfies (2.5) uniformly in \(k\), because the bound only depends on the constant \(C_{0}\) in the linear growth bound of (2.1), which is also satisfied by \(-b^{k}\). This along with (2.13) implies that \(\phi_{t,s}^{-1}(K)\) is bounded, and thus compact.
We now show that \(\phi_{t,s}\) is surjective. Fix \(y\in\mathbb{R}^{d}\). Using again the bound (2.5), satisfied uniformly in \(k\) by \(\phi_{s,t}^{k}\), we set \(x_{n}:=(\phi_{t,s}^{n})^{-1}(y)=\phi_{s,t}^{n}(y)\) and note that \((x_{n})_{n\in\mathbb{N}}\) is bounded. Passing to a subsequence, we have \(\lim_{k\to\infty}x_{n_{k}}=x\) for some \(x\in\mathbb{R}^{d}\), and then \(y=\phi_{t,s}^{n_{k}}(x_{n_{k}})\), so that \(y=\lim_{k\to\infty}\phi_{t,s}^{n_{k}}(x_{n_{k}})=\phi_{t,s}(x)\).
Finally, we show \(\phi_{t,s}^{-1}(K)\) is connected. For each \(k\in\mathbb{N}\), \((\phi_{t,s}^{k})^{-1}(K_{r})\) is connected, and therefore so is the intersection \(\bigcap_{k\geq n}(\phi_{t,s}^{k})^{-1}(K_{r})\) for each \(n\). These sets are nested in \(n\), so taking the union in \(n\in\mathbb{N}\) yields a connected set. Taking the intersection over \(r>0\) gives the connectedness of \(\phi_{t,s}^{-1}(K)\)
**Remark 2.5**.: _The fact that the approximate backward flows converge uniformly to \(\phi_{t,s}\) is used in the second-to-last paragraph of the proof, in order to show that \(\phi_{t,s}\) is surjective._
Proof of Proposition 2.2.: We identify the set by
\[A_{t,s}=\left\{y\in\mathbb{R}^{d}:\text{there exists }x\in\phi_{t,s}^{-1}(\{y\}) \text{ such that }\phi_{t,s}\text{ is differentiable at }x\text{ and }J_{t,s}(x)\neq 0\right\}.\]
We first check that \(A_{t,s}\) has full measure. Its complement consists of
\[\mathbb{R}^{d}\backslash A_{t,s} =\left\{y\in\mathbb{R}^{d}:J_{t,s}=0\text{ at the points of differentiability of }\phi_{t,s}\text{ on }\phi_{t,s}^{-1}(\{y\})\right\}\] \[\quad\cup\left\{y\in\mathbb{R}^{d}:\phi_{t,s}\text{ is not differentiable anywhere in }\phi_{t,s}^{-1}(\{y\})\right\}.\]
The fact that \(\phi_{t,s}\) is differentiable a.e., the surjectivity of \(\phi_{t,s}\) from Lemma 2.4, and the area formula then give
\[|\mathbb{R}^{d}\backslash A_{t,s}|\leq\int_{\mathbb{R}^{d}}\mathbf{1}\{\phi_{t,s}(x)\in\mathbb{R}^{d}\backslash A_{t,s}\}J_{t,s}(x)dx=0,\]
the last equality holding because, by the definition of \(A_{t,s}\), the integrand vanishes at a.e. \(x\in\mathbb{R}^{d}\).
It remains to show that \(\phi_{t,s}^{-1}(\{y\})\) is a singleton for all \(y\in A_{t,s}\). By Lemma 2.4, \(\phi_{t,s}^{-1}(\{y\})\) is nonempty, compact, and connected. Suppose \(x,\tilde{x}\in\phi_{t,s}^{-1}(\{y\})\) are such that \(\phi_{t,s}\) is differentiable at \(x\) and \(J_{t,s}(x)\neq 0\). A Taylor expansion gives
\[y=\phi_{t,s}(\tilde{x})=\phi_{t,s}(x)+\nabla_{x}\phi_{t,s}(x)\cdot(\tilde{x}-x)+o(|\tilde{x}-x|)=y+\nabla_{x}\phi_{t,s}(x)\cdot(\tilde{x}-x)+o(|\tilde{x}-x|).\]
The invertibility of \(\nabla_{x}\phi_{t,s}(x)\) then implies that, if \(|\tilde{x}-x|\) is sufficiently small, then \(\tilde{x}=x\); in other words, \(x\) is an isolated point of \(\phi_{t,s}^{-1}(\{y\})\). But then the connected set \(\phi_{t,s}^{-1}(\{y\})\) must be equal to \(\{x\}\), and we call \(x=\phi_{s,t}(y)\).
For \(y\in A_{t,s}\), we then have \((\phi_{t,s}\circ\phi_{s,t})(y)=y\). Since \(\phi_{t,s}^{-1}(\{y\})\) is nonempty for any \(y\in\mathbb{R}^{d}\), we may define a version of \(\phi_{s,t}\) on all of \(\mathbb{R}^{d}\) by imposing that \(\phi_{s,t}(y)\in\phi_{t,s}^{-1}(\{y\})\) for every \(y\in\mathbb{R}^{d}\). For this version, we have \(\phi_{t,s}\circ\phi_{s,t}=\mathrm{Id}\) everywhere on \(\mathbb{R}^{d}\). Suppose now that \(y\in A_{t,s}\) and \(\lim_{n\to\infty}y_{n}=y\) for some sequence \((y_{n})_{n\in\mathbb{N}}\subset\mathbb{R}^{d}\). Then
\[\lim_{n\to\infty}(\phi_{t,s}\circ\phi_{s,t})(y_{n})=(\phi_{t,s}\circ\phi_{s,t })(y).\]
We have
\[(\phi_{s,t}(y_{n}))_{n\in\mathbb{N}}\subset\bigcup_{n\in\mathbb{N}}(\phi_{t, s})^{-1}(\{y_{n}\}),\]
which implies by Lemma 2.4 that \((\phi_{s,t}(y_{n}))_{n\in\mathbb{N}}\) is bounded. If \(z\) is any limit point of this sequence, then, by the continuity of the backward flow, \(y=\lim_{n\to\infty}y_{n}=\phi_{t,s}(z)\), and therefore \(z=\phi_{s,t}(y)\), because \(y\in A_{t,s}\). It follows that \(\lim_{n\to\infty}\phi_{s,t}(y_{n})=\phi_{s,t}(y)\), so that \(\phi_{s,t}\) is continuous at every point of \(A_{t,s}\), that is, almost everywhere.
**Remark 2.6**.: _We shall see in Section 4 that the forward flow is always \(BV\) in space. Therefore, the "forward Jacobian" \(J_{t,s}\) for \(t>s\) can only be understood as a measure. Indeed, returning to the example \(b(t,x)=\operatorname{sgn}x\) on \(\mathbb{R}\), the right inverse \(\phi_{t,s}\) of \(\phi_{s,t}\) given by (2.7) is \(\phi_{t,s}(x)=x+(\operatorname{sgn}x)(t-s)\) for \(s\leq t\), which is discontinuous only at \(0\). The backward Jacobian is given by \(J_{s,t}(x)=\mathbf{1}\left\{|x|\geq t-s\right\}\), and the forward one is \(J_{t,s}=1+2(t-s)\delta_{0}\)._
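The last identity can be checked directly (a computation recorded only as a consistency check): for \(g\in C_{c}(\mathbb{R})\) and \(s\leq t\), splitting the integral according to the three regions in (2.7) and changing variables on \(\{|x|>t-s\}\) gives

\[\int_{\mathbb{R}}g(\phi_{s,t}(x))\,dx=2(t-s)\,g(0)+\int_{\{|x|>t-s\}}g\big(x-\operatorname{sgn}(x)(t-s)\big)\,dx=2(t-s)\,g(0)+\int_{\mathbb{R}}g(y)\,dy,\]

so the push-forward of the Lebesgue measure by the backward flow \(\phi_{s,t}\) is \(\mathrm{Leb}+2(t-s)\delta_{0}\), in agreement with the formula \(J_{t,s}=1+2(t-s)\delta_{0}\) for the distributional derivative of the forward map.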
**Remark 2.7**.: _The formula \(\phi_{s,t}\circ\phi_{t,s}=\mathrm{Id}\) makes sense a.e. if \(s<t\), because \(\phi_{s,t}\) is Lipschitz and \(\phi_{t,s}\) is measurable. On the other hand, \(\phi_{t,s}\) is not also a left-inverse, since the formula \(\phi_{t,s}\circ\phi_{s,t}\) does not make sense. In the above example, \(\phi_{s,t}(x)\) is equal to \(0\), for \(|x|\leq t-s\), and \(0\) is a point of discontinuity for \(\phi_{t,s}\). In general, the concentration of \(\phi_{s,t}\) on sets of measure \(0\) forbids applying \(\phi_{t,s}\) as a left-inverse._
### Compressive stochastic flows
We now fix a matrix-valued map
\[\Sigma\in L^{2}([0,T],C^{0,1}(\mathbb{R}^{d};\mathbb{R}^{d\times m})), \tag{2.14}\]
and assume that
\[W:\Omega\times[0,T]\to\mathbb{R}^{m}\quad\text{is a standard Brownian motion on a given probability space }(\Omega,\mathcal{F},\mathbb{P},\mathbb{E}). \tag{2.15}\]
In order to extend the results in the preceding subsections, and, in particular, to bypass the difficulties of the backward time direction, we consider forward SDEs with drift satisfying the opposite of (2.1), that is,
\[B:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d},\quad-B\text{ satisfies (2.1)}, \tag{2.16}\]
and consider the flow
\[\begin{cases}d_{s}\Phi_{s,t}(x)=B(s,\Phi_{s,t}(x))ds+\Sigma(s,\Phi_{s,t}(x)) dW_{s},\quad s\in[t,T],\\ \Phi_{t,t}(x)=x.\end{cases} \tag{2.17}\]
Once again, (2.17) must be understood in the Filippov sense, which means, for \(s\in[t,T]\),
\[\Phi_{s,t}(x)=x+\int_{t}^{s}\alpha_{r}dr+\int_{t}^{s}\Sigma(r,\Phi_{r,t}(x)) dW_{r},\quad\alpha_{s}\in B(s,\Phi_{s,t}(x)), \tag{2.18}\]
and we remark that our assumptions will allow us to always consider probabilistically strong solutions; that is, we solve (2.18) path by path for almost every continuous \(W\) with respect to the Wiener measure. Depending on the context in later sections (in particular, the time direction of solvability for the transport and continuity equations), we consider different examples for \(B\) and \(\Sigma\) for which these assumptions are satisfied.
**Lemma 2.5**.: _For every \((t,x)\in[0,T]\times\mathbb{R}^{d}\) and \(\mathbb{P}\)-almost surely, there exists a unique strong solution \(\Phi_{s,t}(x)\) of (2.17) defined on \([t,T]\times\mathbb{R}^{d}\). Moreover, for all \(p\in[2,\infty)\), there exists a constant \(C=C_{p}>0\) depending only on the assumptions (2.1) and (2.14) such that_
\[\mathbb{E}|\Phi_{s,t}(x)-\Phi_{s,t}(y)|^{p}\leq C|x-y|^{p}\quad\text{for all }0\leq t\leq s\leq T\text{ and }x,y\in\mathbb{R}^{d}, \tag{2.19}\]
\[\mathbb{E}|\Phi_{s,t}(x)|^{p}\leq C(|x|^{p}+1)\quad\text{for all }0\leq t\leq s\leq T\text{ and }x\in\mathbb{R}^{d}, \tag{2.20}\]
_and_
\[\begin{cases}\mathbb{E}|\Phi_{s_{1},t}(x)-\Phi_{s_{2},t}(x)|^{p}\leq C(1+|x|) |s_{1}-s_{2}|^{p/2}\\ \text{for all }t\in[0,T]\text{, }s_{1},s_{2}\in[t,T]\text{, and }x\in\mathbb{R}^{d}. \end{cases} \tag{2.21}\]
_With probability one, for all \(0\leq r\leq s\leq t\leq T\), \(\Phi_{t,s}\circ\Phi_{s,r}=\Phi_{t,r}\). If \((B^{\varepsilon})_{\varepsilon>0}\) are regularizations of \(B\) such that \((-B^{\varepsilon})_{\varepsilon>0}\) satisfies (2.3), then, with probability one, the corresponding stochastic flows \(\Phi^{\varepsilon}\) converge locally uniformly as \(\varepsilon\to 0\) to \(\Phi\)._
Proof.: For \(\varepsilon>0\), let \(B^{\varepsilon}\) be the convolution of \(B\) in space by a standard mollifier (so that \(b^{\varepsilon}:=-B^{\varepsilon}\) satisfies (2.3)), and let \(\Phi^{\varepsilon}_{t,s}\) denote the corresponding stochastic flow. Itô's formula, the one-sided Lipschitz assumption, and the Lipschitz continuity of \(\Sigma\) yield, for any \(p\geq 2\) and some \(C\in L^{1}_{+}([0,T])\),
\[\frac{\partial}{\partial t}\mathbb{E}|\Phi^{\varepsilon}_{t,s}(x)-\Phi^{ \varepsilon}_{t,s}(y)|^{p}\leq C(t)\mathbb{E}|\Phi^{\varepsilon}_{t,s}(x)- \Phi^{\varepsilon}_{t,s}(y)|^{p},\]
which, along with Gronwall's inequality, leads to the first statement. The other two estimates are proved similarly, with constants independent of \(\varepsilon>0\).
In view of (2.19) and (2.21), the Kolmogorov continuity criterion then yields, for any \(R>0\), \(p\geq 2\) and \(\delta\in(0,1)\), a constant \(C=C_{R,p,\delta}>0\) such that, for all \(s\in[0,T]\), \(\lambda\geq 1\) and \(\varepsilon>0\),
\[\mathbb{P}\left(\sup_{x,y\in B_{R}}\sup_{t,r\in[s,T]}\frac{|\Phi^{\varepsilon}_{t,s}(x)-\Phi^{\varepsilon}_{r,s}(y)|}{|x-y|^{1-\delta}+|t-r|^{\frac{1}{2}(1-\delta)}}>\lambda\right)\leq\frac{C}{\lambda^{p}}.\]
It follows that the probability measures on \(C([s,T]\times\mathbb{R}^{d};\mathbb{R}^{d})\) induced by the random variables \((\Phi^{\varepsilon}_{\cdot,s})_{\varepsilon>0}\) are tight with respect to the topology of locally uniform convergence, and therefore converge weakly along a subsequence as \(\varepsilon\to 0\) to a probability measure that gives rise to a weak (in the probabilistic sense) solution of (2.17), for which the estimates in the statement of the lemma continue to hold.
A similar computation to the one above reveals that, for a fixed probability space and almost every Brownian path \(W\), the solution of (2.17) is unique. The pathwise uniqueness then implies, by a standard argument due to Yamada and Watanabe [61], that there is a unique strong solution for every \(x\in\mathbb{R}^{d}\)
**Remark 2.8**.: _It is an open question whether \(\Phi_{t,s}\) is Lipschitz continuous, even if \(B\) is Lipschitz. When \(B\) is Lipschitz and \(\Sigma\in C^{1,\alpha}\) for some \(\alpha\in(0,1]\), it turns out the flow \(\Phi_{t,s}\) is \(C^{1,\alpha^{\prime}}\) for any \(\alpha^{\prime}\in(0,\alpha)\), but it is not clear how to extend this to the case where \(-B\) satisfies the one-sided Lipschitz bound from below._
_As a consequence, an understanding of the Jacobian \(\det(\nabla_{x}\Phi_{t,s}(x))\), or of the stability with respect to regularizations of \(B\), is considerably more complicated in the stochastic case. The results of Section 4, where we discuss the expansive regime, are therefore constrained to the first-order case, and we relegate the second-order analysis to future work. One exception is when \(\Sigma\) is independent of the spatial variable, in which case a change of variables relates the SDE to an ODE of the form (2.2) with a random \(b\)._
### Small noise approximations
We return to the backward flow \(\phi_{t,s}\), \(0\leq t\leq s\leq T\), from Lemma 2.2. Recall that the backward flow also corresponds to the forward flow for \(-b\); that is,
\[\frac{\partial}{\partial s}\phi_{t,s}(x)=-b(s,\phi_{t,s}(x)),\quad s\geq t, \quad\phi_{t,t}(x)=x. \tag{2.22}\]
For \(\varepsilon>0\), let \(\phi^{\varepsilon}_{t,s}(x)\) denote the following stochastic flow
\[d_{s}\phi^{\varepsilon}_{t,s}(x)=-b(s,\phi^{\varepsilon}_{t,s}(x))ds+ \varepsilon dW_{s}\quad s\geq t,\quad\phi^{\varepsilon}_{t,t}(x)=x, \tag{2.23}\]
where \(W\) is now a \(d\)-dimensional Brownian motion. We note that (2.23) falls under the assumptions of Lemma 2.5, but in fact (2.23) admits a unique strong solution as soon as \(b\) is merely locally bounded [36, 60]. In general, the limiting solutions as \(\varepsilon\to 0\) are not unique; however, we immediately have the following as a consequence of Lemma 2.5.
**Proposition 2.3**.: _For every \(\varepsilon>0\), there exists a unique strong solution of (2.23). Moreover, as \(\varepsilon\to 0\), \(\phi^{\varepsilon}\) converges locally uniformly to \(\phi\)._
_If \(J^{\varepsilon}=\det(\nabla_{x}\phi^{\varepsilon})\), then, as \(\varepsilon\to 0\), \(J^{\varepsilon}\) converges weak-\(\star\) in \(L^{\infty}([t,T]\times\mathbb{R}^{d})\) and weakly in \(C([t,T],L^{1}_{\mathrm{loc}}(\mathbb{R}^{d}))\) to \(J\)._
## 3 The compressive regime
In this section, we consider the transport and continuity equations in the so-called compressive regime. That is, for velocity field \(b\) satisfying (2.1), we study the TVP for the nonconservative equation
\[-\frac{\partial u}{\partial t}+b(t,x)\cdot\nabla u=0\quad\text{in }(0,T)\times \mathbb{R}^{d}\quad\text{and}\quad u(T,\cdot)=u_{T}\quad\text{in }\mathbb{R}^{d}, \tag{3.1}\]
and the IVP for the conservative equation
\[\frac{\partial f}{\partial t}-\operatorname{div}(b(t,x)f)=0\quad\text{in }(0,T)\times \mathbb{R}^{d}\quad\text{and}\quad f(0,\cdot)=f_{0}. \tag{3.2}\]
We recall that \(\operatorname{div}b\) is bounded from below, and therefore, the direction of time for (3.1) and (3.2) does not allow for a solution theory in Lebesgue spaces, due to the concentrative nature of the backward flow analyzed in the previous section. The TVP (3.1) will be solved in the space of continuous functions, while the IVP (3.2) will be solved, by duality, in the space of locally bounded Radon measures.
We also obtain analogous results for the second-order equations
\[-\frac{\partial u}{\partial t}-\operatorname{tr}[a(t,x)\nabla^{2}u]+b(t,x) \cdot\nabla u=0\quad\text{in }(0,T)\times\mathbb{R}^{d}\quad\text{and}\quad u(T, \cdot)=u_{T} \tag{3.3}\]
and
\[\frac{\partial f}{\partial t}-\operatorname{div}\big{[}\operatorname{div}(a( t,x)f)-b(t,x)f\big{]}=0\quad\text{in }(0,T)\times\mathbb{R}^{d}\quad\text{and}\quad f(0,\cdot)=f_{0}, \tag{3.4}\]
where \(a=\frac{1}{2}\sigma\sigma^{T}\) for \(\sigma:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d\times m}\) satisfying
\[\sup_{x\in\mathbb{R}^{d}}\frac{|\sigma(\cdot,x)|}{1+|x|}+\sup_{y,z\in\mathbb{R }^{d}}\frac{|\sigma(\cdot,y)-\sigma(\cdot,z)|}{|y-z|}\in L^{2}([0,T]). \tag{3.5}\]
### The nonconservative equation
#### 3.1.1 Representation formula
When interpreting (3.1) in the distributional sense, we are constrained to seek solutions that are continuous. Indeed, the distribution
\[b\cdot\nabla u=\operatorname{div}(bu)-(\operatorname{div}b)u\]
pairs the solution \(u\) with \(\operatorname{div}b\), which is a measure in general. The other motivating factor is the formal representation formula for the solution of the TVP (3.1), which is given in terms of the backward flow:
\[u(t,x)=u_{T}(\phi_{t,T}(x))\quad\text{for }(t,x)\in[0,T]\times\mathbb{R}^{d}. \tag{3.6}\]
This formula and the Lipschitz continuity of \(\phi_{t,T}\) given in Lemma 2.2 suggest that the solution operator for (3.1) should preserve continuity. In fact, the formula (3.6) defines a distributional solution, which is uniquely obtained from limits of natural regularizations of the equation.
**Theorem 3.1**.: _If \(u_{T}\in C(\mathbb{R}^{d})\), then the function \(u\) in (3.6) is a distributional solution of (3.1). Moreover, if \((b^{\varepsilon})_{\varepsilon>0}\) satisfy (2.3) and \(u^{\varepsilon}\) is the corresponding solution of (3.1) with velocity field \(b^{\varepsilon}\), then, as \(\varepsilon\to 0\), \(u^{\varepsilon}\) converges locally uniformly to \(u\)._
Proof.: The unique solution \(u^{\varepsilon}\) for the regularized velocity field is given by \(u^{\varepsilon}(t,\cdot)=u_{T}\circ\phi_{t,T}^{\varepsilon}\), where \(\phi^{\varepsilon}\) is the flow corresponding to \(b^{\varepsilon}\). By Lemma 2.2, as \(\varepsilon\to 0\), \(\phi^{\varepsilon}\) converges locally uniformly to \(\phi\), and so the local-uniform convergence to \(u\) follows from the continuity of \(u_{T}\).
Multiplying the equation for \(u^{\varepsilon}\) by some \(\psi\in C^{1}_{c}((0,T)\times\mathbb{R}^{d})\) and integrating by parts gives
\[\int_{0}^{T}\int_{\mathbb{R}^{d}}u^{\varepsilon}(t,x)\left(\partial_{t}\psi(t,x)-b^{\varepsilon}(t,x)\cdot\nabla\psi(t,x)+(\operatorname{div}b^{\varepsilon }(t,x))\psi(t,x)\right)dxdt=0.\]
As \(\varepsilon\to 0\), \(b^{\varepsilon}\to b\) almost everywhere and \(\operatorname{div}b^{\varepsilon}\rightharpoonup\operatorname{div}b\) weakly in the sense of measures, and so the fact that \(u\) is a distributional solution follows.
Turning next to the second-order equation (3.3), we identify a solution candidate with the appropriate stochastic flow. We do so by changing the time direction in \(b\) and \(\sigma\) and considering the SDE
\[\begin{cases}d_{s}\Phi_{s,t}(x)=-b(s,\Phi_{s,t}(x))ds+\sigma(s,\Phi_{s,t}(x) )dW_{s},&s\in[t,T],\\ \Phi_{t,t}=\operatorname{Id},\end{cases} \tag{3.7}\]
where \(W\) is as in (2.15). Note that (3.7) is of the type in (2.17) and thus falls within the assumptions of Lemma 2.5. In particular, if \(u_{T}\) is continuous, then, in view of (2.19)-(2.21), the formula
\[u(t,x)=\mathbb{E}[u_{T}(\Phi_{T,t}(x))] \tag{3.8}\]
defines a continuous function. Moreover, if \(u_{T}\) is Lipschitz, then \(u(t,\cdot)\) is Lipschitz for all \(t>0\), and \(1/2\)-Holder continuous in time. Note that, in this case, the distribution \(\operatorname{tr}[a\nabla^{2}u]=\operatorname{div}(a\nabla u)-\operatorname{ div}a\cdot\nabla u\) makes sense, because \(\nabla u\) and \(\operatorname{div}a\) both belong to \(L^{\infty}\).
The following is proved exactly as for Theorem 3.1, with the use of the estimates in Lemma 2.5.
**Theorem 3.2**.: _Let \(u_{T}\in C(\mathbb{R}^{d})\) and define \(u\) by (3.8). If \((b^{\varepsilon})_{\varepsilon>0}\) satisfy (2.3) and \(u^{\varepsilon}\) is the corresponding solution of (3.3) with velocity \(b^{\varepsilon}\), then, as \(\varepsilon\to 0\), \(u^{\varepsilon}\) converges locally uniformly to \(u\). Moreover, if \(u_{T}\in C^{0,1}\), then_
\[\sup_{(t,x,y)\in[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{d}}\frac{|u(t,x)-u( t,y)|}{|x-y|}+\sup_{(r,s,z)\in[0,T]\times\mathbb{R}^{d}}\frac{|u(r,z)-u(s,z)|}{|r-s|^{1/2 }(1+|z|)}<\infty,\]
_and \(u\) is a distributional solution of (3.3)._
As a special case, we consider, for \(\varepsilon>0\), the "viscous" version of (3.1), that is
\[-\partial_{t}u^{\varepsilon}-\frac{\varepsilon^{2}}{2}\Delta u^{\varepsilon} +b(t,x)\cdot\nabla u^{\varepsilon}=0\quad\text{in }(0,T)\times\mathbb{R}^{d},\quad u^{ \varepsilon}(T,\cdot)=u_{T}. \tag{3.9}\]
This uniformly parabolic equation has a unique classical solution for any \(u_{T}\in C(\mathbb{R}^{d})\), which, moreover, is given by \(u^{\varepsilon}(t,x)=\mathbb{E}[u_{T}(\phi_{t,T}^{\varepsilon}(x))]\), where now \(\phi^{\varepsilon}\) denotes the solution of the SDE (2.23) from the previous section. Arguing just as in Theorem 3.1 and invoking Proposition 2.3 immediately gives the following.
**Theorem 3.3**.: _As \(\varepsilon\to 0\), the solution \(u^{\varepsilon}\) converges locally uniformly to the function \(u\) given by (3.6)._
#### 3.1.2 Viscosity solutions
Although (3.6) and (3.8) are the distributional solutions that arise uniquely through regularization (either of \(b\) or through vanishing viscosity limits), it turns out that distributional solutions are not unique in general (see subsubsection 3.1.3 below). It is then a natural question as to whether the "good" solutions can be characterized other than as limits of regularizations, or by the explicit formulae. For example, this is done for the one-dimensional problem in [55] by introducing a sort of entropy condition.
We give a different characterization here using the theory of viscosity solutions [32], which covers both the first- and second-order problems. We present the results here only in the second-order case, which includes the first-order equations when \(a=0\).
We define, for \((t,x,p)\in[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{d}\),
\[\underline{b}(t,x,p)=\liminf_{z\to x}b(t,z)\cdot p\quad\text{and}\quad \overline{b}(t,x,p)=\limsup_{z\to x}b(t,z)\cdot p.\]
For fixed \((t,x)\in[0,T]\times\mathbb{R}^{d}\), \(\underline{b}(t,x,\cdot)\) and \(\overline{b}(t,x,\cdot)\) are Lipschitz continuous on \(\mathbb{R}^{d}\), and, for fixed \((t,p)\in[0,T]\times\mathbb{R}^{d}\), \(\underline{b}(t,\cdot,p)\) and \(\overline{b}(t,\cdot,p)\) are respectively lower and upper semicontinuous.
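For orientation, in the model example \(b(t,x)=\operatorname{sgn}x\) (so \(d=1\)), these envelopes can be computed explicitly:

\[\underline{b}(t,x,p)=\overline{b}(t,x,p)=p\operatorname{sgn}x\quad\text{for }x\neq 0,\qquad\underline{b}(t,0,p)=-|p|\quad\text{and}\quad\overline{b}(t,0,p)=|p|.\]

These are the values used in the examples of subsubsection 3.1.3 below.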
The following definition of viscosity (sub and super) solutions closely resembles the one in [51].
**Definition 3.1**.: An upper-semicontinuous (resp. lower-semicontinuous) function \(u\) is called a subsolution (resp. supersolution) of (3.3) if, for all \(\psi:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}\) that are \(C^{1}\) in \(t\) and \(C^{2}\) in \(x\), it holds that
\[-\frac{d}{dt}\max_{x\in\mathbb{R}^{d}}\left\{u(t,x)-\psi(t,x)\right\}\leq\inf \left\{\operatorname{tr}[a(t,y)\nabla^{2}\psi(t,y)]-\underline{b}(t,y,\nabla \psi(t,y)):y\in\arg\max\{u(t,\cdot)-\psi(t,\cdot)\}\right\}\]
(resp.
\[-\frac{d}{dt}\min_{x\in\mathbb{R}^{d}}\left\{u(t,x)-\psi(t,x)\right\}\geq\sup \left\{\operatorname{tr}[a(t,y)\nabla^{2}\psi(t,y)]-\overline{b}(t,y,\nabla \psi(t,y)):y\in\arg\min\{u(t,\cdot)-\psi(t,\cdot)\}\right\}\Big{)}.\]
If \(u\in C([0,T]\times\mathbb{R}^{d})\) is both a sub and supersolution, we say \(u\) is a solution.
The comparison principle is proved by doubling the space variable. In particular, we have the following lemma, which follows exactly from the methods of [53, 54, 31]. For \((t,x,y)\in[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{d}\), we define the nonnegative matrix
\[A(t,x,y):=\begin{pmatrix}\sigma(t,x)\\ \sigma(t,y)\end{pmatrix}\left(\sigma(t,x)^{T}\quad\sigma(t,y)^{T}\right).\]
**Lemma 3.1**.: _Assume \(u\) and \(v\) are respectively a sub and supersolution of (3.3). Then \(w(t,x,y)=u(t,x)-v(t,y)\) is a subsolution of_
\[-\partial_{t}w-\operatorname{tr}[A(t,x,y)\nabla^{2}_{(x,y)}w]+\underline{b}(t,x,\nabla_{x}w)-\overline{b}(t,y,-\nabla_{y}w)\leq 0.\]
We may now state and prove the comparison principle.
**Theorem 3.4**.: _If \(u\) and \(v\) are respectively a sub and supersolution of (3.3) such that_
\[\sup_{(t,x)\in[0,T]\times\mathbb{R}^{d}}\frac{u(t,x)}{1+|x|}+\sup_{(s,y)\in[0,T]\times\mathbb{R}^{d}}\frac{-v(s,y)}{1+|y|}<\infty,\]
_then \(t\mapsto\sup_{x\in\mathbb{R}^{d}}\left\{u(t,x)-v(t,x)\right\}\) is nondecreasing._
Proof.: Define \(w(t,x,y):=u(t,x)-v(t,y)\), fix \(\delta,\varepsilon>0\), and define \(\Phi_{\delta,\varepsilon}(x,y)=\frac{1}{2\delta}|x-y|^{2}+\frac{\varepsilon}{2}(|x|^{2}+|y|^{2})\). In view of the growth of \(u\) and \(v\) in \(x\), for all \(t\in[0,T]\), the map \(w(t,\cdot,\cdot)-\Phi_{\delta,\varepsilon}\) attains a maximum on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\). Moreover, standard arguments from the theory of viscosity solutions (see for instance [32, Lemma 3.1]) imply that there exist \(\rho_{\delta}>0\) and \(\lambda_{\varepsilon}>0\) such that \(\lim_{\delta\to 0}\rho_{\delta}^{2}/\delta=\lim_{\varepsilon\to 0}\varepsilon\lambda_{\varepsilon}^{2}=0\), and
\[|x-y|\leq\rho_{\delta}\quad\text{and}\quad|x|+|y|\leq\lambda_{\varepsilon} \quad\text{for all }(x,y)\in\arg\max\left\{w(t,\cdot,\cdot)-\Phi_{\delta, \varepsilon}\right\},\quad t\in[0,T].\]
Therefore, if \(t\in[0,T]\) and \((x,y)\in\arg\max\left\{w(t,\cdot,\cdot)-\Phi_{\delta,\varepsilon}\right\}\), we have, for some \(C\in L^{1}_{+}([0,T])\),
\[\operatorname{tr} [A(t,x,y)\nabla_{(x,y)}^{2}\Phi_{\delta,\varepsilon}(x,y)]\] \[=\operatorname{tr}\left[\left(\frac{1}{\delta}\begin{pmatrix} \operatorname{Id}&-\operatorname{Id}\\ -\operatorname{Id}&\operatorname{Id}\end{pmatrix}+\varepsilon\begin{pmatrix} \operatorname{Id}&0\\ 0&\operatorname{Id}\end{pmatrix}\right)\begin{pmatrix}\sigma(t,x)\\ \sigma(t,y)\end{pmatrix}\begin{pmatrix}\sigma(t,x)^{T}&\sigma(t,y)^{T}\end{pmatrix}\right]\] \[\leq C(t)\left(\frac{\rho_{\delta}^{2}}{\delta}+\varepsilon+\varepsilon \lambda_{\varepsilon}^{2}\right)\]
and
\[-\underline{b} (t,x,\nabla_{x}\Phi_{\delta,\varepsilon}(x,y))+\overline{b} \left(t,y,-\nabla_{y}\Phi_{\delta,\varepsilon}(x,y)\right)\] \[=\limsup_{(z,w)\to(x,y)}\left\{-b(t,z)\cdot\left(\frac{x-y}{\delta}+\varepsilon x\right)+b(t,w)\cdot\left(\frac{x-y}{\delta}-\varepsilon y\right)\right\}\] \[=\limsup_{(z,w)\to(x,y)}\left\{-(b(t,z)-b(t,w))\cdot\frac{z-w}{\delta}-\varepsilon\,b(t,z)\cdot z+\varepsilon\,b(t,w)\cdot w\right\}\] \[\leq C(t)\left(\frac{\rho_{\delta}^{2}}{\delta}+\varepsilon(1+\lambda_{\varepsilon})^{2}\right).\]
It now follows from Definition 3.1 and Lemma 3.1 that, for some \(C_{\delta,\varepsilon}\in L^{1}_{+}([0,T])\) satisfying \(\lim_{(\delta,\varepsilon)\to(0,0)}C_{\delta,\varepsilon}=0\) in \(L^{1}([0,T])\),
\[t\mapsto\sup_{(x,y)\in\mathbb{R}^{d}\times\mathbb{R}^{d}}\left\{w(t,x,y)-\Phi _{\delta,\varepsilon}(x,y)\right\}-\int_{t}^{T}C_{\delta,\varepsilon}(s)ds\]
is nondecreasing. The result follows upon sending \(\delta\) and \(\varepsilon\) to \(0\).
As a consequence of the comparison theorem, the "good" distributional solution of (3.3) can be uniquely characterized.
**Theorem 3.5**.: _Assume \(u_{T}\in C(\mathbb{R}^{d})\) and \(u_{T}\cdot(1+|\cdot|)^{-1}\in L^{\infty}\). Then (3.8) is the unique viscosity solution of (3.3)._
Proof.: The fact that (3.8) defines a viscosity solution is due to Theorem 3.2 and the stability properties of viscosity solutions6. In view of Lemma 2.5 and the growth of \(u_{T}\), we may appeal to Theorem 3.4 to conclude that (3.8) is the only viscosity solution of the terminal value problem (3.3).
Footnote 6: Note that smooth solutions of the equation corresponding to \(b^{\varepsilon}\) satisfying (2.3), or of the viscous equation (3.9), are viscosity solutions in the sense of Definition 3.1.
#### 3.1.3 (Non)equivalence of distributional and viscosity solutions
For \(x\in\mathbb{R}\), set \(b(t,x)=\operatorname{sgn}x\) and \(u_{T}(x)=|x|\). Using the formula (2.7) for the backward flow, the solution (3.6) becomes
\[u(t,x)=(|x|-(T-t))_{+}. \tag{3.10}\]
However, the Lipschitz function
\[v(t,x)=|x|-(T-t) \tag{3.11}\]
is another distributional solution (and in fact satisfies the equation a.e.). It can also be checked directly that (3.11) does not give a viscosity solution of (3.1). Indeed, for every \(t\in[0,T]\), \(v(t,\cdot)\) attains a global minimum at \(x=0\). Applying the supersolution definition with \(\psi\equiv 0\), for which \(\min_{x\in\mathbb{R}}\{v(t,x)-\psi(t,x)\}=t-T\) and \(\overline{b}(t,0,0)=0\), yields the contradictory \(-1\geq 0\).
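For completeness, here is the short computation behind the claim that (3.11) is a distributional solution; we use the convention \(b\cdot\nabla v=\operatorname{div}(bv)-(\operatorname{div}b)v\) recalled above, together with \(\operatorname{div}b=2\delta_{0}\) and \(v(t,0)=-(T-t)\):

\[\partial_{t}v=1,\qquad\operatorname{div}(bv)=\partial_{x}\big(x-\operatorname{sgn}(x)(T-t)\big)=1-2(T-t)\delta_{0},\qquad(\operatorname{div}b)v=-2(T-t)\delta_{0},\]

so that \(-\partial_{t}v+b\cdot\partial_{x}v=-1+1=0\) in the sense of distributions. The same computation on \(\{|x|>T-t\}\), combined with the facts that \(u\equiv 0\) on \(\{|x|<T-t\}\) and that \(bu\) is continuous across \(\{|x|=T-t\}\), shows that (3.10) is a distributional solution as well.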
The uniqueness of distributional solutions fails even if \(b\) is continuous. Indeed, if \(0<\alpha<1\), \(b(t,x)=\operatorname{sgn}x\,|x|^{\alpha}\), and \(u_{T}(x)=|x|^{1-\alpha}\), then, arguing as in the example above,
\[u(t,x)=\left(|x|^{1-\alpha}-(1-\alpha)(T-t)\right)_{+} \tag{3.12}\]
and
\[v(t,x)=|x|^{1-\alpha}-(1-\alpha)(T-t) \tag{3.13}\]
are two distributional solutions, and (3.12) is the one corresponding to (3.6). Once again, (3.13) can directly be seen to fail the viscosity supersolution property.
In the first example above, \(u_{T}\) is Lipschitz while \(b\) is discontinuous, and, while \(b\) is continuous in the second example, we take \(u_{T}\) to be non-Lipschitz. This should be compared with the following sufficient criterion for equivalence.
**Theorem 3.6**.: _If \(b\in C([0,T]\times\mathbb{R}^{d})\) satisfies (2.1) and \(u_{T}\in C^{0,1}(\mathbb{R}^{d})\), then there exists a unique distributional solution \(u\in C([0,T],C^{0,1}(\mathbb{R}^{d}))\) given by (3.6)._
Proof.: Let \(\rho\in C_{c}^{\infty}\) be a standard mollifier and, for \(\varepsilon>0\), set \(\rho_{\varepsilon}(x)=\varepsilon^{-d}\rho(\varepsilon^{-1}x)\). Let \(u\in C([0,T],C^{0,1}(\mathbb{R}^{d}))\) be a distributional solution of (3.1) and define \(u_{\varepsilon}=u*\rho_{\varepsilon}\). Then
\[-\partial_{t}u_{\varepsilon}+b\cdot\nabla u_{\varepsilon}=r_{\varepsilon} \quad\text{in }(0,T)\times\mathbb{R}^{d}, \tag{3.14}\]
where
\[r_{\varepsilon}(t,x)=\int_{\mathbb{R}^{d}}\left(b(t,y)-b(t,x)\right)\cdot \nabla u(t,y)\rho_{\varepsilon}(x-y)dy.\]
Note that \(r_{\varepsilon}\in C([0,T]\times\mathbb{R}^{d})\), and \(u_{\varepsilon}\) solves (3.14) in the sense of viscosity solutions. Moreover, the continuity of \(b\) and boundedness of \(\nabla u\) imply that \(r_{\varepsilon}\xrightarrow[\varepsilon\to 0]{}0\) locally uniformly. Standard stability results from the theory of viscosity solutions then imply that the limit \(u\) of \(u_{\varepsilon}\) is the unique viscosity solution of (3.1).
The above result can be extended by studying the interplay between regularity of \(b\) and \(u\).
**Theorem 3.7**.: _Suppose that \(\alpha,\beta\in(0,1]\) satisfy \(\alpha+\beta>1\), \(b\) satisfies (2.1) and \(\sup_{t\in[0,T]}[b(t,\cdot)]_{C^{\alpha}}<\infty\), and \(u\) is a distributional solution of (3.1) such that \(\sup_{t\in[0,T]}[u(t,\cdot)]_{C^{\beta}}<\infty\). Then \(u\) is the unique viscosity solution of (3.1)._
**Remark 3.1**.: _The condition on \(\alpha+\beta\), and, in particular, the strict inequality, is sharp, as the example above with \(b(x)=\operatorname{sgn}x|x|^{\alpha}\) and \(u_{T}(x)=|x|^{1-\alpha}\) shows._
Proof of Theorem 3.7.: Arguing similarly as for Theorem 3.6, it suffices to prove that
\[r_{\varepsilon}=(b\cdot\nabla u)*\rho_{\varepsilon}-b\cdot\nabla(u*\rho_{ \varepsilon})\xrightarrow[\varepsilon\to 0]{}0\quad\text{locally uniformly},\]
where \(\rho_{\varepsilon}\) is a standard mollifier. We note that \(r_{\varepsilon}=M_{\varepsilon}[b(t,\cdot),u(t,\cdot)]\), where the bilinear operator \(M_{\varepsilon}\) is defined, for sufficiently regular \((B,U):\mathbb{R}^{d}\to\mathbb{R}^{d}\times\mathbb{R}\), by
\[M_{\varepsilon}[B,U]=\int_{\mathbb{R}^{d}}\left(B(y)-B(x)\right)\cdot\nabla U (y)\rho_{\varepsilon}(x-y)dy.\]
Standard interpolation arguments give, for some \(C>0\) depending on \(\alpha\) and \(\beta\), for all \((B,U)\in C^{\alpha}\times C^{\beta}\),
\[|M_{\varepsilon}[B,U]|\leq C\varepsilon^{\alpha+\beta-1}[B]_{C^{\alpha}}[U]_{ C^{\beta}}.\]
Therefore \(|r_{\varepsilon}(t,x)|\leq C[b(t,\cdot)]_{C^{\alpha}}[u(t,\cdot)]_{C^{\beta}} \varepsilon^{\alpha+\beta-1}\), and we conclude upon sending \(\varepsilon\to 0\).
### The conservative equation
#### 3.2.1 Duality solutions
For either of the two conservative equations (3.2) and (3.4), the tendency of the backward flow to concentrate on sets of Lebesgue measure zero implies that, even if \(f_{0}\) is absolutely continuous with respect to the Lebesgue measure, \(f(t,\cdot)\) may develop a singular part for \(t>0\).
This presents an obstacle in defining solutions in the sense of distributions, since the product of the discontinuous vector field \(b\) with a singular measure \(f\) may not be well-defined. Instead, we directly define solutions in duality with the nonconservative equation.
**Definition 3.2**.: A map \(f\in C([0,T],\mathcal{M}_{\mathrm{loc,w}})\) is called a solution of (3.2) if, for all \(t\in[0,T]\) and \(g\in C_{c}(\mathbb{R}^{d})\),
\[\int g(x)f(t,dx)=\int g(\phi_{0,t}(x))f_{0}(dx).\]
**Remark 3.2**.: _For \(g\in C_{c}(\mathbb{R}^{d})\) and \(t\in[0,T]\), \((s,x)\mapsto g(\phi_{s,t}(x))\) is the solution of the transport equation (3.1) in \([0,t]\times\mathbb{R}^{d}\) with terminal value \(g\) at time \(t\), and, hence, \(f\) is called the duality solution of (3.2). Equivalently, \(f(t,\cdot)\) is the pushforward by \(\phi_{0,t}\) of the measure \(f_{0}\). When \(f_{0}\) is a probability measure, this means that \(f(t,\cdot)\) is the law at time \(t\) of the stochastic process \(\phi_{0,t}(X_{0})\), where \(X_{0}\) is a random variable with law \(f_{0}\)._
**Remark 3.3**.: _The notion of duality solution can be equivalently formulated in relation to nonconservative equations with a right-hand side7, that is, for \(g\in L^{1}([0,T],C(\mathbb{R}^{d}))\),_
Footnote 7: The theory of viscosity solutions of the terminal value problem for (3.15) can be formulated following the theory of the previous subsection with little change.
\[-\partial_{t}u+b(t,x)\cdot\nabla u=g(t,x)\quad\text{in }(0,T)\times\mathbb{R}^{d}. \tag{3.15}\]
_With this perspective, although the object \(\mathrm{div}(bf)\) does not make sense as a classical distribution, the equation can still be applied to particular singular test functions, namely, solutions of equations like (3.15). Then the pairing_
\[\int_{\mathbb{R}^{d}}u(T,x)f(T,dx)-\int_{\mathbb{R}^{d}}u(0,x)f_{0}(dx)+\int_{ 0}^{T}\int_{\mathbb{R}^{d}}\underbrace{[-\partial_{t}u(t,x)+b(t,x)\cdot\nabla u (t,x)]}_{=g(t,x)}f(t,dx)=0 \tag{3.16}\]
_has a sense, because the singular terms collapse into a continuous function, which may be paired with \(f(t,\cdot)\)._
**Theorem 3.8**.: _There exists a unique duality solution \(f\) of (3.2). If, for \(\varepsilon>0\), \(f^{\varepsilon}\) is the solution corresponding to \(b^{\varepsilon}\) as in (2.3), then, as \(\varepsilon\to 0\), \(f^{\varepsilon}\) converges weakly in the sense of measures to \(f\). If \(1\leq p<\infty\), \(f_{0},g_{0}\in\mathcal{P}_{p}\), and \(f\) and \(g\) are the corresponding duality solutions, then, for some \(C>0\) depending on \(p\) and the constants in (2.1), \(\mathcal{W}_{p}(f_{t},g_{t})\leq C\mathcal{W}_{p}(f_{0},g_{0})\)._
Proof.: The existence and uniqueness of duality solutions is a direct consequence of the definition. Moreover, the duality solution identity implies that, for any \(R>0\) and for some \(C>0\) depending on the constants in (2.1), \(\left\|f(t,\cdot)\right\|_{TV(B_{R})}\leq\left\|f_{0}\right\|_{TV(B_{R+C})}\). For \(0\leq s<t\leq T\) and \(g\in C_{c}(\mathbb{R}^{d})\), we apply the duality formula with the test function \(g\circ\phi_{s,t}\) and obtain the identity
\[\int_{\mathbb{R}^{d}}g(x)f(t,dx)=\int_{\mathbb{R}^{d}}g(\phi_{s,t}(x))f(s,dx).\]
Then, by Lemma 2.2, for some modulus of continuity \(\omega\) depending on the modulus of continuity for \(g\),
\[\left|\int_{\mathbb{R}^{d}}g(x)\left[f(t,dx)-f(s,dx)\right]\right|\leq\omega(|t-s|)\left\|f_{0}\right\|_{TV(B_{\operatorname{supp}g+C})},\]
and we conclude that \(f\in C([0,T],\mathcal{M}_{\mathrm{loc,w}})\).
For \(R>0\), define \(f_{0,R}:=f_{0}\mathrm{I}_{B_{R}}\), and denote by \(f_{R}\) and \(f_{R}^{\varepsilon}\) the duality solutions of (3.2) with respectively \(b\) and \(b^{\varepsilon}\) and initial condition \(f_{0,R}\). It then suffices to prove that, for fixed \(R>0\), \(f_{R}^{\varepsilon}\rightharpoonup f_{R}\) in the sense of measures as \(\varepsilon\to 0\). Indeed, in view of Lemma 2.2, for any \(t\in[0,T]\) and \(g\in C_{c}(\mathbb{R}^{d})\), provided \(R\) is sufficiently large depending on \(\operatorname{supp}g\),
\[\int_{\mathbb{R}^{d}}g(x)f_{R}(t,dx)=\int_{B_{R}}g(\phi_{0,t}(x))f_{0}(dx)= \int_{\mathbb{R}^{d}}g(\phi_{0,t}(x))f_{0}(dx)=\int_{\mathbb{R}^{d}}g(x)f(t,dx),\]
and similarly for \(f^{\varepsilon}\).
Let then \(g\in C_{c}(\mathbb{R}^{d})\) and \(t\in(0,T]\) be fixed, and assume without loss of generality that \(f_{0}\) has compact support in \(B_{R}\) for some \(R>0\). Then, for \(\varepsilon>0\),
\[\int_{\mathbb{R}^{d}}g(x)f^{\varepsilon}(t,dx)=\int g(\phi_{0,t}^{\varepsilon} (x))f_{0}(dx).\]
so that \(\left\|f^{\varepsilon}\right\|_{TV}\leq\left\|f_{0}\right\|_{TV}\). Moreover, if \(\operatorname{supp}g\subset\mathbb{R}^{d}\backslash B_{R+C}\) for some \(C>0\) sufficiently large and independent of \(\varepsilon>0\), again by Lemma 2.2,
\[\int_{\mathbb{R}^{d}}g(x)f^{\varepsilon}(t,dx)=0.\]
We may then take a weakly convergent subsequence of \(f^{\varepsilon}\), with limit point \(F\in L^{\infty}([0,T],\mathcal{M})\), and, sending \(\varepsilon\to 0\), we obtain that \(F\) satisfies the duality solution identity, and therefore \(F=f\).
Choose \(h_{1},h_{2}\in C_{c}(\mathbb{R}^{d})\) such that, for all \(x,y\in\mathbb{R}^{d}\), \(h_{1}(x)+h_{2}(y)\leq|x-y|^{p}\). Then, if \(\gamma\) is any coupling between \(f_{0}\) and \(g_{0}\), we compute, using the duality identity and Lemma 2.2,
\[\int h_{1}(x)f(t,dx)+\int h_{2}(y)g(t,dy) =\iint\left(h_{1}(\phi_{0,t}(x))+h_{2}(\phi_{0,t}(y))\right)\gamma (dx,dy)\] \[\leq C\iint|x-y|^{p}\gamma(dx,dy).\]
Taking the infimum over such \(\gamma\) and supremum over such \(h_{1},h_{2}\), and using the dual formulation of the \(p\)-Wasserstein distance, we arrive at the estimate for the Wasserstein distances.
**Remark 3.4**.: _The final estimate can also be proved using the characterization of \(f\) and \(g\) as laws of certain stochastic processes (see Remark 3.2) and the characterization of the Wasserstein metric in terms of random variables._
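As a simple illustration of the final estimate, take \(d=1\), \(b(t,x)=\operatorname{sgn}x\), and the Dirac data \(f_{0}=\delta_{1}\) and \(g_{0}=\delta_{1+a}\) with \(a>0\). The duality solutions are \(f(t,\cdot)=\delta_{(1-t)_{+}}\) and \(g(t,\cdot)=\delta_{(1+a-t)_{+}}\), so that, for every \(1\leq p<\infty\),
\[\mathcal{W}_{p}(f_{t},g_{t})=\left|(1-t)_{+}-(1+a-t)_{+}\right|\leq a=\mathcal{W}_{p}(f_{0},g_{0}),\]
and the estimate holds in this example with \(C=1\).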
We may repeat the above analysis for the second-order conservative equation (3.4), the only difference being the lack of a finite speed of propagation. Therefore, all measures are taken to have finite mass over \(\mathbb{R}^{d}\). Below, \(\Phi_{t,0}\) is the stochastic flow satisfying (3.7).
**Definition 3.3**.: A map \(f\in C([0,T],\mathcal{M}_{\mathrm{w}})\) is called a solution of (3.4) if, for all \(t\in[0,T]\) and \(g\in C_{b}(\mathbb{R}^{d})\),
\[\int g(x)f(t,dx)=\int\mathbb{E}[g(\Phi_{t,0}(x))]f_{0}(dx).\]
**Remark 3.5**.: _Once again, such solutions are called duality solutions because \(\mathbb{E}[g\circ\Phi_{t,0}]\) is the solution of (3.3) with terminal value \(g\) at time \(t\). If \(f_{0}\) is a probability measure, then \(f(t,\cdot)\) is the law of the stochastic process \(\Phi_{t,0}(X_{0})\), where \(X_{0}\) is a random variable with law \(f_{0}\), independent of the Wiener process \(W\)._
The following may be proved exactly as for Theorem 3.8, now invoking the properties of the stochastic flow described by Lemma 2.5.
**Theorem 3.9**.: _There exists a unique duality solution \(f\) of (3.4). If, for \(\varepsilon>0\), \(f^{\varepsilon}\) is the solution corresponding to \(b^{\varepsilon}\) as in (2.3), then, as \(\varepsilon\to 0\), \(f^{\varepsilon}\) converges weakly in the sense of measures to \(f\). If \(1\leq p\leq\infty\), \(f_{0},g_{0}\in\mathcal{P}_{p}\), and \(f\) and \(g\) are the corresponding duality solutions, then, for some \(C>0\) depending on \(p\) and the constants in (2.1), \(\mathcal{W}_{p}(f_{t},g_{t})\leq C\mathcal{W}_{p}(f_{0},g_{0})\)._
#### 3.2.2 On the failure of renormalization
In view of the formula (3.6), it is immediate that (viscosity) solutions of (3.1) satisfy the renormalization property, that is, if \(u\) is a viscosity solution and \(\beta:\mathbb{R}\to\mathbb{R}\) is smooth, then \(\beta\circ u\) is also a solution. This is related to the existence and uniqueness of the Lipschitz backward flow; indeed, note that, coordinate by coordinate, \(\phi_{t,T}(x)\) is the unique viscosity solution of (3.1) with terminal value \(x\) at time \(T\).
We contrast this with the renormalization property for the forward, conservative problem (3.2). If \(b\) is smooth, then classical computations show that \(f\) is a solution if and only if \(|f|\), \(f_{+}\), and \(f_{-}\) are all solutions. Because \(f(t,\cdot)\) is the pushforward of \(f_{0}\) by the flow \(\phi_{0,t}\), this can be viewed as a generalized form of injectivity for the flow. For general \(b\) satisfying (2.1), the backward flow is not only not injective, but concentrates at null sets. We therefore cannot expect renormalization to hold in general.
As a concrete example, take again \(b(x)=\operatorname{sgn}x\) on \(\mathbb{R}\), and \(f_{0}=\frac{1}{2}\delta_{1}-\frac{1}{2}\delta_{-1}\). Then, for \(t>0\), \(f(t,\cdot)=\frac{1}{2}\delta_{(1-t)_{+}}-\frac{1}{2}\delta_{-(1-t)_{+}}\), which means that \(f(t,\cdot)\equiv 0\) for \(t\geq 1\). However, the solution \(F\) of (3.2) with \(F_{0}=|f_{0}|=\frac{1}{2}\delta_{1}+\frac{1}{2}\delta_{-1}\) is equal to \(F(t,\cdot)=\frac{1}{2}\delta_{(1-t)_{+}}+\frac{1}{2}\delta_{-(1-t)_{+}}\), so that \(F(t,\cdot)=\delta_{0}\) for \(t\geq 1\). Thus \(F_{t}\neq|f_{t}|\) for \(t\geq 1\); indeed, \(|f_{t}|\) does not even conserve mass.
The failure of renormalization holds even if we impose \(f_{0}\in L^{1}\cap L^{\infty}\). For such \(f_{0}\) and for \(b(x)=\operatorname{sgn}x\), we have
\[f(t,dx)=\left[f_{0}(x+t)\mathbf{1}\left\{x>0\right\}+f_{0}(x-t)\mathbf{1}\left\{x<0\right\}\right]dx+\left(\int_{[-t,t]}f_{0}\right)\delta_{0}(dx).\]
Therefore, renormalization fails whenever \(f_{0}\) is odd and does not vanish a.e. on \([-T,T]\).
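To quantify the failure, note that for such \(f_{0}\) the atom at the origin in the formula above vanishes, so that
\[\left\|f(t,\cdot)\right\|_{TV}=\int_{|y|>t}|f_{0}(y)|\,dy,\]
whereas the duality solution with initial datum \(|f_{0}|\) has constant total mass \(\int|f_{0}|\); hence \(|f(t,\cdot)|\) cannot coincide with it as soon as \(\int_{|y|\leq t}|f_{0}(y)|\,dy>0\).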
We present one more counterexample to renormalization in which \(b\in C\) and \(f\in L^{1}\) (as the previous example shows, even if \(f_{0}\in L^{1}\), \(f(t,\cdot)\) may not be absolutely continuous with respect to Lebesgue measure due to the concentration of the flow). Take \(b(t,x)=2\operatorname{sgn}x|x|^{1/2}\). The backward flow is given by \(\phi_{0,t}(x)=\operatorname{sgn}x(|x|^{1/2}-t)_{+}^{2}\) for \((t,x)\in[0,T]\times\mathbb{R}\). For \(f_{0}\in L^{1}\), the duality solution is given by
\[f(t,dx)=\left(\int_{[-t^{2},t^{2}]}f_{0}\right)\delta_{0}(dx)+f_{0}\left( \operatorname{sgn}x(|x|^{1/2}+t)^{2}\right)\frac{|x|^{1/2}+t}{|x|^{1/2}}dx.\]
We then take the odd density \(f_{0}(x)=\operatorname{sgn}x|x|^{1/2}\mathbf{1}_{[-1,1]}(x)\), and the duality solution takes values in \(L^{1}\):
\[f(t,x)=\operatorname{sgn}x\frac{(|x|^{1/2}+t)^{2}}{|x|^{1/2}}\mathbf{1}_{[-(1 -t)_{+}^{2},(1-t)_{+}^{2}]}(x). \tag{3.17}\]
On the other hand, \(|f|\) is not the duality solution, or even a distributional solution, since mass is not conserved. The unique duality solution with initial density \(|f_{0}(x)|=|x|^{1/2}\mathbf{1}_{[-1,1]}(x)\) in this case is given by
\[F(t,dx)=\frac{4t^{3}}{3}\delta_{0}(dx)+\frac{(|x|^{1/2}+t)^{2}}{|x|^{1/2}} \mathbf{1}_{[-(1-t)_{+}^{2},(1-t)_{+}^{2}]}(x)dx.\]
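A direct computation of masses, valid for \(0\leq t\leq 1\), makes the discrepancy explicit:
\[\int_{\mathbb{R}}|f(t,x)|\,dx=2\int_{0}^{(1-t)^{2}}\frac{(x^{1/2}+t)^{2}}{x^{1/2}}\,dx=\frac{4}{3}(1-t^{3}),\qquad\int_{\mathbb{R}}F(t,dx)=\frac{4}{3}t^{3}+\frac{4}{3}(1-t^{3})=\frac{4}{3}=\int_{\mathbb{R}}|f_{0}|,\]
so \(F\) conserves mass while \(|f(t,\cdot)|\) does not.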
**Remark 3.6**.: _One consequence of the commutator lemma of DiPerna and Lions [37, Lemma II.1] is that, if \(f\in L^{p}\) and \(b\in W^{1,q}\) with \(\frac{1}{p}+\frac{1}{q}\leq 1\), then the renormalization property is satisfied. The previous example therefore indicates that these conditions cannot be weakened in general. Indeed, even though \(f_{0}\in L^{1}\cap L^{\infty}\), the solution \(f(t,\cdot)\) given by (3.17) belongs to \(L^{p}\) only for \(p\in[1,2)\) when \(t>0\), and the same is true for \(\partial_{x}b\)._
#### 3.2.3 Equivalence of duality and distributional solutions
We finish this section by studying the setting where \(bf\) can be understood as a distribution, and, therefore, distributional solutions of (3.2) can be considered.
**Theorem 3.10**.: _Assume either that \(b\) is continuous, or that \(f(t,\cdot)\in L^{1}_{\mathrm{loc}}\) for all \(t\in[0,T]\). Then \(f\) is a distributional solution of (3.2) if and only if \(f\) is the unique duality solution._
Proof.: Suppose \(f\) is the unique duality solution. Let \((b^{\varepsilon})_{\varepsilon>0}\) be as in (2.3) and let \(f^{\varepsilon}\) be the corresponding solution of (3.2). For \(\phi\in C^{1}_{c}((0,T)\times\mathbb{R}^{d})\), integrating by parts yields
\[\iint_{(0,T)\times\mathbb{R}^{d}}f^{\varepsilon}(t,x)\left(-\partial_{t}\phi( t,x)+b^{\varepsilon}(t,x)\cdot\nabla\phi(t,x)\right)dtdx=0.\]
In the case that \(b\in C\), we may choose regularizations \(b^{\varepsilon}\) that converge locally uniformly to \(b\). By Theorem 3.8, as \(\varepsilon\to 0\), \(f^{\varepsilon}\) converges weakly in the sense of measures to \(f\), and so we may take \(\varepsilon\to 0\) above to obtain
\[\iint_{(0,T)\times\mathbb{R}^{d}}f(t,dx)\left(-\partial_{t}\phi(t,x)+b(t,x) \cdot\nabla\phi(t,x)\right)dt=0.\]
Otherwise, if \(f\in L^{1}_{\mathrm{loc}}\), it follows that \(f^{\varepsilon}\) converges weakly in \(L^{1}_{\mathrm{loc}}\), and therefore the same is true for \(b^{\varepsilon}f^{\varepsilon}\) by the dominated convergence theorem. We may then take \(\varepsilon\to 0\) in this case as well.
Assume now that \(f\) is an arbitrary distributional solution. We aim to show the duality equality in Definition 3.2, and, by a density argument, it suffices to do so for \(g\in C_{c}(\mathbb{R}^{d})\cap C^{0,1}(\mathbb{R}^{d})\). Let \(\rho_{\varepsilon}\) be a standard mollifier as before and set \(f_{\varepsilon}=f*\rho_{\varepsilon}\). Then \(f_{\varepsilon}\) satisfies
\[\partial_{t}f_{\varepsilon}-\operatorname{div}(bf_{\varepsilon})=\operatorname {div}r_{\varepsilon},\]
where \(r_{\varepsilon}=(bf)*\rho_{\varepsilon}-bf_{\varepsilon}\). For \(t\in(0,T]\), let \(u\) be the unique Lipschitz viscosity solution of the terminal value problem
\[-\partial_{s}u+b\cdot\nabla u=0\quad\text{in }(0,T)\times\mathbb{R}^{d},\quad u (t,\cdot)=g.\]
By the theory in subsection 3.1, \(u(s,x)=g(\phi_{s,t}(x))\) and is Lipschitz continuous with compact support. We then compute
\[\partial_{s}\int f_{\varepsilon}(s,x)u(s,x)dx=-\int r_{\varepsilon}(s,x)\cdot \nabla u(s,x)dx,\]
so that
\[\int f_{\varepsilon}(t,x)g(x)dx-\int(f_{0}*\rho_{\varepsilon})(x)g(\phi_{0,t}( x))dx=-\int_{0}^{t}\int_{\mathbb{R}^{d}}r_{\varepsilon}(s,x)\cdot\nabla u(s,x)dxds.\]
We may then conclude by proving that \(r_{\varepsilon}\xrightarrow{\varepsilon\to 0}0\) in \(L^{1}_{\text{loc}}\).
If \(f\in L^{1}_{\text{loc}}\), this is immediate because, as \(\varepsilon\to 0\), both \((bf)*\rho_{\varepsilon}\) and \(bf_{\varepsilon}\) converge in \(L^{1}_{\text{loc}}\) to \(bf\). If \(b\in C\), then, as \(\varepsilon\to 0\), both \((bf)*\rho_{\varepsilon}\) and \(bf_{\varepsilon}\) converge locally in total variation to \(bf\). It follows that \(r_{\varepsilon}\) converges locally in total variation to \(0\), but, because \(r_{\varepsilon}\in L^{1}\) for all \(\varepsilon>0\), the convergence in \(L^{1}_{\text{loc}}\) is established.
**Remark 3.7**.: _Even in the context of Theorem 3.10, the renormalization property can fail. Indeed, this is the case for the final example in the previous subsubsection, where both \(b\in C\) and \(f\in L^{1}\)._
## 4 The expansive regime
We continue our analysis of transport and continuity equations with vector fields \(b\) satisfying (2.1), and in this section we study the expansive regime. Reversing the sign appearing in front of the velocity field \(b\), the initial value problem for the continuity equation becomes
\[\partial_{t}f+\operatorname{div}(b(t,x)f)=0\quad\text{in }(0,T)\times\mathbb{R}^{d} \quad\text{and}\quad f(0,\cdot)=f_{0}, \tag{4.1}\]
and the corresponding dual terminal value problem for the non-conservative transport equation is
\[\partial_{t}u+b(t,x)\cdot\nabla u=0\quad\text{in }(0,T)\times\mathbb{R}^{d} \quad\text{and}\quad u(T,\cdot)=u_{T}. \tag{4.2}\]
Equivalently, we are studying the time-reversed versions of (3.1) and (3.2) (in this case, \(b\) is replaced with \(b(T-t,\cdot)\)). As such, the relevant direction of the flow (2.2) changes in this context: whereas in the previous section, the compressive, backward flow gave rise to the dual solution spaces \(C\) and \(\mathcal{M}\), here, the expansive, forward flow allows to develop a theory for both (4.1) and (4.2) in Lebesgue spaces. This can also be seen from formal a priori \(L^{p}\) estimates for (4.1) and (4.2), which follow immediately from the lower bound on \(\operatorname{div}b\).
The regime for these equations matches the one studied by Bouchut, James, and Mancini [23], in which emphasis is placed on the fact that distributional solutions \(f\in C([0,T],L^{\infty}_{\text{w.}*}(\mathbb{R}^{d}))\) of (4.1) are not unique in general. Our approach to these equations is similar, in that we use a particular solution of (4.1) to study, by duality, the transport equation (4.2) and the forward ODE flow (2.2). We extend the results of [23] by identifying a "good" solution of (4.1) for any \(f_{0}\in L^{p}_{\text{loc}}\), whose solution operator on \(L^{p}\) is continuous and stable under regularizations in the weak topology of \(C([0,T],L^{p}_{\text{loc}}(\mathbb{R}^{d}))\).
The terminal value problem (4.2) is then understood both in the dual sense and through the lens of renormalization theory. It is this theory that allows, as in [37], to make sense of the forward ODE flow (2.17) as the right-inverse of the backward flow, completing the program initiated in Section 2. As a consequence, we then also obtain the uniqueness of nonnegative distributional solutions of (4.1), and, by extension, a characterization of the "good" solution.
We finish the section by making some remarks about the second-order analogues of (4.1) and (4.2). Unlike in the previous section, we do not have a full solution theory for general second-order equations, unless the ellipticity matrix is uniformly positive (the case which has already been covered by Figalli in [38]) or is degenerate but independent of the space variable.
### The conservative equation
The starting point for the study of the conservative equation (4.1) is that solutions in the sense of distributions are not unique (see also [20], [23, Section 6]). We revisit the example \(b(t,x)=\operatorname{sgn}x\) when \(d=1\). Then \(f(t,x):=\operatorname{sgn}x\mathbf{1}_{|x|\leq t}\) is a nontrivial distributional solution of (4.1) belonging to \(L^{1}\cap L^{\infty}\) with \(f(0,\cdot)=0\). This failure of uniqueness can be seen as a consequence of the contractive nature of the backward flow (2.2), which allows for positive and negative mass to be "cancelled" at time \(0\), only to appear immediately for \(t>0\). The same phenomenon is what leads to the failure of renormalization for the contractive regime for the continuity equation in subsection 3.2. In either case, we remark that this particular \(b\) belongs to \(BV(\mathbb{R})\), while \(\partial_{x}b\) is not absolutely continuous with respect to Lebesgue measure, and so the condition in the work of Ambrosio [4] that \(\operatorname{div}b\in L^{1}_{\text{loc}}\) indeed cannot be weakened in general, if one is to hope for renormalization or uniqueness for the continuity equation.
One strategy is to define solutions of (4.1) by duality with the transport equation (3.1) from the contractive setting. With the theory of Section 3, for \(g\in C^{0,1}_{c}(\mathbb{R}^{d})\), we may define a Lipschitz viscosity solution of the initial value problem
\[\partial_{t}v+b(t,x)\cdot\nabla v=0\quad\text{in }(0,T)\times\mathbb{R}^{d}, \quad v(0,\cdot)=g\]
(because \(\tilde{v}(t,x):=v(T-t,x)\) solves the corresponding terminal value problem (3.1) with velocity \(\tilde{b}(t,x)=b(T-t,x)\)), and then, formally, for \(t>0\), \(\int f(t,x)v(t,x)dx=\int f_{0}(x)g(x)dx\).
The main problem with this approach is that duality does not define unique solutions, again due to the concentration effect of the backward flow. Taking once more \(b(t,x)=\operatorname{sgn}x\), we have, by (3.6),
\[v(t,x)=\begin{cases}g(x-(\operatorname{sgn}x)t),&|x|\geq t,\\ g(0),&|x|\leq t.\end{cases}\]
Therefore, the duality equality fails to give sufficient information to identify \(f\) in the cone \(\{|x|\leq t\}\), in which \(v\) is always constant, regardless of the initial data \(g\). Indeed, the two distributional solutions \(f\equiv 0\) and \(f(t,x)=\operatorname{sgn}x\mathbf{1}\{|x|\leq t\}\) differ in exactly this cone, in which the Jacobian of the backward flow vanishes. It is exactly this observation that led to the notion of "exceptional" solutions of (3.1) and the exceptional set in [23].
We instead identify a "good" solution operator acting on all \(f_{0}\in L^{p}_{\text{loc}}\), \(1\leq p\leq\infty\), by extending the solution formula in the smooth case, which depends on the backward flow studied in Section 2, as well as the corresponding Jacobian. In particular, the "good solution" is distinguished by vanishing whenever the Jacobian does. Our approach differs slightly from that of [23], who work with a general class of "transport flows" that generalize the backward ODE flow. One advantage of our analysis is that we can directly appeal to the various topological properties of the backward flow proved in Section 2.
#### 4.1.1 Representation formula
If \(b\) is Lipschitz, then the solution of (4.1) is given by
\[f(t,x)=f_{0}(\phi_{0,t}(x))J_{0,t}(x), \tag{4.3}\]
where \(\phi_{0,t}(x)\) is the reverse flow defined in Section 2 and \(J_{0,t}(x)=\det(\nabla_{x}\phi_{0,t}(x))\) is the corresponding Jacobian. One way to derive this formula is through the Feynman-Kac formula for the reversed time equation
\[-\partial_{t}\tilde{f}+b(T-t,x)\cdot\nabla\tilde{f}+\operatorname{div}_{x}b( T-t,x)\tilde{f}=0\quad\text{in }(0,T)\times\mathbb{R}^{d},\quad\tilde{f}(T,\cdot)=f_{0},\]
which gives
\[f(t,x)=\tilde{f}(T-t,x)=f_{0}(\phi_{0,t}(x))\exp\left(-\int_{0}^{t}\operatorname{div}b(s,\phi_{0,s}(x))\,ds\right), \tag{4.4}\]
and then \(J_{0,t}(x)=\exp\left(-\int_{0}^{t}\operatorname{div}b(s,\phi_{0,s}(x))\,ds\right)\).
In the general case where \(b\) satisfies (2.1), the formula (4.3) makes sense for arbitrary \(f_{0}\in L^{p}_{\text{loc}}\), \(1\leq p\leq\infty\). We may then use the various results in Section 2 to analyze the stability properties of the solution operator defined by the formula (4.3). We remark in particular that the stability results of Lemma 2.3 depend on the determinant structure of the Jacobian, which is somewhat disguised by the exponential expression in (4.4).
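In the example \(b(t,x)=\operatorname{sgn}x\) discussed above, (4.3) can be written out explicitly: the backward flow and its Jacobian are \(\phi_{0,t}(x)=\operatorname{sgn}x(|x|-t)_{+}\) and \(J_{0,t}(x)=\mathbf{1}\{|x|>t\}\), so that
\[f(t,x)=f_{0}(x-t)\mathbf{1}\{x>t\}+f_{0}(x+t)\mathbf{1}\{x<-t\},\]
which vanishes precisely in the cone \(\{|x|\leq t\}\) where the Jacobian vanishes, and in particular rules out the spurious solution \(\operatorname{sgn}x\mathbf{1}_{|x|\leq t}\) exhibited above.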
**Theorem 4.1**.: _Let \(1\leq p\leq\infty\), assume that \(f_{0}\in L^{p}_{\rm loc}(\mathbb{R}^{d})\), and define \(f\) by (4.3). Then \(f\) is a distributional solution of (4.1). If \(1\leq p<\infty\), \(f\in C([0,T],L^{p}_{\rm loc}(\mathbb{R}^{d}))\), and if \(p=\infty\), \(f\in C([0,T],L^{\infty}_{\rm w\text{-}\star}(\mathbb{R}^{d}))\). There exists a constant \(C>0\) depending only on the assumptions in (2.1) such that, for all \(R>0\),_
\[\left\|f(t,\cdot)\right\|_{L^{p}(B_{R})}\leq C\left\|f_{0}\right\|_{L^{p}(B_{R +C})}. \tag{4.5}\]
_If \((b_{\varepsilon})_{\varepsilon>0}\) are as in (2.3) and \((f_{\varepsilon})_{\varepsilon>0}\) are the corresponding solutions of (4.1), then, as \(\varepsilon\to 0\), \(f_{\varepsilon}\) converges to \(f\) weakly in \(C([0,T],L^{p}_{\rm loc}(\mathbb{R}^{d}))\) if \(1\leq p<\infty\), and weak-\(\star\) in \(L^{\infty}\) if \(p=\infty\)._
Proof.: When \(p=\infty\), the bound (4.5) follows from the \(L^{\infty}\) bounds for the flow and Jacobian in Lemmas 2.2 and 2.3. We prove the bound when \(p<\infty\) for the solutions \(f_{\varepsilon}\) of the equation with \(b_{\varepsilon}\) as in (2.3), for a constant independent of \(\varepsilon\), and then the estimate for \(f\) follows after proving the weak convergence result.
For a constant \(C>0\) independent of \(\varepsilon\), by Lemmas 2.2 and 2.3, we have \(|J_{0,t}|\leq C\) and \(|\phi_{0,t}(x)|\leq R+C\) for \(|x|\leq R\). Then
\[\int_{B_{R}}|f^{\varepsilon}(t,x)|^{p}dx =\int_{B_{R}}|f_{0}(\phi_{0,t}^{\varepsilon}(x))|^{p}J_{0,t}^{\varepsilon}(x)^{p}dx\leq\left\|J_{0,t}^{\varepsilon}\right\|_{\infty}^{p-1}\int_{B_{R}}|f_{0}(\phi_{0,t}^{\varepsilon}(x))|^{p}J_{0,t}^{\varepsilon}(x)dx\leq C\int_{B_{R+C}}|f_{0}(x)|^{p}dx.\]
It suffices to prove the weak convergence of \(f^{\varepsilon}\) when \(p<\infty\) for \(f_{0}\in C_{c}\). In the general case, if \(\tilde{f}_{0}\) is continuous with compact support and we let \(\tilde{f}^{\varepsilon}\) be the solution with \(b^{\varepsilon}\) and \(\tilde{f}_{0}\), we have
\[\left\|f^{\varepsilon}-\tilde{f}^{\varepsilon}\right\|_{C([0,T],L^{p}(B_{R}) )}\leq C\left\|f_{0}-\tilde{f}_{0}\right\|_{L^{p}(B_{R+C})},\]
and we may then choose \(\tilde{f}_{0}\) arbitrarily close to \(f_{0}\) in \(L^{p}_{\rm loc}\).
By Lemma 2.2, as \(\varepsilon\to 0\), \(\phi^{\varepsilon}\to\phi\) uniformly in \([0,T]\times\mathbb{R}^{d}\), and therefore \(f_{0}\circ\phi_{0,t}^{\varepsilon}\) converges uniformly to \(f_{0}\circ\phi_{0,t}\). In view of Lemma 2.3, \(f^{\varepsilon}\) converges weakly in the sense of distributions (and therefore, in the sense of locally bounded Borel measures) to \(f\). Since \(f^{\varepsilon}\) is bounded in \(L^{\infty}([0,T],L^{p}_{\rm loc}(\mathbb{R}^{d}))\), the convergence is actually weak in \(L^{\infty}([0,T],L^{p}_{\rm loc}(\mathbb{R}^{d}))\).
If \(p=\infty\), then, in particular, \(f^{\varepsilon}\in C([0,T],L^{q}_{\rm loc}(\mathbb{R}^{d}))\) for every \(q<\infty\), uniformly in \(\varepsilon\), and we have the convergence as \(\varepsilon\to 0\) in the sense of distributions to \(f\). In this case, \(f\in L^{\infty}_{\rm loc}([0,T]\times\mathbb{R}^{d})\), and so the convergence is weak-\(\star\) in \(L^{\infty}_{\rm loc}\).
Given \(\phi\in C^{1}_{c}((0,T)\times\mathbb{R}^{d})\), integrating by parts gives
\[\iint_{[0,T]\times\mathbb{R}^{d}}f^{\varepsilon}(t,x)\left[\partial_{t}\phi(t,x)+b^{\varepsilon}(t,x)\cdot\nabla\phi(t,x)\right]dxdt=0.\]
As \(\varepsilon\to 0\), the bracketed expression converges a.e. to \(\partial_{t}\phi(t,x)+b(t,x)\cdot\nabla\phi(t,x)\), and so converges strongly in \(L^{q}\) for all \(1\leq q<\infty\) by the dominated convergence theorem. We may therefore send \(\varepsilon\to 0\), using the weak convergence of \(f^{\varepsilon}\), to deduce that \(f\) is a distributional solution. This implies in particular that \(f\in C([0,T],L^{p}_{\rm w}(\mathbb{R}^{d}))\), or \(C([0,T],L^{\infty}_{\rm w\text{-}\star}(\mathbb{R}^{d}))\) if \(p=\infty\).
To show that \(f\in C([0,T],L^{p}_{\rm loc}(\mathbb{R}^{d}))\) when \(p<\infty\), we may again consider \(f_{0}\in C_{c}(\mathbb{R}^{d})\) without loss of generality. Then \(f_{0}\circ\phi_{0,\cdot}\in C([0,T]\times\mathbb{R}^{d})\), while \(J_{0,\cdot}\in C([0,T],L^{1}_{\rm loc}(\mathbb{R}^{d}))\) by Lemma 2.3, and the result follows.
**Remark 4.1**.: _We often call the solution \(f\) defined through (4.3) the "good" solution. In view of the stability results of Theorem 4.1 above, this solution coincides with the notion of reversible solutions in [20, 23]._
The following is immediate from the formula (4.3).
**Corollary 4.1**.: _If \(f\) is a "good" solution of (4.1), then so is \(|f|\)._
Corollary 4.1 is in direct contrast to the continuity equation in the compressive setting of the previous section, where renormalization fails. Its proof depends on the formula for the good solution; indeed, despite the weak stability result in Theorem 4.1, this renormalization property cannot be proved by regularization, since we only have the weak convergence as \(\varepsilon\to 0\) of \(f_{\varepsilon}\) to \(f\). At present, we do not know whether the convergence is strong in \(L^{p}\). This turns out to be equivalent to the strong convergence in \(L^{1}_{\rm loc}\) of the Jacobians, and therefore, in view of Proposition 2.1, we have the following when \(d=1\).
**Theorem 4.2**.: _Assume \(d=1\), \(f_{0}\in L^{p}_{\rm loc}(\mathbb{R})\) for \(p<\infty\), \((b^{\varepsilon})_{\varepsilon>0}\) is as in (2.3), and \(f^{\varepsilon}\) is the corresponding solution of (4.1). Then, as \(\varepsilon\to 0\), \(f^{\varepsilon}\) converges strongly in \(C([0,T],L^{p}_{\rm loc}(\mathbb{R}))\) to \(f\)._
Proof.: Just as in the proof of Theorem 4.1, we may assume without loss of generality that \(f_{0}\in C_{c}(\mathbb{R})\). In that case, \(f^{\varepsilon}\) is bounded in \(L^{1}\) and \(L^{\infty}\), and so the strong \(L^{p}\) convergence reduces to the strong convergence of \(J^{\varepsilon}_{0,\cdot}\) to \(J_{0,\cdot}\) in \(L^{1}_{\rm loc}([0,T]\times\mathbb{R})\) from Proposition 2.1.
#### 4.1.2 Vanishing viscosity approximation
The good solution above also arises from vanishing viscosity limits, that is, the limit as \(\varepsilon\to 0\) of solutions of
\[\partial_{t}f^{\varepsilon}-\frac{\varepsilon^{2}}{2}\Delta f^{\varepsilon}+ \operatorname{div}(b(t,x)f^{\varepsilon})=0\quad\text{in }[0,T]\times\mathbb{R}^{d}\quad\text{and}\quad f^{ \varepsilon}(0,\cdot)=f_{0}, \tag{4.6}\]
which has as its unique solution
\[f^{\varepsilon}(t,x):=\mathbb{E}[f_{0}(\phi^{\varepsilon}_{0,t}(x))J^{ \varepsilon}_{0,t}(x)], \tag{4.7}\]
where now \(\phi^{\varepsilon}\) and \(J^{\varepsilon}\) denote respectively the stochastic flow and Jacobian from (2.23), corresponding to Proposition 2.3.
The proof of the following result follows from Proposition 2.3, and is proved almost exactly as for Theorem 4.1.
**Theorem 4.3**.: _The function \(f^{\varepsilon}\) defined by (4.7) belongs to \(C([0,T],L^{p}_{\rm loc}(\mathbb{R}^{d}))\) if \(1\leq p<\infty\) and \(C([0,T],L^{\infty}_{\rm w.\star}(\mathbb{R}^{d}))\) if \(p=\infty\), and, as \(\varepsilon\to 0\), \(f^{\varepsilon}\) converges weakly in those spaces to \(f\)._
### The nonconservative equation
The next step is the study of the terminal value problem (4.2). Unlike the transport equation (3.1) with velocity \(-b\), which was solved in the space of continuous functions, we cannot define \(L^{p}\) solutions in the distributional sense, as the product \(b\cdot\nabla u=\operatorname{div}(bu)-(\operatorname{div}b)u\) does not make sense when \(\operatorname{div}b\) is merely a measure. Instead, we initially characterize solutions by duality with (4.1), which can be seen as a way of restricting the class of test functions to deal with the singularities in \(b\) (see Remark 3.3).
#### 4.2.1 \(L^{p}\) and \(BV\) estimates
We will first prove a priori \(L^{p}\) and \(BV\) estimates for the solution of (4.2), assuming all the data and solutions are smooth. The \(BV\) estimates in particular are crucial to establishing the _strong_ convergence in \(L^{p}\) of regularized solutions to a unique limit, which will be the duality solution, adjoint to the equation (4.1). The \(BV\) estimate appears already in [23, Lemma 4.4]. We present an alternate proof here, which is similar to the one for second-order equations we prove later.
**Lemma 4.1**.: _Assume \(b\) is smooth and satisfies (2.1), and let \(u\) be a smooth solution of (4.2). Then, for all \(1\leq p\leq\infty\), there exist \(C=C_{p,R}\in L^{1}_{+}([0,T])\) and \(C_{R}>0\) depending only on the bounds in (2.1) such that, for all \(0\leq t\leq T\),_
\[\left\|u(t,\cdot)\right\|_{L^{p}(B_{R})}\leq\exp\left(\int_{0}^{t}C(s)ds \right)\left\|u_{T}\right\|_{L^{p}(B_{R+C})}\]
_and_
\[\left\|u(t,\cdot)\right\|_{BV(B_{R})}\leq\exp\left(\int_{0}^{t}C(s)ds\right) \left\|u_{T}\right\|_{BV(B_{R+C})}.\]
Proof.: We assume that \(u_{T}\) has compact support, and, therefore, in view of the finite speed of propagation property, so does \(u\). The general result for \(L^{p}_{\rm loc}\) and \(BV_{\rm loc}\) is proved similarly.
The \(L^{\infty}\) bound is a consequence of the maximum principle. For \(p<\infty\), we compute
\[\frac{\partial}{\partial t}\int_{\mathbb{R}^{d}}|u(t,x)|^{p}dx=\int_{\mathbb{ R}^{d}}\operatorname{div}b(t,x)|u(t,x)|^{p}dx\geq-C_{0}(t)d\int_{\mathbb{R}^{d}}|u(t,x )|^{p}dx,\]
and the \(L^{p}\) bound follows from Gronwall's inequality.
Now, for \(t\leq T\) and \(x,z\in\mathbb{R}^{d}\), set \(w(t,x,z)=\nabla u(t,x)\cdot z\). Then \(w\) satisfies
\[\partial_{t}w+b\cdot\nabla_{x}w+(z\cdot\nabla)b\cdot\nabla_{z}w=0.\]
Since \(b\) and \(w\) are smooth, the renormalization property holds for this transport equation, and so a simple regularization argument shows, in the sense of distributions,
\[\partial_{t}|w|+b\cdot\nabla_{x}|w|+(z\cdot\nabla)b\cdot\nabla_{z}|w|=0.\]
Define \(\phi(z)=e^{-|z|^{2}}\). Then
\[\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\phi(z)b(t,x)\cdot\nabla_{x}|w(t,x,z )|dxdz=-\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\phi(z)\operatorname{div}b(t,x)|w(t,x,z)|dxdz\]
and
\[\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\phi(z)(z\cdot\nabla)b (t,x)\cdot\nabla_{z}|w(t,x,z)|dxdz\] \[=-\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\left[\nabla\phi(z) \cdot(z\cdot\nabla b(t,x))+\phi(z)\operatorname{div}b(t,x)\right]|w(t,x,z)| dxdz.\]
Therefore, by Lemma 2.1,
\[\partial_{t}\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|w(t,x,z)|\phi(z)dxdz =\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\left[\nabla\phi(z)\cdot(z\cdot\nabla b(t,x))+2\phi(z)\operatorname{div}b(t,x)\right]|w(t,x,z)|dxdz\] \[=\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}2e^{-|z|^{2}}\left[\operatorname{div}b(t,x)-\nabla b(t,x)z\cdot z\right]|w(t,x,z)|dxdz\] \[\geq-2(d-1)C_{0}(t)\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}e^{-|z|^{2}}|w(t,x,z)|dxdz.\]
The result follows from Gronwall's lemma and the fact that
\[\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}e^{-|z|^{2}}|w(t,x,z)|dxdz=c_{0}\int _{\mathbb{R}^{d}}|\nabla u(t,x)|dx,\]
where the constant \(c_{0}=\int_{\mathbb{R}^{d}}e^{-|z|^{2}}|\nu\cdot z|dz\) is independent of the choice of \(|\nu|=1\) by rotational invariance.
#### 4.2.2 Duality solutions
Proceeding by duality with the conservative forward equation, and using the \(BV\)-estimates above, then gives the following.
**Theorem 4.4**.: _Assume \(1\leq p\leq\infty\) and \(u_{T}\in L^{p}_{\rm loc}\). Then there exists a unique function \(u\in C([0,T],L^{p}_{\rm loc}(\mathbb{R}^{d}))\) (or in \(C([0,T],L^{\infty}_{\rm w\text{-}\star}(\mathbb{R}^{d}))\) if \(p=\infty\)) such that, if \((b^{\varepsilon})_{\varepsilon>0}\) is as in (2.3) and \(u^{\varepsilon}\) denotes the corresponding solution of (4.2), then, as \(\varepsilon\to 0\), \(u^{\varepsilon}\) converges strongly in \(C([0,T],L^{p}_{\rm loc}(\mathbb{R}^{d}))\) for \(p<\infty\) and weak-\(\star\) in \(L^{\infty}\) to \(u\). Moreover, the solution map \(u_{T}\mapsto u\) is linear, order-preserving, and continuous in \(L^{p}_{\rm loc}(\mathbb{R}^{d})\). If \(s\in[0,T)\), \(f_{s}\in L^{p^{\prime}}(\mathbb{R}^{d})\) and \(f\in C([s,T],L^{p^{\prime}}(\mathbb{R}^{d}))\) (or \(C([s,T],L^{\infty}_{\rm w\text{-}\star}(\mathbb{R}^{d}))\) if \(p=1\)) is the good solution of (4.1) with initial data \(f(s,\cdot)=f_{s}\), then_
\[\int_{\mathbb{R}^{d}}u(s,x)f_{s}(x)dx=\int_{\mathbb{R}^{d}}u_{T}(x)f(T,x)dx.\]
**Remark 4.2**.: _The function \(u\) corresponds with the notion of duality solution presented in [23] whenever \(u_{T}\) (and therefore \(u(t,\cdot)\) for \(t<T\)) belongs to \(BV_{\rm loc}\)._
Proof.: By Lemma 4.1, \((u^{\varepsilon})_{\varepsilon>0}\) is bounded uniformly in \(C([0,T],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}))\), and so, along a subsequence, converges weakly as \(\varepsilon\to 0\) to some \(u\) satisfying the same bounds.
In order to see that the convergence is strong, note that it suffices, by the \(L^{p}\)-boundedness of the solution operator implied by Lemma 4.1, to assume that \(u_{T}\in C_{c}(\mathbb{R}^{d})\). We then have \(u^{\varepsilon}\) bounded in \(L^{\infty}([0,T],BV(\mathbb{R}^{d}))\) independently of \(\varepsilon\). The identity \(\partial_{t}u^{\varepsilon}=-b^{\varepsilon}\cdot\nabla u^{\varepsilon}\) then implies that, for any \(t_{1}<t_{2}\leq T\) and \(R>0\),
\[\|u^{\varepsilon}(t_{1},\cdot)-u^{\varepsilon}(t_{2},\cdot)\|_{L^{1}(B_{R})} \leq\|b\|_{L^{\infty}(B_{R})}\sup_{t\in[0,T]}\|\nabla u^{\varepsilon}\|_{L^{1 }(B_{R})}\,|t_{1}-t_{2}|.\]
This, along with the uniform \(BV\) estimates, implies that \((u^{\varepsilon})_{\varepsilon>0}\) is precompact in \(C([0,T],L^{1}_{\mathrm{loc}}(\mathbb{R}^{d}))\), and, because of the uniform \(L^{\infty}\)-bound, precompact in \(C([0,T],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}))\) for any \(p\in[1,\infty)\). It therefore follows that any weakly convergent subsequence actually converges strongly.
If \(f^{\varepsilon}\) is the solution of (4.1) with \(f^{\varepsilon}(s,\cdot)=f_{s}\), then classical computations involving integration by parts give
\[\int_{\mathbb{R}^{d}}u^{\varepsilon}(s,x)f_{s}(x)dx=\int_{\mathbb{R}^{d}}u_{ T}(x)f^{\varepsilon}(T,x)dx.\]
Sending \(\varepsilon\to 0\) along a subsequence and using the weak convergence of \(f^{\varepsilon}\) and strong convergence of \(u^{\varepsilon}\) shows that any limit point \(u\) must satisfy the duality identity with \(f\), and is therefore unique. We conclude that the full sequence converges strongly. As before, when \(p=\infty\), we obtain the same result since then also \(u\in C([0,T],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}))\) for any \(p<\infty\).
**Remark 4.3**.: _If \(u_{T}\in BV_{\mathrm{loc}}\), then the duality solution \(u\) of (4.2) satisfies \(\nabla u\in L^{\infty}([0,T],\mathcal{M}_{\mathrm{loc}}(\mathbb{R}^{d}))\). Note, however, that this is still not enough to make sense of \(u\) as a distributional solution, unless \(b\) is continuous._
#### 4.2.3 Renormalization
In Section 3, the renormalization property for solutions of the transport equation (3.1) followed from the formula (3.6). We prove a similar renormalization property for the transport equation (4.2) in the expansive regime. Here, it depends on the strong convergence in \(L^{p}\) of regularizations.
**Theorem 4.5**.: _Let \(1\leq p\leq\infty\) and \(u_{T}\in L^{p}_{\mathrm{loc}}(\mathbb{R}^{d})\), and let \(u\in C([0,T],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}))\) be the duality solution of (4.2). Assume \(\beta:\mathbb{R}\to\mathbb{R}\) is smooth and satisfies \(|\beta(r)|\leq(1+|r|^{\alpha})\) for some \(\alpha>0\). Then \(\beta\circ u\in C([0,T],L^{p/\alpha}_{\mathrm{loc}}(\mathbb{R}^{d}))\) is the duality solution of (4.2) with terminal value \(\beta(u(T,\cdot))=\beta\circ u_{T}\)._
Proof.: The proof is an easy consequence of regularization of \(b\) as in (2.3), and the passage to the limit follows from the strong convergence of \(u^{\varepsilon}\) to \(u\).
### The forward ODE flow
We finally return to the study of the flow (2.2), in particular for the forward direction. A candidate for the object \(\phi_{t,s}(x)\), \(t>s\), a.e. \(x\) was already identified in Proposition 2.2 as the right inverse of the backward flow--note that the full measure set of \(x\in\mathbb{R}^{d}\) depends on \(s\) and \(t\). We now connect this right-inverse with the transport equation (4.2), and exploit the renormalization property to identify \(\phi_{t,s}(x)\) as a regular Lagrangian flow, that is, for a.e. \(x\in\mathbb{R}^{d}\), an absolutely continuous solution of the integral equation for (2.2) with control on the compressibility.
#### 4.3.1 Properties of the right inverse
We first record more properties of the right-inverse of the backward flow identified in Proposition 2.2. From now on, for \(0\leq s\leq t\leq T\), we always denote by \(\phi_{t,s}\) the version of the right-inverse of \(\phi_{s,t}\) which is continuous almost everywhere (such a version is guaranteed to exist by Proposition 2.2).
**Theorem 4.6**.: _For any \(t\in(0,T]\), \((s,x)\mapsto\phi_{t,s}(x)\) is (coordinate-by-coordinate) the duality solution of (4.2) with terminal value \(x\) at time \(t\). For all \(1\leq p<\infty\),_
\[\phi_{\cdot,s}\in C([s,T],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}))\quad\text{and} \quad\phi_{t,\cdot}\in C([0,t],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d})),\]
_and there exists a constant \(C>0\) such that, for all \(0\leq s\leq t\leq T\) and \(x\in\mathbb{R}^{d}\),_
\[|\phi_{t,s}(x)|\leq C(1+|x|)\quad\text{and}\quad\left\|\phi_{t,s}\right\|_{BV_{ \mathrm{loc}}}\leq C.\]
_Finally, if \((b^{\varepsilon})_{\varepsilon>0}\) is as in (2.3) and \(\phi_{t,s}^{\varepsilon}\) is the corresponding forward flow, then, for all \(1\leq p<\infty\),_
\[\lim_{\varepsilon\to 0}\phi_{\cdot,s}^{\varepsilon}=\phi_{\cdot,s}\quad \text{strongly in }C([s,T],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}))\quad\text{and} \quad\lim_{\varepsilon\to 0}\phi_{t,\cdot}^{\varepsilon}=\phi_{t,\cdot} \quad\text{strongly in }C([0,t],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d})),\]
_and the convergence also holds in the weak-\(\star\) sense in \(L^{\infty}_{\mathrm{loc}}\)._
Proof.: For \(\varepsilon>0\) and \((b^{\varepsilon})_{\varepsilon>0}\) as in (2.3), it is standard that, for \(t\in(0,T]\), the vector-valued solution of
\[\frac{\partial u^{\varepsilon}}{\partial s}+b^{\varepsilon}\cdot\nabla u^{ \varepsilon}=0\quad\text{in }(0,t)\times\mathbb{R}^{d},\quad u^{\varepsilon}(t,x)=x\]
is given by \(u^{\varepsilon}(s,x)=\phi_{t,s}^{\varepsilon}(x)\) for \(s\in[0,t]\), where \(\phi^{\varepsilon}\) is the flow corresponding to \(b^{\varepsilon}\). By Theorem 4.4, we have the given convergence statements, as \(\varepsilon\to 0\), of \(\phi^{\varepsilon}\) to the vector valued duality solution \(u\) of (4.2) in \([0,t]\times\mathbb{R}^{d}\) with terminal value \(u(t,\cdot)=x\).
The flow property for smooth \(b^{\varepsilon}\) yields, for \(0\leq s\leq t\leq T\) and \(x\in\mathbb{R}^{d}\), \(\phi_{s,t}^{\varepsilon}(\phi_{t,s}^{\varepsilon}(x))=x\). By Lemma 2.2 and the above strong \(L^{p}\)-convergence statement, we may take \(\varepsilon\to 0\) to obtain \(\phi_{s,t}(u(s,x))=x\), and then, by Proposition 2.2, we must have \(u(s,x)=\phi_{t,s}(x)\). The other statements now follow immediately in view of Theorem 4.4. Note that we are using that, for \(s\in[0,T)\), the map \([s,T]\times\mathbb{R}^{d}\ni(t,x)\mapsto\phi_{t,s}(x)\) is the duality solution of the initial value problem
\[\frac{\partial\tilde{u}}{\partial t}-b(t,x)\cdot\nabla\tilde{u}=0\quad\text{ in }[s,T]\times\mathbb{R}^{d},\quad\tilde{u}(s,x)=x,\]
whose theory can be treated exactly as for (4.2).
#### 4.3.2 The regular Lagrange property
We now observe that there is a representation formula for the duality solution of the transport equation (4.2).
**Theorem 4.7**.: _Let \(1\leq p\leq\infty\). Then there exists a constant \(C>0\) depending only on \(p\) and the constant in (2.1) such that, for all \(F\in L^{p}_{\mathrm{loc}}\cap C\), \(R>0\), and \(0\leq s\leq t\leq T\),_
\[\left\|F\circ\phi_{t,s}\right\|_{L^{p}(B_{R})}\leq C\left\|F\right\|_{L^{p}(B_{ R+C})}. \tag{4.8}\]
_In particular, for any \(A\subset\mathbb{R}^{d}\) with finite Lebesgue measure,_
\[|\{x:\phi_{t,s}(x)\in A\}|\leq C|A|. \tag{4.9}\]
_If \(u_{T}\in L^{p}_{\mathrm{loc}}(\mathbb{R}^{d})\), then the duality solution of (4.2) is given by_
\[u(t,x)=u_{T}(\phi_{T,t}(x)). \tag{4.10}\]
_If \(u_{T}\) has a version which is continuous almost everywhere, then, for \(t<T\), \(u(t,\cdot)\) also has a version that is continuous almost everywhere._
**Remark 4.4**.: _When \(u_{T}\) is not continuous, then (4.10) must be interpreted as the continuous extension of the operator \(u_{T}\mapsto u_{T}\circ\phi_{T,t}\) to \(u_{T}\in L^{p}_{\mathrm{loc}}\), which is well-defined in view of the estimate (4.8)._
**Remark 4.5**.: _The estimate (4.9) is called the regular Lagrange property. It reinforces the fact that \(\phi_{t,s}\) does not concentrate in sets of measure zero._
**Remark 4.6**.: _The propagation of almost-everywhere continuity is a consequence of the same property for the forward flow (Proposition 2.2). Note that it is not true in general that a function \(u\in BV_{\mathrm{loc}}(\mathbb{R}^{d})\) is continuous almost everywhere, unless \(d=1\)._
Proof of Theorem 4.7.: For continuous \(u_{T}\), the representation formula is an immediate consequence of the renormalization property (Theorem 4.5) and Theorem 4.6. The estimate (4.8) then follows from Theorem 4.4, and (4.9) is obtained by taking \(p=1\) and \(F=\mathbf{1}_{A}\).
For the claim about almost everywhere continuity, define
\[A:=\left\{y\in\mathbb{R}^{d}:\text{$u_{T}$ is not continuous at $y$}\right\}.\]
Then \(\left|A\right|=0\), and (4.9) gives, for \(0\leq t<T\),
\[\left|\left\{x\in\mathbb{R}^{d}:\text{$u_{T}$ is not continuous at $\phi_{T,t}(x)$}\right\}\right|=0.\]
It follows that \(u_{T}\) is continuous at \(\phi_{T,t}(x)\) for a.e. \(x\). By Proposition 2.2, \(\phi_{T,t}\) is continuous almost everywhere, and the result follows.
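As an illustration of (4.9) and (4.10), take again \(b(t,x)=\operatorname{sgn}x\) on \(\mathbb{R}\); one checks that the right inverse of the backward flow is \(\phi_{t,s}(x)=x+(t-s)\operatorname{sgn}x\) for every \(x\neq 0\). Then, up to the null set \(\{x=0\}\),
\[\{x:\phi_{t,0}(x)\in A\}=\big((A\cap(t,\infty))-t\big)\cup\big((A\cap(-\infty,-t))+t\big),\qquad|\{x:\phi_{t,0}(x)\in A\}|\leq|A|,\]
so (4.9) holds here with \(C=1\), while (4.10) reads \(u(t,x)=u_{T}(x+(T-t)\operatorname{sgn}x)\) for a.e. \(x\).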
Recalling the duality relationship between (4.1) and (4.2) from Theorem 4.4, we then have the following.
**Corollary 4.2**.: _For any \(1\leq p\leq\infty\) and \(f_{0}\in L^{p}_{\mathrm{loc}}(\mathbb{R}^{d})\), the good solution \(f\) of (4.1) is given at time \(t>0\) by \(\phi_{t,0}^{\#}f_{0}\)._
**Remark 4.7**.: _The regular Lagrange property says that the measure \(\phi_{t,0}^{\#}f_{0}\) is well-defined and absolutely continuous with respect to Lebesgue measure, with a density in \(L^{p}_{\mathrm{loc}}\). If \(f_{0}\) is the density for a probability measure, that is, \(f_{0}\in L^{1}_{+}(\mathbb{R}^{d})\) and \(\int f_{0}=1\), then \(f(t,\cdot)\) is the law at time \(t\) of the stochastic process \(\phi_{t,0}(X)\), where \(X\) is a random variable with density \(f_{0}\)._
A consequence of renormalization and the regular Lagrange property is the fact that the forward flow \(\phi_{t,s}\) solves the ODE (2.2) for a.e. initial \(x\in\mathbb{R}^{d}\). A first step is the following lemma.
**Lemma 4.2**.: _For all \(p\in[1,\infty)\) and \(s\in[0,T)\), \(\left\{(t,x)\mapsto b(t,\phi_{t,s}(x))\right\}\in L^{1}([s,T],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}))\). If \((b^{\varepsilon})_{\varepsilon>0}\) is as in (2.3) and \((\phi^{\varepsilon})_{\varepsilon>0}\) is the corresponding flow, then, for all \(R>0\),_
\[\lim_{\varepsilon\to 0}\int_{s}^{T}\left\|b^{\varepsilon}(t,\phi_{t,s}^{ \varepsilon})-b(t,\phi_{t,s})\right\|_{L^{p}(B_{R})}dt=0.\]
Proof.: The first claim follows from (4.8): there exists \(C>0\) independent of \(s\) and \(R\) such that, for all \(t\in[0,T]\), \(\left\|b(t,\phi_{t,s})\right\|_{L^{p}(B_{R})}\leq C\left\|b(t,\cdot)\right\|_{L ^{p}(B_{R+C})}\).
For \(\delta>0\) and \(0\leq s\leq t\leq T\), we write
\[\left\|b^{\varepsilon}(t,\phi_{t,s}^{\varepsilon})-b(t,\phi_{t,s})\right\|_{L^{p}(B_{R})} \leq\left\|b^{\varepsilon}(t,\phi_{t,s}^{\varepsilon})-b^{\delta}(t,\phi_{t,s}^{\varepsilon})\right\|_{L^{p}(B_{R})}+\left\|b^{\delta}(t,\phi_{t,s}^{\varepsilon})-b^{\delta}(t,\phi_{t,s})\right\|_{L^{p}(B_{R})}\] \[+\left\|b^{\delta}(t,\phi_{t,s})-b(t,\phi_{t,s})\right\|_{L^{p}(B_{R})}.\]
By (4.8), for some \(C>0\) independent of \(\delta\), \(\varepsilon\), \(s\), and \(t\),
\[\left\|b^{\varepsilon}(t,\phi_{t,s}^{\varepsilon})-b^{\delta}(t,\phi_{t,s}^{ \varepsilon})\right\|_{L^{p}(B_{R})}\leq C\left\|b^{\varepsilon}(t,\cdot)-b^{ \delta}(t,\cdot)\right\|_{L^{p}(B_{R+C})}\]
and
\[\left\|b^{\delta}(t,\phi_{t,s})-b(t,\phi_{t,s})\right\|_{L^{p}(B_{R})}\leq C \left\|b^{\delta}(t,\cdot)-b(t,\cdot)\right\|_{L^{p}(B_{R+C})}.\]
The smoothness of \(b^{\delta}\) implies that, for all \(t\in[s,T]\), as \(\varepsilon\to 0\), \(b^{\delta}(t,\phi_{t,s}^{\varepsilon})\) converges a.e. to \(b^{\delta}(t,\phi_{t,s})\). Sending \(\varepsilon\to 0\) and using dominated convergence, we thus have
\[\limsup_{\varepsilon\to 0}\int_{s}^{T}\left\|b^{\varepsilon}(t,\phi_{t,s}^{ \varepsilon})-b(t,\phi_{t,s})\right\|_{L^{p}(B_{R})}dt\leq C\int_{s}^{T} \left\|b^{\delta}(t,\cdot)-b(t,\cdot)\right\|_{L^{p}(B_{R+C})}dt.\]
The proof of the claim is finished upon sending \(\delta\to 0\) and again using dominated convergence.
**Theorem 4.8**.: _Fix \(1\leq p<\infty\) and \(s\in[0,T)\). Then_
\[\{(t,x)\mapsto\phi_{t,s}(x)\}\in L^{p}_{\mathrm{loc}}(\mathbb{R}^{d},W^{1,1}([s,T])),\]
_and, for a.e. \(x\in\mathbb{R}^{d}\), \([s,T]\ni t\mapsto\phi_{t,s}(x)\) is an absolutely continuous solution of_
\[\phi_{t,s}(x)=x+\int_{s}^{t}b(r,\phi_{r,s}(x))dr.\]
_If \((b^{\varepsilon})_{\varepsilon>0}\) satisfy (2.3) and \(\phi^{\varepsilon}\) is the corresponding flow, then, for all \(R>0\),_
\[\lim_{\varepsilon\to 0}\left\|\phi^{\varepsilon}_{\cdot,s}-\phi_{\cdot,s}\right\|_{L^{p}(B_{R},W^{1,1}([s,T]))}=0.\]
_For all \(0\leq r\leq s\leq t\leq T\), \(\phi_{t,r}=\phi_{t,s}\circ\phi_{s,r}\) a.e._
**Remark 4.8**.: _The fact that \(\partial_{t}\phi_{t,\cdot}\in L^{1}\) is due to the fact that we are assuming the weakest possible integrability of \(b\) in the time variable. If \(b\in L^{q}\) for some \(q>1\), then the forward flow belongs to \(W^{1,p}\) for any \(p\leq q\)._
**Remark 4.9**.: _The composition \(\phi_{t,s}\circ\phi_{s,r}\) makes sense in view of (4.8) and the fact that the forward flow takes values in \(L^{p}_{\mathrm{loc}}(\mathbb{R}^{d})\)._
Proof of Theorem 4.8.: For \(\varepsilon>0\), we have \(\partial_{t}\phi^{\varepsilon}_{t,s}(x)=b^{\varepsilon}(t,\phi^{\varepsilon} _{t,s}(x))\). By Lemma 4.2, sending \(\varepsilon\to 0\), we see that the distribution \(\partial_{t}\phi_{t,s}(x)\) satisfies, in the distributional sense, \(\partial_{t}\phi_{t,s}(x)=b(t,\phi_{t,s}(x))\), and therefore, for all \(R>0\),
\[\left\|\int_{s}^{T}|\partial_{t}\phi_{t,s}|dt\right\|_{L^{p}(B_{R})}\leq\int_{s}^{T}\left\|b(t,\phi_{t,s})\right\|_{L^{p}(B_{R})}dt<\infty.\]
The convergence claim and the solvability of the ODE follow immediately in view of the fact that \(\phi^{\varepsilon}_{s,s}(x)=\phi_{s,s}(x)=x\) for all \(\varepsilon>0\) and \(x\in\mathbb{R}^{d}\).
To prove the last claim, we note that the equality \(\phi_{r,t}\circ\phi_{t,r}=\mathrm{Id}\) holds as functions in \(L^{p}_{\mathrm{loc}}\), and, in view of the flow property of the backward flow,
\[\phi_{r,t}\circ(\phi_{t,s}\circ\phi_{s,r})=\phi_{r,s}\circ\phi_{s,t}\circ\phi _{t,s}\circ\phi_{s,r}=\phi_{r,s}\circ\phi_{s,r}=\mathrm{Id}\,.\]
It follows from Proposition 2.2 that \(\phi_{t,r}=\phi_{t,s}\circ\phi_{s,r}\) a.e., as desired.
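In the same example \(b(t,x)=\operatorname{sgn}x\), the conclusions of Theorem 4.8 can be checked by hand: since \(\operatorname{sgn}(\phi_{r,s}(x))=\operatorname{sgn}x\) for every \(r\geq s\) and \(x\neq 0\),
\[x+\int_{s}^{t}b\big(r,\phi_{r,s}(x)\big)dr=x+(t-s)\operatorname{sgn}x=\phi_{t,s}(x),\]
and the composition rule \(\phi_{t,r}=\phi_{t,s}\circ\phi_{s,r}\) holds off the null set \(\{x=0\}\).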
We recall that Proposition 2.2 implies that any right-inverse of the backward flow is determined uniquely almost everywhere. We remark here that this property actually follows from the duality between the transport and continuity equations.
**Theorem 4.9**.: _Assume \(\psi\in C([0,t],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}))\) satisfies \(\phi_{s,t}(\psi_{s}(x))=x\) for all \(s\in[0,t]\), for a.e. \(x\in\mathbb{R}^{d}\). Then \(\psi=\phi_{t,\cdot}\)._
Proof.: It suffices to show that \(u(s,x)=\psi_{s}(x)\) is the unique (vector-valued) duality solution of (4.2) with terminal data equal to \(x\) at time \(t\).
Fix \(g\in C_{c}(\mathbb{R}^{d})\). For a.e. \(x\in\mathbb{R}^{d}\), if \(y=\psi_{s}(x)\), we have \(\phi_{s,t}(y)=x\) by assumption. Therefore, the change of variables formula yields
\[\int_{\mathbb{R}^{d}}g(x)\psi_{s}(x)dx=\int_{\mathbb{R}^{d}}g(\phi_{s,t}(y))yJ_{s,t}(y)dy=\int_{\mathbb{R}^{d}}f(t,y)ydy,\]
where \(f\) is the good solution of the forward continuity equation with initial condition \(g\) at time \(s\). This is exactly the duality identity of Theorem 4.4 characterizing the solution with terminal value \(x\) at time \(t\), and the claim follows.
**Remark 4.10**.: _A corresponding result characterizing \(\phi_{\cdot,s}\) on \([s,T]\) follows in exactly the same way, by considering the duality between the IVP and TVP for, respectively, an appropriate transport and continuity equation._
**Remark 4.11**.: _The uniqueness result above demonstrates that the right-inverse property is a crucial property of the forward flow. In other words, it implies that \(\phi_{t,s}\) solves the ODE, that it solves the transport PDE in the duality sense, and that it has the regularity properties laid out in Theorems 4.7 and 4.8._
### Characterizations
We now present alternative ways to characterize the solutions of the forward continuity and backward transport equations identified above. Although the PDE (4.2) does not make sense as a distribution, we nevertheless can characterize solutions in a PDE sense through the use of sup- and inf-convolutions. The propagation of almost-everywhere continuity proved in Theorem 4.7 is a crucial ingredient.
By using this characterization in duality with the conservative equation, we then show that nonnegative distributional solutions of (4.1) are unique, and therefore equal to the "good" solution identified by the formula (4.3). As a consequence, we finally conclude with the uniqueness of regular Lagrangian flows, forward in time, of the ODE (2.2).
#### 4.4.1 The nonconservative equation: \(\sup\) and \(\inf\) convolutions
Given \(\delta>0\) and \(u\in L^{\infty}(\mathbb{R}^{d})\), we define the sup- and inf-convolutions
\[u^{\delta}(x):=\operatorname*{ess\,sup}_{y\in\mathbb{R}^{d}}\left\{u(y)-\frac {1}{2\delta}|x-y|^{2}\right\}\]
and
\[u_{\delta}(x):=\operatorname*{ess\,inf}_{y\in\mathbb{R}^{d}}\left\{u(y)+\frac {1}{2\delta}|x-y|^{2}\right\}.\]
These regularizations are common in the theory of viscosity solutions, or generally for equations satisfying a maximum principle in spaces of continuous functions. The supremum and infimum must be essential, because \(u\) is only defined almost everywhere.
**Lemma 4.3**.: _Assume that \(u\in L^{\infty}(\mathbb{R}^{d})\) is continuous almost everywhere. Then, for all \(\delta>0\), \(u_{\delta},u^{\delta}\) are globally Lipschitz with constant_
\[(\operatorname*{ess\,sup}u-\operatorname*{ess\,inf}u)^{1/2}\delta^{-1/2},\]
_and_
\[u_{\delta}\leq u\leq u^{\delta}\quad\text{a.e.}\]
_As \(\delta\to 0\), \(u^{\delta}\) decreases to \(u\) and \(u_{\delta}\) increases to \(u\) a.e. Finally, the \(\operatorname*{ess\,sup}\) and \(\operatorname*{ess\,inf}\) in the definitions of \(u^{\delta}\) and \(u_{\delta}\) can be restricted to respectively \(y\in B_{R^{\delta}(x)}(x)\) and \(y\in B_{R_{\delta}(x)}(x)\), where_
\[R^{\delta}(x)=2(u^{2\delta}(x)-u^{\delta}(x))^{1/2}\delta^{1/2}.\]
_and_
\[R_{\delta}(x)=2(u_{\delta}(x)-u_{2\delta}(x))^{1/2}\delta^{1/2}.\]
Proof.: Fix \(x\in\mathbb{R}^{d}\) and \(r>0\). We thus have
\[u^{\delta}(x)\geq\operatorname*{ess\,sup}_{y\in B_{r}(x)}u(y)-\frac{r^{2}}{2 \delta}.\]
Sending \(r\to 0\), we see that \(u^{\delta}(x)\geq u(x)\) whenever \(u\) is continuous at \(x\), and therefore \(u^{\delta}\geq u\) a.e. Similarly, \(u_{\delta}\leq u\) a.e.
We now observe that, if \(R>(\operatorname*{ess\,sup}u-\operatorname*{ess\,inf}u)^{1/2}\), then, for a.e. \(y\notin B_{R\delta^{1/2}}(x)\),
\[u(y)-\frac{|x-y|^{2}}{2\delta}\leq\operatorname*{ess\,sup}u-R^{2}<\operatorname* {ess\,inf}u\leq u^{\delta}(x).\]
By also using a similar argument for \(u_{\delta}\), we see that
\[u^{\delta}(x):=\operatorname*{ess\,sup}_{|y-x|\leq R\delta^{1/2}}\left\{u(y)-\frac{1}{2\delta}|x-y|^{2}\right\}\]
and
\[u_{\delta}(x):=\operatorname*{ess\,inf}_{|y-x|\leq R\delta^{1/2}}\left\{u(y)+\frac{1}{2\delta}|x-y|^{2}\right\}.\]
It is then straightforward to see that \(u^{\delta}\) and \(u_{\delta}\) are respectively decreasing and increasing pointwise as \(\delta\) decreases to \(0\), and converge whenever \(u\) is continuous at \(x\) (and thus a.e.) to \(u(x)\).
For fixed \(x\in\mathbb{R}^{d}\), \(\delta>0\), and \(\eta>0\), define
\[A_{\delta,\eta}(x):=\left\{y\in\mathbb{R}^{d}:u(y)-\frac{|x-y|^{2}}{2\delta}>u^ {\delta}(x)-\eta\right\}.\]
Then, by definition, \(A_{\delta,\eta}(x)\) has nonzero Lebesgue measure. Therefore, for any \(x^{\prime}\in\mathbb{R}^{d}\), there exists \(y\in A_{\delta,\eta}(x)\) such that
\[u(y)-\frac{|x^{\prime}-y|^{2}}{2\delta}\leq u^{\delta}(x^{\prime}),\]
and so
\[u^{\delta}(x)-u^{\delta}(x^{\prime})\leq\frac{|x^{\prime}-y|^{2}}{2\delta}- \frac{|x-y|^{2}}{2\delta}+\eta\leq\frac{R}{\delta^{1/2}}|x^{\prime}-x|+\frac{| x^{\prime}-x|^{2}}{\delta}+\eta.\]
Switching the roles of \(x\) and \(x^{\prime}\) and using the fact that \(\eta\) was arbitrary, we see that, for all \(x\in\mathbb{R}^{d}\),
\[\limsup_{x^{\prime}\to x}\frac{|u^{\delta}(x^{\prime})-u^{\delta}(x)|}{|x^{ \prime}-x|}\leq\frac{R}{\delta^{1/2}}.\]
We may then let \(R\) decrease down to \((\operatorname{ess}\sup u-\operatorname{ess}\inf u)^{1/2}\), and the same proof for \(u_{\delta}\) holds.
For any \(\eta>0\) and a.e. \(y\in A_{\delta,\eta}(x)\),
\[u^{2\delta}(x)\geq u(y)-\frac{|x-y|^{2}}{4\delta}>u^{\delta}(x)+\frac{|x-y|^{2}}{4\delta}-\eta,\]
and so
\[|y-x|\leq 2(u^{2\delta}(x)-u^{\delta}(x)+\eta)^{1/2}\delta^{1/2}.\]
Therefore, for a.e. \(y\) such that \(|y-x|>R^{\delta}(x)\), we must have \(u(y)-\frac{|x-y|^{2}}{2\delta}<u^{\delta}(x)\), and the statement about restricting the \(\operatorname{ess}\sup\) follows. The corresponding result for \(u_{\delta}\) is proved in the same way.
**Theorem 4.10**.: _Assume \(u\in C([0,T],L^{1}_{\operatorname{loc}}(\mathbb{R}^{d}))\cap L^{\infty}([0,T]\times\mathbb{R}^{d})\) is continuous almost everywhere and \(u(T,\cdot)=u_{T}\in L^{\infty}(\mathbb{R}^{d})\). Then \(u\) is the duality solution of (4.2) if and only if there exist \(r^{\delta},r_{\delta}\in L^{1}_{\operatorname{loc}}([0,T]\times\mathbb{R}^{d})\) such that \(\lim_{\delta\to 0}r^{\delta}=\lim_{\delta\to 0}r_{\delta}=0\) in \(L^{1}_{\operatorname{loc}}\), and the \(\sup\)- and \(\inf\)-convolutions_
\[u^{\delta}(t,x):=\operatorname{ess}\sup_{y\in\mathbb{R}^{d}}\left\{u(t,y)- \frac{1}{2\delta}|x-y|^{2}\right\}\]
_and_
\[u_{\delta}(t,x):=\operatorname{ess}\inf_{y\in\mathbb{R}^{d}}\left\{u(t,y)+ \frac{1}{2\delta}|x-y|^{2}\right\}\]
_satisfy in the sense of distributions on \([0,T]\times\mathbb{R}^{d}\) the inequalities_
\[\frac{\partial u^{\delta}}{\partial t}+b(t,x)\cdot\nabla u^{\delta}\leq r^{\delta}(t,x)\quad\text{and}\quad\frac{\partial u_{\delta}}{\partial t}+b(t,x)\cdot\nabla u_{\delta}\geq-r_{\delta}(t,x).\]
Proof.: Assume first that the \(\sup\)- and \(\inf\)-convolutions have the stated properties. For standard mollifiers \((\rho_{\eta})_{\eta>0}\) on \(\mathbb{R}\), define \(u^{\delta}_{\eta}(t,x)=(u^{\delta}(\cdot,x)*_{t}\rho_{\eta})(t)\) and \(u_{\delta,\eta}(t,x)=(u_{\delta}(\cdot,x)*_{t}\rho_{\eta})(t)\). Then, by Lemma 4.3, \(u^{\delta}_{\eta}\) and \(u_{\delta,\eta}\) are Lipschitz continuous on \([0,T]\times\mathbb{R}^{d}\), and satisfy a.e. in \([0,T]\times\mathbb{R}^{d}\)
\[\frac{\partial u^{\delta}_{\eta}}{\partial t}+b(t,x)\cdot\nabla u^{\delta}_{ \eta}\leq r^{\delta}_{\eta}(t,x)\quad\text{and}\quad\frac{\partial u_{\delta, \eta}}{\partial t}+b(t,x)\cdot\nabla u_{\delta,\eta}\geq-r_{\delta,\eta}(t,x),\]
where
\[r^{\delta}_{\eta}(t,x)=(r^{\delta}(\cdot,x)*_{t}\rho_{\eta})(t)+\int_{\mathbb{ R}}(b(t,x)-b(s,x))\cdot\nabla u^{\delta}(s,x)\rho_{\eta}(s-t)ds\]
\[r_{\delta,\eta}(t,x)=(r_{\delta}(\cdot,x)\ast_{t}\rho_{\eta})(t)+\int_{\mathbb{R}}(b (t,x)-b(s,x))\cdot\nabla u_{\delta}(s,x)\rho_{\eta}(s-t)ds.\]
The (local) boundedness of \(b\), \(\nabla u_{\delta}\), and \(\nabla u^{\delta}\) then allow us to invoke the dominated convergence theorem to say that, for fixed \(\delta\), \(\lim_{\eta\to 0}r_{\eta}^{\delta}=r^{\delta}\) and \(\lim_{\eta\to 0}r_{\delta,\eta}=r_{\delta}\) in \(L^{1}_{\mathrm{loc}}\).
Now let \(f_{0}\in C_{c}(\mathbb{R}^{d})\) be nonnegative and let \(f\) be the "good" solution of (4.1). In view of the nonnegativity of \(J\), \(f\) given by (4.3) is nonnegative on \([0,T]\times\mathbb{R}^{d}\), and the bounds for the backward flow in Lemma 2.2 imply that \(f\) has compact support in \([0,T]\times\mathbb{R}^{d}\). By Theorem 4.1, \(f\) is a distributional solution, and therefore
\[\int_{\mathbb{R}^{d}}f(T,x)u_{\eta}^{\delta}(T,x)dx-\int_{\mathbb{ R}^{d}}f_{0}(x)u_{\eta}^{\delta}(0,x)dx =\int_{0}^{T}\int_{\mathbb{R}^{d}}f(t,x)\left[\partial_{t}u^{ \delta,\eta}(t,x)+b(t,x)\cdot\nabla u^{\delta,\eta}(t,x)\right]dxdt\] \[\leq\int_{0}^{T}\int_{\mathbb{R}^{d}}f(t,x)r_{\eta}^{\delta}(t,x )dxdt.\]
Sending first \(\eta\to 0\) and then \(\delta\to 0\), using Lemma 4.3 and the dominated convergence theorem, we conclude that
\[\int_{\mathbb{R}^{d}}f(T,x)u_{T}(x)dx\leq\int_{\mathbb{R}^{d}}f_{0}(x)u(0,x)dx.\]
Arguing similarly with \(u_{\delta,\eta}\) as a test function, we achieve the opposite inequality. By linearity, the duality identity holds for any \(f_{0}\in L^{\infty}\) with bounded support, and we conclude that \(u\) is the unique duality solution.
Assume now conversely that \(u\) is the duality solution. Let \((b^{\varepsilon})_{\varepsilon>0}\) be as in (2.3), let \(u^{\varepsilon}\) be the corresponding solution, and define
\[u^{\varepsilon,\delta}(t,x):=\sup_{y\in\mathbb{R}^{d}}\left\{u^{\varepsilon}( t,y)-\frac{1}{2\delta}|x-y|^{2}\right\}\]
and
\[u^{\varepsilon}_{\delta}(t,x):=\inf_{y\in\mathbb{R}^{d}}\left\{u^{\varepsilon }(t,y)+\frac{1}{2\delta}|x-y|^{2}\right\}.\]
By Lemma 4.3, for fixed \(\delta>0\), \(u^{\varepsilon,\delta}\) and \(u^{\varepsilon}_{\delta}\) are Lipschitz continuous in the space variable, uniformly over \([0,T]\times\mathbb{R}^{d}\) and \(\varepsilon>0\). Moreover, the sup and inf are actually a max and min, and may be restricted to
\[|y-x|\leq(\max u_{T}-\min u_{T})^{1/2}\delta^{1/2}\]
(note that we have used the maximum principle for the transport equation to control the maximum and minimum of \(u^{\varepsilon}\)). We may alternatively restrict the \(y\) for which the maximum in the definition of \(u^{\varepsilon,\delta}(t,x)\) is attained to satisfy
\[|y-x|\leq 2(u^{\varepsilon,2\delta}(t,x)-u^{\varepsilon,\delta}(t,x))^{1/2} \delta^{1/2}, \tag{4.11}\]
and the minimum in the definition of \(u^{\varepsilon}_{\delta}\) is attained by \(y\) satisfying
\[|y-x|\leq 2(u^{\varepsilon}_{\delta}(t,x)-u^{\varepsilon}_{2\delta}(t,x))^{1/2} \delta^{1/2}. \tag{4.12}\]
Standard properties of envelopes then give the identities, for any \((t,x)\in[0,T]\times\mathbb{R}^{d}\),
\[\frac{\partial u^{\varepsilon,\delta}}{\partial t}(t,x)=\frac{\partial u^{ \varepsilon}}{\partial t}(t,y)\quad\text{and}\quad\nabla u^{\varepsilon, \delta}(t,x)=\nabla u^{\varepsilon}(t,y)=\frac{y-x}{\delta}\]
for some \(y\) satisfying (4.11). Therefore
\[\partial_{t}u^{\varepsilon,\delta}(t,x)=-b^{\varepsilon}(t,y)\cdot\nabla u^{ \varepsilon,\delta}(t,x),\]
from which we deduce that \(u^{\varepsilon,\delta}\) is uniformly Lipschitz continuous in the time variable over \([0,T]\times B_{R}\) for any \(R>0\), independently of \(\varepsilon\). Further developing the equality gives
\[\begin{split}\frac{\partial u^{\varepsilon,\delta}}{\partial t}(t, x)+b^{\varepsilon}(t,x)\cdot\nabla u^{\varepsilon,\delta}(t,x)&=\frac{ \partial u^{\varepsilon}}{\partial t}(t,y)+b^{\varepsilon}(t,x)\cdot\nabla u^ {\varepsilon}(t,y)\\ &=-(b^{\varepsilon}(t,x)-b^{\varepsilon}(t,y))\cdot\frac{x-y}{ \delta}\\ &\leq C_{0}(t)\frac{|x-y|^{2}}{\delta}\leq 4C_{0}(t)(u^{\varepsilon,2 \delta}(t,x)-u^{\varepsilon,\delta}(t,x)).\end{split} \tag{4.13}\]
We similarly have that \(u^{\varepsilon}_{\delta}\) is Lipschitz continuous in the time variable, locally in space, uniformly over \(\varepsilon>0\), and
\[\frac{\partial u^{\varepsilon}_{\delta}}{\partial t}(t,x)+b^{\varepsilon}(t,x )\cdot\nabla u^{\varepsilon}_{\delta}(t,x)\geq-4C_{0}(t)(u^{\varepsilon}_{ \delta}(t,x)-u^{\varepsilon}_{2\delta}(t,x)). \tag{4.14}\]
We now claim that, as \(\varepsilon\to 0\), \(u^{\varepsilon,\delta}\) and \(u^{\varepsilon}_{\delta}\) converge pointwise to respectively \(u^{\delta}\) and \(u_{\delta}\), and then, by the uniform-in-\(\varepsilon\) Lipschitz regularity, the convergence is locally uniform. To see this, fix \(x\in\mathbb{R}^{d}\) and \(\eta>0\), and let \(A\subset\mathbb{R}^{d}\) be a set of positive measure such that, for all \(y\in A\),
\[u^{\delta}(t,x)\leq u(t,y)-\frac{|x-y|^{2}}{2\delta}+\eta.\]
We then have, for all \(y\in A\),
\[u^{\delta,\varepsilon}(t,x)\geq u^{\varepsilon}(t,y)-\frac{|x-y|^{2}}{2\delta}.\]
For at least one such \(y\), we then have \(u^{\varepsilon}(t,y)\xrightarrow{\varepsilon\to 0}u(t,y)\), and we thus have
\[\limsup_{\varepsilon\to 0}\left(u^{\delta}(t,x)-u^{\delta,\varepsilon}(t,x) \right)\leq\eta.\]
It follows that \(\limsup_{\varepsilon\to 0}\left(u^{\delta}(t,x)-u^{\delta,\varepsilon}(t,x) \right)\leq 0\) since \(\eta\) was arbitrary.
Now, there exists a full measure set \(B\subset\mathbb{R}^{d}\) such that, for all \(y\in B\),
\[u^{\delta}(t,x)\geq u(t,y)-\frac{|x-y|^{2}}{2\delta}\quad\text{and}\quad\lim_{ \varepsilon\to 0}u^{\varepsilon}(t,y)=u(t,y).\]
In view of the continuity of \(u^{\varepsilon}(t,\cdot)\), there exists a bounded (independently of \(\varepsilon\)) sequence \((y_{n})_{n\in\mathbb{N}}\subset B\) such that
\[\rho_{n}:=u^{\delta,\varepsilon}(t,x)-\left\{u^{\varepsilon}(t,y_{n})-\frac{ |x-y_{n}|^{2}}{2\delta}\right\}\]
satisfies \(\lim_{n\to\infty}\rho_{n}=0\). Therefore, for all \(n\),
\[u^{\delta,\varepsilon}(t,x)-u^{\delta}(t,x)\leq u^{\varepsilon}(t,y_{n})-u(t, y_{n})+\rho_{n}.\]
Sending \(\varepsilon\to 0\) gives \(\limsup_{\varepsilon\to 0}(u^{\delta,\varepsilon}(t,x)-u^{\delta}(t,x))\leq\rho_{n}\), and the proof of pointwise convergence is finished upon sending \(n\to\infty\). The exact same argument can be used for the pointwise convergence of \(u^{\varepsilon}_{\delta}\) to \(u_{\delta}\).
It then follows that, for fixed \(\delta\), as \(\varepsilon\to 0\), \(\nabla u^{\varepsilon,\delta}\) and \(\nabla u^{\varepsilon}_{\delta}\) converge weak-\(\star\) in \(L^{\infty}\) to \(\nabla u^{\delta}\) and \(\nabla u_{\delta}\) respectively, while \(b^{\varepsilon}\) converges in \(L^{1}_{\mathrm{loc}}\) to \(b\). We may then take \(\varepsilon\to 0\) in (4.13) and (4.14) to obtain the distributional inequalities
\[\frac{\partial u^{\delta}}{\partial t}(t,x)+b(t,x)\cdot\nabla u^{\delta}(t,x) \leq 4C_{0}(t)(u^{2\delta}(t,x)-u^{\delta}(t,x))=:r^{\delta}(t,x)\]
and
\[\frac{\partial u_{\delta}}{\partial t}(t,x)+b(t,x)\cdot\nabla u_{\delta}(t,x )\geq-4C_{0}(t)(u_{\delta}(t,x)-u_{2\delta}(t,x))=:-r_{\delta}(t,x).\]
By Lemma 4.3 and the almost-everywhere continuity of \(u\), the right-hand sides of both inequalities converge a.e. to \(0\) as \(\delta\to 0\), and, by the uniform boundedness in \(\delta\) of \(u^{\delta}\) and \(u_{\delta}\) and the dominated convergence theorem, \(r^{\delta}\) and \(r_{\delta}\) both converge in \(L^{1}_{\mathrm{loc}}\) to \(0\) as \(\delta\to 0\).
#### 4.4.2 The conservative equation: uniqueness of nonnegative solutions
We observe that, in the first implication in the proof of Theorem 4.10, it was proved that \(u\) was a duality solution by proving the duality identity relative to a "good" nonnegative solution. However, it was only explicitly used that \(f\) was a distributional solution. Therefore, after having proved the equivalence in Theorem 4.10, we arrive at the following.
**Theorem 4.11**.: _Suppose that \(f\in C([0,T],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}))\) is a distributional solution of (4.1) and \(f\geq 0\). Then \(f(t,x)=f(0,\phi_{0,t}(x))J_{0,t}(x)\)._
Proof.: Fix \(t>0\) and \(v\in C_{c}(\mathbb{R}^{d})\), and let \(u\in C([0,t],L^{1}_{\mathrm{loc}}(\mathbb{R}^{d}))\cap L^{\infty}([0,t]\times \mathbb{R}^{d})\) be the duality solution of (4.2) with terminal data \(v\) at time \(t\). Then, by Theorem 4.7, \(u\) is continuous almost everywhere in \([0,t]\times\mathbb{R}^{d}\). Arguing exactly as in the first part of Theorem 4.10, using the nonnegativity of \(f\), we arrive at the equality
\[\int_{\mathbb{R}^{d}}f(t,x)v(x)dx=\int_{\mathbb{R}^{d}}f(0,x)u(0,x)dx.\]
Since \(v\) was arbitrary, it follows from the definition of duality solutions that \(f(t,x)\) must be given by (4.3).
We then have the following corollary about characterizing the good solution even when \(f\) is signed:
**Corollary 4.3**.: _A function \(f\in C([0,T],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}))\) is the good solution of (4.1) if and only if \(f\) and \(|f|\) are both solutions in the sense of distributions._
Proof.: That this property is satisfied by the good solution was already pointed out (Corollary 4.1). Suppose now that \(f\) and \(|f|\) are both distributional solutions. It follows that \(f_{+}=\frac{1}{2}(f+|f|)\) and \(f_{-}=\frac{1}{2}(|f|-f)\) are distributional solutions, and, since \(f_{+}\geq 0\) and \(f_{-}\geq 0\), each of them is the good solution for its own initial datum by Theorem 4.11. Therefore \(f=f_{+}-f_{-}\) is the good solution by linearity.
#### 4.4.3 Uniqueness of regular Lagrangian flows
We can finally establish the uniqueness of forward regular Lagrangian flows for the ODE (2.2).
**Theorem 4.12**.: _For every \(s\in[0,T]\) and almost every \(x\in\mathbb{R}^{d}\), \(\phi_{t,s}(x)\) is the unique absolutely continuous solution of (2.2)._
Proof.: This is a consequence of Theorem 4.11 and the superposition principle of Ambrosio [5, Theorem 3.1].
### Some remarks for second order equations
We next investigate the second-order analogues of (4.1) and (4.2). As mentioned earlier, we are not able to treat the most general case in which \(\sigma\) is a regular function of \(x\). This is due to the fact that Lemma 2.5 only gives regularity of the backward stochastic flow in \(C^{0,1-\varepsilon}\) for \(0<\varepsilon<1\). As a consequence, defining the Jacobian and using it to analyze the right-inverse of the flow is not possible in general. Our results in this case are limited to stochastic flows for which the coefficient \(\sigma\) in front of the Wiener process is constant in the space variable. The generalization to regular but nonconstant \(\sigma\) will be the subject of future work.
#### 4.5.1 The expansive stochastic flow with constant noise coefficient
The stochastic analogue of the forward flow (2.2) is
\[d_{t}\Phi_{t,s}(x)=b(t,\Phi_{t,s}(x))dt+\sigma(t,\Phi_{t,s}(x))dW_{t},\quad t \in[s,T],\quad\Phi_{s,s}(x)=x, \tag{4.15}\]
where \(\sigma:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d\times m}\) is some matrix-valued map. As we shall see, this general setting is out of reach at the moment, and we thus assume
\[\sigma\in L^{2}([0,T],\mathbb{R}^{d\times m}) \tag{4.16}\]
is constant in the space variable. We then consider the forward stochastic flow
\[d\Phi_{t,s}(x)=b(t,\Phi_{t,s}(x))dt+\sigma_{t}dW_{t},\quad t\in[s,T],\quad\Phi_{s,s}(x)=x. \tag{4.17}\]
Formally defining
\[\tilde{\Phi}_{t,s}(x):=\Phi_{t,s}(x)-\underbrace{\int_{s}^{t}\sigma_{r}dW_{r}}_ {:=M_{t}-M_{s}}\]
leads to the random ODE
\[\partial_{t}\tilde{\Phi}_{t,s}(x)=b\left(t,\tilde{\Phi}_{t,s}(x)+M_{t}-M_{s} \right),\quad t\in[s,T],\quad\tilde{\Phi}_{s,s}(x)=x. \tag{4.18}\]
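The formal computation behind this reduction is elementary: if \(\tilde{\Phi}_{t,s}\) solves (4.18), then, since \(M\) does not depend on the space variable,
\[d_{t}\Phi_{t,s}(x)=d_{t}\tilde{\Phi}_{t,s}(x)+\sigma_{t}dW_{t}=b\left(t,\tilde{\Phi}_{t,s}(x)+M_{t}-M_{s}\right)dt+\sigma_{t}dW_{t}=b(t,\Phi_{t,s}(x))dt+\sigma_{t}dW_{t},\]
so \(\Phi_{t,s}:=\tilde{\Phi}_{t,s}+M_{t}-M_{s}\) solves (4.17), and conversely. For each fixed realization of the noise, (4.18) is an ODE of the form (2.2) for the shifted velocity field \((t,y)\mapsto b(t,y+M_{t}-M_{s})\), which is what allows the deterministic theory of the previous subsections to be applied pathwise, as in the proof of Theorem 4.13 below.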
We now invoke the theory of the previous subsections to obtain the following:
**Theorem 4.13**.: _For every \(s\in[0,T)\), with probability one, there exists a unique \(\Phi_{\cdot,s}\in C([s,T],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}))\cap L^{p}_{ \mathrm{loc}}(\mathbb{R}^{d},C([s,T]))\) such that, for a.e. \(x\in\mathbb{R}^{d}\),_
\[\Phi_{t,s}(x)=x+\int_{s}^{t}b(r,\Phi_{r,s}(x))dr+\int_{s}^{t}\sigma_{r}dW_{r}.\]
_If \((b^{\varepsilon})_{\varepsilon>0}\) are as in (2.3) and \(\Phi^{\varepsilon}\) is the unique stochastic flow solving (4.17) with drift \(b^{\varepsilon}\), then, with probability one, as \(\varepsilon\to 0\), \(\Phi^{\varepsilon}\) converges in \(C([s,T],L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}))\) and in \(L^{p}_{\mathrm{loc}}(\mathbb{R}^{d},C([s,T]))\) to \(\Phi\)._
Proof.: This follows upon applying the results of Theorems 4.8 and 4.12 to the random ODE (4.18).
#### 4.5.2 A priori estimates for the second-order nonconservative equation
We next relate the forward stochastic flow from the previous subsection to the terminal value problem for a certain second-order, nonconservative equation. This will be done with the use of a priori \(L^{p}\) and \(BV\) estimates, which lead to useful compactness results, just as for the first order case.
We begin with the more general problem
\[-\partial_{t}u-\mathrm{tr}[a(t,x)\nabla^{2}u]+b(t,x)\cdot\nabla u=0\quad\text{ in }(0,T)\times\mathbb{R}^{d},\quad u(T,\cdot)=u_{T}, \tag{4.19}\]
where
\[a(t,x)=\frac{1}{2}\sigma(t,x)\sigma(t,x)^{T},\quad\sigma\in L^{2}([0,T],C^{1,1 }(\mathbb{R}^{d},\mathbb{R}^{d\times m})); \tag{4.20}\]
notice that, although we allow \(\sigma\) to be nonconstant here, we require more regularity for \(\sigma\) than in Section 3.
**Lemma 4.4**.: _There exists \(C\in L^{1}_{+}([0,T])\) depending only on the \(C^{1,1}\) norm of \(\sigma\) such that, if \(u\) is a smooth solution of_
\[-\partial_{t}u-\mathrm{tr}[a(t,x)\nabla^{2}u]=0\quad\text{in }(0,T)\times \mathbb{R}^{d},\quad u(T,\cdot)=u_{T},\]
_then_
\[\left\|u(t,\cdot)\right\|_{BV(\mathbb{R}^{d})}\leq\exp\left(\int_{t}^{T}C(s) ds\right)\left\|u_{T}\right\|_{BV(\mathbb{R}^{d})}.\]
Proof.: For \((t,x,z)\in[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{d}\), set \(w(t,x,z)=\nabla u(t,x)\cdot z\). Then \(w\) solves the parabolic PDE
\[-\frac{\partial w}{\partial t}-\mathrm{tr}[A(t,x,z)\nabla^{2}_{(x,z)}w]=0\quad\text{in }(0,T)\times\mathbb{R}^{2d},\]
where
\[A(t,x,z)=\frac{1}{2}\begin{pmatrix}\sigma(t,x)\\ z\cdot\nabla\sigma(t,x)\end{pmatrix}\left(\sigma(t,x)^{T}\quad z\cdot\nabla \sigma(t,x)^{T}\right).\]
After a routine regularization argument, using the convexity of \(w\mapsto|w|\),
\[-\frac{\partial|w|}{\partial t}-\mathrm{tr}[A(t,x,z)\nabla^{2}_{(x,z)}|w|]\leq 0\quad\text{in }(0,T)\times\mathbb{R}^{d}\times\mathbb{R}^{d}. \tag{4.21}\]
For some \(m>d+1\), let \(\phi\in C^{\infty}_{+}([0,\infty))\) be such that, for some universal \(C>0\),
\[\phi(r)=\frac{1}{r^{m}}\quad\text{for }r\geq 1\quad\text{and}\quad r|\phi^{ \prime}(r)|+r^{2}|\phi^{\prime\prime}(r)|\leq C\phi(r)\quad\text{for all }r\geq 0. \tag{4.22}\]
We multiply (4.21) by \(\phi(|z|)\) and integrate in \((x,z)\in\mathbb{R}^{d}\times\mathbb{R}^{d}\). Then (4.20) and (4.22) imply that for some \(C\in L^{1}_{+}([0,T])\),
\[-\frac{d}{dt}\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|w(t,x,z)|\phi(|z|)dxdz \leq C(t)\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|w(t,x,z)|\phi(|z|)dxdz.\]
The proof is then finished by Gronwall's lemma and the fact that
\[\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|w(t,x,z)|\phi(|z|)dxdz=c_{0}\int_{\mathbb{R}^{d}}|\nabla u(t,x)|dx,\]
where \(c_{0}:=\int_{\mathbb{R}^{d}}|\nu\cdot z|\phi(|z|)dz\) is finite and independent of the unit vector \(\nu\).
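The last identity can be verified directly: writing \(\nabla u(t,x)=|\nabla u(t,x)|\,\nu(x)\) with \(|\nu(x)|=1\) wherever \(\nabla u(t,x)\neq 0\),
\[\iint_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|\nabla u(t,x)\cdot z|\,\phi(|z|)\,dxdz=\int_{\mathbb{R}^{d}}|\nabla u(t,x)|\left(\int_{\mathbb{R}^{d}}|\nu(x)\cdot z|\,\phi(|z|)\,dz\right)dx=c_{0}\int_{\mathbb{R}^{d}}|\nabla u(t,x)|\,dx,\]
where the inner integral does not depend on the unit vector \(\nu(x)\) by the rotational invariance of \(z\mapsto\phi(|z|)\), and is finite because \(|\nu\cdot z|\phi(|z|)\leq|z|^{1-m}\) for \(|z|\geq 1\) with \(m>d+1\).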
We have already proved an exponential propagation of the \(BV\) bounds when \(a=0\) in Lemma 4.1. It is a classical fact for evolution PDEs that, upon using a splitting scheme, these estimates can be combined, and we immediately have the following:
**Lemma 4.5**.: _There exists a constant \(C\in L^{1}_{+}([0,T])\) depending only on the constants in (2.1) and (4.20) such that, if \(u\) is a smooth solution of (4.19), then_
\[\left\|u(t,\cdot)\right\|_{L^{p}}\leq\exp\left(\int_{t}^{T}C(s)ds\right)\left\|u_{T}\right\|_{L^{p}}\quad\text{and}\quad\left\|u(t,\cdot)\right\|_{BV}\leq\exp\left(\int_{t}^{T}C(s)ds\right)\left\|u_{T}\right\|_{BV}.\]
Just as in the first-order case, it is not possible to define \(L^{p}\)-distributional solutions of (4.19), and the utility of Lemma 4.5 is that it allows us to obtain strongly convergent subsequences in \(C([0,T],L^{p}(\mathbb{R}^{d}))\) after regularizing the velocity field \(b\).
The main question is whether such limiting solutions are unique. This uniqueness was achieved in the first-order case through duality with the conservative equation, and the solution was further characterized with a formula involving the forward flow. In the second-order case, we are constrained to work with constant noise coefficients:
\[-\partial_{t}u-\mathrm{tr}[a(t)\nabla^{2}u]+b(t,x)\cdot\nabla u=0\quad\text{ in }(0,T)\times\mathbb{R}^{d},\quad u(T,\cdot)=u_{T}, \tag{4.23}\]
where \(a=\frac{1}{2}\sigma\sigma^{T}\) as before.
**Theorem 4.14**.: _For \(1<p<\infty\) and \(t\in[0,T]\), the map_
\[C_{c}(\mathbb{R}^{d})\ni u_{T}\mapsto\mathbb{E}[u_{T}\circ\Phi_{T,t}]\]
_extends to a continuous, linear, order-preserving map on \(L^{p}(\mathbb{R}^{d})\), and the function_
\[u(t,x):=\mathbb{E}[u_{T}(\Phi_{T,t}(x))]\quad(t,x)\in[0,T]\times\mathbb{R}^{d} \tag{4.24}\]
_belongs to \(C([0,T],L^{p}(\mathbb{R}^{d}))\), and, if \(u_{T}\in BV(\mathbb{R}^{d})\), then \(u\in L^{\infty}([0,T],BV(\mathbb{R}^{d}))\)._
_If \((b^{\varepsilon})_{\varepsilon>0}\) is as in (2.3) and \(u^{\varepsilon}\) is the corresponding solution of (4.23), then, as \(\varepsilon\to 0\), \(u^{\varepsilon}\) converges strongly to \(u\) in \(C([0,T],L^{p}(\mathbb{R}^{d}))\)._
Proof.: Assume that \(u_{T}\in C^{2}(\mathbb{R}^{d})\cap C_{c}(\mathbb{R}^{d})\). For \(b^{\varepsilon}\) and \(u^{\varepsilon}\) as in the statement of the theorem, we have the standard representation formula \(u^{\varepsilon}(t,x)=\mathbb{E}[u_{T}(\Phi^{\varepsilon}_{T,t}(x))]\), where \(\Phi^{\varepsilon}\) corresponds to the flow (4.17) with drift \(b^{\varepsilon}\). By Theorem 4.13, for any \(t\in[0,T]\), with probability one, \(u_{T}\circ\Phi^{\varepsilon}_{T,t}\to u_{T}\circ\Phi_{T,t}\) a.e. in \(\mathbb{R}^{d}\). On the other hand, by Lemma 4.5, \((u^{\varepsilon})_{\varepsilon>0}\) is precompact in \(C([0,T],L^{p}(\mathbb{R}^{d}))\), and therefore the full sequence converges to \(u\) given by (4.24). The \(L^{p}\)-bounds and the extension to \(u_{T}\in L^{p}(\mathbb{R}^{d})\) now follow from the \(L^{p}\) a priori estimates in Lemma 4.5.
#### 4.5.3 Representation formula for the Fokker-Planck equation
We turn next to the Fokker-Planck equation
\[\partial_{t}f-\nabla^{2}\cdot(a(t,x)f)+\operatorname{div}(b(t,x)f)=0\quad\text{ in }(0,T)\times\mathbb{R}^{d},\quad f(0,\cdot)=f_{0}, \tag{4.25}\]
where once again \(a=\frac{1}{2}\sigma\sigma^{T}\) with \(\sigma\) as in (4.20).
The existence of solutions in \(C([0,T],L^{p}(\mathbb{R}^{d}))\) is straightforward; we include the proof for convenience.
**Theorem 4.15**.: _For any \(f_{0}\in L^{p}(\mathbb{R}^{d})\), \(1\leq p\leq\infty\), there exists a distributional solution \(f\in C([0,T],L^{p}_{\mathrm{w}}(\mathbb{R}^{d}))\) if \(1\leq p<\infty\), or \(f\in L^{\infty}\) if \(p=\infty\). Moreover, there exists \(C\in L^{1}_{+}([0,T])\) depending only on \(p\), \(C_{0}(t)\) from (2.1) and the \(L^{2}([0,T],C^{1,1}(\mathbb{R}^{d}))\) norm of \(a\)8 such that_
Footnote 8: In fact, only an upper bound for \(\nabla^{2}\cdot a=\partial_{ij}a_{ij}\) is needed.
\[\left\|f(t,\cdot)\right\|_{L^{p}}\leq\exp\left(\int_{0}^{t}C(s)ds\right)\left\|f_{0}\right\|_{L^{p}}.\]
Proof.: We do this with the use of a priori estimates, assuming all the data is smooth. The computations may be made rigorous by regularizing \(b\), adding a small ellipticity to \(a\), and extracting weakly convergent subsequences.
We then compute
\[\partial_{t}|f|^{p}-\nabla^{2}\cdot(a(t,x)|f|^{p})+\operatorname{div}(b(t,x)| f|^{p})\leq(p-1)\left(\nabla^{2}\cdot a(t,x)-\operatorname{div}b(t,x)\right)|f|^{p},\]
and so \(\partial_{t}\int|f(t,\cdot)|^{p}\leq C(t)\int|f(t,\cdot)|^{p}\) for some \(C\) as in the statement of the theorem. The result now follows from Gronwall's lemma.
We now explore the possibility of obtaining a formula for the solution, similar to (4.3) for the first order equation (4.1). To do so, it is convenient to reverse time and consider, for fixed \(t\in(0,T]\), the equation satisfied by \(g^{(t)}(s,x):=f(t-s,x)\):
\[-\partial_{s}g^{(t)}-\nabla^{2}\cdot(a(t-s,x)g^{(t)})+\operatorname{div}(b(t- s,x)g^{(t)})=0\quad\text{in }(0,t)\times\mathbb{R}^{d},\quad g^{(t)}(t,\cdot)=f_{0}.\]
For \((s,x,\xi)\in[0,t]\times\mathbb{R}^{d}\times\mathbb{R}\), define \(G^{(t)}(s,x,\xi)=g^{(t)}(s,x)\xi\). Then
\[\begin{cases}-\partial_{s}G^{(t)}-\operatorname{tr}[A^{(t)}(s,x,\xi)\nabla_{ x,\xi}^{2}G^{(t)}]-B^{(t)}(s,x)\cdot\nabla G^{(t)}-C^{(t)}(s,x)\xi\partial_{\xi}G^{( t)}=0\quad\text{in }(0,t)\times\mathbb{R}^{d+1},\\ G^{(t)}(t,x,\xi)=f_{0}(x)\xi,\end{cases} \tag{4.26}\]
where
\[\begin{cases}A^{(t)}(s,x,\xi)=\frac{1}{2}\Sigma^{(t)}(s,x,\xi)\Sigma^{(t)}(s, x,\xi)^{T},\quad\Sigma^{(t)}(s,x,\xi)=\begin{pmatrix}\sigma\\ \xi\operatorname{div}\sigma\end{pmatrix},\\ B^{(t)}(s,x)=-b+(\sigma\cdot\nabla)\sigma^{T},\quad\text{and}\\ C^{(t)}(s,x)=-\operatorname{div}\left(b-\operatorname{div}a\right)\\ =-\operatorname{div}b+\operatorname{tr}[(\sigma\cdot\nabla)(\nabla\cdot \sigma)]+\frac{1}{2}|\operatorname{div}\sigma|^{2}+\frac{1}{2}\operatorname{ tr}[\nabla\sigma\nabla\sigma^{T}];\end{cases} \tag{4.27}\]
for brevity, we have suppressed the arguments for \(a\), \(\sigma\), and \(b\), which are all \((t-s,x)\).
For an \(m\)-dimensional Wiener process \(W\) on \([0,t]\) and a fixed \(s\in[0,t]\), we are led to consider the SDE, for \(r\in[s,t]\),
\[\begin{cases}d_{r}\begin{pmatrix}\Phi^{(t)}_{r,s}(x,\xi)\\ \Xi^{(t)}_{r,s}(x,\xi)\end{pmatrix}=\begin{pmatrix}B^{(t)}(r,\Phi^{(t)}_{r,s}(x,\xi))\\ C^{(t)}(r,\Phi^{(t)}_{r,s}(x,\xi))\Xi^{(t)}_{r,s}(x,\xi)\end{pmatrix}dr+\Sigma^{(t )}(r,\Phi^{(t)}_{r,s}(x,\xi),\Xi^{(t)}_{r,s}(x,\xi))dW_{r},\\ \begin{pmatrix}\Phi^{(t)}_{s,s}(x,\xi)\\ \Xi^{(t)}_{s,s}(x,\xi)\end{pmatrix}=\begin{pmatrix}x\\ \xi\end{pmatrix}.\end{cases} \tag{4.28}\]
Ito's formula, (4.26), and (4.28) then yield that, for any \((s,x,\xi)\in[0,t)\times\mathbb{R}^{d}\times\mathbb{R}\),
\[r\mapsto G^{(t)}(r,\Phi^{(t)}_{r,s}(x,\xi),\Xi^{(t)}_{r,s}(x,\xi))\]
is a martingale on \([s,t]\) with respect to the filtration \((\mathcal{F}_{r})_{r\in[0,t]}\) generated by the Wiener process \(W\), and so, for all \(r\in[s,t]\),
\[\mathbb{E}\left[G^{(t)}(r,\Phi^{(t)}_{r,s}(x,\xi),\Xi^{(t)}_{r,s}(x,\xi))\mid \mathcal{F}_{s}\right]=G^{(t)}(s,x,\xi). \tag{4.29}\]
Observe that \(\Phi^{(t)}_{r,s}\) is independent of \(\xi\), while \(\Xi^{(t)}_{r,s}\) can be written as \(\Xi^{(t)}_{r,s}(x,\xi)=J^{(t)}_{r,s}(x)\xi\) for some scalar quantity \(J^{(t)}_{r,s}(x)\), and so (4.28) reduces to the two SDEs
\[\begin{cases}d_{r}\Phi^{(t)}_{r,s}(x)=-\left[b(t-r,\Phi^{(t)}_{r,s}(x))-(\sigma \cdot\nabla)\sigma^{T}(t-r,\Phi^{(t)}_{r,s}(x))\right]dt+\sigma(t-r,\Phi^{(t)}_ {r,s}(x))dW_{r},\quad r\in[s,t],\\ \Phi^{(t)}_{s,s}(x)=x\end{cases} \tag{4.30}\]
and
\[\begin{cases}d_{r}J^{(t)}_{r,s}(x)=\left[-\operatorname{div}b+\operatorname{ tr}[(\sigma\cdot\nabla)(\nabla\cdot\sigma)]+\frac{1}{2}|\operatorname{div} \sigma|^{2}+\frac{1}{2}\operatorname{tr}[\nabla\sigma\nabla\sigma^{T}] \right](t-r,\Phi^{(t)}_{r,s}(x))J^{(t)}_{r,s}(x)dr\\ \qquad+\operatorname{div}\sigma(t-r,\Phi^{(t)}_{r,s}(x))J^{(t)}_{r,s}(x)dW_{r },\quad r\in[s,t],\\ J^{(t)}_{s,s}(x)=1.\end{cases} \tag{4.31}\]
Standard but tedious computations involving Ito's formula reveal that \(J^{(t)}_{r,s}(x)=\det\nabla_{x}\Phi^{(t)}_{r,s}(x)\).
Taking \(r=t\) and \(\xi=1\) in (4.29), we thus arrive at
\[\mathbb{E}\left[f_{0}(\Phi^{(t)}_{t,s}(x))J^{(t)}_{t,s}(x)\mid\mathcal{F}_{s}\right]=g^{(t)}(s,x),\]
and so, because \(g^{(t)}(0,x)=f(t,x)\), we obtain the representation for solutions of (4.25):
\[f(t,x)=\mathbb{E}\left[f_{0}(\Phi^{(t)}_{t,0}(x))J^{(t)}_{t,0}(x)\right]. \tag{4.32}\]
Let us note that \(\Phi^{(t)}_{t,0}\) has the same law as \((\Phi_{t,0})^{-1}\), where \(\Phi_{t,s}\) is the stochastic flow from (4.15). We can see this by duality with the nonconservative equation. Indeed, if \(u\) is the solution of (4.19) with \(u(t,\cdot)=g\) for some given \(g\), then
\[\int f_{0}(x)u(0,x)dx=\int f(t,x)g(x)dx.\]
On the other hand, by (4.24) and (4.32),
\[\int f_{0}(x)u(0,x)dx=\mathbb{E}\int f_{0}(x)g(\Phi_{t,0}(x))dx\]
and
\[\int f(t,x)g(x)dx=\mathbb{E}\int f_{0}(\Phi^{(t)}_{t,0}(x))g(x)J^{(t)}_{t,0}( x)dx,\]
so, using the change of variables formula and the fact that \(f_{0}\) is arbitrary, we have \(\mathbb{E}[g(\Phi_{t,0}(x))]=\mathbb{E}[g([\Phi^{(t)}_{t,0}]^{-1}(x))]\) for all \(g:\mathbb{R}^{d}\to\mathbb{R}\) and \(x\in\mathbb{R}^{d}\).
We now note that the SDE (4.30) falls under the assumptions of Lemma 2.5, and therefore, for every \(0\leq s<t\leq T\), there exists a unique solution \(\Phi^{(t)}_{\cdot,s}\) with the properties laid out by that result. However, the main difficulty is that we do not know whether \(\Phi^{(t)}_{t,0}\) is Lipschitz continuous on \(\mathbb{R}^{d}\) (see Remark 2.8). This prevents us from bounding \(J^{(t)}_{t,0}\) uniformly in \(L^{\infty}\) and passing to weak distributional limits. This is a major obstacle in using the formula (4.32) to identify the unique limiting distributional solution of (4.25), as we did for the first order equation (4.1).
The exception is when \(\sigma\) is independent of \(x\). In that case, (4.30) and (4.31) become
\[d_{r}\Phi^{(t)}_{r,s}(x)=-b(t-r,\Phi^{(t)}_{r,s}(x))dr+\sigma(t-r)dW_{r},\quad r \in[s,t],\quad\Phi^{(t)}_{s,s}(x)=x \tag{4.33}\]
and
\[\partial_{r}J^{(t)}_{r,s}(x)=-\operatorname{div}b(t-r,\Phi^{(t)}_{r,s}(x))J^{ (t)}_{r,s}(x),\quad r\in[s,t],\quad J^{(t)}_{s,s}(x)=1. \tag{4.34}\]
The SDE (4.34) is in fact an ODE with random coefficients. In particular, \(J^{(t)}_{\cdot,s}\) has a deterministic bound.
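Indeed, for each fixed path, (4.34) is a linear scalar ODE and integrates explicitly to
\[J^{(t)}_{r,s}(x)=\exp\left(-\int_{s}^{r}\operatorname{div}b\left(t-\rho,\Phi^{(t)}_{\rho,s}(x)\right)d\rho\right),\]
so that any one-sided (lower) bound on \(\operatorname{div}b\) that is integrable in time immediately yields an upper bound for \(J^{(t)}_{r,s}\) that does not depend on the realization of the Wiener process.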
We then characterize uniquely the limiting distributional solution of
\[\partial_{t}f-\nabla^{2}\cdot(a(t)f)+\operatorname{div}(b(t,x)f)=0\quad\text{ in }(0,T)\times\mathbb{R}^{d},\quad f(0,\cdot)=f_{0}. \tag{4.35}\]
**Theorem 4.16**.: _For \(1\leq p<\infty\), the formula (4.32), where \(\Phi^{(t)}_{\cdot,s}\) and \(J^{(t)}_{\cdot,s}\) are specified by respectively (4.33) and (4.34), extends continuously to any \(f_{0}\in L^{p}(\mathbb{R}^{d})\). If \(f_{0}\in L^{p}(\mathbb{R}^{d})\) and \((b^{\varepsilon})_{\varepsilon>0}\) are as in (2.3) and \(f^{\varepsilon}\) is the corresponding solution of (4.35), then, as \(\varepsilon\to 0\), \(f^{\varepsilon}\) converges weakly in \(C([0,T],L^{p}_{\mathrm{w}}(\mathbb{R}^{d}))\) to \(f\). If \(f_{0}\geq 0\), then there exists a unique nonnegative distributional solution of (4.35), which is given by (4.32)._
Proof.: Let \((b^{\varepsilon})_{\varepsilon>0}\) and \(f^{\varepsilon}\) be as in the statement of the theorem, and assume \(f_{0}\in C^{2}_{\mathrm{c}}(\mathbb{R}^{d})\). Let \(u^{\varepsilon}\) be the solution of (4.23) with velocity \(b^{\varepsilon}\) and with terminal data \(u(t,\cdot)=g\in C^{2}_{\mathrm{c}}(\mathbb{R}^{d})\) for some fixed \(t\in[0,T]\). Then integration by parts yields
\[\int f^{\varepsilon}(t,x)g(x)dx=\int f_{0}(x)u^{\varepsilon}(0,x)dx.\]
By Theorem 4.14, as \(\varepsilon\to 0\), \(u^{\varepsilon}\) converges strongly in \(L^{p^{\prime}}(\mathbb{R}^{d})\) to the function \(u\) defined uniquely by \(u(s,x)=\mathbb{E}[g(\Phi_{t,s}(x))]\). Therefore, any \(C([0,T],L^{p}_{\mathrm{w}}(\mathbb{R}^{d}))\)-weak limit \(f\) of \(f^{\varepsilon}\) as \(\varepsilon\to 0\) must satisfy
\[\int f(t,x)g(x)dx=\int f_{0}(x)u(0,x)dx,\]
and it follows that there is a unique such limiting function \(f\).
On the other hand, for \(\varepsilon>0\),
\[f^{\varepsilon}(t,x)=\mathbb{E}\left[f_{0}\left(\Phi^{(t),\varepsilon}_{t,0}( x)\right)J^{(t),\varepsilon}_{t,0}(x)\right],\]
where \(\Phi^{(t),\varepsilon}_{\cdot,s}\) and \(J^{(t),\varepsilon}_{\cdot,s}\) are as in respectively (4.33) and (4.34) with \(b\) replaced everywhere by \(b^{\varepsilon}\). For fixed \(t\in[0,T]\), uniformly in \(\varepsilon\), \(\Phi^{(t),\varepsilon}_{t,0}\) is Lipschitz continuous on \(\mathbb{R}^{d}\), and so \(J^{(t),\varepsilon}_{t,0}=\det\nabla_{x}\Phi^{(t),\varepsilon}_{t,0}\) is bounded in \(L^{\infty}\). By exactly the same arguments as in Lemma 2.3 and Theorem 4.1, we see that, as \(\varepsilon\to 0\), \(\mathbb{E}f_{0}\left(\Phi^{(t),\varepsilon}_{t,0}\right)J^{(t),\varepsilon}_{t,0}\) converges weakly in \(L^{p}\) to \(\mathbb{E}f_{0}\left(\Phi^{(t)}_{t,0}\right)J^{(t)}_{t,0}\). It follows that \(f\) must be given by (4.32). The fact that the formula extends to arbitrary \(f_{0}\in L^{p}(\mathbb{R}^{d})\) now follows from the a priori \(L^{p}\) bounds in Theorem 4.15.
The uniqueness of nonnegative distributional solutions is then a consequence of the uniqueness of the forward flow established in Theorem 4.13, as well as the generalization of superposition to second-order Fokker-Planck equations (see Figalli [38, Lemma 2.3]).
|
2301.00535 | Performance of the r$^{2}$SCAN functional in transition metal oxides | We assess the accuracy and computational efficiency of the recently developed
meta-generalized gradient approximation (metaGGA) functional, the restored
regularized strongly constrained and appropriately normed (r$^2$SCAN), in
transition metal oxide (TMO) systems and compare its performance against SCAN.
Specifically, we benchmark the r$^2$SCAN-calculated oxidation enthalpies,
lattice parameters, on-site magnetic moments, and band gaps of binary
3\textit{d} TMOs against the SCAN-calculated and experimental values.
Additionally, we evaluate the optimal Hubbard \emph{U} correction required for
each transition metal (TM) to improve the accuracy of the r$^2$SCAN functional,
based on experimental oxidation enthalpies, and verify the transferability of
the \emph{U} values by comparing against experimental properties on other
TM-containing oxides. Notably, including the \textit{U}-correction to r$^2$SCAN
increases the lattice parameters, on-site magnetic moments and band gaps of
TMOs, apart from an improved description of the ground state electronic state
in narrow band gap TMOs. The r$^2$SCAN and r$^2$SCAN+\textit{U} calculated
oxidation enthalpies follow the qualitative trends of SCAN and SCAN+\emph{U},
with r$^2$SCAN and r$^2$SCAN+\textit{U} predicting marginally larger lattice
parameters, smaller magnetic moments, and lower band gaps compared to SCAN and
SCAN+\textit{U}, respectively. We observe that the overall computational time
(i.e., for all ionic+electronic steps) required for r$^2$SCAN(+\textit{U}) to
be lower than SCAN(+\textit{U}). Thus, the r$^2$SCAN(+\textit{U}) framework can
offer a reasonably accurate description of the ground state properties of TMOs
with better computational efficiency than SCAN(+\textit{U}). | S. Swathilakshmi, Reshma Devi, Gopalakrishnan Sai Gautam | 2023-01-02T05:59:53Z | http://arxiv.org/abs/2301.00535v1 | # Performance of the r\({}^{2}\)SCAN functional in transition metal oxides
###### Abstract
We assess the accuracy and computational efficiency of the recently developed meta-generalized gradient approximation (metaGGA) functional, the restored regularized strongly constrained and appropriately normed (r\({}^{2}\)SCAN), in transition metal oxide (TMO) systems and compare its performance against SCAN. Specifically, we benchmark the r\({}^{2}\)SCAN-calculated oxidation enthalpies, lattice parameters, on-site magnetic moments, and band gaps of binary 3\(d\) TMOs against the SCAN-calculated and experimental values. Additionally, we evaluate the optimal Hubbard \(U\) correction required for each transition metal (TM) to improve the accuracy of the r\({}^{2}\)SCAN functional, based on experimental oxidation enthalpies, and verify the transferability of the \(U\) values by comparing against experimental properties on other TM-containing oxides. Notably, including the _U_-correction to r\({}^{2}\)SCAN increases the lattice parameters, on-site magnetic moments and band gaps of TMOs, apart from an improved description of the ground state electronic state in narrow band gap TMOs. The r\({}^{2}\)SCAN and r\({}^{2}\)SCAN\(+\)_U_ calculated oxidation enthalpies follow the qualitative trends of SCAN and SCAN\(+\)_U_, with r\({}^{2}\)SCAN and r\({}^{2}\)SCAN\(+\)_U_ predicting marginally larger lattice parameters, smaller magnetic moments, and lower band gaps compared to SCAN and SCAN\(+\)_U_, respectively. We observe that the overall computational time (i.e., for all ionic+electronic steps) required for r\({}^{2}\)SCAN(\(+\)_U_) to be lower than SCAN(\(+\)_U_). Thus, the r\({}^{2}\)SCAN(\(+\)_U_) framework can offer a reasonably accurate description of the ground state properties of TMOs with better computational efficiency than SCAN(\(+\)_U_).
## 1 Introduction
Density functional theory (DFT [1]) calculations are the bedrock of modern computational materials science in terms of predicting thermodynamic and kinetic properties, with such property predictions being put to use in subsequent materials discovery [2, 3, 4, 5, 6, 7] and understanding underlying physical phenomena. [8, 9, 10, 11, 12] In recent years, machine learning has been used to augment DFT in property predictions, thereby reducing computational cost and accelerating materials discovery. [13, 14, 15, 16, 17] Note that a key approximation within DFT is the exchange-correlation (XC) functional, the exact form of which is unknown. However, several approximations for the XC functional have been proposed over the years, which can be categorized into different classes depending on the degree of sophistication and accuracy, and visually represented as rungs on Jacob's ladder. [18, 19, 2, 1] As with most computational tools, the higher the accuracy (i.e., the higher up Jacob's ladder), the higher the computational cost.
Most DFT calculations for "large" solid systems (10s to 100s of atoms) are performed using the Perdew-Burke-Ernzerhof (PBE) parameterization of the generalized gradient approximation (GGA) XC functional, [20] as it offers fair accuracy at reasonable computational cost for a wide variety of materials. [21, 22, 23] Specifically, GGAs include the local electron density as well as the gradient of the electron density in describing the XC. As a semilocal functional of electron density, PBE captures short range interactions but fails to capture medium and long-range dispersions and also exhibits large electronic self-interaction errors (SIEs), especially in highly correlated systems. [24, 25] Also, PBE typically underestimates the formation energies [26, 27] and semiconductor band gaps of crystalline solids, [26, 28] while overestimating their lattice volumes. [26, 29]
As we move higher in the Jacob's ladder, [19] we obtain metaGGA functionals, which may account for medium range dispersions and exhibit lower SIEs. Some metaGGAs consider orbital kinetic energy density in addition to the local electron density and its gradient, such as the recently developed strongly constrained and appropriately normed (SCAN [30]) functional, which offers better numerical accuracy than PBE and satisfies all 17 known constraints for a XC functional (namely, 6 for exchange, 6 for correlation, and 5 for both). The iso-orbital indicator (\(\alpha\)), which includes the kinetic energy density in SCAN, distinguishes various bonding environments in a given material and consequently improves the accuracy of SCAN over GGA. However, SCAN suffers from numerical instability during self-consistent-field (SCF) calculations [31] wherein denser \(k\)-grids (than PBE) are required for accurate and consistent predictions. [31, 32, 33] Thus it is computationally expensive (per SCF step) compared to PBE. [21]
To overcome the numerical instability and reduce the computational cost of SCAN, Bartok and Yates [34] developed regularized SCAN (rSCAN), which satisfies 13 out of the 17 known constraints. The authors replaced the non-analytical switching \(\alpha\) interpolation function in SCAN with a simple polynomial function, which improves computational speed. [35] However, subsequent investigations showed a significant drop in numerical accuracy with rSCAN (compared to SCAN), which is attributed to the failure of the polynomial \(\alpha\) function to fully recover the uniform gas limit. [31, 32] Subsequently, Furness et al. [32] introduced the restored regularized SCAN (or r\({}^{2}\)SCAN), wherein the constraints broken by rSCAN were restored except the fourth order gradient expansion constraint for exchange (or GE4X). Furness et al. claimed that the new r\({}^{2}\)SCAN functional combines the numerical accuracy of SCAN and computational speed of rSCAN as the smooth polynomial \(\alpha\) function of rSCAN is modified to satisfy the uniform gas limit in r\({}^{2}\)SCAN. [32] Recently, Kingsbury et al. [36] demonstrated that r\({}^{2}\)SCAN functional indeed delivers robust numerical accuracy (i.e., similar to SCAN) and better computational performance (faster and numerically stable) by comparing r\({}^{2}\)SCAN and SCAN for solids using a high-throughput computational workflow. Specifically, the authors [36] reported that while r\({}^{2}\)SCAN predicts a smaller band gap (for most of the strongly-bound materials) and larger lattice volumes than SCAN, the mean atomization error with r\({}^{2}\)SCAN is \(\sim\)15-20% lower for most solids. However, the performance of r\({}^{2}\)SCAN in correlated electron systems, i.e., transition metal oxides (TMOs) containing open-shell \(d\) electrons, remains to be seen and forms the main focus of this work.
Despite the accuracy of SCAN, it still has shortcomings in TMOs, which can be mitigated by adding an on-site Hubbard \(U\) correction term for the transition metal (TM) under consideration. [37, 38] This approach is similar to the one followed to mitigate the SIEs of PBE in TMOs. [39, 40] However, the magnitude of the \(U\) correction required is not known _a priori_, and there are both theory-based approaches such as density functional perturbation theory, [41] linear response theory, [42, 43, 44] embedded Hartree-Fock method, [45, 46] and machine learning based Bayesian optimisation, [47] and experimental-data-based approaches to identify the appropriate \(U\) values. For example, Gautam et al. [37, 38] used the experimental oxidation enthalpies
among binary TMOs to identify optimal \(U\) values across various oxidation states of 3\(d\) TMs. A similar experimental-data-based Hubbard \(U\) correction scheme can be developed in conjunction with r\({}^{2}\)SCAN as well, resulting in an r\({}^{2}\)SCAN+_U_ framework, in case r\({}^{2}\)SCAN exhibits similar SIEs as SCAN in TMOs. We also explore the usefulness of such an r\({}^{2}\)SCAN+_U_ framework in this work.
Here, we verify the numerical accuracy and computational efficiency of the r\({}^{2}\)SCAN and r\({}^{2}\)SCAN+_U_ frameworks in comparison to SCAN and SCAN+_U_, respectively, in describing material properties such as lattice parameters, on-site magnetic moments, and band gaps of binary 3\(d\) TMOs, including Ti, V, Cr, Mn, Fe, Co, Ni, and Cu. As necessary, we evaluate the optimal Hubbard \(U\) correction with r\({}^{2}\)SCAN for each TM by using the experimental-data-based approach employed in previous works.[37, 38] We find that r\({}^{2}\)SCAN predicts marginally larger lattice constants and smaller on-site magnetic moments than SCAN for most of the TMOs considered. On addition of the _U_-correction to both SCAN and r\({}^{2}\)SCAN, we observe an increase in the calculated lattice constants, on-site magnetic moments and band gaps. In the case of narrow band gap TMOs, SCAN+_U_ and r\({}^{2}\)SCAN+_U_ generally estimate a non-zero band gap, with r\({}^{2}\)SCAN+_U_'s band gap in better agreement with experiments. Also, we perform transferability checks for the optimal \(U\) values derived in this work for each TM, by benchmarking various properties in oxides that were not used in obtaining the \(U\) values. Finally, we compare the computational performance of r\({}^{2}\)SCAN/r\({}^{2}\)SCAN+_U_ relative to SCAN/SCAN+_U_ to explore the accuracy-cost trade-off. We report that r\({}^{2}\)SCAN/r\({}^{2}\)SCAN+_U_ is computationally less expensive than SCAN and SCAN+_U_, when all required ionic and electronic steps are taken into account for convergence during structure relaxations. We hope that our work will provide a foundational basis for further studies on understanding material behavior and computationally discovering new materials in the near future.
## 2 Methods
### Computational Methods
We used the Vienna ab-initio simulation package (VASP 6.2.1)[48, 49, 50] for all the spin-polarized DFT calculations, where the frozen-core PBE-based projector augmented wave (PAW)[51] potentials employed were identical to previous work.[37, 38] The plane waves for each system were expanded up to a kinetic energy of 520 eV, with each structure converged until the total energy differences and atomic forces became \(<\)0.01 meV and \(<\)\(|\)0.01\(|\) eV/Å, respectively. We adopted a \(\Gamma\)-centered Monkhorst-Pack[52] grid with a density of 48 \(k\)-points per Å for all systems. The conjugate gradient algorithm was used to relax the structures (i.e., cell shapes, volumes, and ionic positions), without preserving any underlying symmetry. An 'accurate' level of precision was maintained while projecting the wavefunctions in the reciprocal space. The Fermi surface of each system was integrated with a Gaussian smearing of partial occupancies, with a width of 0.05 eV. For DFT+_U_ calculations, we used the Dudarev framework[53] for adding an effective \(U\) correction on the \(d\) orbitals of TM atoms. All \(U\) values used in SCAN+_U_ calculations were taken from previous work (see Table S1 of the Supporting Information, SI).[37, 38] Since we used different computing systems to perform our structure relaxations for different systems, we normalized the computational time with the number of cores used in each calculation to compare the computational efficiency of the different XC functionals considered.
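As a concrete illustration of these settings, the short script below writes an INCAR file with the corresponding VASP tags for a spin-polarized r\({}^{2}\)SCAN+_U_ relaxation; it is a minimal sketch rather than the exact input used in this work, and the species order, \(U\) values, NSW, and MAGMOM in the example are hypothetical placeholders that must be adapted to each system.

```python
# Minimal sketch (not the exact inputs of this work): assemble a VASP INCAR with the
# settings described in this section for a spin-polarized r2SCAN+U structure relaxation.

def incar_r2scan_plus_u(ldaul, ldauu, ldauj, magmom, nsw=99):
    """Return INCAR text for an r2SCAN+U relaxation following the settings above."""
    tags = {
        "PREC": "Accurate",     # 'accurate' precision for projections
        "ENCUT": 520,           # plane-wave kinetic energy cutoff (eV)
        "EDIFF": "1E-05",       # electronic convergence: <0.01 meV
        "EDIFFG": -0.01,        # ionic convergence: forces below |0.01| eV/Angstrom
        "IBRION": 2,            # conjugate-gradient relaxation
        "ISIF": 3,              # relax cell shape, volume, and ionic positions
        "NSW": nsw,             # max ionic steps (value not specified in the text)
        "ISYM": 0,              # do not preserve symmetry during relaxation
        "ISPIN": 2,             # spin-polarized
        "ISMEAR": 0,            # Gaussian smearing of partial occupancies
        "SIGMA": 0.05,          # smearing width (eV)
        "METAGGA": "R2SCAN",    # r2SCAN exchange-correlation functional
        "LDAU": ".TRUE.",       # Hubbard U correction within the Dudarev scheme
        "LDAUTYPE": 2,
        "LDAUL": " ".join(str(l) for l in ldaul),
        "LDAUU": " ".join(f"{u:.2f}" for u in ldauu),
        "LDAUJ": " ".join(f"{j:.2f}" for j in ldauj),
        "MAGMOM": magmom,
    }
    return "\n".join(f"{key} = {value}" for key, value in tags.items()) + "\n"

# Hypothetical example: a 4-formula-unit NiO cell (species order Ni, O) with the
# optimal U = 2.1 eV on the Ni d orbitals and an AFM initialization of the Ni moments.
print(incar_r2scan_plus_u(ldaul=[2, -1], ldauu=[2.1, 0.0], ldauj=[0.0, 0.0],
                          magmom="2*2.0 2*-2.0 4*0.0"))
```

The \(k\)-point grid (48 \(k\)-points per Å) would be supplied through a separate KPOINTS file and is therefore not part of this sketch.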
For calculating band gaps, GGA functionals typically use the Kohn-Sham potential as a multiplicative term, which underestimates the band gap of solids even at the SCAN level.[54, 55] Here, we
use the generalized Kohn-Sham technique to determine the band gaps by calculating the density of states (DOS) for all systems considered. For each DOS calculation, we used the optimized structure and the initial charge density from a previous structure relaxation. Subsequently, we introduced a set of zero-weighted \(k\)-points, corresponding to a density of 96 \(k\)-points per Å, where the \(k\)-points that were used for the structure relaxation retained their original weights (as determined by VASP). Finally, we performed a single-SCF calculation where the DOS was sampled between -20 and 20 eV in steps of 0.005 eV.
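A band gap can then be read off from the calculated DOS as the width of the energy window around the Fermi level over which the DOS vanishes. The snippet below is an illustrative post-processing sketch of this step (it is not the script used in this work, and the threshold `tol` and the synthetic DOS in the usage example are assumptions made purely for demonstration):

```python
import numpy as np

def band_gap_from_dos(energies, dos, e_fermi, tol=1e-3):
    """Estimate the band gap (eV) from a sampled total DOS.

    energies : 1D array of sampling energies (eV)
    dos      : 1D array of total DOS values at those energies
    e_fermi  : Fermi energy (eV) of the calculation
    tol      : DOS values below this threshold are treated as zero
    """
    energies = np.asarray(energies)
    dos = np.asarray(dos)
    occupied = energies[(dos > tol) & (energies <= e_fermi)]
    empty = energies[(dos > tol) & (energies > e_fermi)]
    if occupied.size == 0 or empty.size == 0:
        return 0.0
    return max(float(empty.min() - occupied.max()), 0.0)  # metals give ~0

# Demonstration on a synthetic DOS sampled every 0.005 eV, as in the text:
# states below -1.0 eV and above +1.0 eV, i.e., a ~2 eV gap around E_F = 0.
e = np.arange(-20.0, 20.0, 0.005)
toy_dos = np.where((e < -1.0) | (e > 1.0), 1.0, 0.0)
print(band_gap_from_dos(e, toy_dos, e_fermi=0.0))  # ~2.0 (to within the sampling step)
```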
### Structures and magnetic configurations
We considered the binary oxides of each TM, i.e., Ti, V, Cr, Mn, Fe, Co, Ni, and Cu with different oxidation states, similar to previous studies.[37, 38] The main criteria for selecting these metal oxides are the availability of reliable thermodynamic data (i.e., formation energies[56, 57, 58]) and of experimentally-determined ground-state structures that are compiled in the inorganic crystal structure database (ICSD).[59] Note that the structures from the ICSD were the initial structures in all our DFT structure relaxations, including the systems used as transferability checks. In the case of Ni oxides, we chose NiO and LiNiO\({}_{2}\) (similar to previous work[38]), as reliable thermodynamic data is not available for higher-oxidation-state binary Ni oxides (e.g., Ni\({}_{2}\)O\({}_{3}\) and NiO\({}_{2}\)). The TM in all oxides, except select Co and Ni compounds, was initialized in its high-spin configuration (e.g., the high-spin configuration of Fe\({}^{3+}\) consists of five unpaired \(d\) electrons). A detailed description of all structures utilised in this work is provided in the SI, under the 'Crystal Structures' section, with the magnetic configurations depicted in Figure S1.
The magnetic configuration of each TMO considered (see Figure S1) was initialized to its appropriate (in several cases, experimentally-known) ground state configuration during the structural relaxation. For example, we considered the ferromagnetic (FM) ground state configuration for CrO\({}_{2}\) and VO\({}_{2}\), given that CrO\({}_{2}\) is metallic[60] and VO\({}_{2}\) undergoes a metal-to-insulator transition (MIT) below 341 K.[61] The rocksalt (RS) TMOs, namely, VO, MnO, FeO, CoO, and NiO were initialized with their experimentally-known type-II antiferromagnetic (AFM) configuration.[62, 63, 64, 65, 66, 67] Each Ni's spin in NiO was initialized with two unpaired \(d\) electrons (i.e., its high-spin configuration). In CuO, we arranged the magnetic moments of Cu\({}^{2+}\) antiferromagnetically along the Cu-O-Cu chains in the [101] direction.[68, 69]
We initialized \(\alpha\)-Mn\({}_{2}\)O\({}_{3}\) (bixbyite structure) in an FM configuration as this configuration was found to be the most stable in previous work.[37] AFM configurations were utilized for rutile-MnO\({}_{2}\)[70] and the other TM\({}_{2}\)O\({}_{3}\) oxides, namely, V\({}_{2}\)O\({}_{3}\), Fe\({}_{2}\)O\({}_{3}\), Ti\({}_{2}\)O\({}_{3}\), and Cr\({}_{2}\)O\({}_{3}\). Note that V\({}_{2}\)O\({}_{3}\) becomes AFM below its MIT temperature,[71, 72, 73] while Fe\({}_{2}\)O\({}_{3}\) displays an AFM configuration with the magnetic moment of Fe alternating every two consecutive layers along the \(c\)-axis.[74] Cr\({}_{2}\)O\({}_{3}\) and Ti\({}_{2}\)O\({}_{3}\) exhibit \(\uparrow\downarrow\uparrow\downarrow\) and \(\uparrow\downarrow\downarrow\uparrow\) magnetic configurations, respectively, on the TM centers along the \(a\)-axis.[75, 76]
In case of spinels, we used different ferrimagnetic (FIM) configurations, as per experimental observations. For example, spinel-Fe\({}_{3}\)O\({}_{4}\) contains both Fe\({}^{3+}\) and Fe\({}^{2+}\), with up-spin Fe\({}^{3+}\) occupying tetrahedral sites and down-spin Fe\({}^{3+}\) occupying half the octahedral sites. The remaining octahedral sites in Fe\({}_{3}\)O\({}_{4}\) are occupied by up-spin Fe\({}^{2+}\).[77, 78] In Co\({}_{3}\)O\({}_{4}\), no-spin Co\({}^{3+}\) occupies octahedral sites, while high-spin Co\({}^{2+}\) (three unpaired \(d\) electrons) occupies tetrahedral sites in an AFM configuration.[79, 80, 81] For Mn\({}_{3}\)O\({}_{4}\), we adopted the "FIM6" configuration, as this was found to be the ground state in previous work.[37] TiO\({}_{2}\), CrO\({}_{3}\), and V\({}_{2}\)O\({}_{5}\) are diamagnetic, since they contain TMs with empty \(3d\) orbitals. Similarly, Cu\({}_{2}\)O is diamagnetic owing to the completely-filled \(3d\) orbitals of Cu.
### Determining \(U\)
We determined the required \(U\) value, with r\({}^{2}\)SCAN, for each binary TMO oxidation reaction (e.g., Ti\({}^{3+}\)\(\rightarrow\) Ti\({}^{4+}\) in 2Ti\({}_{2}\)O\({}_{3}\) + O\({}_{2}\)\(\rightarrow\) 4TiO\({}_{2}\)) by comparing the experimental enthalpy (per mole of O\({}_{2}\)) with the calculated (r\({}^{2}\)SCAN+_U_) values and identifying the \(U\) that minimizes the error against the experimental value. Note that \(U\) = 0 eV in our data simply reflects an r\({}^{2}\)SCAN calculation. In order to obtain the experimental oxidation enthalpy, the standard enthalpies of formation for all the considered TMOs were taken from the Wagman and/or Kubaschewski tables, [56, 57] thus ignoring the \(p-V\) and entropic contributions, similar to previous works. [37, 38, 82] The overall optimal \(U\) value for each TM was obtained by taking the average of the required \(U\) for each of the available oxidation reactions. In the case of Ni oxides, the oxidation of NiO to LiNiO\({}_{2}\) by 2Li\({}_{2}\)O + 4NiO + O\({}_{2}\)\(\rightarrow\) 4LiNiO\({}_{2}\) was considered as a proxy for the Ni\({}^{2+}\)\(\rightarrow\) Ni\({}^{3+}\) oxidation reaction. [38]
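The snippet below sketches this averaging procedure; the enthalpy-versus-\(U\) data are hypothetical placeholders standing in for the r\({}^{2}\)SCAN+_U_ results of Figure 1 (they do not reproduce any value reported here), and linear interpolation between the sampled \(U\) values is an assumption of the sketch rather than a statement about our workflow.

```python
import numpy as np

def required_u(u_values, calc_enthalpies, expt_enthalpy):
    """Return the U (eV) at which the calculated oxidation enthalpy (eV per mol O2)
    best matches experiment, using linear interpolation over the sampled U values."""
    u_grid = np.linspace(min(u_values), max(u_values), 1001)
    h_grid = np.interp(u_grid, u_values, calc_enthalpies)
    return float(u_grid[np.argmin(np.abs(h_grid - expt_enthalpy))])

# Hypothetical enthalpy-vs-U data (eV per mol O2) for two oxidation reactions of a TM "M".
reactions = {
    "MO -> M2O3":  {"u": [0.0, 1.0, 2.0, 3.0], "calc": [-7.9, -7.6, -7.3, -7.0], "expt": -7.45},
    "M2O3 -> MO2": {"u": [0.0, 1.0, 2.0, 3.0], "calc": [-4.1, -3.8, -3.5, -3.2], "expt": -3.65},
}

per_reaction = {name: required_u(r["u"], r["calc"], r["expt"]) for name, r in reactions.items()}
optimal_u = sum(per_reaction.values()) / len(per_reaction)
print(per_reaction, round(optimal_u, 1))  # both reactions give a required U of 1.5 eV here
```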
## 3 Results
### Oxidation energetics
Figure 1 displays the variation of the enthalpy of different oxidation reactions among binary TMOs, as a function of applied \(U\) in the r\({}^{2}\)SCAN+_U_ framework, for all TMs considered except Cr and Cu. Solid lines in each panel of Figure 1 represent DFT-calculated oxidation enthalpies, with each color corresponding to different oxidation reactions for the TM. For instance, in V oxides (Figure 1b), the solid black line corresponds to the oxidation reaction VO \(\rightarrow\) V\({}_{2}\)O\({}_{3}\), while the solid red and green lines indicate V\({}_{2}\)O\({}_{3}\)\(\rightarrow\) VO\({}_{2}\) and VO\({}_{2}\)\(\rightarrow\) V\({}_{2}\)O\({}_{5}\), respectively. Similarly, the experimental enthalpy of each oxidation reaction is represented by a dashed horizontal line of the same color. For example, the black dashed line in Figure 1b indicates the experimental oxidation enthalpy (-7.36 eV) of VO \(\rightarrow\) V\({}_{2}\)O\({}_{3}\). Also, a dotted vertical line of a given color highlights the required \(U\) value that minimizes the error between the DFT-calculated and experimental enthalpies for the oxidation reaction indicated by the same color. The dotted blue line in each panel signifies the overall optimal \(U\) for the TM, averaged across all available oxidation reactions.
We report an optimal \(U\) value of 2.3, 1.0, 1.8, 3.1, 1.8, and 2.1 eV, respectively, for Ti, V, Mn, Fe, Co, and Ni oxides, within the r\({}^{2}\)SCAN+_U_ framework (Figure 1). Notably, the optimal \(U\) obtained with r\({}^{2}\)SCAN is less than that reported previously for the SCAN functional (Table S1) for all 3\(d\) TMs considered (except V and Fe), which can be attributed to the better accuracy of r\({}^{2}\)SCAN compared to SCAN, as observed in non-TMOs. [36] For V oxides, the required \(U\) values for VO\({}_{2}\)\(\rightarrow\) V\({}_{2}\)O\({}_{5}\), V\({}_{2}\)O\({}_{3}\)\(\rightarrow\) VO\({}_{2}\), and VO \(\rightarrow\) V\({}_{2}\)O\({}_{3}\) are 0.0, 0.7, and 2.2 eV, respectively. Thus, the optimal \(U\) value for V is 1.0 eV (average of the three required \(U\) values), which is identical to the \(U\) correction required with SCAN. [38] The decreasing required \(U\) with increasing oxidation state of V is expected due to the decrease in the strength of exchange interactions among the \(d\) electrons. In the case of Fe, the FeO \(\rightarrow\) Fe\({}_{2}\)O\({}_{3}\) and FeO \(\rightarrow\) Fe\({}_{3}\)O\({}_{4}\) reactions require a \(U\) of 2.9 and 3.3 eV, respectively, resulting in an optimal \(U\) of 3.1 eV, which is also identical to the optimal \(U\) with SCAN. [37] Moreover, we obtain the highest optimal \(U\) of 3.1 eV for Fe, among all TMs considered in this work, which is consistent with the fact that Fe\({}^{3+}\) has the highest number of unpaired \(d\) electrons resulting in the strongest exchange interactions.
For Ti and Ni, we observe a marginal improvement in the _U_-value for r\({}^{2}\)SCAN when compared to SCAN. Specifically, we obtain an optimal \(U\) of 2.3 eV and 2.1 eV for Ti and Ni, respectively, versus 2.5 eV for both elements with SCAN. We find an optimal \(U\) value of 1.8 eV for both Mn (2.7 eV with SCAN) and Co (3.0 eV with SCAN). In Mn-oxides, the required \(U\) for the oxidation of Mn\({}_{2}\)O\({}_{3}\)\(\rightarrow\) MnO\({}_{2}\), and MnO \(\rightarrow\)
Mn\({}_{2}\)O\({}_{3}\) are 1.5 and 2.1 eV, respectively. The optimal \(U\) for Mn is transferable to other Mn oxides as well, as indicated by the robust agreement between the r\({}^{2}\)SCAN+_U_-calculated and experimental oxidation enthalpies for MnO \(\rightarrow\) Mn\({}_{3}\)O\({}_{4}\) (green lines in Figure 1c).
For Cr and Cu oxides, we obtain reasonable agreement with experimental data without a \(U\) correction (Figure S2), similar to our observation with SCAN. [38] In fact, for Cu, introducing the _U_ correction worsens the error in the calculated oxidation enthalpy for Cu\({}_{2}\)O \(\rightarrow\) CuO versus experiment, similar to our observation with SCAN(\(+\)_U_) as well, which can be attributed to the PAW potentials being derived at the PBE level. [38] However, the magnitude of the error (versus experiment) is smaller with r\({}^{2}\)SCAN (\(\approx\)13.1%) than with SCAN (\(\approx\)25.7%). In the case of Cr, the oxidation reaction CrO\({}_{2}\)\(\rightarrow\) CrO\({}_{3}\) requires _U_\(\sim\) 0.9 eV, but introducing a \(U\) correction worsens any agreement with experiment for Cr\({}_{2}\)O\({}_{3}\)\(\rightarrow\) CrO\({}_{2}\) (where required \(U\) = 0 eV). Thus, the optimal \(U\) for Cr oxides is 0.45 eV (\(<\)0.5 eV), which only provides a marginal improvement in describing oxidation enthalpies. Hence, we recommend using only r\({}^{2}\)SCAN for calculating any Cr oxide framework.
### Lattice parameters
All r\({}^{2}\)SCAN(\(+\)_U_) and SCAN(\(+\)_U_) calculated lattice parameters, on-site magnetic moments, and band gaps for each TMO are tabulated in Table S2. Additionally, the calculated lattice volumes by the four XC functionals are plotted against experimental data in Figure 2a for all oxides. Generally, both SCAN (green squares in Figure 2a) and r\({}^{2}\)SCAN (blue symbols) offer \(<\) 2.8% deviation from the experimental lattice parameters for all the TMOs considered, except VO, FeO, CuO, and LiNiO\({}_{2}\), indicating robust agreement with experiments for both functionals. In VO, SCAN and r\({}^{2}\)SCAN overestimate (by \(\sim\)8%) the experimental lattice constants, while the deviation in FeO and CuO is \(\sim\)3-4% and \(\sim\)8-10%, respectively. In LiNiO\({}_{2}\)
Figure 1: Calculated oxidation enthalpy versus the magnitude of \(U\) correction within r\({}^{2}\)SCAN+ \(U\) framework for (a) Ti, (b) V, (c) Mn, (d) Fe, (e) Co, and (f) Ni oxides. Solid, dashed, and dotted lines of a given color indicate calculated, experimental, and required \(U\) values for a given oxidation reaction. Optimal \(U\) for each TM is indicated by the dotted blue line in each panel.
SCAN's \(\beta\) angle evaluation is \(\sim\)4.1% different from experiment.
Notably, SCAN and r\({}^{2}\)SCAN do show qualitative differences in their calculated lattice parameters (when compared against experiments) across TMOs. For instance, both functionals overestimate the experimental lattice constants in TiO\({}_{2}\), Ti\({}_{2}\)O\({}_{3}\), and VO, while they underestimate in CrO\({}_{2}\), CrO\({}_{3}\), MnO\({}_{2}\), and Fe\({}_{3}\)O\({}_{4}\). There are also examples (MnO and Mn\({}_{2}\)O\({}_{3}\)) where SCAN underestimates the experimental lattice constants while r\({}^{2}\)SCAN overestimates. Overall, there are cases where SCAN's errors in lattice parameter estimations are lower versus experiments (e.g., Cr\({}_{2}\)O\({}_{3}\), CoO), r\({}^{2}\)SCAN's errors are lower (e.g., CrO\({}_{2}\), CrO\({}_{3}\), MnO\({}_{2}\), Fe\({}_{3}\)O\({}_{4}\)), and both functionals exhibit identical errors (e.g., TiO\({}_{2}\), Co\({}_{3}\)O\({}_{4}\), NiO, Cu\({}_{2}\)O), signifying that both functionals offer similar performance in terms of geometrical properties.
Comparing r\({}^{2}\)SCAN and SCAN, we find that r\({}^{2}\)SCAN's lattice constants are generally larger than SCAN across TMOs (e.g., Ti\({}_{2}\)O\({}_{3}\), Cr\({}_{2}\)O\({}_{3}\), CrO\({}_{3}\), VO\({}_{2}\), etc.). As a range, r\({}^{2}\)SCAN estimates lattice constants that are a maximum of \(\sim\)1.5% larger than SCAN (in CrO\({}_{3}\)) and a minimum of \(\sim\)0.1% larger than SCAN (in Mn\({}_{2}\)O\({}_{3}\)). Having said that, there are instances where r\({}^{2}\)SCAN's lattice constant evaluations are lower than SCAN (VO, CoO, CuO, and LiNiO\({}_{2}\)) and cases where both functionals are identical (TiO\({}_{2}\), Co\({}_{3}\)O\({}_{4}\), NiO, and Cu\({}_{2}\)O). In specific TMOs, SCAN and r\({}^{2}\)SCAN calculate an identical (individual) lattice constant, while the other lattice constants with r\({}^{2}\)SCAN are larger than SCAN. For example, \(a\) and \(c\) lattice constants with r\({}^{2}\)SCAN are higher than SCAN in V\({}_{2}\)O\({}_{5}\) while both functionals estimate \(b\) = 3.55 A.
On introducing the optimal \(U\) correction, an increase in the value of calculated lattice constants is obtained for both SCAN and r\({}^{2}\)SCAN functionals for all TMOs. The lattice constants computed by r\({}^{2}\)SCAN+_U_ (yellow symbols in Figure 2a) are up to 1.3% higher than r\({}^{2}\)SCAN, except in FeO (\(\sim\)4.2% higher). Similar to the comparison of r\({}^{2}\)SCAN vs. SCAN, there are systems where r\({}^{2}\)SCAN+_U_ predicts larger, smaller, and identical lattice constants compared to SCAN+_U_ (red triangles). For example, r\({}^{2}\)SCAN+_U_ calculates larger lattice constants than SCAN+_U_ in VO\({}_{2}\), V\({}_{2}\)O\({}_{5}\), MnO, Mn\({}_{2}\)O\({}_{3}\) and Fe\({}_{3}\)O\({}_{4}\) (maximum of \(\sim\)0.5% higher in V\({}_{2}\)O\({}_{5}\)), while for Ti\({}_{2}\)O\({}_{3}\), CoO and NiO, r\({}^{2}\)SCAN+_U_'s estimations are smaller than SCAN+_U_ (maximum deviation of \(\sim\)2.1% in Ti\({}_{2}\)O\({}_{3}\)). Both SCAN+_U_ and r\({}^{2}\)SCAN+_U_ functionals evaluate identical lattice parameters for TiO\({}_{2}\), Co\({}_{3}\)O\({}_{4}\) and LiNiO\({}_{2}\).
Overall, lattice constants calculated by SCAN+_U_ and r\({}^{2}\)SCAN+_U_ deviate \(<\sim\)3.3% from experiments for all TMOs, except VO and VO\({}_{2}\) where deviations of \(\sim\)8.5% and \(\sim\)4.6% are observed, respectively. Adding \(U\) improves the agreement with experiment for both SCAN and r\({}^{2}\)SCAN in Co\({}_{3}\)O\({}_{4}\), while r\({}^{2}\)SCAN+_U_ gives the best estimate of the lattice parameters in FeO (\(<\) 1% deviation vs. experiments) compared to SCAN, SCAN+_U_ and r\({}^{2}\)SCAN. Notably, all functionals break the rocksalt symmetry of VO, MnO, and FeO, while the cubic symmetry of Fe\({}_{3}\)O\({}_{4}\) is retained only by SCAN. In Ti\({}_{2}\)O\({}_{3}\), the hexagonal symmetry is broken by SCAN but the symmetry is preserved by the other frameworks. In summary, we find that the differences in lattice parameter estimations to be minimal across the four functionals on average, with notable exceptions of a few systems.
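For reference, the percentage deviations quoted in this section are plain relative errors of the calculated lattice quantities with respect to experiment; a minimal helper of the kind sketched below (with placeholder numbers) is all that is involved.

```python
# Minimal helper (not from the paper): percentage deviation of a calculated
# lattice quantity from its experimental counterpart. Values are placeholders.
def percent_deviation(calc, expt):
    return 100.0 * (calc - expt) / expt

# e.g. a calculated vs experimental lattice constant (in Angstrom)
print(f"{percent_deviation(4.25, 4.17):+.1f} %")   # positive => overestimation
```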
### On-site magnetic moments
On-site magnetic moments of the TMOs (Figure 2c and Table S2) computed by SCAN and r\({}^{2}\)SCAN generally underestimate experimental values, with the exception of MnO\({}_{2}\), Mn\({}_{2}\)O\({}_{3}\), CrO\({}_{2}\), and VO\({}_{2}\). Note that larger magnetic moments typically indicate stronger localization of \(d\) electrons. Comparing r\({}^{2}\)SCAN and SCAN calculations, we find that r\({}^{2}\)SCAN typically estimates smaller magnetic moments than SCAN but with several exceptions, such as MnO, MnO\({}_{2}\), Mn\({}_{2}\)O\({}_{3}\), Cr\({}_{2}\)O\({}_{3}\), and VO\({}_{2}\). Thus, on average, SCAN's magnetic moment predictions are in better agreement with experiments. However, in terms of magnitude,
moments predicted by r\({}^{2}\)SCAN deviate by \(<3\%\) from SCAN's estimates, except CuO (\(\sim 6.8\%\) deviation), CrO\({}_{2}\) (\(\sim 3.5\%\)), and MnO\({}_{2}\) (\(\sim 3.5\%\)), highlighting that the differences in the predictions are marginal.
Adding optimal \(U\) to both SCAN and r\({}^{2}\)SCAN increases the magnitude of the calculated on-site magnetic moments for all TMOs (except VO\({}_{2}\), which is predicted to be metallic by all functionals), consistent with the expectation that the \(U\) correction facilitates \(d\) electron localization. r\({}^{2}\)SCAN+_U_-calculated data are similar to the corresponding SCAN+_U_ values (\(<2.3\%\) variation), except LiNiO\({}_{2}\) (\(\sim\)6.3% variation), and Ti\({}_{2}\)O\({}_{3}\) (\(\sim\)3.8%). Similar to r\({}^{2}\)SCAN versus SCAN, r\({}^{2}\)SCAN+_U_ estimates smaller magnetic moments than SCAN+_U_, with notable exceptions being VO\({}_{2}\), Mn\({}_{2}\)O\({}_{3}\), MnO\({}_{2}\) and FeO. Overall, we observe the accuracy in calculated on-site magnetic moments versus experiments to follow the order SCAN+_U_\(>\) r\({}^{2}\)SCAN+_U_\(>\) SCAN \(>\) r\({}^{2}\)SCAN for several TMOs. However, there are specific cases where specific XC frameworks offer better accuracy in calculating magnetic moments, such as SCAN in CrO\({}_{2}\), Mn\({}_{2}\)O\({}_{3}\), MnO\({}_{2}\), Fe\({}_{3}\)O\({}_{4}\) and CuO,
Figure 2: (a) Comparison of calculated and experimental lattice volume (in Å\({}^{3}\)) of all TMOs considered. (b) Violin plot capturing the difference between the experimental and computed band gap (in eV) across TMO systems using the four XC frameworks. The empty circle and horizontal line in the inner box plot corresponds to the mean and median of the calculated band gaps, respectively. (c) Heat map representation of the differences between the experimental and calculated on-site magnetic moments (in \(\mu_{B}\)) using the four XC functionals and across all TMOs. A value of zero indicates perfect consistency, while red (blue) colors indicate overestimation (underestimation) of magnetic moments. Hatched boxes either correspond to experimentally undetermined magnetic moments (VO) or calculations not executed with \(U\) frameworks (Cr and Cu oxides).
r\({}^{2}\)SCAN in Mn\({}_{3}\)O\({}_{4}\) and Cr\({}_{2}\)O\({}_{3}\), and r\({}^{2}\)SCAN+_U_ in V\({}_{2}\)O\({}_{3}\). Given the numerically marginal deviations in calculated magnetic moments across the XC frameworks (\(\sim\)10% deviation), we expect an increase/decrease in accuracy to be marginal amongst the XC frameworks considered.
### Band gaps
The differences between calculated and experimental band gaps of all TMOs considered are visualized as violin plots for SCAN (green violin), SCAN+_U_ (red), r\({}^{2}\)SCAN (blue), and r\({}^{2}\)SCAN+_U_ in Figure 2b. The top and bottom ends of the individual violins mark the highest and lowest differences in the respective calculated data. Note that the mean values (white empty circles) are similar for SCAN and r\({}^{2}\)SCAN, and in turn are lower than their _U_-corrected versions. In other words, addition of the \(U\)-correction reduces the error of calculated band gaps compared to experimental values, which is expected given that semi-local DFT typically underestimates band gaps. Also, we find that SCAN+_U_ displays the lowest mean band gap difference among the XC functionals considered, indicating that on-average SCAN+_U_ provides better computed band gaps.
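The quantity summarised by the violin plots is simply the per-functional mean of the experimental-minus-calculated band gap differences; an illustrative computation with placeholder gap values is sketched below.

```python
# Illustrative computation (not the paper's data) of the mean experimental-minus-
# calculated band gap difference per functional, i.e. the quantity summarised by
# the violin plots. All gap values below are hypothetical placeholders (eV).
import numpy as np

expt = {"NiO": 4.3, "MnO": 3.9, "FeO": 2.4}
calc = {"SCAN":   {"NiO": 2.5, "MnO": 2.0, "FeO": 1.0},
        "SCAN+U": {"NiO": 3.1, "MnO": 2.8, "FeO": 1.8}}

for functional, gaps in calc.items():
    diffs = [expt[s] - gaps[s] for s in expt]          # experimental - computed
    print(functional, f"mean difference = {np.mean(diffs):.2f} eV")
```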
We present calculated electronic DOS of select TMOs, namely CoO (panels a and b), V\({}_{2}\)O\({}_{3}\) (c and d), and Mn\({}_{2}\)O\({}_{3}\) (e and f), in Figure 3, to illustrate qualitative trends in computed band gaps. The DOS for the remaining TMOs, calculated by the four XC frameworks, are compiled in Figures S3-S19 of the SI. In each DOS panel, solid orange and solid green lines correspond to the 2_p_-states of O and the 3_d_-states of the TM, respectively. Dashed black lines represent Fermi levels in metallic compounds. Dotted vertical lines represent valence and conduction band edges in semiconducting/insulating compounds, with the band gaps indicated by the text annotation near the conduction band minimum (CBM). The zero of the energy scale is set to the valence band maximum (VBM) for TMOs with a band gap and to the Fermi level in metallic TMOs.
We observe that r\({}^{2}\)SCAN generally calculates a smaller band gap than SCAN for most TMOs (maximum of \(\sim\)66% lower in MnO\({}_{2}\), see Table S2), as illustrated by the case of CoO in panels a and b of Figure 3. Notable exceptions do exist to this observation, such as V\({}_{2}\)O\({}_{5}\) (\(\sim\)1.7% larger), CrO\({}_{3}\) (\(\sim\)3.2%), MnO (\(\sim\)4.3%), and Fe\({}_{2}\)O\({}_{3}\) (\(\sim\)1.7%), where r\({}^{2}\)SCAN calculated band gaps are marginally larger than SCAN. Both SCAN and r\({}^{2}\)SCAN incorrectly describe the ground state electronic configuration of narrow band gap TMOs (i.e., experimental band gaps \(<\) 1 eV), including Ti\({}_{2}\)O\({}_{3}\) (Figure S4), V\({}_{2}\)O\({}_{3}\)(Figure 3c and S3c), VO\({}_{2}\) (Figure S7) and Fe\({}_{3}\)O\({}_{4}\) (Figure S15) to be metallic, with the exception of MnO\({}_{2}\) where both SCAN and r\({}^{2}\)SCAN estimate a narrow gap (Figures S12a and S12c). Additionally, both functionals also calculate the wrong electronic structure in the case of a non-narrow-gap semiconductor, Mn\({}_{2}\)O\({}_{3}\) (Figure S3), which exhibits an experimental gap of 1.2-1.3 eV. [83, 84] However, SCAN and r\({}^{2}\)SCAN qualitatively describe the right electronic structure in the case of wide band gap TMOs such as FeO (Figure S13), Fe\({}_{2}\)O\({}_{3}\) (Figure S14), and NiO (Figure S17), with a significant quantitative underestimation of the experimental gaps. In any case, the differences in electronic structure predictions between SCAN and r\({}^{2}\)SCAN in TMOs are minimal, with SCAN being marginally better in accuracy.
Introducing a \(U\) correction to SCAN and r\({}^{2}\)SCAN widens or opens the band gap, especially in narrow band gap TMOs, as illustrated by the case of V\({}_{2}\)O\({}_{3}\) (panels c and d in Figure 3). The opening of the band gap with \(U\) correction is expected since localization of \(d\) electrons, which form the VBM and/or CBM in 3_d_-TMOs, is facilitated with \(U\) addition, in turn resulting in a larger gap. However, in the case of VO\({}_{2}\) (Figure S7), adding \(U\) does not capture the MIT that occurs at low temperatures (\(<\) 341 K [61]) with either SCAN or r\({}^{2}\)SCAN, causing the erroneous prediction of metallic behavior. Generally, SCAN+_U_ calculates
a larger band gap than r\({}^{2}\)SCAN+_U_ (Table S2), as highlighted by the case of Mn\({}_{2}\)O\({}_{3}\) (panels e and f in Figure 3). In fact, SCAN+_U_ is the only framework (among those considered) to estimate a band gap in
Figure 3: DOS for CoO calculated using (a) SCAN and (b) r\({}^{2}\)SCAN, DOS for V\({}_{2}\)O\({}_{3}\) computed using (c) r\({}^{2}\)SCAN and (d) r\({}^{2}\)SCAN+_U_, and DOS for Mn\({}_{2}\)O\({}_{3}\) estimated using (e) SCAN+_U_ and (f) r\({}^{2}\)SCAN+_U_.
Mn\({}_{2}\)O\({}_{3}\), which is consistent with experiment. Moreover, SCAN+_U_'s evaluations of larger band gaps result in better (poorer) quantitative agreement with experiments in wide (narrow) gap materials, such as MnO and FeO (V\({}_{2}\)O\({}_{3}\) and MnO\({}_{2}\)).
Note that SCAN+_U_ and r\({}^{2}\)SCAN+_U_ do underestimate the experimental band gaps, similar to SCAN and r\({}^{2}\)SCAN, in wide gap TMOs. The only exception to this observation is CoO, where SCAN+_U_ overestimates the band gap versus experiment (Figure S3a and Table S2), as also observed in our previous work. [38] In select TMOs, including Fe\({}_{2}\)O\({}_{3}\) and V\({}_{2}\)O\({}_{5}\), r\({}^{2}\)SCAN+_U_'s band gap is larger than SCAN+_U_, but the magnitude of difference (\(\leq 0.2\) eV) is meagre. Thus, for electronic structure predictions, we expect SCAN+_U_ to provide the best qualitative and quantitative band gaps across TMOs, among the functionals considered here, especially for wide gap semiconductors/insulators. However, the qualitative trends provided by r\({}^{2}\)SCAN+_U_ are quite robust as well and in small gap semiconductors (\(<1\) eV gap), r\({}^{2}\)SCAN+_U_'s quantitative accuracy is often better than SCAN+_U_.
### Transferability checks
To examine the transferability of the optimal \(U\) values determined in this work (with r\({}^{2}\)SCAN), to oxide systems not used for obtaining the values, we perform checks on systems with different oxidation state and/or coordination environment for each TM. We compare calculated values against available experimental data, such as structural, electronic, magnetic, and/or electrochemical properties. Specifically, we choose Ba\({}_{2}\)TiO\({}_{4}\) as a check for Ti, BiVO\({}_{4}\) for V, K\({}_{3}\)MnO\({}_{4}\), K\({}_{2}\)MnO\({}_{4}\), and Mn\({}_{2}\)O\({}_{7}\) for Mn, SrFeO\({}_{3}\) for Fe, LiCoO\({}_{2}\)-CoO\({}_{2}\) for Co, and LiNiO\({}_{2}\)-NiO\({}_{2}\) for Ni. Data related to transferability checks are compiled in Figure 4, Table 1, and Table S3.
In the case of Ba\({}_{2}\)TiO\({}_{4}\), we compare the calculated lattice parameters with experimental values (see Table S3 and lattice volume differences plotted in Figure 4). Ba\({}_{2}\)TiO\({}_{4}\) crystallizes in a monoclinic structure (space group \(P2_{1}\)/_n_) at low temperatures, where the unit cell is composed of four formula units. [85, 86] Ti atoms are present in distorted tetrahedra composed of neighbouring oxygen atoms (TiO\({}_{4}\)) within the Ba\({}_{2}\)TiO\({}_{4}\) lattice, which is different from the octahedral environments sampled in TiO\({}_{2}\) and Ti\({}_{2}\)O\({}_{3}\). Upon structure relaxation, we observe that both r\({}^{2}\)SCAN and r\({}^{2}\)SCAN+_U_ functionals marginally overestimate (by \(\sim\)2%) the experimental lattice parameters (Figure 4 and Table S3). Similar to trends observed in Table S2, adding \(U\) to r\({}^{2}\)SCAN increases the calculated lattice parameters in Ba\({}_{2}\)TiO\({}_{4}\) (by \(\sim\)0.03 A), thereby marginally reducing the agreement with experiment.
We benchmark both structural and electronic properties of BiVO\({}_{4}\) as a transferability check for V-based systems. Note that BiVO\({}_{4}\) transforms from tetragonal (_I_41/_a_) to a monoclinic (_I_2/_b_) 'scheelite' phase below \(\sim 528\) K, [87, 88] which is a reversible second order ferroelastic transition driven by soft optical phonon modes. The BiVO\({}_{4}\) unit cell possesses four formula units, with tetrahedrally coordinated V ions, which is different from the coordination environments of V in VO, V\({}_{2}\)O\({}_{3}\), VO\({}_{2}\), and V\({}_{2}\)O\({}_{5}\). Importantly, monoclinic-BiVO\({}_{4}\) spontaneously transforms to the tetragonal structure upon structure relaxation with r\({}^{2}\)SCAN and r\({}^{2}\)SCAN+_U_, similar to the observation by Liu et al. [87] with GGA and hybrid functionals. Thus, neither r\({}^{2}\)SCAN nor r\({}^{2}\)SCAN+_U_ predicts the correct ground state structure. Additionally, BiVO\({}_{4}\) possesses a band gap of 2.4-2.48 eV [89] and is a candidate photocatalyst. [87] Both r\({}^{2}\)SCAN and r\({}^{2}\)SCAN+_U_ provide similar band gap predictions (2.01-1.98 eV), which is in good qualitative agreement with experiment. Surprisingly, r\({}^{2}\)SCAN+_U_ evaluates a marginally lower band gap than r\({}^{2}\)SCAN (see panels a and b in Figure 4). However, both r\({}^{2}\)SCAN and r\({}^{2}\)SCAN+_U_ predict similar states occupying the valence band (O\({}_{p}\)) and conduction band (V\({}_{d}\)) edges.
The rationale behind the choice of K\({}_{3}\)MnO\({}_{4}\), K\({}_{2}\)MnO\({}_{4}\), and Mn\({}_{2}\)O\({}_{7}\) as checks for Mn-based systems is to explore the higher, unsampled oxidation states of Mn, namely +5, +6, and +7 in K\({}_{3}\)MnO\({}_{4}\), K\({}_{2}\)MnO\({}_{4}\), and Mn\({}_{2}\)O\({}_{7}\), respectively. Also, Mn resides in tetrahedral coordination in these compounds, which is different from the octahedral coordination observed in MnO, Mn\({}_{2}\)O\({}_{3}\), and MnO\({}_{2}\). Although Mn\({}^{2+}\) resides in tetrahedral sites in spinel-Mn\({}_{3}\)O\({}_{4}\), we did not use the spinel structure to obtain our optimal \(U\). We benchmark the calculated lattice parameters versus experiments for all Mn-based transferability checks.
Mn\({}_{2}\)O\({}_{7}\) is a volatile liquid at 298 K and solidifies to a monoclinic crystal structure (_P_2\({}_{1}\)/_c_) below \(\sim\) 279 K, with the unit cell consisting of 8 formula units of corner sharing tetrahedral MnO\({}_{4}\) pairs. [90, 91] Upon structural relaxation, both r\({}^{2}\)SCAN and r\({}^{2}\)SCAN+_U_ underestimate the lattice constants of monoclinic-Mn\({}_{2}\)O\({}_{7}\) by \(\sim\)1-3% (Figure 4 and Table S3). In the case of K\({}_{3}\)MnO\({}_{4}\), the tetragonal symmetry (_I_\(\overline{4}\)2_m_) [92] is broken with r\({}^{2}\)SCAN functional resulting in an orthorhombic structure, while the symmetry is preserved by r\({}^{2}\)SCAN+_U_ (see Figure 4 and Table S3). Nonetheless, both r\({}^{2}\)SCAN and r\({}^{2}\)SCAN+_U_ significantly underestimate the \(c\) parameter (by \(\sim\) 13.5%) and overestimate the \(a\) or \(b\) parameter (\(\sim\) 10.2%). K\({}_{2}\)MnO\({}_{4}\) is an orthorhombic crystal (_Pnma_) with four formula units per unit cell. [93] Here, r\({}^{2}\)SCAN and r\({}^{2}\)SCAN+_U_ predict identical lattice parameters, which marginally underestimate experimental values (by \(\sim\) 0.4-1%, see Figure 4 and Table S3).
The choice of SrFeO\({}_{3}\), a cubic perovskite, as a check for Fe is largely motivated by the 4+ oxidation state exhibited by Fe in the structure, which is not sampled in FeO, Fe\({}_{2}\)O\({}_{3}\), or Fe\({}_{3}\)O\({}_{4}\). Both r\({}^{2}\)SCAN and r\({}^{2}\)SCAN+_U_ preserve the cubic symmetry during structure relaxation, with r\({}^{2}\)SCAN+_U_'s lattice parameters
Figure 4: DOS for BiVO\({}_{4}\) calculated using (a) r\({}^{2}\)SCAN and (b) r\({}^{2}\)SCAN+_U_. (c) Difference between experimental and calculated lattice volumes (using r\({}^{2}\)SCAN and r\({}^{2}\)SCAN+_U_), plotted as a heatmap, for various systems. Red (blue) squares indicate overestimated (underestimated) calculated lattice volumes versus experiment.
identical to experiments and r\({}^{2}\)SCAN's parameters being a slight underestimation (\(\sim 0.5\%\), see Figure 4 and Table S3). In terms of magnetic configuration of Fe in SrFeO\({}_{3}\), Takeda et al. [94] reported a helical spin structure via their neutron diffraction experiments, with competing FM and AFM interactions. However, Shein et al. [95] found a FM metallic state to be the ground state of SrFeO\({}_{3}\), over a wide range of pressures, based on their first principles calculations, which they attributed to stronger FM than AFM interactions. We considered a FM configuration of Fe atoms in the SrFeO\({}_{3}\) unit cell, and the on-site magnetic moments on Fe calculated by both r\({}^{2}\)SCAN (3.375 \(\mu_{B}\), Table 1) and r\({}^{2}\)SCAN\(+\,U\) (3.819 \(\mu_{B}\)) overestimate the experimental value (2.7\(\pm\)0.4 \(\mu_{B}\)[94]). However, our calculated magnetic moments do indicate a localization of \(\sim\)4 electrons on the \(d\) orbitals of Fe, consistent with its +4 oxidation state.
We choose CoO\({}_{2}\) (_R\(\overline{3}\)m_ or 'O3' polymorph [96]), and NiO\({}_{2}\) (_P1m1_ or 'O1' [97]), both layered structures, as transferability checks for Co and Ni, respectively, owing to the unsampled 4+ oxidation states of each TM. In terms of experimental property to benchmark, we choose the average Li intercalation voltage in these structures, i.e., LiCoO\({}_{2}\)-CoO\({}_{2}\), and LiNiO\({}_{2}\)-NiO\({}_{2}\) pairs, since they have been measured with high precision. The reader is referred to previous works on calculating and benchmarking average 'topotactic' intercalation voltages. [98, 99] r\({}^{2}\)SCAN underestimates the experimental average voltage [96, 99, 100, 101, 102, 103] in LiNiO\({}_{2}\)-NiO\({}_{2}\) (by \(\sim 8\%\)), while it overestimates the average voltage in LiCoO\({}_{2}\)-CoO\({}_{2}\) (by \(\sim 1.7\%\)), similar to trends observed with SCAN. [99] The addition of \(U\) to r\({}^{2}\)SCAN leads to an improvement in agreement with the experimental voltage in the Ni-system (deviation of \(\sim 1.8\%\)), while it worsens the agreement in the Co-system (deviation of \(\sim 4.4\%\)). Nevertheless, r\({}^{2}\)SCAN\(+\,U\) does overestimate the average voltage in both Co and Ni systems, similar to the behavior of SCAN\(+\,U.\)[99]
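The average topotactic voltage referred to above follows the usual total-energy expression, \(V\approx-[E(\mathrm{LiMO_{2}})-E(\mathrm{MO_{2}})-E(\mathrm{Li})]/e\) for one Li transferred per formula unit; a hedged sketch with hypothetical energies is shown below.

```python
# Sketch (not the paper's script) of the average topotactic intercalation voltage
# from DFT total energies, V = -[E(LiMO2) - E(MO2) - E(Li, metal)] / e for one Li
# transferred per formula unit. All energies below are hypothetical placeholders.
def average_voltage(E_lithiated, E_delithiated, E_Li_metal, n_li=1):
    """Average voltage (V) for transferring n_li Li between the two end members."""
    return -(E_lithiated - E_delithiated - n_li * E_Li_metal) / n_li

# hypothetical total energies in eV per formula unit
print(average_voltage(E_lithiated=-22.5, E_delithiated=-16.4, E_Li_metal=-1.9))
```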
## 4 Discussion
In this work, we evaluated the performance of the r\({}^{2}\)SCAN functional among binary TMOs consisting of 3_d_-TMs by calculating the oxidation enthalpies, lattice parameters, on-site magnetic moments, and band gaps. Additionally, for each TM-O\({}_{2}\) system considered, we calculated the optimal Hubbard-_U_ corrections to be used in a r\({}^{2}\)SCAN\(+\,U\) framework, based on experimental oxidation enthalpies. Although theoretical approaches exist to derive \(U\) values, [41, 42, 43, 44, 45, 46, 47] using oxidation enthalpies nominally gives an "average" correction that is suitable across several oxidation states of a given TM. Specifically, our optimal \(U\) values are 2.3,
| Composition (space group) | Source | Voltage (V) | Magnetic moment (\(\mu_{B}\)) |
| --- | --- | --- | --- |
| LiCoO\({}_{2}\)-CoO\({}_{2}\) (_R\(\overline{3}\)m_) | Expt. | 4.05 | - |
|  | r\({}^{2}\)SCAN | 4.12 | - |
|  | r\({}^{2}\)SCAN\(+\,U\) | 4.23 | - |
| LiNiO\({}_{2}\)-NiO\({}_{2}\) (_P1m1_) | Expt. | 3.85 | - |
|  | r\({}^{2}\)SCAN | 3.54 | - |
|  | r\({}^{2}\)SCAN\(+\,U\) | 3.92 | - |
| SrFeO\({}_{3}\) (_Pm\(\overline{3}\)m_) | Expt. | - | 2.7\(\pm\)0.4 |
|  | r\({}^{2}\)SCAN | - | 3.375 |
|  | r\({}^{2}\)SCAN\(+\,U\) | - | 3.819 |

Table 1: Voltage and magnetic moments calculated by r\({}^{2}\)SCAN and r\({}^{2}\)SCAN\(+\,U\) compared against experimental values (denoted by ‘Expt.’). The \(U\) values used with r\({}^{2}\)SCAN\(+\,U\) are the corresponding optimal \(U\) values obtained for each TM (from Figure 1).
1.0, 1.8, 3.1, 1.8, and 2.1 eV for Ti, V, Mn, Fe, Co, and Ni, respectively, while we do not deem a \(U\) correction necessary for Cr and Cu oxides. Interestingly, the optimal \(U\) corrections needed with r\({}^{2}\)SCAN are lower in magnitude compared to SCAN for Ti, Mn, Co, and Ni oxides (while the corrections are identical for V and Fe oxides), indicating that r\({}^{2}\)SCAN exhibits lower errors with oxidation enthalpies and possibly lower SIEs than SCAN. However, this is not reflected in other physical properties. On average, we find the accuracy, versus experimental values, to be similar for r\({}^{2}\)SCAN compared to SCAN, and for r\({}^{2}\)SCAN\(+\)_U_ compared to SCAN\(+\)_U_, respectively, in lattice parameter, on-site magnetic moment, and band gap evaluations as seen in Figure 2.
The general trends in lattice parameter, magnetic moment, and band gap predictions, across the XC frameworks considered, can be summarized as follows. We observe that r\({}^{2}\)SCAN generates larger lattice constants than SCAN and on addition of the \(U\) correction to both functionals, the lattice constants further increase. Thus, in systems where SCAN underestimates experimental lattice constants (e.g., CrO\({}_{2}\), CrO\({}_{3}\), MnO\({}_{2}\)), shifting to r\({}^{2}\)SCAN improves agreement (e.g., error in r\({}^{2}\)SCAN in CrO\({}_{3}\) is 0.8% versus 2.3% with SCAN). Also, there are instances where the ground state symmetry of the TMO is not preserved by some or all of the XC frameworks considered (i.e., in VO, MnO, FeO, Fe\({}_{3}\)O\({}_{4}\), and Ti\({}_{2}\)O\({}_{3}\)), highlighting systematic issues in the XC treatment across the four frameworks considered. The calculated on-site magnetic moments by r\({}^{2}\)SCAN (and r\({}^{2}\)SCAN\(+\)_U_) are marginally lower than SCAN (SCAN\(+\)_U_), with the \(U\) correction nominally increasing the calculated moments calculated by r\({}^{2}\)SCAN and SCAN. However, calculated magnetic moments across the four XC frameworks differ by \(<10\%\) (except LiNiO\({}_{2}\)), signifying marginal differences in accuracy. Both SCAN and r\({}^{2}\)SCAN underestimate band gaps across all TMOs (except MnO\({}_{2}\)), with band gaps calculated by r\({}^{2}\)SCAN typically being lower than SCAN, and adding the \(U\) opens/widens the gap. Thus, SCAN\(+\)_U_ offers the best quantitative accuracy versus experimental band gaps, especially for wide gap semiconductors. Note that the qualitative trends from r\({}^{2}\)SCAN\(+\)_U_ are consistent with the trends exhibited by SCAN\(+\)_U_ and should be reliable in electronic structure predictions in other TM-based oxide systems.
r\({}^{2}\)SCAN adopts the smooth polynomial interpolation function of rSCAN to maintain numerical stability during SCF calculations. Additionally, the reformed gradient expansion for correlation introduced in r\({}^{2}\)SCAN (partially) negates the error introduced to the slowly varying density by the non-vanishing interpolation function, [32] which largely accounts for the observed variation in accuracy of r\({}^{2}\)SCAN versus SCAN. Based on our data, we observe that r\({}^{2}\)SCAN is not systematically more accurate than SCAN across all TMOs and for all property predictions. For example, we have lower optimal \(U\) values indicating lower SIEs with r\({}^{2}\)SCAN versus SCAN, but also lower on-site magnetic moments (except Mn and Cr oxides) signifying poorer _d_-electron localization with r\({}^{2}\)SCAN. Further, the smaller band gaps with r\({}^{2}\)SCAN (versus SCAN) may be caused by the residual SIEs, resulting in an underestimation of the CBM across TMOs. Hence, usage of r\({}^{2}\)SCAN\((+\)_U_) in TM-based systems must be done with care and efforts should be made to benchmark as many available experimental properties as possible before performing "true" computational predictions.
We considered the transferability of the \(U\) values estimated in this work, with r\({}^{2}\)SCAN, by examining systems for each TM with oxidation states and/or coordination environments not sampled while calculating the optimal \(U\). In general, we find that r\({}^{2}\)SCAN or its Hubbard \(U\) corrected version estimate similar lattice parameters and hence yield similar accuracies on structural properties. Analogously, the calculated on-site magnetic moments in SrFeO\({}_{3}\) and the band gaps in BiVO\({}_{4}\) are similar between r\({}^{2}\)SCAN and r\({}^{2}\)SCAN\(+\)_U_. In case of electrochemical properties, we do find tangible variations in the calculated average voltages of r\({}^{2}\)SCAN and r\({}^{2}\)SCAN\(+\)_U_, with r\({}^{2}\)SCAN\(+\)_U_ exhibiting overall lower errors across the Co and Ni systems. Thus, we
Figure 5: (a) Overall computational time (electronic+ionic steps) (b) computational time per ionic step and (c) computational time per electronic loop taken for each TM-O\({}_{2}\) binary system with SCAN+_U_, r\({}^{2}\)SCAN, and r\({}^{2}\)SCAN+_U_ frameworks relative to SCAN. Values greater (smaller) than 1 in each panel indicates that a given calculation is slower (faster) than SCAN.
find the optimal \(U\) values obtained in this work to be transferable across oxide frameworks not sampled _a priori_. Nevertheless, more benchmarking studies to compare the performance of r\({}^{2}\)SCAN\(+\)_U_ with r\({}^{2}\)SCAN (and experiments) will help in quantifying the reliability and errors associated with using r\({}^{2}\)SCAN\(+\)_U_.
Given that r\({}^{2}\)SCAN\((+\)_U_) is not systematically more or less accurate than SCAN\((+\)_U_), the computational performance and numerical stability of r\({}^{2}\)SCAN\((+\)_U_) is critical in determining its utility in property predictions across materials. Thus, we have quantified the computational time of r\({}^{2}\)SCAN\((+\)_U_) and SCAN\(+\)_U_ relative to SCAN for each TM-O\({}_{2}\) system considered in Figure S1. Specifically, panels a, b, and c of Figure 5 plot the overall (electronic\(+\)ionic steps), per ionic step, and per electronic step computational time, respectively, taken by the SCAN\(+\)_U_ (blue bars), r\({}^{2}\)SCAN (red), and r\({}^{2}\)SCAN\(+\)_U_ (yellow) frameworks, relative to the computational time taken by the SCAN functional (dotted black lines), for each TM-based set of oxides. Details on calculating the computational times used by the functionals is described in the 'Computational time' section of the SI. Note that our objective is not to provide a rigorous quantification of computational resources required for each XC framework, but to provide a qualitative understanding of the relative computational costs across the frameworks considered.
For each electronic step, r\({}^{2}\)SCAN\((+\)_U_) is typically faster than SCAN (Figure 5), signifying better numerical stability than SCAN, with Mn, Ni, and Cu oxides being marginal exceptions. In contrast, on a per-ionic step basis, r\({}^{2}\)SCAN and r\({}^{2}\)SCAN\(+\)_U_ are slower than SCAN, by \(\sim\)1.05-1.78\(\times\) and \(\sim\)1.1-1.31\(\times\), respectively, highlighting that r\({}^{2}\)SCAN\((+\)_U_) takes more electronic steps to converge per ionic step. Importantly, the overall computational time (ionic\(+\)electronic steps, Figure 5) required for structural relaxation of TMOs using r\({}^{2}\)SCAN and r\({}^{2}\)SCAN\(+\)_U_ is lower than with SCAN, by \(\sim\)12.1-61.2% and \(\sim\)1.9-34.5%, respectively, except in Fe oxides, indicating that r\({}^{2}\)SCAN\((+\)_U_) takes a smaller number of ionic steps to converge, which possibly indicates a better description of atomic forces. The higher overall computation time in Fe oxides with r\({}^{2}\)SCAN\((+\)_U_) than with SCAN is primarily due to the difficulty in converging Fe\({}_{3}\)O\({}_{4}\) with r\({}^{2}\)SCAN\((+\)_U_). Comparing r\({}^{2}\)SCAN and r\({}^{2}\)SCAN\(+\)_U_, we find that r\({}^{2}\)SCAN\(+\)_U_ takes a higher overall computational time to converge, except in Fe and Ni oxides. Thus, we expect r\({}^{2}\)SCAN\((+\)_U_) to provide good utility in property predictions in TM-containing systems given its better computational performance and reasonable accuracy compared to SCAN\((+\)_U_).
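The relative costs plotted in Figure 5 reduce to three ratios per functional, each normalised by the corresponding SCAN value; a small sketch with invented wall times and step counts is given below.

```python
# Illustrative sketch (not the paper's analysis script): relative computational
# cost of an XC framework with respect to SCAN, as in Figure 5. The wall times
# and step counts below are invented placeholders.
def relative_cost(total, n_ionic, n_electronic, ref_total, ref_ionic, ref_electronic):
    return {
        "overall":        total / ref_total,
        "per ionic step": (total / n_ionic) / (ref_total / ref_ionic),
        "per SCF step":   (total / n_electronic) / (ref_total / ref_electronic),
    }

scan   = dict(total=3600.0, n_ionic=30, n_electronic=450)    # hypothetical
r2scan = dict(total=2500.0, n_ionic=18, n_electronic=360)    # hypothetical
print(relative_cost(r2scan["total"], r2scan["n_ionic"], r2scan["n_electronic"],
                    scan["total"], scan["n_ionic"], scan["n_electronic"]))
```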
## 5 Conclusion
3_d_-TMs and their compound phases find applications in several fields such as energy storage, solar cells, catalysts, thermochemical water splitting, etc., and it is imperative to predict their properties such as lattice constants, magnetic moments, reaction enthalpies, and band gaps accurately using DFT-based techniques for designing better materials. Recently, the r\({}^{2}\)SCAN metaGGA XC functional was proposed to exhibit the accuracy of its predecessor, SCAN, and the computational performance of rSCAN in main-group compounds, but the accuracy of r\({}^{2}\)SCAN was not rigorously tested on TM-based systems. Here, we assessed the numerical accuracy and computational performance of r\({}^{2}\)SCAN in binary 3_d_-TMOs, in calculating the lattice parameters, on-site magnetic moments, binary oxidation enthalpies, and band gaps against experimental data. Notably, we observed that r\({}^{2}\)SCAN exhibited similar qualitative trends as that of SCAN, with marginally larger estimations of lattice parameters than SCAN, while the on-site magnetic moments and band gap calculations are marginally smaller than SCAN. While both r\({}^{2}\)SCAN and SCAN underestimated the band gaps in wide gap TMOs, with SCAN offering slightly better accuracy, they failed to predict the correct ground state electronic configurations of narrow band gap TMOs (e.g., Mn\({}_{2}\)O\({}_{3}\)).
On analysing the addition of Hubbard _U_-correction to improve the accuracy of the r\({}^{2}\)SCAN functional, we observed that a lower optimal \(U\) value, based on experimental oxidation enthalpies, was required in a r\({}^{2}\)SCAN\(+\)_U_ framework for Ti, Mn, Co and Ni oxides, when compared to a SCAN\(+\)_U_ framework. The optimal \(U\) values were identical in both r\({}^{2}\)SCAN\(+\)_U_ and SCAN\(+\)_U_ frameworks for V and Fe oxides, while we did not observe the need for a \(U\) correction in Cr and Cu oxides with r\({}^{2}\)SCAN, similar to SCAN. Moreover, introducing the _U_-correction to SCAN and r\({}^{2}\)SCAN increased the calculated lattice parameters, on-site magnetic moments and the band gaps of the TMOs.
r\({}^{2}\)SCAN\(+\)_U_ and SCAN\(+\)_U_ successfully opened a band gap for narrow gap TMOs (except VO\({}_{2}\) and Mn\({}_{2}\)O\({}_{3}\) with r\({}^{2}\)SCAN\(+\)_U_). Upon testing the optimal \(U\) values with r\({}^{2}\)SCAN\(+\)_U_ on oxides with different oxidation states and/or coordination environments, we found that the \(U\) values derived in this work are in general transferable to other TM-containing oxides as well. Furthermore, we observed that r\({}^{2}\)SCAN(\(+\)_U_) took less overall computational time (ionic+electronic steps) to converge when compared to SCAN, which indicated that r\({}^{2}\)SCAN(\(+\)_U_) was computationally more efficient than SCAN(\(+\)_U_). Since r\({}^{2}\)SCAN\(+\)_U_ offers a reasonably accurate prediction of material properties at a lower computational expense than SCAN\(+\)_U_, we observe that r\({}^{2}\)SCAN\(+\)_U_ can be used in high-throughput materials discovery, after adequate benchmarking tests are done in each new chemical space explored.
## Acknowledgments
G.S.G. acknowledges the Indian Institute of Science (IISc) Seed Grant, SG/MHRD/20/0020 and SR/MHRD/20/0013, and the Science and Engineering Research Board (SERB) of the Department of Science and Technology, Government of India, under sanction numbers SRG/2021/000201 and IPA/2021/000007 for financial support. R.D. thanks the Ministry of Human Resource Development, Government of India, for financial assistance. S.S. acknowledges financial support from SERB under IPA/2021/000007. All the authors acknowledge the computational resources provided by the Supercomputer Education and Research Centre, IISc, for enabling some of the density functional theory calculations showcased in this work.
## Author Contributions
G.S.G. envisioned and designed the work. S.S. and R.D. performed the calculations. All authors contributed in data analysis and writing the paper.
## Conflicts of Interest
The authors declare no competing financial or non-financial interests.
## Availability of data
The data that support the findings of this study are openly available at [https://github.com/sai-mat-group/r2SCAN-U-benchmarking](https://github.com/sai-mat-group/r2SCAN-U-benchmarking).
## Supplementary Materials
Electronic Supporting Information is available online, with details on the crystal structures used for calculations, oxidation energetics of Cr and Cu oxides, densities of states of all systems not showcased in the
main text, and details on computational time calculations.
|
2310.16416 | A Scattering theory on hyperbolic spaces | In this paper, we develop a theoretical framework for time-harmonic wave
scattering on hyperbolic spaces. Using the limiting absorption principle (LAP),
we derive the explicit forms of the ingoing and outgoing Green functions of the
Helmholtz operator of hyperbolic spaces, and verify that they are the
fundamental solutions. Then we establish accurate characterisations of the
asymptotic behaviours of the Green functions and use them to establish the
ingoing and outgoing radiation conditions, which are analogues to the
Sommerfeld radiation conditions in the Euclidean setting. Moreover, we prove a
Rellich's type theorem which guarantees that the scattered field as well as its
far-field pattern are uniquely defined. Within the framework, we consider the
scattering from a source and a potential respectively. To our best knowledge,
the theoretical framework is new to the literature and it paves the way for
many subsequent developments for wave scattering on hyperbolic spaces. | Lu Chen, Hongyu Liu | 2023-10-25T07:07:40Z | http://arxiv.org/abs/2310.16416v1 | # A scattering theory on hyperbolic spaces
###### Abstract.
In this paper, we develop a theoretical framework for time-harmonic wave scattering on hyperbolic spaces. Using the limiting absorption principle (LAP), we derive the explicit forms of the ingoing and outgoing Green functions of the Helmholtz operator of hyperbolic spaces, and verify that they are the fundamental solutions. Then we establish accurate characterisations of the asymptotic behaviours of the Green functions and use them to establish the ingoing and outgoing radiation conditions, which are analogues to the Sommerfeld radiation conditions in the Euclidean setting. Moreover, we prove a Rellich's type theorem which guarantees that the scattered field as well as its far-field pattern are uniquely defined. Within the framework, we consider the scattering from a source and a potential respectively. To our best knowledge, the theoretical framework is new to the literature and it paves the way for many subsequent developments for wave scattering on hyperbolic spaces.
The first author was partly supported by the National Key Research and Development Program (No. 2022YFA1006900) and National Natural Science Foundation of China (No. 12271027), the second author was supported by NSFC/RGC Joint Research Scheme, N_CityU101/21, ANR/RGC Joint Research Scheme, A-CityU203/19, and the Hong Kong RGC General Research Funds (projects 11311122, 12301420 and 11300821).
following asymptotic expansion as \(|x|\to\infty\):
\[u^{s}(x)=\frac{e^{\mathrm{i}\mu|x|}}{|x|}u_{\infty}(\hat{x})+\mathcal{O}(|x|^{-(n +1)/2}),\quad\hat{x}:=x/|x|\in\mathbb{S}^{n-1}, \tag{1.2}\]
where \(u_{\infty}\), defined on the unit sphere, is known as the far-field pattern. The correspondence between \(u_{\infty}\) and \(u^{s}\) is one-to-one, and hence the far-field pattern encodes all the scattering information of the scatterer \((\Omega,\mathcal{V})\).
A critical ingredient in establishing the well-posedness of the scattering problem (1.1) as well as the one-to-one correspondence between the far-field pattern and the scattered field is the classical Rellich uniqueness theorem which is stated as follows (cf. [2]):
**Theorem 1.1**.: _Let \(v\in L^{2}_{loc}(\mathbb{R}^{n}\backslash\overline{\Omega})\) solve the equation \((-\Delta-\mu)v=0\) in \(\mathbb{R}^{n}\backslash\overline{\Omega}\), and assume that_
\[\lim_{R\to\infty}\frac{1}{R}\int_{B_{R}\backslash\overline{\Omega}}|v(x)|^{2} \,dV(x)=0. \tag{1.3}\]
_Then \(v=0\) in \(\mathbb{R}^{n}\backslash\overline{\Omega}\)._
By the standard regularity estimate, we know that \(v\) is smooth in \(\mathbb{R}^{n}\backslash\overline{\Omega}\), and hence (1.3) is equivalent to
\[\lim_{R\to\infty}\int_{|x|=R}|v(x)|^{2}\,d\sigma(x)=0. \tag{1.4}\]
Here, we note that the scattering system (1.1) arises in studying the following wave equation (cf. [20, 23]):
\[\frac{1}{c^{2}}\partial_{t}^{2}w(x,t)-\Delta w(x,t)=0, \tag{1.5}\]
where \(c\) signifies the wave speed and satisfies \(c=c_{0}\in\mathbb{R}_{+}\) in \(\mathbb{R}^{n}\backslash\overline{\Omega}\). If one consider the time-harmonic wave of the form \(w(x,t)=u(x)e^{-\mathrm{i}\mu t}\), one can easily obtain (1.1). Moreover, the Sommerfeld radiation condition in (1.1) characterises the outgoing waves. On the other hand, one may also consider the Schrodinger equation (cf. [29]):
\[\mathrm{i}\hbar\partial_{t}\Psi(x,t)=\left[-\frac{\hbar^{2}}{2m}\Delta+ \mathcal{V}(x)\right]\Psi(x,t). \tag{1.6}\]
If we assume the solution is of the form \(\Psi(x,t)=u(x)e^{-\frac{\mathrm{i}Et}{\hbar}}\) with \(E:=\mu^{2}\), one can obtain (1.1) in a similar manner. Hence, the system (1.1) can also be used to describe the quantum scattering associated with the stationary Schrodinger equation (cf. [21]).
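As a brief worked step (adopting the unit convention \(\hbar^{2}/(2m)=1\) here purely for illustration), substituting the ansatz \(\Psi(x,t)=u(x)e^{-\frac{\mathrm{i}Et}{\hbar}}\) into (1.6) and cancelling the common time factor gives

\[\mathrm{i}\hbar\partial_{t}\Psi=E\Psi\quad\Longrightarrow\quad-\frac{\hbar^{2}}{2m}\Delta u+\mathcal{V}u=Eu,\]

so that with \(E=\mu^{2}\) one arrives at the stationary equation \(-\Delta u+\mathcal{V}u=\mu^{2}u\).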
In this paper, we extend the framework described above for the wave scattering in the Euclidean space to the hyperbolic space. In mathematics, hyperbolic space of dimension \(n\) is the unique simply connected, \(n\)-dimensional Riemannian manifold of constant sectional curvature equal to -1. It is homogeneous, and satisfies the stronger property of being a symmetric space. Physically, it is natural to consider the wave scattering on hyperbolic manifolds, say e.g. the wave propagation on an open hyperbolic surface. In fact, there are
many existing studies which are concerned with the wave scattering on hyperbolic manifolds; see e.g. [7, 10, 15, 17, 28] and the references cited therein. However, the focuses of those studies are either on proving the existence of scattering operators analogous to the Euclidean scattering [17, 18] or on certain resolvent estimates. We also note some existing results for inverse scattering problems on hyperbolic manifolds [14, 16]. To our best knowledge, there is no explicit framework as described above for the Euclidean scattering on hyperbolic spaces. Indeed, in order to extend the notions of Sommerfeld radiation condition and far-field pattern to the hyperbolic setup, one needs explicit characterisations of the hyperbolic Green function. In this paper, with the help of the hyperbolic Fourier transform and the limiting absorption principle (LAP), we derive the explicit forms of the ingoing and outgoing Green functions of the Helmholtz operator of hyperbolic spaces, and verify that they are the fundamental solutions. Then we establish accurate characterisations of the asymptotic behaviours of the Green functions and use them to establish the ingoing and outgoing radiation conditions, which are analogues to the Sommerfeld radiation conditions in the Euclidean setting. Moreover, we prove a Rellich's type theorem which guarantees that the scattered field as well as its far-field pattern are uniquely defined. Within the framework, we consider the scattering from a source and a potential respectively. The theoretical framework is new to the literature and it paves the way for many subsequent developments for wave propagation on hyperbolic spaces.
The rest of the paper is organised as follows. In Section 2, we introduce some basic knowledge about hyperbolic space, the Helgason-Fourier transform on hyperbolic spaces and the Green function of the elliptic operator in hyperbolic space. We also give the explicit forms of the ingoing and outgoing Green functions of the Helmholtz operator on hyperbolic spaces. In Section 3, we develop the scattering theory in the hyperbolic space \(\mathbb{B}^{n}\). We characterise the explicit asymptotic behaviour of the Green function of the Helmholtz operator in hyperbolic space, and establish the ingoing and outgoing radiation conditions as well as a Rellich-type theorem, which are analogues of their counterparts in the Euclidean setting. As an application, we consider the scattering from a source and from a potential respectively, and prove the existence and uniqueness of the radiation field.
## 2. Auxiliary Results
### Hyperbolic space and Mobius transformations
We introduce the ball model \(\mathbb{B}^{n}\) of the hyperbolic space. Denote by \(\mathbb{B}^{n}\) the Poincare ball, that is, the unit ball \(B^{n}\) equipped with the usual Poincare metric \(g=\left(\frac{2}{1-|x|^{2}}\right)^{2}g_{e}\), where \(g_{e}\) represents the standard Euclidean metric. The hyperbolic volume element can be written as \(dV_{\mathbb{H}}=\left(\frac{2}{1-|x|^{2}}\right)^{n}dx\) and the geodesic distance from the origin to \(x\in\mathbb{B}^{n}\) is given by \(\rho(x)=\log\frac{1+|x|}{1-|x|}\). Let \(B_{\mathbb{H}}(0,R)\) denote the hyperbolic ball centered at the origin with radius \(R\). The
associated Laplace-Beltrami operator \(\Delta_{\mathbb{H}}\) and the gradient \(\nabla_{\mathbb{H}}\) are given respectively by
\[\Delta_{\mathbb{H}}=\frac{1-|x|^{2}}{4}\left((1-|x|^{2})\Delta_{\mathbb{R}^{n}}+ 2(n-2)\sum_{i=1}^{n}x_{i}\frac{\partial}{\partial x_{i}}\right),\ \ \nabla_{\mathbb{H}}=\left(\frac{1-|x|^{2}}{2}\right)^{2}\nabla_{ \mathbb{R}^{n}}.\]
Under the polar coordinate system, the hyperbolic metric \(g\) can be decomposed into
\[g=d\rho^{2}+\sinh^{2}\rho\,d\sigma,\]
where \(d\sigma\) is the standard sphere metric. Then it is not difficult to check that \(\Delta_{\mathbb{H}}\) and \(\nabla_{\mathbb{H}}\) can be written as
\[\Delta_{\mathbb{H}}=\frac{\partial^{2}}{\partial\rho^{2}}+(n-1)\coth\rho\frac{ \partial}{\partial\rho}+\frac{1}{\sinh^{2}\rho}\Delta_{\mathbb{S}^{n-1}},\ \ \nabla_{\mathbb{H}}=(\frac{\partial}{\partial\rho},\ \frac{1}{\sinh\rho}\nabla_{ \mathbb{S}^{n-1}}).\]
respectively. We can also write
\[\begin{split}\int_{\mathbb{B}^{n}}f(x)dV_{\mathbb{H}}& =\int_{0}^{1}\int_{S^{n-1}}f(r\xi)r^{n-1}\left(\frac{2}{1-r^{2}} \right)^{n}d\xi dr\\ &=\int_{0}^{+\infty}\int_{S^{n-1}}f\left(\tanh(\frac{\rho}{2}) \xi\right)(\sinh\rho)^{n-1}d\xi d\rho.\end{split} \tag{2.1}\]
under the the polar coordinate system.
For each \(a\in\mathbb{B}^{n}\), we define the Mobius transformations \(T_{a}\) by (see [1])
\[T_{a}(x)=\frac{|x-a|^{2}a-(1-|a|^{2})(x-a)}{1-2x\cdot a+|x|^{2}a^{2}},\]
where \(x\cdot a\) denotes the scalar product in \(\mathbb{R}^{n}\). It is known that the volume element \(dV_{\mathbb{H}}\) on \(\mathbb{B}^{n}\) is invariant with the respect to the Mobius transformations, which deduces that for any \(\varphi\in L^{1}(\mathbb{B}^{n})\), there holds
\[\int_{\mathbb{B}^{n}}|\varphi\circ\tau_{a}|dV_{\mathbb{H}}=\int_{\mathbb{B}^{ n}}|\varphi|dV_{\mathbb{H}}.\]
Furthermore, the commutativity of Mobius transformations \(T_{a}\) (hyperbolic translation) with the operator \(-\Delta_{\mathbb{H}}\) still holds. That is to say that for any \(\phi\in C_{c}^{\infty}(\mathbb{B}^{n})\), there holds
\[\int_{\mathbb{B}^{n}}-\Delta_{\mathbb{H}}(\phi\circ\tau_{a})(\phi\circ\tau_{a })dV_{\mathbb{H}}=\int_{\mathbb{B}^{n}}(-\Delta_{\mathbb{H}}\phi)\circ\tau_{ a}\cdot(\phi\circ\tau_{a})dV_{\mathbb{H}}=\int_{\mathbb{B}^{n}}-\Delta_{ \mathbb{H}}\phi\cdot\phi dV_{\mathbb{H}}.\]
Using the Mobius transformation, we can define the geodesic distance from \(x\) to \(y\) in \(\mathbb{B}^{n}\) as follows
\[\rho(x,y)=\rho(T_{x}(y))=\rho(T_{y}(x))=\log\frac{1+T_{y}(x)}{1-T_{y}(x)}\]
Also using the Mobius transformations again, we can define the convolution of measurable functions \(f\) and \(g\) on \(\mathbb{B}^{n}\) by (see [22])
\[(f*g)(x)=\int_{\mathbb{B}^{n}}f(y)g(T_{x}(y))dV_{\mathbb{H}}(y),\]
where \(dV_{\mathbb{H}}(y)=\left(\frac{2}{1-|y|^{2}}\right)^{\frac{n}{2}}dy\).
### The Helgason-Fourier transform on hyperbolic spaces
We now recall some basics facts of Fourier transform on hyperbolic spaces and the reader can refer to [11, 12, 24, 25, 26] for more information about Fourier analysis on Riemannian symmetric spaces of noncompact type.
Set
\[e_{\lambda,\xi}(x)=\left(\frac{\sqrt{1-|x|^{2}}}{|x-\xi|}\right)^{n-1+\mathrm{i }\lambda},\ x\in\mathbb{B}^{n},\ \lambda\in\mathbb{R},\ \ \xi\in\mathbb{S}^{n-1}.\]
The Fourier transform on hyperbolic space of a function \(f\in C^{\infty}_{c}(\mathbb{B}^{n})\) is defined as
\[\widehat{f}(\lambda,\xi)=\int_{\mathbb{B}^{n}}f(x)e_{-\lambda,\xi}(x)dV_{ \mathbb{H}}.\]
Moreover, there holds
\[f(x)=D_{n}\int_{-\infty}^{\infty}\int_{\mathbb{S}^{n-1}}\hat{f}(\lambda,\xi)e _{\lambda,\xi}(x)|c(\lambda)|^{-2}d\lambda d\xi,\]
where \(D_{n}=\frac{1}{2^{3-n}\pi|\mathbb{S}^{n-1}|}\) and \(c(\lambda)\) is the Harish-Chandra function (see [22]). For \(g\in L^{2}(\mathbb{B}^{n})\), \(h\in L^{2}(\mathbb{B}^{n})\), the Plancherel formula on the hyperbolic space
\[\int_{\mathbb{B}^{n}}g(x)h(x)dV_{\mathbb{H}}=D_{n}\int_{-\infty}^{\infty}\int _{\mathbb{S}^{n-1}}\widehat{g}(\lambda,\xi)\widehat{h}(\lambda,\xi)e_{\lambda,\xi}(x)|c(\lambda)|^{-2}d\lambda d\xi\]
still holds. Since \(e_{\lambda,\xi}\) ia an eigenvalue function of \(-\Delta_{\mathbb{H}}\) with eigenvalue equal to \(\frac{(n-1)^{2}+\lambda}{4}\), then for \(f\in C^{\infty}_{c}(\mathbb{B}^{n})\), one can derive that
\[\begin{split}\widehat{-\Delta_{\mathbb{H}}f}(\lambda,\xi)& =\int_{\mathbb{B}^{n}}-\Delta_{\mathbb{H}}(f)e_{-\lambda,\xi}(x)dV_ {\mathbb{H}}\\ &=\int_{\mathbb{B}^{n}}-\Delta_{\mathbb{H}}(e_{-\lambda,\xi})fdV_ {\mathbb{H}}\\ &=\frac{(n-1)^{2}+\lambda}{4}\widehat{f}(\lambda,\xi).\end{split} \tag{2.2}\]
### Green formula in bounded domain of hyperbolic space
**Lemma 2.1**.: _For any \(u\in C^{2}(\overline{B_{\mathbb{H}}(0,R)})\) and \(v\in C^{2}(\overline{B_{\mathbb{H}}(0,R)})\), there holds_
\[\begin{split}&\int_{B_{\mathbb{H}}(0,R)}-\Delta_{\mathbb{H}}(u) vdV_{\mathbb{H}}\\ &\quad=\int_{B_{\mathbb{H}}(0,R)}(-\Delta_{\mathbb{H}}(v)udV_{ \mathbb{H}}+2^{n-1}\left(\cosh(\frac{R}{2})\right)^{2n-2}\int_{\partial B^{n} (0,\tanh(\frac{R}{2}))}\left(\frac{\partial v}{\partial\rho}u-\frac{\partial u }{\partial\rho}v\right)d\sigma\\ &=\int_{B_{\mathbb{H}}(0,R)}(-\Delta_{\mathbb{H}}(v)udV_{ \mathbb{H}}+\int_{\partial B_{\mathbb{H}}(0,R)}\left(\frac{\partial v}{ \partial\rho}u-\frac{\partial u}{\partial\rho}v\right)d\sigma_{\mathbb{H}}. \end{split} \tag{2.3}\]
_and_
\[\begin{split}&\int_{B_{\mathbb{H}}(0,R)}-\Delta_{\mathbb{H}}(u)vdV_{ \mathbb{H}}\\ &=\int_{B_{\mathbb{H}}(0,R)}(\nabla_{\mathbb{H}}u,\nabla_{\mathbb{ H}}v)_{g}dV_{\mathbb{H}}-2^{n-1}\left(\cosh(\frac{R}{2})\right)^{2n-2}\int_{\partial B ^{n}(0,\tanh(\frac{R}{2}))}\frac{\partial u}{\partial\rho}vd\sigma\\ &=\int_{B_{\mathbb{H}}(0,R)}(\nabla_{\mathbb{H}}u,\nabla_{ \mathbb{H}}v)_{g}dV_{\mathbb{H}}-\int_{\partial B_{\mathbb{H}}(0,R)}\frac{ \partial u}{\partial\rho}vd\sigma_{\mathbb{H}},\end{split} \tag{2.4}\]
_where \(d\sigma\) denotes the surface measure on the boundary of Euclidean ball \(B^{n}(0,\tanh(\frac{R}{2}))\) and \(d\sigma_{\mathbb{H}}\) denotes the surface measure on the boundary of hyperbolic ball \(B_{\mathbb{H}}(0,R)\)._
Proof.: Careful computation gives that
\[\begin{split}&\int_{B_{\mathbb{H}}(0,R)}-\Delta_{\mathbb{H}}(u)vdV _{\mathbb{H}}\\ &=\int_{B_{\mathbb{H}}(0,R)}\left(-\Delta_{\mathbb{H}}u-\frac{n( n-2)}{4}u\right)vdV_{\mathbb{H}}+\int_{B_{\mathbb{H}}(0,R)}\frac{n(n-2)}{4}uvdV_{ \mathbb{H}}\\ &=\int_{B^{n}(0,\tanh(\frac{R}{2}))}\left(\frac{2}{1-|x|^{2}} \right)^{-\frac{n}{2}-1}\left(-\Delta_{\mathbb{R}^{n}}\left(\left(\frac{2}{1- |x|^{2}}\right)^{\frac{n}{2}-1}u\right)v\left(\frac{2}{1-|x|^{2}}\right)^{ \frac{n}{2}}\right)dx\\ &\quad+\int_{B_{\mathbb{H}}(0,R)}\frac{n(n-2)}{4}uvdV_{\mathbb{H }}\\ &=\int_{B^{n}(0,\tanh(\frac{R}{2}))}-\Delta_{\mathbb{R}^{n}}( \widetilde{u})\tilde{v}dx+\int_{B_{\mathbb{H}}(0,R)}\frac{n(n-2)}{4}uvdV_{ \mathbb{H}}\\ &=\int_{B^{n}(0,\tanh(\frac{R}{2}))}-\Delta_{\mathbb{R}^{n}}( \widetilde{v})\tilde{u}dx+\int_{\partial B^{n}(0,\tanh(\frac{R}{2}))}\left( \frac{\partial\widetilde{v}}{\partial r}\widetilde{u}-\frac{\partial \widetilde{u}}{\partial r}\tilde{v}\right)d\sigma+\int_{B_{\mathbb{H}}(0,R)} \frac{n(n-2)}{4}uvdV_{\mathbb{H}}\\ &=\int_{B_{\mathbb{H}}(0,R)}-\Delta_{\mathbb{H}}(v)udV_{\mathbb{H }}+\int_{\partial B^{n}(0,\tanh(\frac{R}{2}))}\left(\frac{\partial\widetilde{ v}}{\partial r}\widetilde{u}-\frac{\partial\widetilde{u}}{\partial r}\tilde{v} \right)d\sigma,\end{split} \tag{2.5}\]
where we have used the relation between the conformal Laplacian operator \(-\Delta_{\mathbb{H}}-\frac{n(n-2)}{4}\) of hyperbolic \(\mathbb{B}^{n}\) and Laplacian operator \(-\Delta\) in \(\mathbb{R}^{n}\), and
\[\widetilde{u}=\left(\frac{2}{1-|x|^{2}}\right)^{\frac{n}{2}-1}u,\ \ \widetilde{v}=\left(\frac{2}{1-|x|^{2}}\right)^{\frac{n}{2}-1}v.\]
If we let \(|x|=r=\tanh(\frac{\rho}{2})\), then simple calculation gives that
\[\frac{\partial\rho}{\partial r}=2\cosh^{2}(\frac{\rho}{2}),\ \widetilde{u}=2^{\frac{n}{2}-1}\cosh^{n-2}(\frac{\rho}{2})u,\ \ \widetilde{v}=2^{\frac{n}{2}-1}\cosh^{n-2}(\frac{\rho}{2})v.\]
Hence
\[\begin{split}\frac{\partial\widetilde{u}}{\partial r}&= \frac{\partial\widetilde{u}}{\partial\rho}\frac{\partial\rho}{\partial r}\\ &=\left(2^{\frac{n}{2}-2}(n-2)\cosh^{n-3}(\frac{\rho}{2})\sinh( \frac{\rho}{2})u+2^{\frac{n}{2}-1}\cosh^{n-2}(\frac{\rho}{2})\frac{\partial u }{\partial\rho}\right)\frac{\partial\rho}{\partial r}\\ &=2^{\frac{n}{2}-1}(n-2)\cosh^{n-1}(\frac{\rho}{2})\sinh(\frac{ \rho}{2})u+2^{\frac{n}{2}}\cosh^{n}(\frac{\rho}{2})\frac{\partial u}{\partial \rho}.\end{split} \tag{2.6}\]
Then the boundary integral
\[\int_{\partial B^{n}(0,\tanh(\frac{R}{2}))}\left(\frac{\partial\widetilde{v}}{ \partial r}\widetilde{u}-\frac{\partial\widetilde{u}}{\partial r}\tilde{v} \right)d\sigma\]
can be written as
\[\begin{split}&\int_{\partial B^{n}(0,\tanh(\frac{R}{2}))}\left( \frac{\partial\widetilde{v}}{\partial r}\widetilde{u}-\frac{\partial \widetilde{u}}{\partial r}\tilde{v}\right)d\sigma\\ &=\int_{\partial B^{n}(0,\tanh(\frac{R}{2}))}\left(2^{\frac{n}{2} }\cosh^{n}(\frac{\rho}{2})\frac{\partial u}{\partial\rho}\tilde{v}-2^{\frac{n} {2}}\cosh^{n}(\frac{\rho}{2})\frac{\partial v}{\partial\rho}\tilde{u}\right)d \sigma\\ &=2^{n-1}\left(\cosh(\frac{R}{2})\right)^{2n-2}\int_{\partial B^{ n}(0,\tanh(\frac{R}{2}))}\left(\frac{\partial v}{\partial\rho}u-\frac{\partial u }{\partial\rho}v\right)d\sigma\\ &=\int_{\partial B\ni(0,R)}\left(\frac{\partial v}{\partial\rho}u -\frac{\partial u}{\partial\rho}v\right)d\sigma_{\mathbb{H}}.\end{split} \tag{2.7}\]
Then (2.3) of Lemma 2.1 is proved. For (2.4), similarly, we can also obtain
\[\int_{B_{\mathbb{H}}(0,R)}-\Delta_{\mathbb{H}}(u)vdV_{\mathbb{H}}\] \[\quad=\int_{B^{n}(0,\tanh(\frac{R}{2}))}\left(\frac{2}{1-|x|^{2}} \right)^{-\frac{n}{2}-1}\left(-\Delta_{\mathbb{R}^{n}}\left((\frac{2}{1-|x|^{2 }})^{\frac{n}{2}-1}u\right)v\left(\frac{2}{1-|x|^{2}}\right)^{\frac{n}{2}} \right)dx\] \[\quad+\int_{B_{\mathbb{H}}(0,R)}\frac{n(n-2)}{4}uvdV_{\mathbb{H}}\] \[=\int_{B^{n}(0,\tanh(\frac{R}{2}))}-\Delta_{\mathbb{R}^{n}}( \widetilde{u})\tilde{v}dx+\int_{B_{\mathbb{H}}(0,R)}\frac{n(n-2)}{4}uvdV_{ \mathbb{H}}\] \[=\int_{B^{n}(0,\tanh(\frac{R}{2}))}\nabla_{\mathbb{R}^{n}} \widetilde{v}\cdot\nabla_{\mathbb{R}^{n}}\tilde{u}dx-\int_{\partial B^{n}(0, \tanh(\frac{R}{2}))}\frac{\partial\widetilde{u}}{\partial r}\tilde{v}d\sigma+ \int_{B_{\mathbb{H}}(0,R)}\frac{n(n-2)}{4}uvdV_{\mathbb{H}}\] \[=\int_{B^{n}(0,\tanh(\frac{R}{2}))}\nabla_{\mathbb{R}^{n}} \widetilde{v}\cdot\nabla_{\mathbb{R}^{n}}\tilde{u}dx+\int_{B_{\mathbb{H}}(0, R)}\frac{n(n-2)}{4}uvdV_{\mathbb{H}}\] \[\quad-2^{n-1}\left(\cosh(\frac{R}{2})\right)^{2n-2}\int_{\partial B ^{n}(0,\tanh(\frac{R}{2}))}\frac{\partial u}{\partial\rho}vd\sigma\] \[=\int_{B_{\mathbb{H}}(0,R)}(\nabla_{\mathbb{H}}u,\nabla_{ \mathbb{H}}v)_{g}dV_{\mathbb{H}}-2^{n-1}\left(\cosh(\frac{R}{2})\right)^{2n-2} \int_{\partial B^{n}(0,\tanh(\frac{R}{2}))}\frac{\partial u}{\partial\rho}vd\sigma\] \[=\int_{B_{\mathbb{H}}(0,R)}(\nabla_{\mathbb{H}}u,\nabla_{ \mathbb{H}}v)_{g}dV_{\mathbb{H}}-\int_{\partial\partial B_{\mathbb{H}}(0,R)} \frac{\partial u}{\partial\rho}vd\sigma_{\mathbb{H}}. \tag{2.8}\]
Then we accomplish the proof of Lemma 2.1.
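As a quick numerical sanity check of the chain-rule identity (2.6), here is a minimal sketch; the dimension \(n=5\) and the radial test function \(u(\rho)=e^{-\rho}\sin\rho\) are arbitrary choices made only for illustration.

```python
# Numerical check of (2.6): with r = tanh(rho/2) and
# \tilde{u}(r) = 2^{n/2-1} cosh^{n-2}(rho/2) u(rho), compare d\tilde{u}/dr
# computed by numerical differentiation with the right-hand side of (2.6).
import mpmath as mp

n = 5                                         # arbitrary test dimension
u = lambda rho: mp.exp(-rho) * mp.sin(rho)    # arbitrary smooth radial test function

def u_tilde(r):
    rho = 2 * mp.atanh(r)
    return 2**(n/2 - 1) * mp.cosh(rho/2)**(n - 2) * u(rho)

r0 = mp.mpf("0.4")
rho0 = 2 * mp.atanh(r0)

lhs = mp.diff(u_tilde, r0)
rhs = (2**(n/2 - 1) * (n - 2) * mp.cosh(rho0/2)**(n - 1) * mp.sinh(rho0/2) * u(rho0)
       + 2**(n/2) * mp.cosh(rho0/2)**n * mp.diff(u, rho0))
print(lhs, rhs)                               # the two values agree to high precision
```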
### Heat kernel and Green function of the elliptic operator \(\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}+k^{2}\right)\)
Consider the heat equation
\[\begin{cases}\partial_{t}u(x,t)-\Delta_{\mathbb{H}}u(x,t)=0,&(x,t)\in\mathbb{ B}^{n}\times\mathbb{R}^{+}\\ u(x,0)=f(x)\in C_{c}^{\infty}(\mathbb{B}^{n}).\end{cases} \tag{2.9}\]
By Fourier transform on hyperbolic space \(\mathbb{B}^{n}\), we derive that
\[\partial_{t}\widehat{u}(\lambda,\xi,t)+\frac{(n-1)^{2}+\lambda^{2}}{4} \widehat{u}(\lambda,\xi,t)=0,\ \ \widehat{u}(\lambda,\xi,0)=\widehat{f}(\lambda,\xi).\]
Solving this ODE equation, we see that
\[\widehat{u}(\lambda,\xi,t)=e^{-\frac{(n-1)^{2}+\lambda^{2}}{4}t}\widehat{f}( \lambda,\xi).\]
Using the inversion of Fourier transform, we get
\[u(x,t)=P_{t}*f=\int_{\mathbb{B}^{n}}P_{t}(T_{x}(y))f(y)dV_{\mathbb{H}}(y)\triangleq e^{t\Delta_{\mathbb{H}}}f,\]
where \(P_{t}\) is called the heat kernel of hyperbolic space and satisfies
\[P_{t}(x)=D_{n}\int_{-\infty}^{\infty}\int_{\mathbb{S}^{n-1}}e^{-\frac{(n-1)^{2}+ \lambda^{2}}{4}t}e_{\lambda,\xi}(x)|c(\lambda)|^{-2}d\lambda d\xi.\]
It is not difficult to check that \(P_{t}(x)\) is a radial function and hence can be seen as the function of \(\rho(x)\). The explicit formula of heat kernel \(P_{t}\) can be written as
\[P_{t}(\rho)=(2\pi)^{-\frac{n+1}{2}}t^{-\frac{1}{2}}e^{-\frac{(n-1)^{2}}{4}t}\int_{\rho}^{+\infty}\frac{\sinh r}{\sqrt{\cosh r-\cosh\rho}}\left(-\frac{1}{\sinh r}\frac{\partial}{\partial r}\right)^{m}e^{-\frac{r^{2}}{4t}}dr\]
when \(n=2m\) and
\[P_{t}(\rho)=2^{-m-1}\pi^{-m-\frac{1}{2}}t^{-\frac{1}{2}}e^{-\frac{(n-1)^{2}}{4 }t}\left(-\frac{1}{\sinh\rho}\frac{\partial}{\partial\rho}\right)^{m}e^{- \frac{\rho^{2}}{4t}}\]
when \(n=2m+1\) (see [3, 8]). Noticing that the spectrum of \(-\Delta_{\mathbb{H}}\) is \((\frac{(n-1)^{2}}{4},+\infty)\) and applying the Mellin formula, we deduce that the Bessel-Green-Riesz kernel can be written as
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}+k^{2}\right)^{-1}=\int_{0}^{+ \infty}e^{(\frac{(n-1)^{2}}{4}-k^{2})t}e^{t\Delta_{\mathbb{H}}}dt. \tag{2.10}\]
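As an illustration of (2.10) in the simplest odd dimension \(n=3\) (\(m=1\)), the heat-kernel formula above reduces to \(P_{t}(\rho)=(4\pi t)^{-3/2}e^{-t}\frac{\rho}{\sinh\rho}e^{-\rho^{2}/(4t)}\), and the resulting kernel can be compared numerically with the classical closed form \(e^{-k\rho}/(4\pi\sinh\rho)\) of the resolvent on \(\mathbb{H}^{3}\). Both reductions are standard facts quoted here only as a cross-check; the sample values of \(k\) and \(\rho\) below are arbitrary.

```python
# Numerical check of (2.10) for n = 3: integrate e^{((n-1)^2/4 - k^2) t} P_t(rho) over t
# and compare with the classical closed form of the H^3 resolvent kernel.
import mpmath as mp

def P_t(rho, t):  # n = 3 heat kernel (reduced form of the odd-dimensional formula above)
    return (4*mp.pi*t)**mp.mpf("-1.5") * mp.exp(-t) * rho/mp.sinh(rho) * mp.exp(-rho**2/(4*t))

k, rho = mp.mpf(2), mp.mpf("1.3")
kernel_from_heat = mp.quad(lambda t: mp.exp((1 - k**2)*t) * P_t(rho, t), [0, mp.inf])
closed_form = mp.exp(-k*rho) / (4*mp.pi*mp.sinh(rho))
print(kernel_from_heat, closed_form)          # agree to quadrature accuracy
```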
An explicit expression of Green's function of the operator \(\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}+k^{2}\right)\) is given by (see [19, 27, 25, 26])
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}+k^{2}\right)^{-1}=(2\pi)^{- \frac{n}{2}}(\sinh\rho)^{-\frac{n-2}{2}}e^{-\frac{(n-2)\pi}{2}i}Q_{k-\frac{1} {2}}^{\frac{n-2}{2}}(\cosh\rho),\ \ n\geq 3,\]
where \(Q_{k-\frac{1}{2}}^{\frac{n-2}{2}}(\cosh\rho)\) is the Legendre function of second type (see [5]) and satisfies
\[e^{-\frac{(n-2)\pi}{2}i}Q_{k-\frac{1}{2}}^{\frac{n-2}{2}}(\cosh\rho)=\frac{ \Gamma(\frac{n-1}{2}+k)}{2^{k+\frac{1}{2}}\Gamma(k+\frac{1}{2})\sinh^{\frac{n- 2}{2}}\rho}\int_{0}^{\pi}(\cosh\rho+\cos t)^{\frac{n-3}{2}-k}(\sin t)^{2k}dt.\]
Hence
\[(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}+k^{2})^{-1}=\frac{A_{n,k}}{(\sinh \rho)^{n-2}}\int_{0}^{\pi}(\cosh\rho+\cos t)^{\frac{n-3}{2}-k}(\sin t)^{2k}dt.\]
where \(A_{n,k}=(2\pi)^{-\frac{n}{2}}\frac{\Gamma(\frac{n-1}{2}+k)}{2^{k+\frac{1}{2}} \Gamma(k+\frac{1}{2})}\).
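For \(n=3\) the explicit kernel above can be tested numerically against the same closed form \(e^{-k\rho}/(4\pi\sinh\rho)\) used as a cross-check before; a minimal sketch (the sample values of \(k\) and \(\rho\) are arbitrary):

```python
# Numerical check of the explicit Bessel-Green-Riesz kernel above for n = 3.
import mpmath as mp

n, k, rho = 3, mp.mpf("0.7"), mp.mpf("1.1")
A = ((2*mp.pi)**(-mp.mpf(n)/2) * mp.gamma((n - 1)/mp.mpf(2) + k)
     / (2**(k + mp.mpf("0.5")) * mp.gamma(k + mp.mpf("0.5"))))
integral = mp.quad(lambda t: (mp.cosh(rho) + mp.cos(t))**((n - 3)/mp.mpf(2) - k)
                   * mp.sin(t)**(2*k), [0, mp.pi])
kernel = A / mp.sinh(rho)**(n - 2) * integral
closed_form = mp.exp(-k*rho) / (4*mp.pi*mp.sinh(rho))
print(kernel, closed_form)                    # agree to quadrature accuracy
```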
### Green function of the Helmholtz operator \(\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\right)\) in hyperbolic space
Consider the following time-dependent wave equation in hyperbolic space
\[\partial_{tt}w(x,t)-\left(\Delta_{\mathbb{H}}u+\frac{(n-1)^{2}}{4}u\right)=0, \ \ (x,t)\in\mathbb{B}^{n}\times\mathbb{R}^{+}. \tag{2.11}\]
We look for solutions of the form \(w(x,t)=e^{\mathrm{i}\mu t}u(x)\). Then \(u(x)\) satisfies the Helmholtz equation in hyperbolic space:
\[-\Delta_{\mathbb{H}}u-\frac{(n-1)^{2}}{4}u-\mu^{2}u=0,\ \ x\in\mathbb{B}^{n}. \tag{2.12}\]
We now start to look for the fundamental solution of the Helmholtz equation in hyperbolic space.
Through Helgason-Fourier transform on hyperbolic space, we see that
\[(-\Delta_{\mathbb{H}}-z)^{-1}=\int_{0}^{+\infty}e^{zt}e^{t\Delta_{\mathbb{H}}}dt \tag{2.13}\]
holds if \(\Re(z)\leq\frac{(n-1)^{2}}{4}\) and fails if \(\Re(z)>\frac{(n-1)^{2}}{4}\). Hence one cannot apply the Fourier transform technique to derive a formula for \((-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2})^{-1}\). It is well known that the spectrum of \(-\Delta_{\mathbb{H}}\) is \((\frac{(n-1)^{2}}{4},+\infty)\). From [9], we know that \((-\Delta_{\mathbb{H}}-z)^{-1}\) is holomorphic with respect to \(z\) when \(z\) stays away from the half-line \([\frac{(n-1)^{2}}{4},+\infty)\). Recall the explicit expression for the Green function of \((-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}+k^{2})\):
\[(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}+k^{2})^{-1}=\frac{A_{n,k}}{(\sinh \rho)^{n-2}}\int_{0}^{\pi}(\cosh\rho+\cos t)^{\frac{n-3}{2}-k}(\sin t)^{2k}dt.\]
Observing that
\[\frac{A_{n,k}}{(\sinh\rho)^{n-2}}\int_{0}^{\pi}(\cosh\rho+\cos t)^{\frac{n-3} {2}-k}(\sin t)^{2k}dt\]
can be holomorphic extended to
\[\frac{A_{n,z}}{(\sinh\rho)^{n-2}}\int_{0}^{\pi}(\cosh\rho+\cos t)^{\frac{n-3} {2}-z}(\sin t)^{2z}dt\]
for any complex number \(z\), where \(A_{n,z}=(2\pi)^{-\frac{n}{2}}\frac{\Gamma(\frac{n-1}{2}+z)}{2^{z+\frac{1}{2}} \Gamma(z+\frac{1}{2})}\). By the uniqueness of holomorphic extension, we deduce that
\[(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}+z^{2})^{-1}=\frac{A_{n,z}}{(\sinh \rho)^{n-2}}\int_{0}^{\pi}(\cosh\rho+\cos t)^{\frac{n-3}{2}-z}(\sin t)^{2z}dt\]
for any \(z\in\mathbb{C}\setminus[0,+\infty)\). Now, we define the Green function of the Helmholtz operator
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\right)\]
through
\[\lim_{\epsilon\to 0}\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-(\mu+ \epsilon\mathrm{i})^{2}\right)^{-1}\]
or
\[\lim_{\epsilon\to 0}\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-(\mu- \epsilon\mathrm{i})^{2}\right)^{-1}.\]
Then we can calculate that
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\right)^{-1}\] \[\quad=\lim_{\epsilon\to 0}\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}- (\mu+\epsilon\mathrm{i})^{2}\right)^{-1} \tag{2.14}\] \[\quad=\lim_{\epsilon\to 0}\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}+ (\epsilon-\mathrm{i}\mu)^{2}\right)^{-1}\] \[\quad=\lim_{\epsilon\to 0}A_{n,\epsilon-\mu\mathrm{i}}\frac{(\cosh \rho)^{\frac{n-3}{2}-(\epsilon-\mu\mathrm{i})}}{(\sinh\rho)^{n-2}}\int_{0}^{ \pi}\left(1+\frac{\cos t}{\cosh\rho}\right)^{\frac{n-3}{2}-\epsilon+\mu \mathrm{i}}(\sin t)^{2\epsilon-2\mu\mathrm{i}}dt.\] \[\quad=A_{n,-\mu\mathrm{i}}\frac{(\cosh\rho)^{\frac{n-3}{2}+\mu \mathrm{i}}}{(\sinh\rho)^{n-2}}\int_{0}^{\pi}\left(1+\frac{\cos t}{\cosh\rho} \right)^{\frac{n-3}{2}+\mu\mathrm{i}}(\sin t)^{-2\mu\mathrm{i}}dt.\]
or
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\right)^{ -1}\] \[\quad=\lim_{\epsilon\to 0}\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}- (\mu-\epsilon\mathrm{i})^{2}\right)^{-1}\] \[\quad=\lim_{\epsilon\to 0}\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2} }{4}+(\epsilon+\mathrm{i}\mu)^{2}\right)^{-1}\] \[\quad=A_{n,\mu\mathrm{i}}\frac{(\cosh\rho)^{\frac{n-3}{2}-\mu \mathrm{i}}}{(\sinh\rho)^{n-2}}\int_{0}^{\pi}\left(1+\frac{\cos t}{\cosh\rho} \right)^{\frac{n-3}{2}-\mu\mathrm{i}}(\sin t)^{2\mu\mathrm{i}}dt, \tag{2.15}\]
Define
\[G_{-\mu\mathrm{i}}(\rho(x))=A_{n,-\mu\mathrm{i}}\frac{(\cosh\rho(x))^{\frac{n -3}{2}+\mu\mathrm{i}}}{(\sinh\rho(x))^{n-2}}\int_{0}^{\pi}\left(1+\frac{\cos t }{\cosh\rho(x)}\right)^{\frac{n-3}{2}+\mu\mathrm{i}}(\sin t)^{-2\mu\mathrm{i}}dt\]
and
\[G_{\mu\mathrm{i}}(\rho(x))=A_{n,\mu\mathrm{i}}\frac{(\cosh\rho(x))^{\frac{n-3} {2}-\mu\mathrm{i}}}{(\sinh\rho(x))^{n-2}}\int_{0}^{\pi}\left(1+\frac{\cos t}{ \cosh\rho(x)}\right)^{\frac{n-3}{2}-\mu\mathrm{i}}(\sin t)^{2\mu\mathrm{i}}dt.\]
We will verify that \(G_{-\mu\mathrm{i}}(\rho(x))\) and \(G_{\mu\mathrm{i}}(\rho(x))\) are both Green functions of \(\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\right)\) with singularity at the origin.
**Theorem 2.2**.: \(G_{-\mu\mathrm{i}}(\rho(x))\) _satisfies equation_
\[\begin{cases}-\Delta_{\mathbb{H}}G_{-\mu\mathrm{i}}(\rho(x))-\frac{(n-1)^{2}}{4 }G_{-\mu\mathrm{i}}(\rho(x))-\mu^{2}G_{-\mu\mathrm{i}}(\rho(x))=\delta_{0},&x \in\mathbb{B}^{n}\\ G_{-\mu\mathrm{i}}(\rho(x))=O\left(\sinh^{-\frac{n-1}{2}}\rho(x)\right)&\text{ \rm as $\rho(x)\to+\infty$}.\end{cases} \tag{2.16}\]
_A direct application of the above result and the Möbius transformation yields that \(G_{-\mu\mathrm{i}}(\rho(x,y))\) satisfies the equation_
\[\begin{cases}\left(-\Delta_{\mathbb{H}}\right)_{y}G_{-\mu{\rm i}}(\rho(x,y))- \frac{(n-1)^{2}}{4}G_{-\mu{\rm i}}(\rho(x,y))-\mu^{2}G_{-\mu{\rm i}}(\rho(x,y)) =\delta_{x}(y),\ \ x,\ y\in\mathbb{B}^{n},\\ G_{-\mu{\rm i}}(\rho(x,y))=O\left(\sinh^{-\frac{n-1}{2}}\rho(y)\right)\ \ \text{ as }\rho(y)\to+\infty.\end{cases} \tag{2.17}\]
_Similarly, \(G_{\mu\mathrm{i}}(\rho(x,y))\) satisfies the equation_
\[\begin{cases}\left(-\Delta_{\mathbb{H}}\right)_{y}G_{\mu{\rm i}}(\rho(x,y))- \frac{(n-1)^{2}}{4}G_{\mu{\rm i}}(\rho(x,y))-\mu^{2}G_{\mu{\rm i}}(\rho(x,y)) =\delta_{x}(y),\ \ x,\ y\in\mathbb{B}^{n}\\ G_{\mu{\rm i}}(\rho(x,y))=O\left(\sinh^{-\frac{n-1}{2}}\rho(y)\right)\ \ \text{ as }\rho(y)\to+\infty.\end{cases} \tag{2.18}\]
Proof.: Let \(G_{\epsilon-\mu\mathrm{i}}(\rho)\) denote the Green function of the operator \((-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}+(\epsilon-\mu\mathrm{i})^{2})\). Then for any \(\phi\in C_{c}^{\infty}(\mathbb{B}^{n})\), there holds
\[\begin{split}&\int_{\mathbb{B}^{n}}G_{\epsilon-\mu{\rm i}}( \rho(x))\left(-\Delta_{\mathbb{H}}\phi-\frac{(n-1)^{2}}{4}\phi+(\epsilon-\mu{ \rm i})^{2}\phi\right)dV_{\mathbb{H}}\\ &=\int_{\mathbb{B}^{n}}\left(-\Delta_{\mathbb{H}}G_{\epsilon-\mu{ \rm i}}(\rho(x))-\frac{(n-1)^{2}}{4}G_{\epsilon-\mu{\rm i}}(\rho(x))+( \epsilon-\mu{\rm i})^{2}G_{\epsilon-\mu{\rm i}}(\rho(x))\right)\phi\\ &=\phi(0).\end{split} \tag{2.19}\]
Since \(\phi\in C_{c}^{\infty}(\mathbb{B}^{n})\), \(G_{\epsilon-\mu\mathrm{i}}(\rho(x))\lesssim\frac{(\cosh\rho(x))^{\frac{n-3}{2}}}{(\sinh\rho(x))^{n-2}}\) and \(\sinh(\rho(x))\sim|x|\) when \(\rho(x)\to 0\), the dominated convergence theorem directly gives that
\[\begin{split}&\int_{\mathbb{B}^{n}}G_{-\mu{\rm i}}(\rho(x)) \left(-\Delta_{\mathbb{H}}\phi-\frac{(n-1)^{2}}{4}\phi-\mu^{2}\phi\right)dV_{ \mathbb{H}}\\ &\quad=\lim_{\epsilon\to 0}\int_{\mathbb{B}^{n}}G_{\epsilon-\mu{ \rm i}}\left(-\Delta_{\mathbb{H}}\phi-\frac{(n-1)^{2}}{4}\phi+(\epsilon-\mu{ \rm i})^{2}\phi\right)dV_{\mathbb{H}}=\phi(0),\end{split} \tag{2.20}\]
This proves that \(G_{-\mu{\rm i}}(\rho(x))\) is the Green function of the Helmholtz operator \((-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2})\) with the singularity at origin. Using the commutativity of the hyperbolic translation with the operator \(-\Delta_{\mathbb{H}}\) (see Section 2.1), we deduce that for any \(\phi\in C_{c}^{\infty}(\mathbb{B}^{n})\), it holds
\[\begin{split}&\int_{\mathbb{B}^{n}}G_{-\mu{\rm i}}(\rho(x,y)) \left(-\Delta_{\mathbb{H}}\phi-\frac{(n-1)^{2}}{4}\phi-\mu^{2}\phi\right)dV_{ \mathbb{H}}(y)\\ &=\int_{\mathbb{B}^{n}}G_{-\mu{\rm i}}(\rho(y))\left(-\Delta_{ \mathbb{H}}\phi-\frac{(n-1)^{2}}{4}\phi-\mu^{2}\phi\right)\circ(\tau_{x})dV_{ \mathbb{H}}(y)\\ &=\int_{\mathbb{B}^{n}}G_{-\mu{\rm i}}(\rho(y))\left(-\Delta_{ \mathbb{H}}(\phi\circ\tau_{x})-\frac{(n-1)^{2}}{4}\phi\circ\tau_{x}-\mu^{2} \phi\circ\tau_{x}\right)dV_{\mathbb{H}}(y)\\ &=\phi(\tau_{x}(0))=\phi(x).\end{split} \tag{2.21}\]
Hence we obtain that \(G_{-\mu\mathrm{i}}(\rho(x,y))\) is the Green function of the Helmholtz operator \(\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\right)\) with the singularity at \(x\). The proof for \(G_{\mu\mathrm{i}}(\rho(x,y))\) being the Green function of the Helmholtz operator is similar. Then we accomplish the proof of Theorem 2.2.
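In the model case \(n=3\), the kernel \(G_{-\mu\mathrm{i}}\) obtained by the analytic continuation above has the closed form \(e^{\mathrm{i}\mu\rho}/(4\pi\sinh\rho)\) (the continuation \(k\to-\mathrm{i}\mu\) of the \(n=3\) kernel checked earlier; stated here as an assumption for illustration only). The following sketch verifies symbolically that this function is annihilated by \(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\) away from the origin.

```python
# Symbolic check (n = 3): G(rho) = e^{i mu rho} / (4 pi sinh rho) solves the radial
# Helmholtz equation -G'' - 2 coth(rho) G' - G - mu^2 G = 0 away from the origin.
import sympy as sp

rho, mu = sp.symbols("rho mu", positive=True)
G = sp.exp(sp.I*mu*rho) / (4*sp.pi*sp.sinh(rho))
residual = (-sp.diff(G, rho, 2) - 2*sp.cosh(rho)/sp.sinh(rho)*sp.diff(G, rho)
            - G - mu**2*G)
print(sp.simplify(residual.rewrite(sp.exp)))  # 0
```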
**Remark 2.3**.: _Recalling the time-dependent wave equation (2.11), we look for solutions of the form \(w(x,t)=e^{\mathrm{i}\mu t}u(x)\). It is not difficult to check that \(G_{-\mu\mathrm{i}}\) generates the outgoing wave and \(G_{\mu\mathrm{i}}\) generates the incoming wave. We only consider the outgoing wave._
## 3. A scattering theory on hyperbolic space
In this section, we develop a scattering theory on the hyperbolic space \(\mathbb{B}^{n}\). We characterize the precise asymptotic behavior of the Green function \(G_{-\mu\mathrm{i}}\), introduce the notions of radiation condition and far-field pattern, and prove a Rellich-type uniqueness theorem as well as the well-posedness of the wave equation with a source or a potential in the hyperbolic setting.
### Accurate asymptotic behaviors of Green's function \(G_{-\mu\mathrm{i}}(\rho(x,y))\)
**Theorem 3.1**.: _For fixed \(y\in\mathbb{B}^{n}\), there holds_
\[G_{-\mu\mathrm{i}}(\rho(x,y)) =\frac{A_{n,-\mu\mathrm{i}}}{2^{\frac{n}{2}-\frac{1}{2}}}\frac{ \left(\cosh(\frac{\rho(x)}{2})\right)^{2\mu\mathrm{i}}}{\cosh^{n-1}(\frac{ \rho(x)}{2})}\frac{\left(\cosh(\frac{\rho(y)}{2})\right)^{2\mu\mathrm{i}}}{ \cosh^{n-1}(\frac{\rho(y)}{2})}\] \[\quad\times\left(1-2\hat{x}\cdot y+|y|^{2}\right)^{\mu\mathrm{i}- \frac{n-1}{2}}\int_{0}^{\pi}(\sin t)^{-2\mu\mathrm{i}}dt+O\left(\frac{1}{\sinh ^{\frac{n+1}{2}}(\rho(x))}\right)\]
_as \(\rho(x)\to+\infty\)._
Proof.: Careful calculation gives that
\[G_{-\mu\mathrm{i}}(\rho(x,y)) =A_{n,-\mu\mathrm{i}}\frac{\left(2\sinh^{2}(\frac{\rho(x,y)}{2})+1 \right)^{\frac{n-3}{2}+\mu\mathrm{i}}}{\left(2\sinh(\frac{\rho(x,y)}{2})\cosh( \frac{\rho(x,y)}{2})\right)^{n-2}}\int_{0}^{\pi}\left(1+\frac{\cos t}{\cosh \rho}\right)^{\frac{n-3}{2}+\mu\mathrm{i}}(\sin t)^{-2\mu\mathrm{i}}dt\] \[=\frac{A_{n,-\mu\mathrm{i}}}{2^{\frac{n}{2}-\frac{1}{2}}}\frac{ \left(\sinh(\frac{\rho(x,y)}{2})\right)^{2\mu\mathrm{i}}}{\sinh(\frac{\rho(x,y )}{2})\left(\cosh(\frac{\rho(x,y)}{2})\right)^{n-2}}\int_{0}^{\pi}\left(1+ \frac{\cos t}{\cosh\rho}\right)^{\frac{n-3}{2}+\mu\mathrm{i}}(\sin t)^{-2\mu \mathrm{i}}dt\] \[\quad+O\left(\frac{1}{\sinh^{n-2}(\rho(x))}\right)\] \[=\frac{A_{n,-\mu\mathrm{i}}}{2^{\frac{n}{2}-\frac{1}{2}}}\frac{ \left(\sinh(\frac{\rho(x,y)}{2})\right)^{2\mu\mathrm{i}}}{\sinh(\frac{\rho(x,y)}{2})\left(\cosh(\frac{\rho(x,y)}{2})\right)^{n-2}}\int_{0}^{\pi}(\sin t)^ {-2\mu\mathrm{i}}dt\] \[\quad+O\left(\frac{1}{\sinh^{\frac{n-1}{2}+1}(\rho(x))}\right). \tag{3.1}\]
From [1, 13], we know that
\[\sinh(\frac{\rho(x,y)}{2})=\frac{|x-y|}{\sqrt{(1-|x|^{2})(1-|y|^{2})}}=|x-y| \cosh\left(\frac{\rho(x)}{2}\right)\cosh\left(\frac{\rho(y)}{2}\right),\]
and
\[\cosh\left(\frac{\rho(x,y)}{2}\right)=\frac{\sqrt{1-2x\cdot y+|x|^{2}|y|^{2}}} {\sqrt{(1-|x|^{2})(1-|y|^{2})}}=\sqrt{1-2x\cdot y+|x|^{2}|y|^{2}}\cosh\left( \frac{\rho(x)}{2}\right)\cosh\left(\frac{\rho(y)}{2}\right).\]
Hence we can furthermore write
\[G_{-\mu\mathrm{i}}(\rho(x,y)) =\frac{A_{n,-\mu\mathrm{i}}}{2^{\frac{n}{2}-\frac{1}{2}}}\frac{ \left(\cosh(\frac{\rho(x)}{2})\right)^{2\mu\mathrm{i}}}{\cosh^{n-1}(\frac{ \rho(x)}{2})}\frac{\left(\cosh(\frac{\rho(y)}{2})\right)^{2\mu\mathrm{i}}}{ \cosh^{n-1}(\frac{\rho(y)}{2})}\] \[\quad\times\frac{|x-y|^{2\mu\mathrm{i}}}{|x-y|(\sqrt{1-2x\cdot y +|x|^{2}|y|^{2})^{n-2}}}\int_{0}^{\pi}(\sin t)^{-2\mu\mathrm{i}}dt+O\left( \frac{1}{\sinh^{\frac{n+1}{2}}(\rho(x))}\right)\] \[=\frac{A_{n,-\mu\mathrm{i}}}{2^{\frac{n}{2}-\frac{1}{2}}}\frac{ \left(\cosh(\frac{\rho(x)}{2})\right)^{2\mu\mathrm{i}}}{\cosh^{n-1}(\frac{ \rho(x)}{2})}\frac{\left(\cosh(\frac{\rho(y)}{2})\right)^{2\mu\mathrm{i}}}{ \cosh^{n-1}(\frac{\rho(y)}{2})}\] \[\quad\times\left(1-2\hat{x}\cdot y+|y|^{2}\right)^{\mu\mathrm{i}- \frac{n-1}{2}}\int_{0}^{\pi}(\sin t)^{-2\mu\mathrm{i}}dt+O\left(\frac{1}{ \sinh^{\frac{n+1}{2}}(\rho(x))}\right) \tag{3.2}\]
as \(\rho(x)\to+\infty\) for any fixed \(y\in\mathbb{B}^{n}\), where \(\hat{x}=\frac{x}{|x|}\). Then the proof of Theorem 3.1 is accomplished.
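The two ball-model identities quoted from [1, 13] can also be checked numerically. A minimal sketch follows; the test points are random, and the distance formula \(\cosh\rho(x,y)=1+\frac{2|x-y|^{2}}{(1-|x|^{2})(1-|y|^{2})}\) used to compute \(\rho(x,y)\) is a standard ball-model fact assumed here.

```python
# Numerical check of sinh(rho(x,y)/2) = |x-y| cosh(rho(x)/2) cosh(rho(y)/2) and
# cosh(rho(x,y)/2) = sqrt(1 - 2 x.y + |x|^2 |y|^2) cosh(rho(x)/2) cosh(rho(y)/2).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-0.4, 0.4, size=3)            # two points in the unit ball of R^3
y = rng.uniform(-0.4, 0.4, size=3)

wx, wy = 1 - x @ x, 1 - y @ y
rho_xy = np.arccosh(1 + 2*np.sum((x - y)**2)/(wx*wy))
rho_x, rho_y = np.arccosh((1 + x @ x)/wx), np.arccosh((1 + y @ y)/wy)

lhs1 = np.sinh(rho_xy/2)
rhs1 = np.linalg.norm(x - y)*np.cosh(rho_x/2)*np.cosh(rho_y/2)
lhs2 = np.cosh(rho_xy/2)
rhs2 = np.sqrt(1 - 2*(x @ y) + (x @ x)*(y @ y))*np.cosh(rho_x/2)*np.cosh(rho_y/2)
print(lhs1 - rhs1, lhs2 - rhs2)               # both differences are ~ 0
```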
### Radiation condition and far-field pattern
For fixed \(y\in\mathbb{B}^{n}\), using polar coordinates we may regard \(G_{-\mu\mathrm{i}}(\rho(x,y))\) as a function of the variables \(\rho\) and \(\hat{x}\). According to the asymptotic formula (3.2), we can write
\[G_{-\mu\mathrm{i}}(\rho(x,y))=G(\rho,\hat{x},y) =\frac{A_{n,-\mu\mathrm{i}}}{2^{\frac{n}{2}-\frac{1}{2}}}\frac{ \left(\cosh(\frac{\rho}{2})\right)^{2\mu\mathrm{i}}}{\cosh^{n-1}(\frac{\rho} {2})}\frac{\left(\cosh(\frac{\rho(y)}{2})\right)^{2\mu\mathrm{i}}}{\cosh^{n-1 }(\frac{\rho(y)}{2})}\] \[\quad\times\left(1-2\hat{x}\cdot y+|y|^{2}\right)^{\mu\mathrm{i} -\frac{n-1}{2}}\int_{0}^{\pi}(\sin t)^{-2\mu\mathrm{i}}dt+O\left(\frac{1}{ \sinh^{\frac{n+1}{2}}(\rho)}\right) \tag{3.3}\]
when \(\rho\to+\infty\). Direct computation gives that
\[\frac{\partial G}{\partial\rho} =\frac{1}{2}\frac{A_{n,-\mu\mathrm{i}}}{2^{\frac{n}{2}-\frac{1}{2 }}}\left(2\mu\mathrm{i}-(n-1)\right)\left(\cosh(\frac{\rho}{2})\right)^{2\mu \mathrm{i}-n}\sinh(\frac{\rho}{2})\frac{\left(\cosh(\frac{\rho(y)}{2})\right)^ {2\mu\mathrm{i}}}{\cosh^{n-1}(\frac{\rho(y)}{2})}\] \[\quad\times\left(1-2\hat{x}\cdot y+|y|^{2}\right)^{\mu\mathrm{i}- \frac{n-1}{2}}\int_{0}^{\pi}(\sin t)^{-2\mu\mathrm{i}}dt+O\left(\frac{1}{ \sinh^{\frac{n+1}{2}}(\rho)}\right)\] \[=(\mu\mathrm{i}-\frac{n-1}{2})\tanh(\frac{\rho}{2})G+O\left(\frac {1}{\sinh^{\frac{n+1}{2}}(\rho)}\right)\]
as \(\rho(x)\to+\infty\). Hence the Green function \(G_{-\mu\mathrm{i}}(\rho,\hat{x},y)\) satisfies the condition
\[\frac{\partial G}{\partial\rho}-(\mu\mathrm{i}-\frac{n-1}{2})\tanh(\frac{\rho }{2})G=O\left(\frac{1}{\sinh^{\frac{n+1}{2}}(\rho)}\right) \tag{3.4}\]
as \(\rho\to+\infty\). We call condition (3.4) the **radiation condition**; later we will prove that it guarantees the existence and uniqueness of solutions of the wave equation with a source or a potential.
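As an illustration, in the model case \(n=3\), where \(G_{-\mu\mathrm{i}}(\rho)=e^{\mathrm{i}\mu\rho}/(4\pi\sinh\rho)\) as noted above, one can check numerically that the left-hand side of (3.4) indeed decays like \(\sinh^{-\frac{n+1}{2}}(\rho)=\sinh^{-2}(\rho)\); the value of \(\mu\) below is an arbitrary test choice.

```python
# Decay-rate check of the radiation condition (3.4) for n = 3.
import mpmath as mp

mp.mp.dps = 30
n, mu = 3, mp.mpf(3)
G = lambda rho: mp.exp(1j*mu*rho) / (4*mp.pi*mp.sinh(rho))

for rho in [4, 6, 8, 10, 12]:
    lhs = mp.diff(G, rho) - (1j*mu - (n - 1)/mp.mpf(2))*mp.tanh(rho/mp.mpf(2))*G(rho)
    # the ratio below stays bounded, consistent with the O(sinh^{-(n+1)/2}) estimate
    print(rho, abs(lhs)*mp.sinh(rho)**((n + 1)/mp.mpf(2)))
```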
**Remark 3.2**.: _If we consider the incoming wave, the Green function \(G_{\mu\mathrm{i}}\) satisfies the inner radiation condition_
\[\frac{\partial G}{\partial\rho}+(\mu\mathrm{i}+\frac{n-1}{2})\tanh(\frac{\rho }{2})G=O\left(\frac{1}{\sinh^{\frac{n+1}{2}}(\rho)}\right).\]
_One can similarly prove the existence and uniqueness of solutions of the wave equation with a source or a potential under the inner radiation condition._
### Scattering theory for Helmholtz equation with the source
Consider the Helmholtz equation
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\right)u=f(x),\ \ x\in \mathbb{B}^{n}. \tag{3.5}\]
with the source \(f\in C^{\infty}_{c}(\mathbb{B}^{n})\).
Obviously, the above equation has a solution
\[u(x)=\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}}(\rho(x,y))f(y)dV_{\mathbb{H}}(y)\]
since
\[(-\Delta_{\mathbb{H}})_{y}G_{-\mu\mathrm{i}}(\rho(x,y))-\frac{(n-1)^{2}}{4}G_{ -\mu\mathrm{i}}(\rho(x,y))-\mu^{2}G_{-\mu\mathrm{i}}(\rho(x,y))=\delta_{x}(y).\]
According to the accurate asymptotic behavior of \(G_{-\mu\mathrm{i}}(\rho(x,y))\) when \(\rho(x)\to+\infty\) from (3.2), we derive that
\[u(x) =\frac{\left(\cosh(\frac{\rho(x)}{2})\right)^{2\mu\mathrm{i}}}{ \cosh^{n-1}(\frac{\rho(x)}{2})}\int_{0}^{\pi}(\sin t)^{-2\mu\mathrm{i}}dt\] \[\times\int_{\mathbb{B}^{n}}\frac{\left(\cosh(\frac{\rho(y)}{2}) \right)^{2\mu\mathrm{i}}}{\cosh^{n-1}(\frac{\rho(y)}{2})}\left(1-2\hat{x} \cdot y+|y|^{2}\right)^{\mu\mathrm{i}-\frac{n-1}{2}}f(y)dV_{\mathbb{H}}(y)+O \left(\frac{1}{\sinh^{\frac{n+1}{2}}(\rho(x))}\right). \tag{3.6}\]
We can define the **far-field pattern** of \(u\) as
\[u_{\infty}(\hat{x})=\frac{A_{n,-\mu\mathrm{i}}}{2^{\frac{n}{2}-\frac{1}{2}}} \int_{\mathbb{B}^{n}}\frac{\left(\cosh(\frac{\rho(y)}{2})\right)^{2\mu\mathrm{ i}}}{\cosh^{n-1}(\frac{\rho(y)}{2})}\left(1-2\hat{x}\cdot y+|y|^{2}\right)^{\mu \mathrm{i}-\frac{n-1}{2}}f(y)dV_{\mathbb{H}}(y).\]
Next we prove that if a solution of the wave equation (3.5) satisfies the radiation condition, then it is unique, and hence
\[u(x)=\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}}(\rho(x,y))f(y)dV_{\mathbb{H}}(y).\]
**Theorem 3.3**.: _Assume that \(u\in C^{2}(\mathbb{B}^{n})\) satisfies the Helmholtz equation with the source_
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\right)u=f(x),\ \ x\in \mathbb{B}^{n} \tag{3.7}\]
_and the radiation condition_
\[\frac{\partial u}{\partial\rho}-\left(\mu\mathrm{i}-\frac{n-1}{2}\right)\tanh (\frac{\rho}{2})u=o\left(\frac{1}{\sinh^{\frac{n-1}{2}}(\rho)}\right),\]
_where \(f\in C_{c}^{\infty}(\mathbb{B}^{n})\). Then the solution \(u\) is unique and_
\[u(x)=\int_{\mathbb{B}^{n}}G_{-\mu{\rm i}}(\rho(x,y))f(y)dV_{\mathbb{H}}(y).\]
_In particular, any solution of the Helmholtz equation_
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\right)u=0,\ \ x\in\mathbb{B}^{n}\]
_satisfying the radiation condition must be equal to zero in \(\mathbb{B}^{n}\)._
Proof.: We first claim that the radiation condition implies that
\[\sinh^{n-1}(R)\int_{\partial B^{n}(0,\tanh(\frac{R}{2}))}|u|^{2}d\sigma=O(1)\quad\text{as }R\to+\infty,\]
which is equivalent to
\[\int_{\partial B_{\mathbb{H}}(0,R)}|u|^{2}d\sigma_{\mathbb{H}}=O(1)\quad\text{as }R\to+\infty.\]
From the hyperbolic radiation condition, we deduce that
\[0 =\lim_{R\to+\infty}\int_{\partial B_{\mathbb{H}}(0,R)}|\frac{\partial u}{\partial\rho}-\left(\mu i-\frac{n-1}{2}\right)\tanh(\frac{\rho}{2})u|^{2}\left(\frac{2}{1-|x|^{2}}\right)^{n-1}d\sigma\] \[=\lim_{R\to+\infty}\left(2\cosh^{2}(\frac{R}{2})\right)^{n-1}\int_{\partial B^{n}(0,\tanh(\frac{R}{2}))}o\left(\frac{1}{\sinh^{n-1}(\rho)}\right)d\sigma. \tag{3.8}\]
On the other hand, a careful calculation gives that
\[\int_{\partial B_{\mathbb{H}}(0,R)}\left|\frac{\partial u}{ \partial\rho}-\left(\mu i-\frac{n-1}{2}\right)\tanh(\frac{\rho}{2})u\right|^{ 2}d\sigma_{\mathbb{H}}\] \[\quad=\int_{\partial B_{\mathbb{H}}(0,R)}\left|\frac{\partial u}{ \partial\rho}\right|^{2}d\sigma_{\mathbb{H}}+\int_{\partial B_{\mathbb{H}}(0, R)}\left(\frac{(n-1)^{2}}{4}+\mu^{2}\right)\tanh^{2}(\frac{\rho}{2})u^{2}d \sigma_{\mathbb{H}}\] \[\quad\quad+2\Im\left(\int_{\partial B_{\mathbb{H}}(0,R)}\tanh( \frac{\rho}{2})\frac{\partial\overline{u}}{\partial\rho}ud\sigma_{\mathbb{H}} \right)+(n-1)\Re\left(\int_{\partial B_{\mathbb{H}}(0,R)}\tanh(\frac{\rho}{2} )\frac{\partial\overline{u}}{\partial\rho}ud\sigma_{\mathbb{H}}\right). \tag{3.9}\]
Since
\[\int_{\partial B_{\mathbb{H}}(0,R)}\left(\left|\frac{\partial u}{\partial\rho }\right|^{2}+\frac{(n-1)^{2}}{4}\tanh^{2}(\frac{\rho}{2})u^{2}\right)d \sigma_{\mathbb{H}}\geq(n-1)\Re\left(\int_{\partial B_{\mathbb{H}}(0,R)}\tanh (\frac{\rho}{2})\frac{\partial\overline{u}}{\partial\rho}ud\sigma_{\mathbb{H}} \right),\]
then we deduce from (3.8) and (3.9) that
\[o(1)\geq\int_{\partial B_{\mathbb{H}}(0,R)}\mu^{2}\tanh^{2}(\frac{\rho}{2})u^ {2}d\sigma_{\mathbb{H}}+2\Im\left(\int_{\partial B_{\mathbb{H}}(0,R)}\tanh( \frac{\rho}{2})\frac{\partial\overline{u}}{\partial\rho}ud\sigma_{\mathbb{H}} \right). \tag{3.10}\]
Next, we compute \(2\Im\left(\int_{\partial B_{\mathbb{H}}(0,R)}\tanh(\frac{\rho}{2})\frac{\partial\overline{u}}{\partial\rho}ud\sigma_{\mathbb{H}}\right)\). Note that
\[\int_{\partial B_{\mathbb{H}}(0,R)}\tanh(\frac{\rho}{2})\frac{\partial\overline {u}}{\partial\rho}ud\sigma_{\mathbb{H}}=2^{n-1}\cosh^{2n-2}(\frac{R}{2})\tanh( \frac{R}{2})\int_{\partial B^{n}(0,\tanh(\frac{R}{2}))}\frac{\partial\overline {u}}{\partial\rho}ud\sigma\]
and that \(u\) satisfies the equation
\[(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2})u=f(x),\ \ x\in\mathbb{B}^{n}.\]
Applying the Green formula (2.4) in \(B_{\mathbb{H}}(0,R)\setminus B_{\mathbb{H}}(0,R_{0})\), we derive that
\[\int_{\partial B_{\mathbb{H}}(0,R)}\tanh(\frac{\rho}{2})\frac{ \partial\overline{u}}{\partial\rho}ud\sigma_{\mathbb{H}}\] \[\quad=\tanh(\frac{R}{2})\int_{\partial B_{\mathbb{H}}(0,R_{0})} \frac{\partial\overline{u}}{\partial\rho}ud\sigma_{\mathbb{H}}+\tanh(\frac{R} {2})\int_{B_{\mathbb{H}}(0,R)\setminus B_{\mathbb{H}}(0,R_{0})}\left(\Delta_ {\mathbb{H}}(\overline{u})u+(\nabla_{\mathbb{H}}\overline{u},\nabla_{ \mathbb{H}}u)_{g}\right)dV_{\mathbb{H}}\] \[\quad=\tanh(\frac{R}{2})\int_{\partial B_{\mathbb{H}}(0,R_{0})} \frac{\partial\overline{u}}{\partial\rho}ud\sigma_{\mathbb{H}}+\tanh(\frac{R} {2})\int_{B_{\mathbb{H}}(0,R)\setminus B_{\mathbb{H}}(0,R_{0})}\left(|\nabla _{\mathbb{H}}u|^{2}-\left(\frac{(n-1)^{2}}{4}+\mu^{2}\right)|u|^{2}\right)dV _{\mathbb{H}}\] \[\quad-\tanh(\frac{R}{2})\int_{supp(f)}\overline{f}udV_{\mathbb{H }}. \tag{3.11}\]
Hence
\[\Im\left(\int_{\partial B_{\mathbb{H}}(0,R)}\tanh(\frac{\rho}{2})\frac{ \partial\overline{u}}{\partial\rho}ud\sigma_{\mathbb{H}}\right)=\Im\left( \tanh(\frac{R}{2})\int_{\partial B_{\mathbb{H}}(0,R_{0})}\frac{\partial \overline{u}}{\partial\rho}ud\sigma_{\mathbb{H}}\right)-\Im\left(\tanh(\frac {R}{2})\int_{supp(f)}\overline{f}udV_{\mathbb{H}}\right).\]
This together with (3.10) and (3.11) yields that
\[o(1)\geq\int_{\partial B_{\mathbb{H}}(0,R)}\mu^{2}\tanh^{2}(\frac{\rho}{2})u^ {2}d\sigma_{\mathbb{H}}+\Im\left(\tanh(\frac{R}{2})\int_{\partial B_{\mathbb{ H}}(0,R_{0})}\frac{\partial\overline{u}}{\partial\rho}ud\sigma_{\mathbb{H}} \right)-\Im\left(\tanh(\frac{R}{2})\int_{supp(f)}\overline{f}udV_{\mathbb{H} }\right). \tag{3.12}\]
Letting \(R\to+\infty\), we conclude that
\[2^{n-1}\cosh^{2n-2}(\frac{R}{2})\int_{\partial B^{n}(0,\tanh(\frac{R}{2}))}|u| ^{2}d\sigma=\int_{\partial B_{\mathbb{H}}(0,R)}|u|^{2}d\sigma_{g}=O(1).\]
Now we are in a position to prove that
\[u(x)=\int_{\mathbb{B}^{n}}G_{-\mu i}(\rho(x,y))f(y)dV_{\mathbb{H}}(y).\]
We only need to prove that
\[u(0)=\int_{\mathbb{B}^{n}}G_{-\mu i}(\rho(y))f(y)dV_{\mathbb{H}}(y).\]
In fact, define \(v(y)=(u\circ\tau_{x})(y)\); by the commutativity of the operator \(-\Delta_{\mathbb{H}}\) with \(\tau_{x}\), \(v(y)\) satisfies the equation
\[(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2})v=f(\tau_{x}(y)),\ \ y\in\mathbb{B}^{n}. \tag{3.13}\]
Hence
\[\begin{split} u(x)=v(0)&=\int_{\mathbb{B}^{n}}G_{- \mu\mathrm{i}}(\rho(y))f(\tau_{x}(y))dV_{\mathbb{H}}(y)\\ &=\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}}(\rho(x,y))f(y)dV_{ \mathbb{H}}(y).\end{split} \tag{3.14}\]
Now, we start to prove that
\[u(0)=\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}}(\rho(y))f(y)dV_{\mathbb{H}}(y).\]
According to Lemma 2.1, we can write
\[\begin{split}&\int_{B_{\mathbb{H}}(0,R)}(-\Delta_{\mathbb{H}}- \frac{(n-1)^{2}}{4}-\mu^{2})\left(u\right)\cdot G_{-\mu\mathrm{i}}(\rho(y))dV _{\mathbb{H}}(y)\\ &=\int_{B_{\mathbb{H}}(0,R)}(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2 }}{4}-\mu^{2})\left(G_{-\mu\mathrm{i}}(\rho)\right)\cdot u(y)dV_{\mathbb{H}}(y )+\int_{B_{\mathbb{H}}(0,R)}\left(\frac{\partial u}{\partial\rho}G_{-\mu \mathrm{i}}-\frac{\partial G_{-\mu\mathrm{i}}}{\partial\rho}u\right)d\sigma_{ \mathbb{H}}\\ &=\int_{B_{\mathbb{H}}(0,R)}(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2 }}{4}-\mu^{2})(G_{-\mu\mathrm{i}}(\rho))u(y)dV_{\mathbb{H}}(y)+\int_{\partial B _{\mathbb{H}}(0,R)}\left(\frac{\partial u}{\partial\rho}-\left(\mu\mathrm{i}- \frac{n-1}{2}\right)\tanh(\frac{\rho}{2})u\right)G_{-\mu\mathrm{i}}d\sigma_{ \mathbb{H}}\\ &-\int_{\partial B_{\mathbb{H}}(0,R)}\left(\frac{\partial G_{-\mu \mathrm{i}}}{\partial\rho}-\left(\mu\mathrm{i}-\frac{n-1}{2}\right)\tanh( \frac{\rho}{2})G_{-\mu\mathrm{i}}\right)ud\sigma_{\mathbb{H}}\\ &=u(0)+\int_{\partial B_{\mathbb{H}}(0,R)}\left(\frac{\partial u }{\partial\rho}-\left(\mu\mathrm{i}-\frac{n-1}{2}\right)\tanh(\frac{\rho}{2})u \right)G_{-\mu\mathrm{i}}d\sigma_{\mathbb{H}}\\ &-\int_{\partial B_{\mathbb{H}}(0,R)}\left(\frac{\partial G_{- \mu\mathrm{i}}}{\partial\rho}-\left(\mu\mathrm{i}-\frac{n-1}{2}\right)\tanh( \frac{\rho}{2})G_{-\mu\mathrm{i}}\right)ud\sigma_{\mathbb{H}}\end{split} \tag{3.15}\]
Since \(u\) and \(G_{-\mu\mathrm{i}}\) satisfy the radiation conditions
\[\frac{\partial u}{\partial\rho}-\left(\mu\mathrm{i}-\frac{n-1}{2}\right)\tanh( \frac{\rho}{2})u=o\left(\frac{1}{\sinh^{\frac{n-1}{2}}(\rho)}\right),\]
\[\frac{\partial G_{-\mu\mathrm{i}}}{\partial\rho}-\left(\mu\mathrm{i}-\frac{n- 1}{2}\right)\tanh(\frac{\rho}{2})G_{-\mu\mathrm{i}}=o\left(\frac{1}{\sinh^{ \frac{n-1}{2}}(\rho)}\right),\]
and \(\int_{\partial B_{\mathbb{H}}(0,R)}|u|^{2}d\sigma_{\mathbb{H}}\) is bounded, it follows that
\[\begin{split}&\lim_{R\to+\infty}\int_{\partial B_{\mathbb{H}}(0,R)} \left(\left(\frac{\partial u}{\partial\rho}-(\mu\mathrm{i}-\frac{n-1}{2})\tanh( \frac{\rho}{2})u\right)G_{-\mu\mathrm{i}}-\left(\frac{\partial G_{-\mu\mathrm{i }}}{\partial\rho}-(\mu\mathrm{i}-\frac{n-1}{2})\tanh(\frac{\rho}{2})G_{-\mu \mathrm{i}}\right)u\right)d\sigma_{\mathbb{H}}\\ &\leq\lim_{R\to+\infty}\left(\int_{\partial B_{\mathbb{H}}(0,R)} \left|\left(\frac{\partial u}{\partial\rho}-(\mu\mathrm{i}-\frac{n-1}{2})\tanh( \frac{\rho}{2})\right)u\right|^{2}d\sigma_{\mathbb{H}}\right)^{\frac{1}{2}} \left(\int_{\partial B_{\mathbb{H}}(0,R)}|G_{-\mu\mathrm{i}}|^{2}d\sigma_{ \mathbb{H}}\right)^{\frac{1}{2}}\\ &\quad+\lim_{R\to+\infty}\left(\int_{\partial B_{\mathbb{H}}(0,R )}\left|\left(\frac{\partial G_{-\mu\mathrm{i}}}{\partial\rho}-(\mu\mathrm{i}- \frac{n-1}{2})\tanh(\frac{\rho}{2})\right)G_{-\mu\mathrm{i}}\right|^{2}d \sigma_{\mathbb{H}}\right)^{\frac{1}{2}}\left(\int_{\partial B_{\mathbb{H}}(0, R)}|u|^{2}d\sigma_{\mathbb{H}}\right)^{\frac{1}{2}}\\ &=0.\end{split} \tag{3.16}\]
This together with (3.15) implies that
\[u(0)=\lim_{R\to+\infty}\int_{B_{\mathbb{H}}(0,R)}(-\Delta_{\mathbb{H}}-\frac{ (n-1)^{2}}{4}-\mu^{2})(u)G(\rho)dV_{\mathbb{H}}(y)=\int_{\mathbb{B}^{n}}G_{- \mu i}(\rho(y))f(y)dV_{\mathbb{H}}(y).\]
### Rellich Theorem in hyperbolic space
**Theorem 3.4**.: _(Rellich Theorem in hyperbolic space) Let \(B_{\mathbb{H}}(0,R_{0})\) denote the hyperbolic ball centered at the origin with radius \(R_{0}\). Assume that \(u\in C^{2}(\mathbb{B}^{n}\setminus B_{\mathbb{H}}(0,R_{0}))\) satisfies the Helmholtz equation_
\[-\Delta_{\mathbb{H}}u-\frac{(n-1)^{2}}{4}u-\mu^{2}u=0 \tag{3.17}\]
_in \(\mathbb{B}^{n}\setminus B_{\mathbb{H}}(0,R_{0})\). If \(\lim_{R\to+\infty}\int_{\partial B_{\mathbb{H}}(0,R)}|u|^{2}d\sigma_{\mathbb{ H}}=0\), then \(u\) is equal to zero in \(\mathbb{B}^{n}\setminus B_{\mathbb{H}}(0,R_{0})\)._
Proof.: For each fixed \(\rho>R_{0}\), \(u(\tanh\frac{\rho}{2}x^{\prime})\) is a continuous function of \(x^{\prime}\) on the sphere \(\mathbb{S}^{n-1}\). According to the spherical harmonic expansion formula, we can write
\[u(\tanh\frac{\rho}{2}x^{\prime})=\sum_{l=1}^{+\infty}\sum_{m=1}^{N(l,n)}a_{m,l }(\rho)Y_{m,l}(x^{\prime}),\]
where \(\{Y_{m,l}(x^{\prime})\}\) is an orthonormal basis of \(L^{2}(\mathbb{S}^{n-1})\). Since the Laplace operator \(\Delta_{\mathbb{H}}\) and the gradient operator \(\nabla_{\mathbb{H}}\) on hyperbolic space can be written as
\[\Delta_{\mathbb{H}}=\frac{\partial^{2}}{\partial\rho^{2}}+(n-1)\coth\rho\frac{ \partial}{\partial\rho}+\frac{1}{\sinh^{2}\rho}\Delta_{\mathbb{S}^{n-1}},\ \ \nabla_{\mathbb{H}}=(\frac{\partial}{\partial\rho},\ \frac{1}{\sinh\rho}\nabla_{ \mathbb{S}^{n-1}}),\]
then it follows that for \(x\in\mathbb{B}^{n}\setminus B_{\mathbb{H}}(0,R_{0})\),
\[\Delta_{\mathbb{H}}(u) =\sum_{l=1}^{+\infty}\sum_{m=1}^{N(l,n)}\Delta_{\mathbb{H}}(a_{m,l} (\rho)Y_{m,l}(x^{\prime}))\] \[=\sum_{l=1}^{+\infty}\sum_{m=1}^{N(l,n)}\left((\Delta_{\mathbb{H} }a_{m,l}(\rho))Y_{m,l}(x^{\prime})+a_{m,l}(\Delta_{\mathbb{H}}Y_{m,l}(x^{ \prime}))-\nabla_{\mathbb{H}}(a_{m,l}(\rho))\nabla_{\mathbb{H}}(Y_{m,l}(x^{ \prime}))\right)\] \[=\sum_{l=1}^{+\infty}\sum_{m=1}^{N(l,n)}\left((\frac{\partial^{2 }}{\partial\rho^{2}}+(n-1)\coth\rho\frac{\partial}{\partial\rho})a_{m,l}(\rho )\times Y_{m,l}(x^{\prime})-\frac{a_{m,l}(\rho)}{\sinh^{2}\rho}l(l+n-2)Y_{m,l} (x^{\prime})\right)\] \[=\sum_{l=1}^{+\infty}\sum_{m=1}^{N(l,n)}\left(\frac{\partial^{2} }{\partial\rho^{2}}+(n-1)\coth\rho\frac{\partial}{\partial\rho}-l(l+n-2)\frac {1}{\sinh^{2}\rho}\right)a_{m,l}(\rho). \tag{3.18}\]
This, together with the fact that \(u\) satisfies the equation \(-\Delta_{\mathbb{H}}u-\frac{(n-1)^{2}}{4}u-\mu^{2}u=0\) in \(\mathbb{B}^{n}\setminus B_{\mathbb{H}}(0,R_{0})\), gives that \(a_{m,l}(\rho)\) satisfies the ordinary differential equation
\[\left(\frac{\partial^{2}}{\partial\rho^{2}}+(n-1)\coth\rho\frac{\partial}{ \partial\rho}+\frac{(n-1)^{2}}{4}+\mu^{2}-l(l+n-2)\frac{1}{\sinh^{2}\rho} \right)a_{m,l}(\rho)=0.\]
When \(\rho\) is sufficiently large, this equation can be compared with the following equation
\[\left(\frac{\partial^{2}}{\partial\rho^{2}}+(n-1)\coth\rho\frac{\partial}{ \partial\rho}+\frac{(n-1)^{2}}{4}+\mu^{2}\right)a_{m,l}(\rho)=0,\]
which is a standard Jacobi equation. The general Jacobi equation (see [22]) is defined as
\[\left(\frac{\partial^{2}}{\partial\rho^{2}}+(2\alpha+1)\coth\rho\frac{ \partial}{\partial\rho}+(2\beta+1)\tanh\rho\frac{\partial}{\partial\rho}+ \lambda^{2}+(\alpha+\beta+1)^{2}\right)f(\rho)=0. \tag{3.19}\]
When \(\lambda\neq-\mathrm{i},-2\mathrm{i},\cdots\), this Jacobi equation has two different solutions \(\Psi^{\lambda}_{\alpha,\beta}(\rho)\) and \(\Psi^{-\lambda}_{\alpha,\beta}(\rho)\), where
\[\Psi^{\lambda}_{\alpha,\beta}(\rho)=(2\sinh\rho)^{i\lambda-\alpha-\beta-1}F( \frac{\alpha+\beta+1-i\lambda}{2},\frac{-\alpha+\beta+1-i\lambda}{2},1-i \lambda,-\sinh^{-2}\rho)\]
and \(F\) denotes the hypergeometric function. The hypergeometric function \(F(a;b;c;z)\) is given by
\[F(a;b;c;z)=\sum_{k=0}^{\infty}\frac{(a)_{k}(b)_{k}}{(c)_{k}}\frac{z^{k}}{k!}\]
with \(a\), \(b\in\mathbb{C}\) and \(c\neq 0,-1,-2,\cdots\), where
\[(a)_{0}=1,\ \ (a)_{k}=a(a+1)\cdots(a+k-1)\ \text{for}\ k\geq 1.\]
This series defines an analytic function for \(|z|<1\), which can be analytically continued to \(\mathbb{C}\setminus[1,+\infty)\); moreover, \(F(a,b,c,0)=1\). We refer to Chapter II of [5] for more details about hypergeometric functions.
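A minimal numerical sketch of this definition, comparing a truncated version of the series with mpmath's implementation of \({}_{2}F_{1}\) (the parameter values are arbitrary test choices inside the disc \(|z|<1\)):

```python
# Truncated hypergeometric series versus mpmath's 2F1.
import mpmath as mp

def hyp2f1_series(a, b, c, z, terms=200):
    s, coeff = mp.mpc(0), mp.mpc(1)               # coeff = (a)_k (b)_k / (c)_k
    for k in range(terms):
        s += coeff * z**k / mp.factorial(k)
        coeff *= (a + k) * (b + k) / (c + k)
    return s

a, b, c, z = 0.3 + 0.2j, -0.7, 1.5, -0.4
print(hyp2f1_series(a, b, c, z), mp.hyp2f1(a, b, c, z))   # agree
```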
Coming back to the equation satisfied by \(a_{m,l}(\rho)\),
\[\left(\frac{\partial^{2}}{\partial\rho^{2}}+(n-1)\coth\rho\frac{\partial}{ \partial\rho}+\frac{(n-1)^{2}}{4}+\mu^{2}\right)a_{m,l}(\rho)=0,\]
this equation has two different solutions \(\Psi_{\frac{n-2}{2},-\frac{1}{2}}^{\mu}(\rho)\) and \(\Psi_{\frac{n-2}{2},-\frac{1}{2}}^{-\mu}(\rho)\). Hence there exist constants \(C_{1}\) and \(C_{2}\) such that
\[a_{m,l}(\rho)=C_{1}\Psi_{\frac{n-2}{2},-\frac{1}{2}}^{\mu}(\rho)+C_{2}\Psi_{ \frac{n-2}{2},-\frac{1}{2}}^{-\mu}(\rho).\]
If \((C_{1},C_{2})\neq(0,0)\), then in view of
\[\Psi_{\frac{n-2}{2},-\frac{1}{2}}^{\mu}(\rho)=(2\sinh\rho)^{-\frac{n-1}{2}+i \mu}F(\frac{n-1-2i\mu}{4},\frac{-(n-3)-2i\mu}{4},1-i\mu,-\sinh^{-2}\rho)\]
and \(F(\frac{n-1-2i\mu}{4},\frac{-(n-3)-2i\mu}{4},1-i\mu,0)=1\) (see Chapter II of [5]), we deduce that
\[a_{m,l}(\rho)=O\left(\sinh^{-\frac{n-1}{2}}\rho\right). \tag{3.20}\]
On the other hand, direct calculation gives that
\[\int_{\partial B_{\mathbb{H}}(0,R)}|u|^{2}d\sigma_{g} =\int_{\partial B(0,\tanh(\frac{R}{2}))}|u|^{2}\left(\frac{2}{1-| x|^{2}}\right)^{n-1}ds\] \[=\int_{\partial B(0,\tanh(\frac{R}{2}))}|u|^{2}\left(2\cosh^{2} \frac{\rho}{2}\right)^{n-1}ds\] \[=\sum_{l=1}^{+\infty}\sum_{m=1}^{N(l,n)}a_{m,l}^{2}(R)\int_{ \partial B(0,\tanh(\frac{R}{2}))}|Y_{m,l}(x^{\prime})|^{2}ds\] \[=\sum_{l=1}^{+\infty}\sum_{m=1}^{N(l,n)}a_{m,l}^{2}(R)\left(2 \cosh^{2}\frac{R}{2}\right)^{n-1}\tanh^{n-1}(\frac{R}{2})\int_{\partial B(0,1 )}|Y_{m,l}(x^{\prime})|^{2}dx^{\prime}\] \[=\sum_{l=1}^{+\infty}\sum_{m=1}^{N(l,n)}a_{m,l}^{2}(R)\left(2 \cosh^{2}\frac{R}{2}\right)^{n-1}\tanh^{n-1}(\frac{R}{2}). \tag{3.21}\]
This together with \(\lim_{R\to+\infty}\int_{\partial B_{\mathbb{H}}(0,R)}|u|^{2}d\sigma_{g}=0\) yields that
\[\lim_{R\to+\infty}\sum_{l=1}^{+\infty}\sum_{m=1}^{N(l,n)}a_{m,l}^{2}(R)\left( 2\cosh^{2}\frac{R}{2}\right)^{n-1}\tanh^{n-1}(\frac{R}{2})=0,\]
which implies that \(a_{m,l}(R)=o(\sinh^{-\frac{n-1}{2}}R)\) as \(R\to+\infty\). This contradicts (3.20), so \(C_{1}=C_{2}=0\) for every \(m,l\). We therefore conclude that \(u\) is identically zero in \(\mathbb{B}^{n}\setminus B_{\mathbb{H}}(0,R_{0})\), and the proof of Theorem 3.4 is complete.
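The Jacobi-function solution appearing in the proof can be probed numerically. The sketch below takes \(n=3\) and an arbitrary \(\mu\), evaluates \(\Psi^{\mu}_{\frac{n-2}{2},-\frac{1}{2}}\) through mpmath's \({}_{2}F_{1}\), and checks both the radial equation and the \(\sinh^{-\frac{n-1}{2}}\rho\) size asserted in (3.20); the sample values are arbitrary test choices.

```python
# Check that Psi^{mu}_{(n-2)/2,-1/2} solves
# a'' + (n-1) coth(rho) a' + ((n-1)^2/4 + mu^2) a = 0,
# and that |Psi| * sinh^{(n-1)/2}(rho) stays bounded and nonzero (cf. (3.20)).
import mpmath as mp

mp.mp.dps = 30
n, mu = 3, mp.mpf("1.5")

def Psi(rho):
    pref = (2*mp.sinh(rho))**(1j*mu - (n - 1)/mp.mpf(2))
    return pref * mp.hyp2f1((n - 1 - 2j*mu)/4, (-(n - 3) - 2j*mu)/4,
                            1 - 1j*mu, -1/mp.sinh(rho)**2)

rho0 = mp.mpf(3)
residual = (mp.diff(Psi, rho0, 2)
            + (n - 1)*mp.cosh(rho0)/mp.sinh(rho0)*mp.diff(Psi, rho0)
            + ((n - 1)**2/mp.mpf(4) + mu**2)*Psi(rho0))
print(abs(residual))                                            # ~ 0
print([abs(Psi(r))*mp.sinh(r)**((n - 1)/mp.mpf(2)) for r in [3, 6, 9]])  # bounded, nonzero
```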
### Scattering theory for the Helmholtz equation involving the potential
Let us describe the scattering theory for the Helmholtz equation involving the potential on hyperbolic space \(\mathbb{B}^{n}\). We are concerned with an incident wave \(u^{i}\) satisfying the Helmholtz equation
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\right)u^{i}=0,\ \ x\in \mathbb{B}^{n}.\]
An \(L^{\infty}\) potential \(\mathcal{V}\) satisfying \(\mathcal{V}(x)=1\) in the complement of \(B_{\mathbb{H}}(0,R_{0})\) creates a new wave \(u\) solving the equation
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\mathcal{V}\right)u=0, \ \ x\in\mathbb{B}^{n}.\]
The waves \(u\) and \(u^{i}\) are linked, the essential connection being that the scattered wave \(u^{s}=u-u^{i}\) satisfies the radiation condition
\[\frac{\partial u^{s}}{\partial\rho}-\left(\mu i-\frac{n-1}{2}\right)\tanh( \frac{\rho}{2})u^{s}=o\left(\frac{1}{\sinh^{\frac{n-1}{2}}(\rho)}\right)\]
as \(\rho\to+\infty\). It is not difficult to check that the scattered wave \(u^{s}\) satisfies equation
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\right)u^{s}=\left( \mathcal{V}(x)-1\right)\mu^{2}u^{s}+\left(\mathcal{V}(x)-1\right)\mu^{2}u^{i}, \ \ x\in\mathbb{B}^{n}. \tag{3.22}\]
We will prove that the scattered wave \(u^{s}\) exists and is unique.
**Theorem 3.5**.: _Assume that \(u^{s}\) satisfies equation_
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mu^{2}\right)u^{s}=f(x),\ \ x\in\mathbb{B}^{n}, \tag{3.23}\]
_and the radiation condition_
\[\frac{\partial u^{s}}{\partial\rho}-(\mu\mathrm{i}-\frac{n-1}{2})\tanh(\frac {\rho}{2})u^{s}=o\left(\frac{1}{\sinh^{\frac{n-1}{2}}(\rho)}\right),\]
_where_
\[f(x)=\left(\mathcal{V}(x)-1\right)\mu^{2}u^{s}+\left(\mathcal{V}(x)-1\right) \mu^{2}u^{i}.\]
_If the imaginary part \(\Im(\mathcal{V})\) of the potential \(\mathcal{V}\) is non-negative, then \(u^{s}\) exists and is unique. Furthermore,_
\[u^{s}(x)=\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}}(\rho(x,y))f(y)dV_{\mathbb{H}} (y),\ \ x\in\mathbb{B}^{n}.\]
Proof.: Since \(u^{s}\) satisfies the radiation condition, by Theorem 3.3 the existence and uniqueness of solutions of equation (3.23) under the radiation condition is equivalent to the existence and uniqueness of solutions of the following integral equation
\[u^{s}(x)=\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}}(\rho(x,y))f(y)dV_{\mathbb{H}} (y),\ \ x\in\mathbb{B}^{n}. \tag{3.24}\]
Define the integral operator \(K\) by
\[K(u^{s})=\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}}(\rho(x,y))\left(\mathcal{V}(y)-1 \right)\mu^{2}u^{s}(y)dV_{\mathbb{H}}(y).\]
The integral equation (3.24) can be rewritten as
\[(I-K)u^{s}(x)=\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}}(\rho(x,y))\left( \mathcal{V}(y)-1\right)\mu^{2}u^{i}(y)dV_{\mathbb{H}}(y),\ \ x\in\mathbb{B}^{n}.\]
Now we claim that \(\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}}(\rho(x,y))\left(\mathcal{V}(y)-1\right)\mu^{2}u^{i}(y)dV_{\mathbb{H}}(y)\) belongs to \(L^{2}(B^{n})\). Recall the Hardy-Littlewood-Sobolev inequality in \(\mathbb{R}^{n}\), which states that
\[\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{h(x)g(y)}{|x-y|^{\lambda}}dxdy \lesssim\left(\int_{\mathbb{R}^{n}}|h|^{p}dx\right)^{\frac{1}{p}}\left(\int_{ \mathbb{R}^{n}}|g|^{q}dx\right)^{\frac{1}{q}}\]
for any \(0<\lambda<n\), \(1<p,q<+\infty\) with \(\frac{1}{p}+\frac{1}{q}+\frac{\lambda}{n}=2\). Since
\[\begin{split} G_{-\mu\mathrm{i}}(\rho(x,y))&=A_{n, -\mu\mathrm{i}}\frac{(\cosh\rho)^{\frac{n-3}{2}+\mu\mathrm{i}}}{(\sinh\rho)^{n -2}}\int_{0}^{\pi}\left(1+\frac{\cos t}{\cosh\rho}\right)^{\frac{n-3}{2}+\mu \mathrm{i}}(\sin t)^{-2\mu\mathrm{i}}dt\\ &\lesssim\left(\frac{1}{\sinh\frac{\rho(x,y)}{2}}\right)^{n-2}, \end{split} \tag{3.25}\]
then it follows that
\[\begin{split}&\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}}(\rho(x,y)) \left(\mathcal{V}(y)-1\right)\mu^{2}u^{i}(y)dV_{\mathbb{H}}(y)\\ &=\int_{\mathbb{B}^{n}}\left(|x-y|\cosh(\frac{\rho(x)}{2})\cosh (\frac{\rho(y)}{2})\right)^{-(n-2)}\left(\mathcal{V}(y)-1\right)\mu^{2}u^{i}( y)dV_{\mathbb{H}}(y)\\ &\lesssim\int_{B^{n}}\left(V(y)-1\right)\frac{\mu^{2}u^{i}(y)}{| x-y|^{n-2}}dy\end{split} \tag{3.26}\]
where we use \(\sinh(\frac{\rho(x,y)}{2})=|x-y|\cosh(\frac{\rho(x)}{2})\cosh(\frac{\rho(y)}{2})\) and \(\mathcal{V}(x)=1\) on \(\mathbb{B}^{n}\setminus B_{\mathbb{H}}(0,R_{0})\). Applying the Hardy-Littlewood-Sobolev inequality, one can get
\[\begin{split}&\int_{B^{n}}|\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}} (\rho(x,y))\left(\mathcal{V}(y)-1\right)\mu^{2}u^{i}(y)dV_{\mathbb{H}}(y)|^{2 }dx\\ &\lesssim\int_{B^{n}}|\int_{B^{n}}\left(\mathcal{V}(y)-1\right) \frac{\mu^{2}u^{i}(y)}{|x-y|^{n-2}}dy|^{2}dx\\ &\lesssim\left(\int_{\Omega}|u^{i}|^{\frac{2n}{n+4}}dx\right)^{ \frac{n+4}{n}}\end{split} \tag{3.27}\]
if \(n\geq 5\) and
\[\begin{split}&\int_{B^{n}}|\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}}( \rho(x,y))\left(\mathcal{V}(y)-1\right)\mu^{2}u^{i}(y)dV_{\mathbb{H}}(y)|^{2}dx \\ &\lesssim\left(\int_{B^{n}}|\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i} }(\rho(x,y))\left(\mathcal{V}(y)-1\right)\mu^{2}u^{i}(y)dV_{\mathbb{H}}(y)|^{ 4}dx\right)^{\frac{1}{2}}\\ &\lesssim\left(\int_{\Omega}|u^{i}|^{\frac{4n}{n+8}}dx\right)^{ \frac{n+8}{2n}}\end{split} \tag{3.28}\]
if \(n=3,4\). Now, we prove that the integral equation
\[(I-K)u^{s}(x)=\int_{\mathbb{B}^{n}}G_{-\mu\mathrm{i}}(\rho(x,y))\left(\mathcal{ V}(y)-1\right)\mu^{2}u^{i}(y)dV_{\mathbb{H}}(y)\in L^{2}(B^{n}),\ \ x\in\mathbb{B}^{n} \tag{3.29}\]
has a unique solution \(u^{s}\in L^{2}(B^{n})\). We first claim that \(K\) is a compact operator from \(L^{2}(B^{n})\) to \(L^{2}(B^{n})\). Recalling the definition of the operator \(K\), since
\[G_{-\mu\mathrm{i}}(\rho(x,y))\simeq|x-y|^{-(n-2)}\text{ when }\rho(x,y)\to 0\]
and \(\mathcal{V}(x)-1\) has compact support, one obtains that for any \(h\in L^{2}(B^{n})\), there holds
\[\int_{B^{n}}\left(|\nabla K(h)|^{2}+|K(h)|^{2}\right)dx\lesssim\int_{B^{n}}|h| ^{2}dx,\]
which, together with the compactness of the Sobolev embedding \(W^{1,2}(B^{n})\hookrightarrow L^{2}(B^{n})\), implies that \(K\) is a compact operator from \(L^{2}(B^{n})\) to \(L^{2}(B^{n})\). Hence, according to the Fredholm theorem for compact operators (see [6]), the integral equation (3.29) has a unique solution \(u^{s}\in L^{2}(B^{n})\) if and only if the homogeneous equation
\[(I-K)u^{s}(x)=0,\ \ \ \ x\in\mathbb{B}^{n}\]
has only the zero solution. This is equivalent to proving that the differential equation
\[\left(-\Delta_{\mathbb{H}}-\frac{(n-1)^{2}}{4}-\mathcal{V}(x)\mu^{2}\right)u^{s}=0,\ \ x\in\mathbb{B}^{n}\]
has only the zero solution, by the Green representation formula in Theorem 3.3. Noticing that \(\mathcal{V}(x)=1\) in \(\mathbb{B}^{n}\setminus B_{\mathbb{H}}(0,R_{0})\), if we can prove that \(u^{s}\) satisfies the hypothesis of the Rellich theorem in hyperbolic space (Theorem 3.4), then \(u^{s}\) vanishes in \(\mathbb{B}^{n}\setminus B_{\mathbb{H}}(0,R_{0})\). This, together with the strong unique continuation theorem, yields that \(u^{s}\) must vanish on the whole hyperbolic space \(\mathbb{B}^{n}\). We now turn to proving
\[\lim_{R\to+\infty}\int_{\partial B_{\mathbb{H}}(0,R)}|u^{s}|^{2}d\sigma_{g}=0.\]
Recalling the proof of Theorem 3.3, we have obtained that
\[o(1)\geq\int_{\partial B_{\mathbb{H}}(0,R_{0})}\mu^{2}\tanh^{2}(\frac{\rho}{2} )|u^{s}|^{2}d\sigma_{\mathbb{H}}+2\Im\left(\tanh(\frac{R}{2})\int_{\partial B _{\mathbb{H}}(0,R_{0})}\frac{\partial\overline{u^{s}}}{\partial\rho}u^{s}d \sigma_{\mathbb{H}}\right). \tag{3.30}\]
Applying the Green formula in Lemma 2.1, one has
\[\int_{\partial B_{\mathbb{H}}(0,R_{0})}\frac{\partial\overline{u^{s}} }{\partial\rho}u^{s}d\sigma_{\mathbb{H}} =\int_{B_{\mathbb{H}}(0,R_{0})}\Delta_{\mathbb{H}}(\overline{u^{s }})u^{s}dV_{\mathbb{H}}+\int_{B_{\mathbb{H}}(0,R_{0})}|\nabla_{\mathbb{H}}u^{s} |^{2}dV_{\mathbb{H}}\] \[=-\int_{B_{\mathbb{H}}(0,R_{0})}\left(\frac{(n-1)^{2}}{4}+\mu^{2} \overline{\mathcal{V}}(x)\right)|u^{s}|^{2}dV_{\mathbb{H}}+\int_{B_{\mathbb{H }}(0,R_{0})}|\nabla_{\mathbb{H}}u^{s}|^{2}dV_{\mathbb{H}}, \tag{3.31}\]
which implies that
\[\Im\left(\tanh(\frac{R}{2})\int_{\partial B_{\mathbb{H}}(0,R_{0})}\frac{ \partial\overline{u^{s}}}{\partial\rho}u^{s}d\sigma_{\mathbb{H}}\right)\geq 0\]
if \(\Im\left(\mathcal{V}\right)\geq 0\). Combining this and (3.30), we conclude that
\[\lim_{R\to+\infty}\int_{\partial B_{\mathbb{H}}(0,R)}|u^{s}|^{2}d\sigma_{ \mathbb{H}}=0.\]
Then the proof of Theorem 3.5 is accomplished.
## 4. Acknowledgement
The authors would like to thank Prof. Qiaohua Yang for helpful discussions on the Green function of the Helmholtz operator in hyperbolic space, which greatly inspired this work.
|
2308.13724 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon
Sequential Task Planning | Motivated by the substantial achievements observed in Large Language Models
(LLMs) in the field of natural language processing, recent research has
commenced investigations into the application of LLMs for complex, long-horizon
sequential task planning challenges in robotics. LLMs are advantageous in
offering the potential to enhance the generalizability as task-agnostic
planners and facilitate flexible interaction between human instructors and
planning systems. However, task plans generated by LLMs often lack feasibility
and correctness. To address this challenge, we introduce ISR-LLM, a novel
framework that improves LLM-based planning through an iterative self-refinement
process. The framework operates through three sequential steps: preprocessing,
planning, and iterative self-refinement. During preprocessing, an LLM
translator is employed to convert natural language input into a Planning Domain
Definition Language (PDDL) formulation. In the planning phase, an LLM planner
formulates an initial plan, which is then assessed and refined in the iterative
self-refinement step by using a validator. We examine the performance of
ISR-LLM across three distinct planning domains. The results show that ISR-LLM
is able to achieve markedly higher success rates in task accomplishments
compared to state-of-the-art LLM-based planners. Moreover, it also preserves
the broad applicability and generalizability of working with natural language
instructions. | Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma | 2023-08-26T01:31:35Z | http://arxiv.org/abs/2308.13724v1 | # ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
###### Abstract
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: _preprocessing_, _planning_, and _iterative self-refinement_. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions. The code related to this work is available at [https://github.com/zhehuazhou/ISR-LLM](https://github.com/zhehuazhou/ISR-LLM).
## 1 Introduction
Large Language Models (LLMs), underpinned by deep learning architectures, have recently revolutionized artificial intelligence (AI) by demonstrating unprecedented abilities in understanding, generating, and manipulating natural language text Bommasani et al. (2021); Brown et al. (2020); Devlin et al. (2018); Radford et al. (2019); Raffel et al. (2020). This surge in LLM research has been accompanied by a growing interest in leveraging these models to tackle a diverse array of challenges across various research fields, including data analysis Agrawal et al. (2022), code generation Vaithilingam et al. (2022), reasoning Zelikman et al. (2022), robotic control Ahn et al. (2022), and so on.
Due to their rich internalized knowledge about the world Petroni et al. (2019); Davison et al. (2019), LLMs have also garnered considerable attention within the field of long-horizon sequential task planning Roijers et al. (2013). Unlike short-term robotic planning problems, _long-horizon sequential task planning_ often involves devising interconnected actions that are spanned over extended timeframes to achieve control objectives. Since the execution of actions at one point in time can greatly impact subsequent actions and outcomes, long-horizon planning is usually considered a more challenging problem due to its inherent intricacy in managing temporal dependencies and combinatorial complexity Hartmann et al. (2022), thereby necessitating innovative planning approaches that are able to balance the trade-offs between efficiency, optimality, and adaptability.
The traditional way to address long-horizon sequential task planning typically relies on first establishing a symbolic and logic-based representation of the planning problem Haslum et al. (2019), and then employing techniques such as state space search Zhang (1999) or heuristic search Edelkamp and Schrodl (2011) to find a feasible solution. However, this method usually requires the manual specification of symbolic planning domains, which demands a notable degree of expertise in the field. Furthermore, many desirable properties of plans, e.g., user preferences, which can be specified in natural language by individuals without specialized training, may prove intricate or even infeasible to be encapsulated within formal logic frameworks. As a result, the adaptability of conventional methods is constrained, limiting their utility in diverse contexts.
To overcome this limitation, there is a growing trend in recent studies to explore the potential of utilizing LLMs as task-agnostic reasoning modules, with the aim of facilitating more generalized and intelligent robotic planning Ahn et al. (2022); Huang et al. (2022c). Leveraging their pre-trained knowledge, these LLM-based planners are able to effectively comprehend both explicit human-generated natural language directives and the inherent constraints interwoven within planning tasks Huang et al. (2022a). This greatly reduces the necessity for labor-intensive manual rule encoding and circumvents the need for intricate specification of symbolic planning domains Lin et al. (2023). Moreover, the intuitive nature of textual prompts allows for seamless interactions between LLM-based planners and human instructors, facilitating the integration of human expertise into the planning process. However, the efficacy and reliability of such LLM-based planners are often not satisfying due to the inherent design and training methodologies of LLMs. LLMs are essentially engineered to generate word sequences that align with human-like context, yet the assurance of their planning capabilities is not guaranteed Brown et al. (2020). Recent investigations have revealed instances where the correctness of generated actions and the success rate of task accomplishment by LLM-based planners fall short of expectations Valmeekam et al. (2022). This limitation becomes further pronounced in long-horizon sequential task planning, where complex action dependencies and extended temporal considerations introduce additional difficulties that challenge the planning abilities of LLMs.
In this work, we aim to enhance the performance of LLM in long-horizon sequential task planning. Drawing inspiration from recent research that reveals the potential for LLM improvements through self-refinement Madaan et al. (2023); Huang et al. (2022b), we propose the Iterative Self-Refined LLM (ISR-LLM) framework that utilizes the power of iterative self-refinement to improve planning outcomes. Our framework consists of three steps (see Fig. 1): (1) _Preprocessing_, where an LLM translator is employed to translate the natural language inputs into their respective Planning Domain Definition Language (PDDL) Haslum et al. (2019) formulations; (2) _Planning_, where an LLM planner takes the translated PDDL problem as input and determines the action sequence to accomplish the long-horizon sequential task planning; (3) _Iterative self-refinement_, where a validator is used to examine the correctness of the generated action plan and provide feedback to the LLM planner. Then based on the feedback, the LLM planner performs the iterative self-refinement process to find a revised action plan. We consider two different types of validators in our approach: an LLM-based self-validator and an external validator that leverages auxiliary verification tools.
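A minimal sketch of the loop described above (the helper function names, prompts, and stopping criterion are placeholders standing in for LLM calls and a validator, not the exact implementation released with the paper):

```python
# Minimal sketch of the ISR-LLM loop: translate -> plan -> validate -> refine.
# All helper functions are hypothetical placeholders.
from typing import Tuple

def llm_translate(nl_task: str) -> Tuple[str, str]:
    """LLM translator: natural-language task -> (PDDL domain, PDDL problem)."""
    raise NotImplementedError

def llm_plan(domain: str, problem: str, feedback: str = "") -> str:
    """LLM planner: returns an action sequence, optionally conditioned on feedback."""
    raise NotImplementedError

def validate(domain: str, problem: str, plan: str) -> Tuple[bool, str]:
    """Validator (LLM self-check or an external tool): (is_valid, feedback)."""
    raise NotImplementedError

def isr_llm(nl_task: str, max_refinements: int = 3) -> str:
    domain, problem = llm_translate(nl_task)          # preprocessing
    plan = llm_plan(domain, problem)                  # initial planning
    for _ in range(max_refinements):                  # iterative self-refinement
        ok, feedback = validate(domain, problem, plan)
        if ok:
            break
        plan = llm_plan(domain, problem, feedback)    # revise using validator feedback
    return plan
```

The same skeleton accommodates either validator type: only the body of `validate` changes between an LLM self-check and an external verification tool.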
Through comprehensive experiments across diverse planning problem domains, we show that, compared to state-of-the-art approaches, ISR-LLM achieves better feasibility and success rate in long-horizon sequential task planning. The contributions of this work are threefold:
* We present ISR-LLM, a novel framework achieved by integrating a self-refinement mechanism into LLM. This approach addresses long-horizon sequential task planning and offers remarkable advancements in both feasibility and correctness.
* We introduce and evaluate the effectiveness of two types of validators, i.e., an LLM-based self-validator and an external validator, in providing feedback to the LLM planner for executing the iterative self-refinement process.
* We highlight the superiority of our proposed framework in comparison to contemporary state-of-the-art methods, through an investigation of ISR-LLM across three diverse planning domains.
## 2 Related Work
### Long-Horizon Sequential Task Planning
Long-horizon sequential task planning aims to find an optimal action sequence capable of accomplishing a specified task objective Helmert (2006). In recent robotic studies, PDDL or Answer Set Programming (ASP) Brewka et al. (2011) are often utilized as the language for representing the planning problems Jiang et al. (2019). A prevalent method employed to tackle these planning tasks is to utilize a search-based or sampling-based algorithm to find a viable plan Levine and Humphreys (2003); Segovia-Aguas et al. (2021); Cohen et al. (2010). This strategy has found successful applications across diverse robotic domains, e.g., mobile robots Zhang et al. (2015), autonomous vehicles Ding et al. (2020), and robotic manipulators Garrett et al. (2020). However, these approaches rely on a predetermined symbolic and logical representation of the planning domain, which usually demands a high level of expert knowledge for formulation. Moreover, due to the inherent abundance of potential action options associated with long-horizon sequential task planning, search-based or sampling-based strategies may encounter impediments in such scenarios. Some approaches also use example plans to construct novel plans, which are often represented through a finite state machine Levesque (2005); Winner (2008). However, finding a useful example plan may be challenging or even impossible within certain task scenarios.
It is also worth mentioning that, another important category of robotic planning is Task and Motion Planning (TAMP) Garrett et al. (2021), which combines high-level task planning in discrete spaces and low-level robot motion planning in continuous space as a hierarchical planning framework. In TAMP, the focus extends beyond mere task planning to encompass the executability of the determined actions, i.e., the actions must be executable by the robot with a viable motion trajectory that is subject to both robotic and environmental constraints Toussaint (2015); Driess et al. (2019). However, how to accurately ground actions generated by LLMs into feasible robot motions remains a challenging and ongoing area of research Ahn et al. (2022); Huang et al. (2022c). Therefore, in this work, we focus only on exploring the task planning capabilities of LLMs.
### Planning with LLM
To overcome the limited generalizability of traditional task planners, researchers have started investigating the possibility of utilizing LLMs as task-agnostic planners Sharma et al. (2021); Li et al. (2022); Zeng et al. (2022); Singh et al. (2023). A multitude of studies have delved into grounding the language commands generated by LLMs to executable robotic actions Ahn et al. (2022); Huang et al. (2022c); Ding et al. (2023); Lin et al. (2023). For instance, in Ahn et al. (2022), scores are assigned to potential actions through a value function, and the action with the highest likelihood of
Figure 1: Overview of the proposed ISR-LLM framework. It consists of three steps: preprocessing, planning, and iterative self-refinement.
success is selected. Similarly, Huang et al. (2022) adopts prompt engineering to extract actions that are executable for the robots. In Huang et al. (2022), environmental feedback is introduced to enable online adjustment of action plans that are infeasible for the robots. Although the focus of this work is not the grounding of actions, these studies illustrate the competencies of LLMs in addressing diverse robotic planning tasks.
Besides grounding language instructions, recent studies have also sought to combine LLMs with PDDL as a means of elevating the performance of LLM-based planners Valmeekam et al. (2022); Silver et al. (2022, 2023); Liu et al. (2023). In Valmeekam et al. (2022), a Blocksworld Slaney and Thiebaux (2001) benchmark is proposed to assess the LLM's capability in handling natural language inputs for planning. However, the results reveal a discouraging performance of LLMs in long-horizon task planning, even within seemingly uncomplicated tasks. In Silver et al. (2022, 2023), instead of natural language inputs, planning problems in PDDL syntax are directly presented to LLMs for generating action sequences. While this strategy contributes to enhanced performance, it inevitably diminishes the LLM's generalizability and often demands additional effort and expert knowledge for composing the corresponding PDDL files. In Liu et al. (2023), LLM is employed not as a planner, but rather as a translator that converts natural language inputs into PDDL problems, which are subsequently solved using classical PDDL planners. However, such an approach requires an external solver, potentially impeding the wider applicability of LLMs as task-agnostic planners. An analogous notion akin to our self-refinement concept is introduced in Raman et al. (2022). After the generation of an action plan based on natural language inputs, it collects the error information returned from the execution of the plan. This information is then constructed as re-prompts that direct the LLM towards correcting the erroneous actions. However, such a refinement process occurs subsequent to the action execution phase. Our approach, in comparison, not only considers the utilization of an external validator to perform a similar self-refinement process, but also investigates the potential of LLMs for enabling pre-execution action corrections through self-validation capabilities.
## 3 Preliminary
### Task Planning
In this work, we consider the problem of task planning in a setting with discrete and fully observable states, finite actions, and deterministic transitions. Such a problem \(P\) is often represented by a tuple \(P=\langle S,A,T,s_{\mathrm{init}},G\rangle\). For each state \(s\in S\) within the discrete set of states \(S\), an action \(a\in A\) can be selected from the set of applicable actions \(A(s)\subseteq A\), i.e., the preconditions of the action \(a\) must be fulfilled. The transition function \(T:S\times A\to S\) determines the next state based on the current state and the selected action. \(s_{\mathrm{init}}\in S\) represents the initial state and \(G\subseteq S\) is a set of goal states. A solution to the planning problem \(P\) is a sequential action plan \(\pi=(a_{1},a_{2},\ldots,a_{n})\) that drives the initial state \(s_{\mathrm{init}}\) to a goal state, i.e., with \(s_{1}=s_{\mathrm{init}}\) we have \(s_{i+1}=T(s_{i},a_{i})\) satisfied for all \(1\leq i\leq n\) and \(s_{n+1}\in G\). For long-horizon sequential task planning, the number of actions \(n\) tends to be relatively large. In this work, we focus on investigating the capabilities of LLMs in solving the designated task planning problem \(P\). Thus, our primary focus is the feasibility and success rate of planning rather than its optimality.
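To make the formalism concrete, the following minimal Python sketch checks whether a candidate plan solves a problem \(P=\langle S,A,T,s_{\mathrm{init}},G\rangle\); the state/action encodings and the three callables are assumptions supplied by the caller, not part of ISR-LLM itself.

```python
from typing import Callable, Hashable, Iterable, Sequence

State = Hashable
Action = Hashable

def plan_solves(
    s_init: State,
    plan: Sequence[Action],
    applicable: Callable[[State], Iterable[Action]],  # A(s): actions whose preconditions hold in s
    transition: Callable[[State, Action], State],      # T(s, a): deterministic successor state
    is_goal: Callable[[State], bool],                   # membership test for the goal set G
) -> bool:
    """Return True iff every action is applicable in turn and the final state lies in G."""
    s = s_init
    for a in plan:
        if a not in set(applicable(s)):   # precondition of a violated in s
            return False
        s = transition(s, a)               # s_{i+1} = T(s_i, a_i)
    return is_goal(s)                       # s_{n+1} in G
```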
### PDDL
PDDL is a standardized encoding format designed for classical planning problems Aeronautiques et al. (1998); Fox and Long (2003). A planning problem \(P\) represented in PDDL syntax consists of two files: a domain file and a problem file. The domain file embodies the foundational rules of the planning domain. It not only defines the predicates that elucidate the configuration of the state space \(S\), but also formulates the preconditions and effects of all possible actions \(a\in A\), i.e., the transition function \(T\). The problem file is used to define the available objects within the planning domain, as well as the initial state and goal conditions. Concrete examples of PDDL domain and problem files for the experiments considered in this work can be found in Appendix A.1. In this work, we assume that the natural language input provided to the LLM should include both the initial state and the goal conditions, such that the LLM translator is able to convert it into corresponding PDDL files. For more details about PDDL, we direct the interested readers to Haslum et al. (2019).
## 4 Isr-Llm
In this section, we introduce ISR-LLM, a novel framework that utilizes iterative self-refinement to find an action plan with improved accuracy and feasibility. It includes three steps: preprocessing with an LLM translator, planning with an LLM planner, and iterative self-refinement loop with a validator that is selected from either an LLM-based self-validator or an external validator. Details are explained as follows.
### Preprocessing with LLM Translator
As illustrated in Fig. 1, the LLM translator first converts the given natural language instructions into a PDDL formulation, specifically representing them using the domain and problem files. The rationale for employing such a translator is grounded in its notable advantages, even though an LLM planner could be designed to operate directly on natural language inputs, as demonstrated in Lin et al. (2023). The adoption of a formal representation, i.e., PDDL, offers twofold benefits to the subsequent validation process of the generated plan. Firstly, it enables the usage of existing PDDL validators as the external validator, e.g., VAL Howey et al. (2004) or PDDL.jl Zhi-Xuan (2022). This obviates the necessity of developing a custom validator and thereby saves substantial time and effort. Secondly, rather than relying solely on language cues, this approach enables the LLM-based self-validator to acquire a comprehension akin to a state-machine understanding of the system state. This, in turn, facilitates a more precise evaluation of the correctness of the selected actions.
In order to ensure the structural accuracy of the translated PDDL files, we adopt a technique known as few-shot in-context learning Brown et al. (2020). This technique involves embedding illustrative examples within the prompt, effectively instructing the LLM on how to formulate responses to given queries in a desired manner. Similar to Liu et al. (2023), we assume that the domain-specific knowledge pertinent to each considered planning task is available in advance, and thus include it within the few-shot examples provided to the LLM translator. An example of the prompt presented to the LLM translator for the Blocksworld planning domain (see Sec. 5.1 for a detailed explanation about this domain) is shown in Fig. 2, and a complete list of all employed few-shot examples within this work is given in Appendix A.1.
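As a rough illustration of how such a few-shot translator prompt can be assembled programmatically (the instruction wording, the example triples, and the `call_llm` helper below are placeholders, not the exact prompt used in the paper):

```python
def build_translator_prompt(few_shot_examples, nl_instruction):
    """Assemble a few-shot prompt asking the LLM to emit PDDL domain and problem files.

    `few_shot_examples` is a list of (natural_language, pddl_domain, pddl_problem) triples;
    their content and the instruction wording are illustrative only.
    """
    parts = ["Task: Translate the natural language planning instruction into PDDL domain and problem files."]
    for i, (nl, domain, problem) in enumerate(few_shot_examples, start=1):
        parts.append(f"Example {i}:\nInstruction: {nl}\nDomain file:\n{domain}\nProblem file:\n{problem}")
    parts.append(f"Instruction: {nl_instruction}\nDomain file:")
    return "\n\n".join(parts)

# pddl_text = call_llm(build_translator_prompt(examples, "Stack block b1 on block b2 ..."))
# `call_llm` and `examples` are placeholders for the LLM API wrapper and the Appendix A.1 examples.
```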
### Planning with LLM Planner
Once the natural language input is translated, the LLM planner takes these PDDL files as inputs and determines an action sequence aimed at achieving the given task (see Fig. 1). In addition to few-shot in-context learning, we also integrate the Chain-of-Thought (CoT) technique Wei et al. (2022) into the prompts provided to the LLM planner. CoT operates by decomposing the overall problem into intermediate steps, thus enabling the LLM to tackle complex reasoning problems that may not be solvable via standard prompting methods. An illustrative example of the prompt presented to the LLM planner is given in Fig. 2, and a comprehensive list of all the employed few-shot examples is accessible in Appendix A.2.
Within this step, we obtain an initial action plan for addressing the given planning problem. Subsequently, as detailed in the next subsection, such an initial plan is examined by a validator. Utilizing the feedback received from the validator, the LLM planner performs a self-refinement to find a new plan that attempts to correct erroneous actions.
### Iterative Self-Refinement Loop with Validator
The central component of the iterative self-refinement loop is the validator, as demonstrated in Fig. 1. Through the examination of the generated action sequence, the validator constructs feedback, pinpointing any actions considered incorrect, and subsequently conveys this information to the LLM planner. Then based on the feedback, the LLM planner initiates a self-refinement process to rectify the incorrect action and devise a new action plan. Note that, while the generated action sequence may contain multiple errors, analyzing actions subsequent to the initial error is often unnecessary, since the first error could potentially render the foundation of all ensuing actions fundamentally flawed. Thus, the self-refinement process is executed iteratively within a loop, where in each step, the validator stops at the first identified error. The information concerning this error is then returned, ensuring that each iterative stage is solely focused on rectifying this detected mistake. The iterative
self-refinement loop persists until either the validator identifies no errors or a predefined maximum number of iterations is reached. The action sequence, resulting from the iterative self-refinement loop, is then accepted as the final generated action sequence.
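The loop described above can be summarized in a short sketch; `planner`, `validator`, and the feedback format are assumed interfaces standing in for the LLM planner and either validator type.

```python
def iterative_self_refinement(planner, validator, pddl_domain, pddl_problem, max_iters=5):
    """Generate an initial plan, then repeatedly validate and revise it.

    `planner.initial_plan` and `planner.refine` stand for LLM calls with the planning and
    self-refinement prompts; `validator.first_error` returns None if no error is found,
    otherwise (index, message) for the first erroneous action. All interfaces are assumed.
    """
    plan = planner.initial_plan(pddl_domain, pddl_problem)
    for _ in range(max_iters):
        error = validator.first_error(pddl_domain, pddl_problem, plan)
        if error is None:                    # no error identified: accept the plan
            break
        plan = planner.refine(plan, error)   # re-prompt the LLM planner with the feedback
    return plan
```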
We consider two types of validators: a self-validator, which employs the LLM to assess the correctness of the generated action plan, and an external validator, which leverages external tools for performing the analysis. It is worth mentioning that, although the external validator is capable of providing accurate feedback on the feasibility of the generated plan, its implementation often demands a considerable amount of effort and may be unavailable for certain tasks. Conversely, the usage of an LLM as an internal self-validator economizes both time and effort. However, it has the inherent risk of possibly yielding imprecise or even erroneous feedback. The selection of the validator type, therefore, hinges upon the specific evaluation requirements and the context of the validation scenario.
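For intuition, an external validator for the Blocksworld domain of Section 5.1 can simply simulate the four actions and stop at the first violated precondition. The state encoding below is an assumption chosen for illustration; it is not the custom validator implementation used in the experiments.

```python
def first_error_blocksworld(init_on, init_clear, init_table, plan):
    """Return (step, message) for the first infeasible action, or None if all preconditions hold.

    Assumed state encoding: on[b] = c if block b sits on block c, `table` is the set of blocks
    on the table, `clear` the set of blocks with nothing on top, `holding` the block in the hand.
    Actions are tuples such as ("pick-up", "b1") or ("stack", "b1", "b2").
    """
    on, clear, table, holding = dict(init_on), set(init_clear), set(init_table), None
    for step, action in enumerate(plan, start=1):
        name, *args = action
        if name == "pick-up":
            (b,) = args
            if holding is not None or b not in table or b not in clear:
                return step, f"cannot pick up {b}"
            table.discard(b); clear.discard(b); holding = b
        elif name == "put-down":
            (b,) = args
            if holding != b:
                return step, f"not holding {b}"
            table.add(b); clear.add(b); holding = None
        elif name == "unstack":
            b, c = args
            if holding is not None or on.get(b) != c or b not in clear:
                return step, f"cannot unstack {b} from {c}"
            del on[b]; clear.discard(b); clear.add(c); holding = b
        elif name == "stack":
            b, c = args
            if holding != b or c not in clear:
                return step, f"cannot stack {b} on {c}"
            on[b] = c; clear.discard(c); clear.add(b); holding = None
        else:
            return step, f"unknown action {name}"
    return None
```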
An example of the prompts provided to the LLM-based self-validator is shown in Fig. 2, where few-shot learning and CoT techniques are also employed. All examples used for the experimental domains explored in this work are given in Appendix A.3.
## 5 Experimental Results
To evaluate the performance of ISR-LLM in long-horizon sequential task planning, we perform experiments across three diverse planning domains. Moreover, we also investigate the influence of different LLMs on the performance of ISR-LLM, as well as the impact of the LLM translator. A detailed explanation of the experimental setup and results is provided in the following subsections.
Figure 2: Examples of the prompts used in ISR-LLM. The prompt provided to the LLM contains two parts: the few-shot examples (shaded with a yellow color) and the actual question (blue). Details about the few-shot examples are given in Appendix A. The texts shaded with a green color represent the LLM’s responses. The LLM translator first converts the natural language instructions into PDDL domain and problem files. Then, an initial plan is generated using the translated files, which is subsequently revised through an iterative self-refinement process.
### Experimental Setup
We utilize the following three planning domains as benchmark problems to evaluate the performance of ISR-LLM. These domains are derived from existing literature and are extensively employed in planning research Liu et al. (2023); Silver et al. (2023); Valmeekam et al. (2022); Silver et al. (2022). Detailed examples about each planning domain are presented in Appendix A.
* _Cooking:_ There are \(n\) pots and a total of 6 different ingredients (see Fig. 2(a)). The robot's task is to add ingredients to each pot according to a prescribed recipe. Each pot possesses its own randomly generated recipe, which stipulates the inclusion of 2 to 4 different ingredients. The robot has three actions: picking up an ingredient, putting down an ingredient, and adding the ingredient to a pot. A constraint that must be fulfilled is that each ingredient may only be retrieved once by the robot, i.e., once the robot has picked up an ingredient, it must distribute it to all pots that require this ingredient as per their individual recipes.
* _Blocksworld:_ There are \(n\) blocks, initially randomly placed on a table. The objective of the robot is to assemble these blocks into a stack, adhering to a specific prescribed order (see Fig. 2(b)). The robot has four actions: picking up a block that is on the table, putting down a block that is currently in its hand onto the table, unstacking a block from the top of another block to hold it in its hand, and stacking the block that is currently in its hand on top of another block. However, the robot can only manipulate one block at a time, i.e., any block that has other blocks situated on top of it is considered fixed.
* _Ball Moving:_ There are \(n\) balls, initially randomly distributed among 4 rooms (see Fig. 2(c)). The robot needs to relocate the balls to their predefined goal rooms, with the constraint that it can hold no more than one ball at a time. The robot has three actions: picking up a ball, putting down a ball, and moving from its current room to another room.
Figure 3: Three planning domains used in this work.
For all three planning domains, we investigate two specific cases with \(n=3\) and \(n=4\), to examine the influence of the number of objects, which is directly correlated with the complexity of the task, on the performance of the proposed ISR-LLM framework. Furthermore, to evaluate the impacts of various LLMs on the planning outcomes, we employ two LLMs, namely GPT3.5 and GPT4, and compare their capabilities in task planning within the ISR-LLM framework.
For each planning task, we evaluate three different methods: (1) _LLM-direct_, which is the baseline approach grounded in Silver et al. (2023, 2022); Valmeekam et al. (2022). It leverages the LLM to formulate an action plan directly from the given PDDL input. To ensure a fair comparison with ISR-LLM, we utilize the LLM translator to convert natural language inputs into PDDL files in this method. (2) _ISR-LLM-self_, which employs the ISR-LLM framework with an LLM-based self-validator; (3) _ISR-LLM-external_, which incorporates an external validator to generate feedback for ISR-LLM. In order to mitigate the influence of existing PDDL validators and focus on analyzing the performance of ISR-LLM, we implement our own custom external validators in this work.
We randomly generate 30 unique cases with varying initial states and goal conditions for each planning task. The few-shot examples used for the LLM translator, the LLM planner, and the LLM-based self-validator are given in Appendix A. All of the LLM's responses from the experiments are available on our website1. The success rates of task accomplishments for the three aforementioned methods are recorded. All experiments are conducted on a laptop equipped with an Intel(R) Core(TM) i7-10870H CPU @ 2.20GHz Processor with 8 CPUs, and an NVIDIA RTX 3080 Max-Q GPU with 16 GB VRAM. The detailed results are presented in the next subsection.
Footnote 1: [https://github.com/zhehuazhou/ISR-LLM](https://github.com/zhehuazhou/ISR-LLM)
### Performance of ISR-LLM
The results of the experiments are summarized in Table 1. In the cases utilizing GPT3.5, the proposed ISR-LLM framework demonstrates a notable enhancement in success rates across all planning domains when compared to the baseline approach. While the LLM-based self-validator contributes to an approximate \(15\%\) increase in performance, the external validator can further amplify the success rate by roughly \(40\%\) to \(50\%\). The only exception occurs in the case \(n=4\) for the Cooking domain, where a \(23\%\) increase is observed. This might be attributed to the excessive number of required actions in this planning task, rendering LLMs less effective at correcting errors.
The success rates are also influenced by task complexity, as indicated by the number of objects. Increases in object numbers correspond to decreased success rates in the Cooking, Blocksworld, and Ball Moving domains for all three approaches (LLM-direct: \(-7\%\), \(-10\%\), \(-16\%\); ISR-LLM-self: \(-14\%\), \(-20\%\), \(-23\%\); ISR-LLM-external:\(-37\%\), \(-17\%\), \(-13\%\)). This trend reflects the increased difficulty in rectifying erroneous actions as the planning horizon extends. Moreover, the success rate varies among planning domains. Compared to the Cooking and the Ball Moving domains, the Blocksworld domain, which demands more sophisticated logical thinking, demonstrates lower success rates. Nevertheless, the proposed ISR-LLM is still able to improve the planning outcomes within this domain.
It can also be observed that GPT4 greatly outperforms GPT3.5 in long-horizon sequential task planning, corroborating the common assertion that GPT4 possesses a markedly superior reasoning capability. The baseline method, i.e., LLM-direct, when coupled with GPT4, is able to achieve a success rate exceeding \(90\%\) in the Cooking and the Ball Moving domains, where ISR-LLM also maintains this high performance level. However, in the more logically complex Blocksworld domain, GPT4 demonstrates diminished performance using the baseline approach. Nevertheless,
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline
 & \multicolumn{3}{c|}{GPT3.5} & \multicolumn{3}{c}{GPT4} \\
Planning domain & LLM-direct & ISR-LLM-self & ISR-LLM-external & LLM-direct & ISR-LLM-self & ISR-LLM-external \\ \hline
Cooking (\(n=3\)) & 47\% & 67\% & 100\% & 100\% & 100\% & 100\% \\ \hline
Cooking (\(n=4\)) & 40\% & 53\% & 63\% & 100\% & 100\% & 100\% \\ \hline
Blocksworld (\(n=3\)) & 20\% & 37\% & 70\% & 43\% & 60\% & 97\% \\ \hline
Blocksworld (\(n=4\)) & 10\% & 17\% & 53\% & 40\% & 60\% & 80\% \\ \hline
Ball Moving (\(n=3\)) & 33\% & 50\% & 70\% & 93\% & 100\% & 100\% \\ \hline
Ball Moving (\(n=4\)) & 17\% & 27\% & 57\% & 90\% & 93\% & 97\% \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Success rate of ISR-LLM in different planning domains.
the employment of ISR-LLM also elevates the success rate for this domain, with the self-validator contributing an increase of about \(20\%\), and the external validator enhancing it by more than \(40\%\). Interestingly, the influence of the number of objects appears to be less pronounced when GPT4 is utilized. This may be attributed to GPT4's enhanced reasoning capabilities, which facilitate more effective logical thinking, and thereby mitigate the impact of the number of objects on the results.
### Influence of the LLM Translator
We also evaluate the influence of the LLM translator using the Blocksworld domain with \(n=3\) and GPT3.5 as an example, as this case demonstrates where the efficacy of ISR-LLM is most obvious. By omitting the LLM translator and directly utilizing natural language input, we compare the success rates of task planning and present the results in Table 2. It can be observed that, while the LLM translator slightly improves the planning performance of the baseline approach, the self-validator greatly benefits from the translator, showing a \(20\%\) increase in the success rate. The reason could be that the translated PDDL files offer a symbolic and logical representation of the planning domain, thereby allowing the LLM to form a more concrete understanding of the system state, as opposed to relying solely on linguistic cues. In contrast, the performance of the external validator remains relatively consistent, irrespective of the presence of the LLM translator. This consistency arises from our custom validator's ability to provide accurate feedback, whether PDDL formulations are employed or not. However, as previously mentioned, introducing translated PDDL files enables the usage of existing PDDL validators, potentially saving substantial time and effort needed for implementing a custom validator.
\begin{table}
\begin{tabular}{c|c|c} \hline
Method & With LLM Translator & Without LLM Translator \\ \hline
LLM-direct & 20\% & 13\% \\ \hline
ISR-LLM-self & 36\% & 16\% \\ \hline
ISR-LLM-external & 70\% & 63\% \\ \hline
\end{tabular}
\end{table}
Table 2: Success rate of ISR-LLM with and without the LLM translator in Blocksworld domain with \(n=3\) and GPT3.5.
Figure 4: Grounding of actions in the Blocksworld domain with four blocks. Initially, block b2 (red), b3 (green), b4 (pink) are on the table, and block b1 (blue) is on top of block b2. The goal is to stack the blocks in the given order: b4 on b1, b1 on b3, b3 on b2, and b2 on the table.
### Grounding the Actions
Although it is beyond the scope of this work, we further demonstrate that the generated action plan can be directly grounded into feasible robot actions when paired with a suitable motion planner. This highlights another advantage of employing the LLM translator within the ISR-LLM framework, as the use of PDDL formulation ensures that each generated action conforms to a predefined definition and structure. Consequently, this simplifies the task of the motion planner in converting the action plan into executable robot movements. Figure 4 illustrates this grounding process, using an example from the Blocksworld domain with four blocks. Here, a pick-and-place controller is employed to execute the four different types of actions, assuming the robot knows the locations of the blocks. The simulation is conducted in NVIDIA Omniverse Isaac Sim2.
Footnote 2: [https://developer.nvidia.com/isaac-sim](https://developer.nvidia.com/isaac-sim)
## 6 Discussion
Self-Validator and External Validator: Generally, the external validator is capable of providing feedback to a degree of precision that identifies the exact action in which an error resides. Conversely, the self-validator usually only provides an overarching estimation regarding the correctness of the entire generated action plan. As a consequence, the external validator often leads to superior performance, as precise feedback can greatly facilitate the correction of erroneous actions. This benefit becomes more obvious as the planning horizon extends, or when complex logical thinking is demanded. However, as aforementioned, the external validator requires additional design and implementation effort. In contrast, the self-validator is advantageous in that it can be easily and directly employed without necessitating extra work. Therefore, the selection between these validator types should be carefully considered in light of the specific task requirements and the resources available.
Planning Domains: The planning capabilities of LLMs are influenced by the inherent characteristics of the planning domains. As observed from our experimental results, LLMs appear to excel in planning tasks that focus on adhering to specific instructions, such as Cooking, or performing repeated actions with identifiable patterns, e.g., Ball Moving. Conversely, when the planning tasks demand more complex logical thinking, as seen in the Blocksworld domain, their planning performance tends to diminish. This phenomenon is more pronounced in the GPT4 cases. The underlying reason could be that LLMs are essentially trained to generate word sequences that mirror human-like thought processes, which suits tasks requiring instruction or pattern following. However, when critical logical reasoning becomes a vital component of the task, the inherent reasoning abilities of the LLMs become more important. This suggests that enhancing the reasoning capabilities of LLMs could be a priority when aiming to utilize them as planners for more intricate planning tasks.
Limitations: One limitation of the current LLM-based planners - even with the proposed ISR-LLM framework - is that the overall success rate often fails to exceed that of traditional search-based planners. However, as an initial exploratory work, we demonstrate the potential of utilizing an LLM as a versatile and task-agnostic planner. This could significantly facilitate the deployment of various robotic systems across diverse scenarios and minimize the required effort in planning system design. Moreover, the planning abilities of the ISR-LLM framework may see substantial improvements through refinements in the underlying reasoning capabilities of the LLMs. This could be potentially achieved through parameter fine-tuning technologies, such as integrating a fine-tuned LLM specifically designed for task planning. Another limitation stems from the inherent randomness within LLMs, complicating assurances such as correctness or constraint satisfaction in the generated action plan. Therefore, the employment of LLMs may be inappropriate for certain tasks, especially those that are safety-critical.
## 7 Conclusion
In this paper, we explore the potential of leveraging LLMs for long-horizon sequential task planning based on natural language input. To improve the correctness of the generated action plan, we introduce the ISR-LLM framework, which employs an iterative self-refinement approach for automatic plan
revisions. This framework consists of three steps. First, an LLM translator converts the natural language input into a PDDL formulation, represented by PDDL files. Second, using these translated PDDL files, an LLM planner formulates an initial action plan. Third, an iterative self-refinement loop is initiated, wherein either an LLM-based self-validator or an external validator provides feedback on the correctness of the action plan, allowing the LLM planner to make necessary revisions to the action plan. Through extensive experiments across three diverse planning domains, we demonstrate that ISR-LLM surpasses the performance of existing state-of-the-art LLM-based planners in long-horizon sequential task planning. While maintaining the flexibility and generalizability to work with natural language input, our ISR-LLM framework consistently achieves high success rates in task accomplishments. For future work, we plan to incorporate motion planning within the ISR-LLM framework, aiming to facilitate reliable and efficient task and motion planning across various robotic application scenarios.
|
2310.09536 | CarExpert: Leveraging Large Language Models for In-Car Conversational
Question Answering | Large language models (LLMs) have demonstrated remarkable performance by
following natural language instructions without fine-tuning them on
domain-specific tasks and data. However, leveraging LLMs for domain-specific
question answering suffers from severe limitations. The generated answer tends
to hallucinate due to the training data collection time (when using
off-the-shelf), complex user utterance and wrong retrieval (in
retrieval-augmented generation). Furthermore, due to the lack of awareness
about the domain and expected output, such LLMs may generate unexpected and
unsafe answers that are not tailored to the target domain. In this paper, we
propose CarExpert, an in-car retrieval-augmented conversational
question-answering system leveraging LLMs for different tasks. Specifically,
CarExpert employs LLMs to control the input, provide domain-specific documents
to the extractive and generative answering components, and controls the output
to ensure safe and domain-specific answers. A comprehensive empirical
evaluation exhibits that CarExpert outperforms state-of-the-art LLMs in
generating natural, safe and car-specific answers. | Md Rashad Al Hasan Rony, Christian Suess, Sinchana Ramakanth Bhat, Viju Sudhi, Julia Schneider, Maximilian Vogel, Roman Teucher, Ken E. Friedl, Soumya Sahoo | 2023-10-14T08:46:24Z | http://arxiv.org/abs/2310.09536v1 | # CarExpert: Leveraging Large Language Models for
###### Abstract
Large language models (LLMs) have demonstrated remarkable performance by following natural language instructions without fine-tuning them on domain-specific tasks and data. However, leveraging LLMs for domain-specific question answering suffers from severe limitations. The generated answer tends to hallucinate due to the training data collection time (when using off-the-shelf), complex user utterance and wrong retrieval (in retrieval-augmented generation). Furthermore, due to the lack of awareness about the domain and expected output, such LLMs may generate unexpected and unsafe answers that are not tailored to the target domain. In this paper, we propose CarExpert, an in-car retrieval-augmented conversational question-answering system leveraging LLMs for different tasks. Specifically, CarExpert employs LLMs to control the input, provide domain-specific documents to the extractive and generative answering components, and controls the output to ensure safe and domain-specific answers. A comprehensive empirical evaluation exhibits that CarExpert outperforms state-of-the-art LLMs in generating natural, safe and car-specific answers.
## 1 Introduction
Conversational question answering (CQA) has recently gained increased attention due to the advancements of Transformer-based Vaswani et al. (2017) large language models (LLMs). These LLMs Devlin et al. (2019); Brown et al. (2020); OpenAI (2023); Touvron et al. (2023) are nowadays widely adopted for performing question answering in both open-domain and domain-specific settings Robinson and Wingate (2023). As the source of additional knowledge, conversational question answering systems are typically provided with text paragraphs Kim et al. (2021); Rony et al. (2022), and knowledge graphs Rony et al. (2022); Chaudhuri et al. (2021) for generating informative dialogues in a domain-specific setting, where such systems typically engage in a multi-turn interaction with a user in the form of speech or text. Figure 1 demonstrates a conversation between a user and a conversational question answering system (CarExpert) in a BMW car.
Leveraging LLMs end-to-end has several drawbacks Liang et al. (2022); Srivastava et al. (2023); OpenAI (2023). **Firstly**, the generated answer is often hallucinated as the knowledge from the pre-trained weights of LLMs is limited to their training data collection time Ji et al. (2022). Furthermore, retrieval-augmented answer generation suffers from hallucination as well, due to wrong retrieval, complexity of the user utterance and retrieved document. **Secondly**, LLMs can be exploited using adversarial instructions that may lead the system to ingest malicious input and generate unsafe output Perez and Ribeiro (2022); Greshake et al. (2023). In the context of a car, the aforementioned downsides imply that the answer could lead to unsafe handling of the vehicle due to a lack of instructions, preservation, warning messages, or appropriate information; or by providing erroneous or confusing information.
Addressing the aforementioned issues, in this paper we propose CarExpert, an in-car conversational question-answering system, powered by LLMs. CarExpert is a modular, language model agnostic, easy to extend and controllable conversational question-answering system developed to work on
Figure 1: Illustration of a multi-turn in-car conversation between a user (in gray) and CarExpert (in blue).
the text level. At a high level, CarExpert performs question answering in two steps. First, given a user utterance, it retrieves domain-specific relevant documents wherein the potential answer may exist. Second, for predicting the answer, CarExpert employs both extractive and generative answering mechanisms. Specifically, there are four sub-tasks involved in the overall process: 1) orchestration, 2) semantic search, 3) answer generation, and 4) answer moderation. Furthermore, CarExpert tackles unsafe scenarios by employing control mechanisms in three ways: i) in the _Orchestrator_ using an input filter, ii) by defining prompts for controlling LLM-based answer generation, and iii) by an output filter in the _Answer Moderator_. Furthermore, CarExpert employs a heuristic during answer moderation to select answers from multiple models (extractive and generative) and provide the user with the potential best answer as the output. To facilitate voice-based user interaction in the car for real-life use, we encapsulate CarExpert with text-to-speech and speech-to-text services. Figure 2 depicts a high-level overview of the CarExpert architecture. Such a modular design allows CarExpert to be flexibly integrated with various types of interfaces, such as a web browser or a mobile app (i.e., BMW App).
To assess the performance of CarExpert we conduct exhaustive evaluations (both qualitative and quantitative). An empirical evaluation exhibits that CarExpert outperforms off-the-shelf state-of-the-art LLMs in in-car question answering. The contributions of this paper can be summarized as follows:
* We introduce CarExpert, a modular, language model agnostic, safe and controllable in-car conversational question answering system.
* A novel answer moderation heuristic for selecting a potential best answer from multiple possible outputs.
* A comprehensive empirical evaluation, demonstrating the effectiveness of CarExpert over the state-of-the-art LLMs for in-car conversational question answering.
## 2 Approach
CarExpert aims to generate domain-specific document-grounded answers. The task is divided into four sub-tasks: 1) Orchestration, 2) Semantic Search, 3) Answer Generation, and 4) Answer Moderation. We describe the sub-tasks below.
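The following sketch shows how the four sub-tasks compose into a single answer path; every component interface (function and attribute names) is hypothetical and only meant to mirror the data flow of Fig. 2.

```python
def answer_user_utterance(utterance, history, retriever, llm, reader, moderator):
    """End-to-end flow of the four sub-tasks; all component interfaces here are assumed.

    1) Semantic Search retrieves top-3 documents, 2) Orchestration controls the input,
    3) extractive and generative answers are produced, 4) the Answer Moderator picks the output.
    """
    paragraphs = retriever.top_k(utterance, k=3)                   # semantic search
    decision = llm.orchestrate(utterance, paragraphs)              # input control / clarification
    if decision.blocked or decision.needs_clarification:
        return decision.message
    extracted = reader.extract(utterance, paragraphs)              # extractive answer
    generated = llm.generate(utterance, history, paragraphs)       # generative answer (Eq. 1)
    return moderator.select(utterance, paragraphs, extracted, generated)  # answer moderation
```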
### Orchestration
A prompt-based _Orchestrator_ component is incorporated in CarExpert to tackle unsafe content and deal with multi-turn scenarios. Depending on the user utterance, CarExpert can also respond, e.g., by saying that it does not have enough information, or by asking a clarification question, since the system is designed to only answer questions about the car. Thus, the _Orchestrator_ controls the input in CarExpert. The prompt used for this purpose is as follows:
_Task: Given a question and paragraphs:_
1. _For unsafe or harmful questions, politely decline to answer as they are out of context. Stop any further generation._
2. _Flag any unsafe or harmful questions by politely stating that you cannot provide an answer. Stop any further generation._
3. _If the question is safe and relevant, suggest a clarification question that demonstrates comprehension of the concept and incorporates information from the provided paragraphs. Start the question with "Do you mean"._
4. _If unsure about suggesting a specific clarification question, politely request more information to provide an accurate response. Stop any further generation._
Figure 2: High level overview of the CarExpert system architecture.
_Question_: {user utterance} _Paragraphs_: {paragraphs} _Answer:_
where user utterance represents the current turn's user utterance and paragraphs the top-3 retrieved documents obtained from the semantic search (discussed in Section §2.2).
### Semantic Search
For efficient and fast semantic search of the relevant documents, CarExpert pre-processes data and parses clean contents from various curated sources (owners' manuals, self-service FAQs, car configurator feature descriptions and press club publications) utilizing a data pipeline (more details in the Appendix A.1.1). The parsed data is utilized in two different ways. Firstly, we put humans in the loop to obtain high-quality, domain-expert-annotated question-answer pairs for training an answer extraction model (discussed in Section 2.3.1). Secondly, the vector representation of the text is indexed only once as a pre-processing step to facilitate fast _Semantic Search_ over a large set of text during the inference (see Figure 3). In the next step, LLMs are fed with the top-3 retrieved documents for answer generation. We use the terms 'document' and 'paragraph' interchangeably throughout this paper.
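A minimal dense-retrieval sketch of this offline indexing and top-3 lookup is shown below; the `embed` function is a placeholder for a sentence-embedding model (the retriever actually used in CarExpert is a fine-tuned DPR model, see Section 4).

```python
import numpy as np

def build_index(documents, embed):
    """Embed and L2-normalise all documents once, as an offline pre-processing step.
    `embed` maps a list of strings to a (num_texts, dim) array; it is a placeholder here."""
    vectors = np.asarray(embed(documents), dtype=np.float32)
    vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
    return documents, vectors

def top_k(query, index, embed, k=3):
    """Return the k documents with the highest cosine similarity to the query."""
    documents, vectors = index
    q = np.asarray(embed([query]), dtype=np.float32)[0]
    q /= np.linalg.norm(q)
    scores = vectors @ q
    best = np.argsort(-scores)[:k]
    return [documents[i] for i in best]
```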
### Answer Generation
CarExpert employs both extractive and generative models to get answers for the same user utterance. The answer generation step is controlled by instructing the LLM using prompts and next by an _Answer Moderator_ component. It selects the best answer based on an extraction ratio-based heuristic (discussed in Section 2.4). We describe the answer generation methods in the following sections.
#### 2.3.1 LLM-based Answer generation
In this step, CarExpert takes off-the-shelf GPT-3.5-turbo and instructs it in a few-shot manner for answer generation based on the current user utterance, retrieved documents and the dialogue history. The probability distribution of generating a response can be formally defined as:
\[p(S_{t}|\mathcal{P};\mathcal{H};\mathcal{Q})=\prod_{i=1}^{n}p(s_{i}|s_{<i}, \mathcal{P};\mathcal{H};\mathcal{Q},\theta), \tag{1}\]
where \(S_{t}\) is the generated answer, \(\mathcal{P}\) is the prompt, \(\mathcal{H}\) is the dialogue history, \(\mathcal{Q}\) is the user utterance in the current turn, \(\theta\) is model parameters, and \(n\) is the length of the response. Here, \(";"\) indicates a concatenation operation between two texts. Depending on the type of questions that the user may ask, the generation task is split into two major categories: 1) Abstractive Summarization and 2) Informal Talk. We design separate prompt templates for both the categories to handle various types of user utterances. We provide a brief description of both the categories below.
_i. Abstractive Summarization:_ We design a prompt template to handle information seeking user utterances that can be answered from the semantic search results where the template aims to generate the answer in a natural sentence. The abstractive summarization template is as follows:
_Task: Answer questions about the car given the following context and dialog. Answer always helpful. Answer in complete sentences. Don't use more than two sentences. Extract the answer always from the context as literally as possible. Dialogue 1:_{example dialogue 1}
_Dialogue 6: Context:_ {top paragraphs, dialogue history} _User:_{user utterance} _System:_
where example dialogue 1 is a variable that represents a complete multi-turn conversation. Each dialogue may contain 1 to 5 user-system utterance pairs. The variables top paragraphs and dialogue history represent top-3 paragraphs from the semantic search results and the complete dialogue history such as adjacent user-system pairs, respectively. Furthermore, user utterance indicates the current user utterance that the system needs to answer.
_ii. Informal Talk_: A conversational AI system not only deals with information-seeking utterances but also needs to tackle follow-up questions, clarifications, commands, etc. which makes the conversation engaging and natural. To tackle various forms of user utterances we design an _Informal Talk_ template as follows:
_Task_: _Answer the user feedback in a friendly and positive way. When asked about factual knowledge or about your opinion, just say that you can't answer these questions. Please never answer a question with a factual statement. If a question is about something else than the car, you may append a 'Please ask me something about the car'. Dialogue 1:_{example dialogue 1}
_Dialogue 20: User:_{user utterance} _System:_
In the _Informal Talk_ template we provide 20 example dialogues covering various forms of user utterance. In this way, both the abstractive summarization and informal talk templates leverage a pre-trained large language model in a few-shot manner to generate natural and engaging dialogues. The prompt templates are stored in the _Prompt Template Store_.
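As a small illustration of Eq. (1) in code, the selected template can be filled with the retrieved context, the dialogue history and the current utterance and passed to the LLM; `call_llm` and `template` below are placeholders for the API wrapper and the full prompt text.

```python
def generate_answer(utterance, history, paragraphs, call_llm, template):
    """Fill a generation template with retrieved context, dialogue history and the current
    user utterance, then sample an answer from the LLM.
    `call_llm` is an assumed wrapper around a chat/completion endpoint; `history` is assumed
    to be a list of (speaker, text) pairs."""
    context = "\n".join(paragraphs)
    dialogue = "\n".join(f"{speaker}: {text}" for speaker, text in history)
    prompt = template.format(context=context, history=dialogue, user=utterance)
    return call_llm(prompt)
```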
#### 2.3.2 Answer extraction
In CarExpert, we investigate two different answer extraction methods:
i. Machine Reading Comprehension Reader: Given a user utterance and a document, the task of an MRC _Reader_ model is to predict a continuous text span from the provided document that answers the user question. We fine-tune an Albert (Lan et al., 2020) model for the answer extraction task.
ii. LLM-based Reader: Engineering prompts is a popular way to instruct LLMs how to leverage their knowledge to solve downstream NLP tasks. In this approach, we leverage the pre-trained knowledge of LLMs, contained in their parameters, to perform the same answer extraction task as the MRC _Reader_. However, in this case CarExpert does not need training data to perform the answer extraction. Specifically, we design a prompt that instructs the LLM to perform answer extraction as literally as possible using both the question and the top-3 paragraphs from the semantic search results. The prompt template is as follows:
_Task: Given the following question and paragraphs, extract exactly one continuous answer span from only one of the paragraphs. Question: {user utterance} Paragraphs: {paragraphs} Answer:_
During the inference, the variables user utterance and paragraphs are replaced with the actual user utterance and top three paragraphs retrieved from the semantic search.
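For the MRC-style Reader, a span-extraction call could look like the sketch below, which runs the reader over each retrieved paragraph and keeps the highest-scoring span; the checkpoint path is a placeholder for the fine-tuned Albert reader, and this is not necessarily the authors' inference code.

```python
from transformers import pipeline

# The model identifier is hypothetical; in practice it would point to the fine-tuned Albert reader.
reader = pipeline("question-answering", model="path/to/fine-tuned-albert-reader")

def extract_answer(question, paragraphs):
    """Run the reader on each retrieved paragraph and keep the highest-scoring span."""
    candidates = [reader(question=question, context=p) for p in paragraphs]
    best = max(candidates, key=lambda c: c["score"])
    return best["answer"], best["score"]
```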
### Answer Moderation
An _Answer Moderator_ component selects the best answer given the user utterance and the potential answers (extractive and generative). We investigate the following two techniques for answer moderation.
i. Cosine Similarity: This approach measures the semantic similarity between the user utterance and a candidate system response. The answer with the higher similarity score is selected as the system response. Formally, the answer selection can be defined as \(\max(\mathrm{cosine}(\vec{a}_{ex},\vec{\mathcal{Q}}),\mathrm{cosine}(\vec{a}_{g},\vec{\mathcal{Q}}))\), where \(\vec{a}_{ex}\), \(\vec{a}_{g}\), and \(\vec{\mathcal{Q}}\) are the embedding representations of the extracted answer, the generated answer and the user utterance.
ii. Extraction Score: This is a weighted Levenshtein-distance-based heuristic that measures how syntactically close the system response is to the retrieved paragraphs. Formally, the Extraction Score (ES) can be defined as:
\[ES=\frac{1}{n}\sum_{i=1}^{n}\left(1-\frac{\mathrm{dist}(x,y_{i})}{\max(|x|,|y_{i}|)}\right), \tag{2}\]
where \(x\) is the generated answer, \(y_{i}\) is the \(i\)th paragraph and \(n\) is the number of paragraphs. The cost of edit operation is computed by \(dist(\cdot)\). This moderation technique allows CarExpert to generate a controlled and document grounded answer by (i) grounding the system response to the retrieved documents, and (ii) filtering out incorrect and hallucinated responses. More details on the edit operations can be found in Appendix A.5.
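A direct implementation of Eq. (2) is sketched below with unit edit costs (the paper weights the edit operations, see Appendix A.5); the final helper selects between the extracted and the generated candidate as the moderator does.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance with unit insert/delete/substitute costs (a minimal DP, for illustration)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def extraction_score(answer: str, paragraphs) -> float:
    """ES = (1/n) * sum_i (1 - dist(answer, y_i) / max(|answer|, |y_i|)), following Eq. (2)."""
    return sum(1 - levenshtein(answer, y) / max(len(answer), len(y)) for y in paragraphs) / len(paragraphs)

def moderate(extracted: str, generated: str, paragraphs) -> str:
    """Pick the candidate answer that is syntactically closer to the retrieved paragraphs."""
    return max((extracted, generated), key=lambda ans: extraction_score(ans, paragraphs))
```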
## 3 Experimental Setup
Data: The reader and retriever models in CarExpert are fine-tuned and evaluated on car-specific data from various sources (owners' manuals, self-service FAQs, car configurator feature descriptions and press club publications).

Figure 3: Semantic search during the inference (the vector space is depicted as a vector database for demonstration). The potential answer to the question is encapsulated in the box of retrieved document **A**.
Baselines: We choose Dense Passage Retriever (DPR) Karpukhin et al. (2020), BM25 Robertson et al. (2009), Sentence-transformer Reimers and Gurevych (2019) and SPLADE Formal et al. (2022) as the baseline retrievers. For answer generation we experiment with Albert Lan et al. (2020) (extractive), GPT-3.5-turbo 1 (generative) and Luminous-extended 2 (generative).
Footnote 1: [https://openai.com/](https://openai.com/)
Footnote 2: [https://www.aleph-alpha.com/](https://www.aleph-alpha.com/)
Metrics: To measure the performance of the _Retriever_ we use Mean Reciprocal Rank (MRR@3). For evaluating the extractive _Reader_, we utilize token-level metrics, such as F1-Score and Exact Match (EM). Furthermore, we employ Cosine Similarity and METEOR Banerjee and Lavie (2005) to capture the similarity of the generated answer against the reference response.
Further details of the datasets, hyper-parameter settings, and metrics can be found in the Appendix, in A.1, A.3 and A.4 respectively.
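For reference, MRR@k over a set of queries can be computed as in the sketch below; the ranked lists and gold relevance sets are assumed to be available from the retriever and the annotations.

```python
def mrr_at_k(ranked_lists, relevant_ids, k=3):
    """Mean reciprocal rank truncated at k.

    `ranked_lists[q]` is the retriever's ranked document ids for query q and
    `relevant_ids[q]` the set of gold document ids; both structures are assumed.
    """
    total = 0.0
    for ranked, gold in zip(ranked_lists, relevant_ids):
        rr = 0.0
        for rank, doc_id in enumerate(ranked[:k], start=1):
            if doc_id in gold:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_lists)
```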
## 4 Experiments and Results
We conduct both qualitative and quantitative experiments to assess different parts contributing to the overall performance of CarExpert.
### Quantitative Analysis
Table 2 and Table 3 demonstrate that the fine-tuned DPR and fine-tuned Reader perform better than the baseline models in the corresponding tasks. The performance improvement may be attributed to their inherent capability of effectively learning and capturing the distribution and characteristics of the training data. In Table 2, we notice that the fine-tuned DPR outperforms the fine-tuned Sentence-transformer. The fine-tuned DPR model performs best in MRR@1, and hence we integrate DPR as the retriever used for semantic search in CarExpert.
From Table 4 we observe that GPT-3.5-turbo performs better than the Luminous-extended model since the former is a larger model and hence offers better representations and generalization.
Table 5 exhibits that the _Extraction Score_ does a better job in moderating and selecting the best answer, i.e., the one that aligns better with the retrieved documents. CarExpert incorporates the _Extraction Score_-based heuristic for answer moderation. The _Extraction Score_ technique is described in Appendix A.5.
### Qualitative Analysis
Table 1 demonstrates a qualitative comparison of answer generation between CarExpert (with documents) and GPT-3.5-turbo (with and without documents). When documents are provided, we instruct both models to answer from them. In the first case, without any documents GPT-3.5-turbo could not answer the question, whereas with the documents it generated a very long answer. Furthermore, its answer refers to a specific paragraph, e.g., "..The first paragraph mentions...", which is irrelevant to the user. CarExpert in this case correctly generated the expected answer. In the second case, we asked the system how to mount a child seat. Off-the-shelf GPT-3.5-turbo generated a generic answer from its pre-trained knowledge, which includes unnecessary detail such as "..Read the instruction...", and is not tailored to the target car brand. In contrast, although GPT-3.5-turbo with documents generated a better answer, it still includes irrelevant and lengthy details that are not suitable for in-car CQA (right column, 3rd row: items 1, 6 & 7). Overall, in both cases, CarExpert exhibits more precise answer prediction than off-the-shelf GPT-3.5-turbo with and without documents. Although CarExpert leverages GPT-3.5-turbo for answer generation, its carefully designed prompts help the system generate precise answers. Precise answers are suitable for real-time use in the car, where the user may find an unnecessarily detailed answer (such as those generated by GPT-3.5-turbo) very exhausting. More lemon- and cherry-picked examples can be found in Appendix D.
## 5 Discussions and Potential Impact
CarExpert is built in a modular fashion, which allows for expansion and adaptability to diverse industrial use cases. Furthermore, the proposed architecture enables the system to maintain, modify and scale the data more effectively. Moreover, a pipeline approach such as CarExpert improves the overall interpretability and debugging of a system. Finally, the introduced system is controllable and domain-specific as it allows for explicit control over the design and behavior of each of the
modules such as the _Orchestrator_ and answer generation. We anticipate that CarExpert will help other industrial use cases leverage LLMs in developing fine-grained and regulated conversational question answering systems.
## 6 Related Works
Large Language Models: Large language models (LLMs) such as GPT-3 Brown et al. (2020), PaLM Chowdhery et al. (2022), LaMDA Thoppilan et al. (2022), LLaMA Touvron et al. (2023) and GPT-4 OpenAI (2023) are capable of performing complex downstream tasks without being trained for those tasks. A different line of recent research focuses on controlling the behaviour of LLMs, such as NeMo-Guardrails 3. Inspired by humans' capability of following instructions in natural language, recent research works fine-tuned LLMs so that they can understand instructions in zero-shot or few-shot settings and perform a given task following the language instruction Wei et al. (2022); Taori et al. (2023); Brown et al. (2020); Rony et al. (2022); Schick and Schutze (2021); Prasad et al. (2023). In CarExpert, prompt-guided LLMs are employed to control various tasks of the answer generation process.
Footnote 3: [https://developer.nvidia.com/nemo](https://developer.nvidia.com/nemo)
Conversational Question Answering: Recent advancements of LLMs have significantly improved multi-turn question answering systems Chowdhery et al. (2022); Thoppilan et al. (2022); Zaib et al. (2021). However, in multi-task objectives these models lack robustness Liang et al. (2022); Srivastava et al. (2023).
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**User** & **CarExpert** (with paragraphs) & **GPT-3.5-turbo** (w/o paragraphs) & **GPT-3.5-turbo** (with paragraphs) \\ \hline
Does the car have a parking [...] & Yes, Park Assist supports the following functions: When parallel parking: Parking in reverse, parallel to the road. When perpendicular parking: Parking forward or in reverse, perpendicular to the road. When driving out of parallel parking spaces. When driving out of perpendicular parking spaces. & I’m sorry, but I would need more specific information about the car you’re referring to. There are many different car models with various features, and the availability of parking assist functions can vary between manufacturers and models. Can you provide me with the make and model of the car you’re interested in? & [...] \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Dialog examples with answers of CarExpert compared to GPT-3.5-turbo standard alone and GPT-3.5-turbo with relevant paragraphs provided through prompting. Longer texts are truncated and replaced by [...] for the demonstration purpose.

\begin{table}
\begin{tabular}{l c c} \hline \hline
**Retriever** & **MRR@1** & **MRR@3** \\ \hline
BM25 & 0.26 & 0.31 \\
Fine-tuned DPR & **0.47** & 0.52 \\
Fine-tuned Sentence-transformer & 0.42 & 0.49 \\
SPLADE & 0.44 & **0.53** \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Performance comparison of retriever models.

\begin{table}
\begin{tabular}{l c c} \hline \hline
**Reader** & **F1** & **EM** \\ \hline
Pre-trained Albert-large & 0.31 & 0.01 \\
Fine-tuned Albert-large & **0.60** & **0.21** \\
GPT-3.5-turbo & 0.51 & 0.14 \\
Luminous-extended & 0.36 & 0.01 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Evaluation results on the module: _Reader_.

\begin{table}
\begin{tabular}{l c c} \hline \hline
**Generator** & **Cos. Sim.** & **METEOR** \\ \hline
GPT-3.5-turbo & **0.68** & **0.38** \\
Luminous-extended & 0.52 & 0.14 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Performance of _LLM-based Generator_ models.

Table 5: Performance of _Answer Moderator_ approaches.
A different line of work (Daull et al., 2023) emphasised the need for hybrid approaches that take advantage of multiple learning models to better handle these limitations. Architectural compositions include LLM + semantic information retrieval (de Jong et al., 2023; Borgeaud et al., 2022), LLM + instruction tuning module (Khattab et al., 2022), LLM + Router (Xu et al., 2023), cascaded LLMs (Dohan et al., 2022), and LLM + RLHF/RLAIF (Ouyang et al., 2022; Bai et al., 2022). Despite significant progress over time, CQA systems still struggle with long-standing issues like hallucination, the ability to scale models and data, and formal reasoning.
## 7 Conclusion
We have introduced CarExpert, a new and controlled in-car conversational question-answering system powered by LLMs. Specifically, CarExpert employed semantic search to restrict the system-generated answer to the car domain and incorporated LLMs to predict natural, controlled and safe answers. Furthermore, to tackle hallucinated answers, CarExpert proposed an Extraction Score-based Answer Moderator. We anticipate that the proposed approach is not only applicable to in-car question answering but can also be easily extended and adapted to other domain-specific settings. In the future, we plan to integrate multi-task models to handle multiple tasks using a single LLM and reduce error propagation in the system.
## Limitations
While our modular framework offers considerable flexibility in employing diverse models and aligning them with specific tasks and objectives, it comes with a few challenges as well. One major drawback is the difficulty in jointly optimizing and fine-tuning the individual components toward a common objective. When optimized independently, each module may overfit to certain tasks and subsequently propagate errors due to intricate interactions, ultimately impacting the overall system performance. Furthermore, given our reliance on LLMs, occasional hallucinations may occur despite our efforts to maintain control. Moreover, our system may struggle with handling highly complex and ambiguous queries, potentially requiring external resolution modules. In the future, we intend to tackle the existing issues to develop a more robust conversational question answering system.
## Acknowledgement
We would like to thank Dr. Hans-Joerg Voegel, Dr. Robert Bruckmeier, and Dr. Peter Lehnert from the BMW Group in Munich, Germany for their support in this work. We would like to extend our thanks to Dr. Nicolas Flores-Herr, Dr. Joachim Koehler, Alexander Arno Weber and the Fraunhofer IAIS team for the helpful discussions and contributions to this work, and the members who contributed to this project from BIG PICTURE GmbH and OMSEI GmbH.
|
2305.13900 | Revisiting the computation of the critical points of the Keplerian
distance | We consider the Keplerian distance $d$ in the case of two elliptic orbits,
i.e. the distance between one point on the first ellipse and one point on the
second one, assuming they have a common focus. The absolute minimum $d_{\rm
min}$ of this function, called MOID or orbit distance in the literature, is
relevant to detect possible impacts between two objects following approximately
these elliptic trajectories. We revisit and compare two different approaches to
compute the critical points of $d^2$, where we squared the distance $d$ to
include crossing points among the critical ones. One approach uses
trigonometric polynomials, the other uses ordinary polynomials. A new way to
test the reliability of the computation of $d_{\rm min}$ is introduced, based
on optimal estimates that can be found in the literature. The planar case is
also discussed: in this case we present an estimate for the maximal number of
critical points of $d^2$, together with a conjecture supported by numerical
tests. | Giovanni F. Gronchi, Giulio Baù, Clara Grassi | 2023-05-23T10:25:08Z | http://arxiv.org/abs/2305.13900v2 | # Revisiting the computation of the critical points of the Keplerian distance
###### Abstract
We consider the Keplerian distance \(d\) in the case of two elliptic orbits, i.e. the distance between one point on the first ellipse and one point on the second one, assuming they have a common focus. The absolute minimum \(d_{\rm min}\) of this function, called MOID or orbit distance in the literature, is relevant to detect possible impacts between two objects following approximately these elliptic trajectories. We revisit and compare two different approaches to compute the critical points of \(d^{2}\), where we squared the distance \(d\) to include crossing points among the critical ones. One approach uses trigonometric polynomials, the other uses ordinary polynomials. A new way to test the reliability of the computation of \(d_{\rm min}\) is introduced, based on optimal estimates that can be found in the literature. The planar case is also discussed: in this case we present an estimate for the maximal number of critical points of \(d^{2}\), together with a conjecture supported by numerical tests.
## 1 Introduction
The distance \(d\) between two points on two Keplerian orbits with a common focus, that we call _Keplerian distance_, appears in a natural way in Celestial Mechanics. The absolute minimum of \(d\) is called MOID (minimum orbital intersection distance), or simply _orbit distance_ in the literature, and we denote it by \(d_{\rm min}\). It is important to be able to track \(d_{\rm min}\), and actually all the local minimum points of \(d\), to detect possible impacts between two celestial bodies following approximately these trajectories, e.g. an asteroid with the Earth [16, 17], or two Earth satellites [19]. Moreover, the information given by \(d_{\rm min}\) is useful to understand observational biases in the distribution of the known population of NEAs, see [13]. Because of the growing number of Earth satellites (e.g. the mega constellations of satellites that are going to be launched [2]) and discovered asteroids, fast and reliable methods to compute the minimum values of \(d\) are required.
The computation of the minimum points of \(d\) can be performed by searching for all the critical points of \(d^{2}\), where considering the squared distance allows us to include trajectory-crossing points in the results.
There are several papers in the literature concerning the computation of the critical points of \(d^{2}\), e.g. [20, 7, 15, 9, 10, 3].
Some authors also propose methods for a fast computation of \(d_{\rm min}\) only, e.g. [21, 14].
We will focus on an algebraic approach for the case of two elliptic trajectories, as in [15], [9]. In [15] the critical points of \(d^{2}\) are found by computing the roots of a trigonometric polynomial \(g(u)\) of degree \(8\), where \(u\) is the eccentric anomaly parametrizing one of the trajectories. The polynomial \(g(u)\) is obtained by the computation of a Groebner basis, implying that generically we cannot solve this problem by a polynomial with a
smaller degree. In [9], resultant theory is applied to a system of two bivariate ordinary polynomials, together with the discrete Fourier transform, to obtain (generically) a univariate polynomial of degree \(20\) in a variable \(t\), with a factor \((1+t^{2})^{2}\) leading to \(4\) pure imaginary roots that are discarded, so that we may have at most \(16\) real roots. Note that the trigonometric polynomial \(g(u)\) of degree \(8\) corresponds to an ordinary polynomial of degree \(16\) in the variable \(t\) through the transformation \(t=\tan(u/2)\). These methods were extended to the case of unbounded conics with a common focus in [3], [10].
In this paper we revisit the computation of the critical points of \(d^{2}\) for two elliptic trajectories by applying resultant theory to polynomial systems written in terms of the eccentric or the true anomalies. We obtain different methods using either ordinary or trigonometric polynomials. Moreover, we are able to compute via resultant theory the \(8\)-th degree trigonometric polynomial \(g(u)\) found by [15], and its analogue using the true anomalies (see Sections 4, 5). Some numerical tests comparing these methods are presented. We also test the reliability of the methods by taking advantage of the estimates for the values of \(d_{\min}\) introduced in [13] when one trajectory is circular. For the case of two ellipses, since we do not have such estimates for \(d_{\min}\), we use the optimal bounds for the nodal distance \(\delta_{\mathrm{nod}}\) derived in [11].
After introducing some notation in Section 2, we deal with the problem using eccentric anomalies and ordinary polynomials in Section 3. In Sections 4 and 5 we describe other procedures employing trigonometric polynomials and, respectively, eccentric or true anomalies. Some numerical tests and the reliability of our computations are discussed in Section 6. Finally, we present results for the maximum number of critical points in the planar problem in Section 7, and draw some conclusions in Section 8. Additional details of the computations can be found in the Appendix.
## 2 Preliminaries
Let \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) be two confocal elliptic trajectories, with \(\mathcal{E}_{i}\) defined by the five Keplerian orbital elements \(a_{i},e_{i},i_{i},\Omega_{i},\omega_{i}\). We introduce the Keplerian distance
\[d(V)=\sqrt{\langle\mathcal{X}_{1}-\mathcal{X}_{2},\mathcal{X}_{1}-\mathcal{X}_ {2}\rangle}, \tag{1}\]
where \(\mathcal{X}_{1},\mathcal{X}_{2}\in\mathbb{R}^{3}\) are the Cartesian coordinates of a point on \(\mathcal{E}_{1}\) and a point on \(\mathcal{E}_{2}\), corresponding to the vector \(V=(v_{1},v_{2})\), where \(v_{i}\) is a parameter along the trajectory \(\mathcal{E}_{i}\). In this paper we will parametrize the orbits either with the eccentric anomalies \(u_{i}\) or with the true anomalies \(f_{i}\).
Let \((x_{1},y_{1})\) and \((x_{2},y_{2})\) be Cartesian coordinates of two points on the two trajectories, each in its respective plane. The origin for both coordinate systems is the common focus of the two ellipses. We can write
\[\mathcal{X}_{1} =x_{1}\,\mathcal{P}+y_{1}\,\mathcal{Q},\] \[\mathcal{X}_{2} =x_{2}\,\mathfrak{p}+y_{2}\,\mathfrak{q},\]
with
\[\mathcal{P} =(P_{x}\,,P_{y}\,,P_{z})\,, \mathcal{Q} =(Q_{x}\,,Q_{y}\,,Q_{z})\,,\] \[\mathfrak{p} =(p_{x}\,,p_{y}\,,p_{z})\,, \mathfrak{q} =(q_{x}\,,q_{y}\,,q_{z})\,,\]
where
\[P_{x} =\cos\omega_{1}\cos\Omega_{1}-\cos i_{1}\sin\omega_{1}\sin\Omega_{1},\] \[P_{y} =\cos\omega_{1}\sin\Omega_{1}+\cos i_{1}\sin\omega_{1}\cos\Omega_{ 1},\] \[P_{z} =\sin\omega_{1}\sin i_{1},\] \[Q_{x} =-\sin\omega_{1}\cos\Omega_{1}-\cos i_{1}\cos\omega_{1}\sin\Omega _{1},\] \[Q_{y} =-\sin\omega_{1}\sin\Omega_{1}+\cos i_{1}\cos\omega_{1}\cos\Omega _{1},\] \[Q_{z} =\cos\omega_{1}\sin i_{1},\] \[p_{x} =\cos\omega_{2}\cos\Omega_{2}-\cos i_{2}\sin\omega_{2}\sin\Omega _{2},\] \[p_{y} =\cos\omega_{2}\sin\Omega_{2}+\cos i_{2}\sin\omega_{2}\cos\Omega _{2},\] \[p_{z} =\sin\omega_{2}\sin i_{2},\] \[q_{x} =-\sin\omega_{2}\cos\Omega_{2}-\cos i_{2}\cos\omega_{2}\sin \Omega_{2},\] \[q_{y} =-\sin\omega_{2}\sin\Omega_{2}+\cos i_{2}\cos\omega_{2}\cos \Omega_{2},\] \[q_{z} =\cos\omega_{2}\sin i_{2}.\]
If we use the eccentric anomalies \(u_{i}\) we have
\[x_{i}=a_{i}(\cos u_{i}-e_{i}),\qquad y_{i}=a_{i}\sqrt{1-e_{i}^{2}}\sin u_{i},\]
for \(i=1,2\), while with the true anomalies \(f_{i}\) we have
\[x_{i}=r_{i}\cos f_{i},\qquad y_{i}=r_{i}\sin f_{i},\qquad r_{i}=\frac{a_{i}(1- e_{i}^{2})}{1+e_{i}\cos f_{i}}.\]
Note that
\[|\mathcal{P}|=|\mathcal{Q}|=|\mathfrak{p}|=|\mathfrak{q}|=1,\qquad\langle \mathcal{P},\mathcal{Q}\rangle=\langle\mathfrak{p},\mathfrak{q}\rangle=0,\]
and set
\[K=\langle\mathcal{P},\mathfrak{p}\rangle,\quad L=\langle\mathcal{Q},\mathfrak{ p}\rangle,\quad M=\langle\mathcal{P},\mathfrak{q}\rangle,\quad N=\langle \mathcal{Q},\mathfrak{q}\rangle.\]
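For reference, the construction above can be sketched in a few lines of code. The following Python fragment is only an illustrative sketch (the helper names `orbit_frame` and `mutual_scalars` are ours, not part of the paper); it builds the unit vectors \(\mathcal{P},\mathcal{Q}\) of each orbital plane and the mutual scalars \(K,L,M,N\), with all angles in radians.

```python
# Illustrative sketch (not the Fortran implementation of Section 6): the unit
# vectors P, Q of one orbital plane and the mutual scalars K, L, M, N.
import numpy as np

def orbit_frame(inc, Omega, omega):
    """Unit vectors P, Q of Section 2 for one orbit; angles in radians."""
    P = np.array([np.cos(omega)*np.cos(Omega) - np.cos(inc)*np.sin(omega)*np.sin(Omega),
                  np.cos(omega)*np.sin(Omega) + np.cos(inc)*np.sin(omega)*np.cos(Omega),
                  np.sin(omega)*np.sin(inc)])
    Q = np.array([-np.sin(omega)*np.cos(Omega) - np.cos(inc)*np.cos(omega)*np.sin(Omega),
                  -np.sin(omega)*np.sin(Omega) + np.cos(inc)*np.cos(omega)*np.cos(Omega),
                  np.cos(omega)*np.sin(inc)])
    return P, Q

def mutual_scalars(el1, el2):
    """K, L, M, N for two orbits given as (i, Omega, omega) tuples."""
    P1, Q1 = orbit_frame(*el1)
    P2, Q2 = orbit_frame(*el2)
    return P1 @ P2, Q1 @ P2, P1 @ Q2, Q1 @ Q2
```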
## 3 Eccentric anomalies and ordinary polynomials
We look for the critical points of the squared distance \(d^{2}\) as a function of the eccentric anomalies \(u_{1},u_{2}\), that is we consider the system
\[\nabla d^{2}(u_{1},u_{2})=\mathbf{0}, \tag{2}\]
where \(\nabla d^{2}=\left(\frac{\partial d^{2}}{\partial u_{1}},\frac{\partial d^{2} }{\partial u_{2}}\right)\). We can write
\[\left\{\begin{aligned} &\frac{1}{2}\frac{\partial d^{2}}{ \partial u_{1}}=\langle\frac{\partial\mathcal{X}_{1}}{\partial u_{1}}, \mathcal{X}_{1}-\mathcal{X}_{2}\rangle=\frac{\partial x_{1}}{\partial u_{1}}(x _{1}-Kx_{2}-My_{2})+\frac{\partial y_{1}}{\partial u_{1}}(y_{1}-Lx_{2}-Ny_{2}),\\ &\frac{1}{2}\frac{\partial d^{2}}{\partial u_{2}}=-\langle\frac{ \partial\mathcal{X}_{2}}{\partial u_{2}},\mathcal{X}_{1}-\mathcal{X}_{2} \rangle=\frac{\partial x_{2}}{\partial u_{2}}(x_{2}-Kx_{1}-Ly_{1})+\frac{ \partial y_{2}}{\partial u_{2}}(y_{2}-Mx_{1}-Ny_{1}),\end{aligned}\right. \tag{3}\]
where
\[\frac{\partial x_{i}}{\partial u_{i}}=-a_{i}\sin u_{i},\qquad\frac{\partial y_ {i}}{\partial u_{i}}=a_{i}\sqrt{1-e_{i}^{2}}\cos u_{i},\qquad i=1,2.\]
System (3) can be written as
\[\begin{cases}2(A_{1}-A_{3})\sin u_{1}\cos u_{1}+A_{7}\cos u_{1}\sin u_{2}+A_{8} \cos u_{1}\cos u_{2}\\ \quad-A_{9}\sin u_{1}\sin u_{2}-A_{10}\sin u_{1}\cos u_{2}+A_{11}\cos u_{1}-A_ {12}\sin u_{1}=0,\\ 2(A_{4}-A_{6})\sin u_{2}\cos u_{2}+A_{7}\sin u_{1}\cos u_{2}-A_{8}\sin u_{1} \sin u_{2}\\ \quad+A_{9}\cos u_{1}\cos u_{2}-A_{10}\cos u_{1}\sin u_{2}+A_{13}\cos u_{2}-A _{14}\sin u_{2}=0,\end{cases} \tag{4}\]
with
\[\begin{split}& A_{1}=a_{1}^{2}(1-e_{1}^{2}),\qquad\qquad\qquad \qquad A_{2}=0,\\ & A_{3}=a_{1}^{2},\qquad\qquad\qquad\qquad\qquad A_{4}=a_{2}^{2}(1-e_{2 }^{2}),\\ & A_{5}=0,\qquad\qquad\qquad\qquad A_{6}=a_{2}^{2},\\ & A_{7}=-2a_{1}a_{2}\sqrt{1-e_{1}^{2}}\sqrt{1-e_{2}^{2}}N,\qquad A_{8}=-2a _{1}a_{2}\sqrt{1-e_{1}^{2}}L,\\ & A_{9}=-2a_{1}a_{2}\sqrt{1-e_{2}^{2}}M,\qquad\qquad\qquad A_{10}=-2a_{1}a_ {2}K,\\ & A_{11}=2a_{1}a_{2}e_{2}\sqrt{1-e_{1}^{2}}L,\qquad\qquad\qquad A_{12}=2a_{1}(a _{2}e_{2}K-a_{1}e_{1}),\\ & A_{13}=2a_{1}a_{2}e_{1}\sqrt{1-e_{2}^{2}}M,\qquad\qquad\qquad A_{14}=2a_{2}( a_{1}e_{1}K-a_{2}e_{2}),\\ & A_{15}=a_{1}^{2}e_{1}^{2}+a_{2}^{2}e_{2}^{2}-2a_{1}a_{2}e_{1}e_{2}K.\end{split}\]
Following [9], we can transform (4) into a system of two bivariate ordinary polynomials in the variables \(t,s\) through
\[\sin u_{1}=\frac{2t}{1+t^{2}},\qquad\cos u_{1}=\frac{1-t^{2}}{1+t^{2}},\qquad \sin u_{2}=\frac{2s}{1+s^{2}},\qquad\cos u_{2}=\frac{1-s^{2}}{1+s^{2}}.\]
Then, (4) becomes
\[\begin{cases}p(t,s)=\alpha(t)s^{2}+\beta(t)s+\gamma(t)=0,\\ q(t,s)=A(t)s^{4}+B(t)s^{3}+D(t)s-A(t)=0,\end{cases} \tag{5}\]
where
\[\begin{split}\alpha(t)&=(A_{11}-A_{8})+(4A_{1}-4A_{3}+2A_{ 10}-2A_{12})t\\ &\qquad+(-4A_{1}+4A_{3}+2A_{10}-2A_{12})t^{3}+(A_{8}-A_{11})t^{4},\\ \beta(t)&=2A_{7}-4A_{9}t-4A_{9}t^{3}-2A_{7}t^{4},\\ \gamma(t)&=(A_{11}+A_{8})+(4A_{1}-4A_{3}-2A_{10}-2A_{12})t\\ &\qquad+(-4A_{1}+4A_{3}-2A_{10}-2A_{12})t^{3}-(A_{8}+A_{11})t^{4}\end{split}\]
and
\[\begin{split} A(t)&=-(A_{9}+A_{13})-2A_{7}t+(A_{9}-A_ {13})t^{2},\\ B(t)&=(-4A_{4}+4A_{6}-2A_{10}-2A_{14})-4A_{8}t\\ &\qquad+(-4A_{4}+4A_{6}+2A_{10}-2A_{14})t^{2},\\ D(t)&=(4A_{4}-4A_{6}-2A_{10}-2A_{14})-4A_{8}t\\ &\qquad+(4A_{4}-4A_{6}+2A_{10}-2A_{14})t^{2}.\end{split}\]
Let
\[S_{0}=\left(\begin{array}{ccccccc}\alpha&0&0&0&A&0\\ \beta&\alpha&0&0&B&A\\ \gamma&\beta&\alpha&0&0&B\\ 0&\gamma&\beta&\alpha&D&0\\ 0&0&\gamma&\beta&-A&D\\ 0&0&0&\gamma&0&-A\end{array}\right) \tag{6}\]
be the Sylvester matrix related to (5). From resultant theory [6] we know that the complex roots of \(\det(S_{0}(t))\) correspond to all the \(t\)-components of the solutions \((t,s)\in\mathbb{C}^{2}\) of (5). The determinant \(\det(S_{0}(t))\) is in general a polynomial of degree \(20\) in \(t\). We notice that it can be factorized as
\[\det\bigl{(}S_{0}(t)\bigr{)}=(1+t^{2})^{2}\det\bigl{(}\hat{S}(t)\bigr{)}\]
with
\[\hat{S}=\left(\begin{array}{ccccccc}\widetilde{\sigma}_{1}&-\widetilde{ \sigma}_{2}&-\widetilde{\sigma}_{1}&\widetilde{\sigma}_{2}&0&-\widetilde{ \sigma}_{3}\\ \widetilde{\sigma}_{2}&\widetilde{\sigma}_{1}&-\widetilde{\sigma}_{2}&- \widetilde{\sigma}_{1}&\widetilde{\sigma}_{3}&0\\ \sigma_{4}&\sigma_{2}&\sigma_{1}&-\sigma_{2}&\sigma_{6}&\sigma_{3}\\ 0&\sigma_{4}&\sigma_{2}&\sigma_{1}&\sigma_{5}&\sigma_{6}\\ 0&0&\sigma_{4}&\sigma_{2}&-\sigma_{6}&\sigma_{5}\\ 0&0&0&\sigma_{4}&0&-\sigma_{6}\end{array}\right), \tag{7}\]
where
\[\sigma_{1} =\alpha-\gamma, \sigma_{2} =\beta, \sigma_{3} =B-D, \tag{8}\] \[\widetilde{\sigma}_{1} =\frac{\alpha-\gamma}{1+t^{2}}, \widetilde{\sigma}_{2} =\frac{\beta}{1+t^{2}}, \widetilde{\sigma}_{3} =\frac{B-D}{1+t^{2}},\] \[\sigma_{4} =\gamma, \sigma_{5} =D, \sigma_{6} =A.\]
Therefore, to find the \(t\)-components corresponding to the critical points, we can look for the solutions of \(\det(\hat{S})=0\), which in general is a polynomial equation of degree \(16\). We can follow the same steps explained in [10, Sect. 4.3] to obtain the coefficients of the polynomial \(\det(\hat{S})\) by an evaluation/interpolation procedure based on the discrete Fourier transform. Then, the method described in [4] is applied to compute its roots. We substitute each of the real roots \(t\) of \(\det(\hat{S})\) in (5) and use the first equation \(p(t,s)=0\) to compute the two possible values of the \(s\) variable. Finally, we evaluate \(q\) at these points \((t,s)\) and choose the value of \(s\) that gives the evaluation with the smallest absolute value.
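As an illustration of the whole procedure, the following Python/SymPy sketch (the function name is ours; this is not the Fortran code of Section 6) reproduces the method of this section with two simplifications: the evaluation/interpolation step based on the discrete Fourier transform is replaced by a direct symbolic resultant, and the deflation by the factor \((1+t^{2})^{2}\) is skipped, since that factor only contributes purely imaginary roots. The scalars \(K,L,M,N\) can be obtained, for instance, with the `mutual_scalars` helper sketched in Section 2.

```python
# Illustrative sketch of the method of Section 3 ("OE").  SymPy's resultant
# replaces the DFT-based evaluation/interpolation; the factor (1+t^2)^2 is not
# removed because it only contributes purely imaginary roots of det(S_0).
import numpy as np
import sympy as sp

def critical_points_OE(a1, e1, a2, e2, K, L, M, N, tol=1e-6):
    """Return the critical points (u1, u2) of d^2, in radians."""
    t, s = sp.symbols('t s')
    b1, b2 = np.sqrt(1 - e1**2), np.sqrt(1 - e2**2)
    # coefficients A_1, ..., A_14 of system (4)
    A1, A3, A4, A6 = a1**2*(1 - e1**2), a1**2, a2**2*(1 - e2**2), a2**2
    A7, A8 = -2*a1*a2*b1*b2*N, -2*a1*a2*b1*L
    A9, A10 = -2*a1*a2*b2*M, -2*a1*a2*K
    A11, A12 = 2*a1*a2*e2*b1*L, 2*a1*(a2*e2*K - a1*e1)
    A13, A14 = 2*a1*a2*e1*b2*M, 2*a2*(a1*e1*K - a2*e2)
    # polynomials p(t, s) and q(t, s) of system (5)
    alpha = (A11-A8) + (4*A1-4*A3+2*A10-2*A12)*t + (-4*A1+4*A3+2*A10-2*A12)*t**3 + (A8-A11)*t**4
    beta = 2*A7 - 4*A9*t - 4*A9*t**3 - 2*A7*t**4
    gamma = (A11+A8) + (4*A1-4*A3-2*A10-2*A12)*t + (-4*A1+4*A3-2*A10-2*A12)*t**3 - (A8+A11)*t**4
    Apol = -(A9+A13) - 2*A7*t + (A9-A13)*t**2
    Bpol = (-4*A4+4*A6-2*A10-2*A14) - 4*A8*t + (-4*A4+4*A6+2*A10-2*A14)*t**2
    Dpol = (4*A4-4*A6-2*A10-2*A14) - 4*A8*t + (4*A4-4*A6+2*A10-2*A14)*t**2
    p = alpha*s**2 + beta*s + gamma
    q = Apol*s**4 + Bpol*s**3 + Dpol*s - Apol
    # resultant with respect to s: a degree-20 polynomial in t
    res = sp.Poly(sp.resultant(p, q, s), t)
    troots = np.roots([float(c) for c in res.all_coeffs()])
    points = []
    for tr in troots[np.abs(troots.imag) < tol].real:
        # solve the quadratic p(t, s) = 0 for s and keep the value minimizing |q|
        svals = np.roots([float(alpha.subs(t, tr)), float(beta.subs(t, tr)),
                          float(gamma.subs(t, tr))])
        svals = svals[np.abs(svals.imag) < tol].real
        if svals.size == 0:
            continue
        sr = min(svals, key=lambda v: abs(float(q.subs({t: tr, s: v}))))
        points.append((2*np.arctan(tr) % (2*np.pi), 2*np.arctan(sr) % (2*np.pi)))
    return points
```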
### Angular shifts
Define a shifted angle \(v_{1}=u_{1}-s_{1}\) and let
\[\sin v_{1}=\frac{2z}{1+z^{2}},\qquad\cos v_{1}=\frac{1-z^{2}}{1+z^{2}}. \tag{9}\]
Then, system (4) becomes
\[\begin{cases}\tilde{p}(z,s)=\tilde{\alpha}(z)s^{2}+\tilde{\beta}(z)s+\tilde{ \gamma}(z)=0,\\ \tilde{q}(z,s)=\tilde{A}(z)s^{4}+\tilde{B}(z)s^{3}+\tilde{D}(z)s-\tilde{A}(z) =0.\end{cases} \tag{10}\]
The coefficients \(\tilde{\alpha},\tilde{\beta},\tilde{\gamma},\tilde{A},\tilde{B},\tilde{D}\) are written in Appendix A. If \(T_{0}(z)\) is the Sylvester matrix related to (10) we get
\[\det(T_{0}(z))=(1+z^{2})^{2}\det(\hat{T}(z)),\]
with
\[\hat{T}=\left(\begin{array}{cccccc}\widetilde{\tau}_{1}&-\widetilde{\tau}_{2}&- \widetilde{\tau}_{1}&\widetilde{\tau}_{2}&0&-\widetilde{\tau}_{3}\\ \widetilde{\tau}_{2}&\widetilde{\tau}_{1}&-\widetilde{\tau}_{2}&-\widetilde{ \tau}_{1}&\widetilde{\tau}_{3}&0\\ \tau_{4}&\tau_{2}&\tau_{1}&-\tau_{2}&\tau_{6}&\tau_{3}\\ 0&\tau_{4}&\tau_{2}&\tau_{1}&\tau_{5}&\tau_{6}\\ 0&0&\tau_{4}&\tau_{2}&-\tau_{6}&\tau_{5}\\ 0&0&0&\tau_{4}&0&-\tau_{6}\end{array}\right),\]
where
\[\begin{array}{llll}\tau_{1}=\tilde{\alpha}-\tilde{\gamma},&\tau_{2}=\tilde {\beta},&\tau_{3}=\tilde{B}-\tilde{D},\\ \widetilde{\tau}_{1}=\frac{\tilde{\alpha}-\tilde{\gamma}}{1+z^{2}},&\widetilde {\tau}_{2}=\frac{\tilde{\beta}}{1+z^{2}},&\widetilde{\tau}_{3}=\frac{\tilde{ B}-\tilde{D}}{1+z^{2}},\\ \tau_{4}=\tilde{\gamma},&\tau_{5}=\tilde{D},&\tau_{6}=\tilde{A}.\end{array} \tag{11}\]
We find the values of \(z\) by solving the polynomial equation \(\det(\hat{T})=0\), which again has generically degree 16. We compute the values of \(v_{1}\) from (9) and shift back to obtain the \(u_{1}\) components of the critical points. Substituting in (4) and applying the angular shift \(u_{2}=v_{2}+s_{2}\), we consider the system
\[\begin{cases}\mathsf{A}\cos v_{2}+\mathsf{B}\sin v_{2}+\mathsf{C}=0,\\ \cos^{2}v_{2}+\sin^{2}v_{2}-1=0,\end{cases}\]
where the first equation corresponds to the first equation in (4), and
\[\begin{split}\mathsf{A}&=(A_{8}\cos u_{1}-A_{10}\sin u_{1}) \cos s_{2}+(A_{7}\cos u_{1}-A_{9}\sin u_{1})\sin s_{2},\\ \mathsf{B}&=-(A_{8}\cos u_{1}-A_{10}\sin u_{1})\sin s_{2}+(A_{7}\cos u_{1}-A _{9}\sin u_{1})\cos s_{2},\\ \mathsf{C}&=2(A_{1}-A_{3})\sin u_{1}\cos u_{1}+A_{11}\cos u_{1}-A_{12}\sin u _{1}.\end{split}\]
For each value of \(u_{1}\) we compute two solutions for \(\cos v_{2}\), \(\sin v_{2}\) and the corresponding values of \(\cos u_{2}\), \(\sin u_{2}\). We choose between them by substituting in the second equation in (4).
## 4 Eccentric anomalies and trigonometric polynomials
To work with trigonometric polynomials, we write system (3) as
\[\begin{cases}\lambda\sin u_{1}\cos u_{1}+\mu\cos u_{1}+\nu\sin u_{1}=0,\\ \alpha\cos u_{1}+\beta\sin u_{1}+\gamma=0,\end{cases} \tag{12}\]
where
\[\begin{split}\lambda&=a_{1}e_{1}^{2},\\ \mu&=a_{2}\sqrt{1-e_{1}^{2}}\Big{(}\sqrt{1-e_{2}^{2}}N\sin u _{2}+L\cos u_{2}-e_{2}L\Big{)},\\ \nu&=a_{2}e_{2}K-a_{1}e_{1}-a_{2}\sqrt{1-e_{2}^{2}}M\sin u_{2}-a_{2}K\cos u _{2},\\ \alpha&=a_{1}\Big{(}\sqrt{1-e_{2}^{2}}M\cos u_{2}-K\sin u _{2}\Big{)},\\ \beta&=a_{1}\sqrt{1-e_{1}^{2}}\Big{(}\sqrt{1-e_{2}^{2}}N\cos u _{2}-L\sin u_{2}\Big{)},\\ \gamma&=a_{2}e_{2}^{2}\sin u_{2}\cos u_{2}-a_{1}e_{1}\sqrt{1-e_{2}^{2}}M \cos u_{2}+(a_{1}e_{1}K-a_{2}e_{2})\sin u_{2}.\end{split}\]
Inserting relation
\[\sin u_{1}=-\frac{1}{\beta}(\alpha\cos u_{1}+\gamma) \tag{13}\]
into \(\cos^{2}u_{1}+\sin^{2}u_{1}-1=0\) and into the first equation in (12), we obtain
\[\begin{cases}(\alpha^{2}+\beta^{2})\cos^{2}u_{1}+2\alpha\gamma\cos u_{1}+\gamma ^{2}-\beta^{2}=0,\\ -\alpha\lambda\cos^{2}u_{1}+(\beta\mu-\lambda\gamma-\alpha\nu)\cos u_{1}- \gamma\nu=0.\end{cases} \tag{14}\]
We call \(p_{1}\), \(p_{2}\) the two trigonometric polynomials appearing on the left-hand side of (14). The Sylvester matrix of \(p_{1}\) and \(p_{2}\) is
\[\mathscr{S}=\left[\begin{array}{cccc}\alpha^{2}+\beta^{2}&0&-\alpha\lambda& 0\\ 2\alpha\gamma&\alpha^{2}+\beta^{2}&\beta\mu-\lambda\gamma-\alpha\nu&-\alpha \lambda\\ \gamma^{2}-\beta^{2}&2\alpha\gamma&-\gamma\nu&\beta\mu-\lambda\gamma-\alpha \nu\\ 0&\gamma^{2}-\beta^{2}&0&-\gamma\nu\end{array}\right].\]
We define
\[\mathscr{G}(u_{2})=\det\mathscr{S}(u_{2}),\]
which corresponds to the resultant of \(p_{1}\), \(p_{2}\) with respect to \(\cos u_{1}\) and is a trigonometric polynomial in \(u_{2}\) only. The \(u_{2}\) component of each critical point satisfies \(\mathscr{G}(u_{2})=0\).
**Proposition 1**.: _We can extract a factor \(\beta^{2}\) from \(\det\mathscr{S}\)._
Proof.: Using simple properties of determinants, we can write \(\det\mathscr{S}\) as a sum of different terms. The terms independent from \(\beta\) in this sum are given by
\[\left|\begin{array}{cccc}\alpha^{2}&0&-\alpha\lambda&0\\ 2\alpha\gamma&\alpha^{2}&-\lambda\gamma-\alpha\nu&-\alpha\lambda\\ \gamma^{2}&2\alpha\gamma&-\gamma\nu&-\lambda\gamma-\alpha\nu\\ 0&\gamma^{2}&0&-\gamma\nu\end{array}\right|\] \[=\left|\begin{array}{cccc}\alpha^{2}&0&\alpha\lambda&0\\ \alpha\gamma&0&\lambda\gamma&0\\ \gamma^{2}&2\alpha\gamma&\gamma\nu&\lambda\gamma+\alpha\nu\\ 0&\gamma^{2}&0&\gamma\nu\end{array}\right|+\left|\begin{array}{cccc}\alpha ^{2}&0&\alpha\lambda&0\\ \alpha\gamma&\alpha^{2}&\alpha\nu&\alpha\lambda\\ \gamma^{2}&2\alpha\gamma&\gamma\nu&\lambda\gamma+\alpha\nu\\ 0&\gamma^{2}&0&\gamma\nu\end{array}\right|\] \[=\left|\begin{array}{cccc}\alpha^{2}&0&\alpha\lambda&0\\ \alpha\gamma&\alpha^{2}&\alpha\nu&\alpha\lambda\\ 0&\alpha\gamma&0&\alpha\nu\\ 0&\gamma^{2}&0&\gamma\nu\end{array}\right|+\left|\begin{array}{cccc}\alpha ^{2}&0&\alpha\lambda&0\\ \alpha\gamma&\alpha^{2}&\alpha\nu&\alpha\lambda\\ \gamma^{2}&\gamma\alpha&\gamma\nu&\gamma\lambda\\ 0&\gamma^{2}&0&\gamma\nu\end{array}\right|,\]
and both determinants are \(0\). The linear terms in \(\beta\) are given by
\[\left|\begin{array}{cccc}\alpha^{2}&0&-\alpha\lambda&0\\ 2\alpha\gamma&\alpha^{2}&-\lambda\gamma-\alpha\nu&0\\ \gamma^{2}&2\alpha\gamma&-\gamma\nu&\beta\mu\\ 0&\gamma^{2}&0&0\end{array}\right|+\left|\begin{array}{cccc}\alpha^{2}&0&0& 0\\ 2\alpha\gamma&\alpha^{2}&\beta\mu&-\alpha\lambda\\ \gamma^{2}&2\alpha\gamma&0&-\lambda\gamma-\alpha\nu\\ 0&\gamma^{2}&0&-\gamma\nu\end{array}\right|,\]
and this sum is \(0\), because the two determinants are opposite. Therefore, \(\mathscr{G}(u_{2})\) consists only of terms of order higher than \(1\) in \(\beta\), and we obtain
\[\mathscr{G}(u_{2})=\mathfrak{D}_{1}+\mathfrak{D}_{2}+\mathfrak{D}_{3},\]
where
\[\mathfrak{D}_{1} =\left|\begin{array}{cccc}\beta^{2}&0&-\alpha\lambda&0\\ 0&\beta^{2}&\beta\mu-\lambda\gamma-\alpha\nu&-\alpha\lambda\\ -\beta^{2}&0&-\gamma\nu&\beta\mu-\lambda\gamma-\alpha\nu\\ 0&-\beta^{2}&0&-\gamma\nu\end{array}\right|,\] \[\mathfrak{D}_{2} =\left|\begin{array}{cccc}\beta^{2}&0&-\alpha\lambda&0\\ 0&\alpha^{2}&\beta\mu-\lambda\gamma-\alpha\nu&-\alpha\lambda\\ -\beta^{2}&2\alpha\gamma&-\gamma\nu&\beta\mu-\lambda\gamma-\alpha\nu\\ 0&\gamma^{2}&0&-\gamma\nu\end{array}\right|+\left|\begin{array}{cccc}\alpha ^{2}&0&-\alpha\lambda&0\\ 2\alpha\gamma&\beta^{2}&\beta\mu-\lambda\gamma-\alpha\nu&-\alpha\lambda\\ \gamma^{2}&0&-\gamma\nu&\beta\mu-\lambda\gamma-\alpha\nu\\ 0&-\beta^{2}&0&-\gamma\nu\end{array}\right|,\] \[\mathfrak{D}_{3} =\left|\begin{array}{cccc}\alpha^{2}&0&0&0\\ 2\alpha\gamma&\alpha^{2}&\beta\mu&0\\ \gamma^{2}&2\alpha\gamma&0&\beta\mu\\ 0&\gamma^{2}&0&0\end{array}\right|.\]
Their explicit expressions read
\[\mathfrak{D}_{1} =\beta^{4}\big{[}(\alpha\lambda+\gamma\nu)^{2}-(\beta\mu-\lambda \gamma-\alpha\nu)^{2}\big{]},\] \[\mathfrak{D}_{2} =\beta^{2}\left[2\alpha\beta\gamma\mu(\gamma\nu-\alpha\lambda)+(( \beta\mu-\lambda\gamma-\alpha\nu)^{2}-4\alpha\gamma\lambda\nu)(\gamma^{2}- \alpha^{2})\right],\] \[\mathfrak{D}_{3} =\alpha^{2}\beta^{2}\gamma^{2}\mu^{2}.\]
The trigonometric polynomial
\[g(u_{2}) =\mathscr{G}(u_{2})/\beta^{2}\] \[=\gamma^{4}\lambda^{2}-\beta^{4}\mu^{2}-\alpha^{4}\nu^{2}+2\,\mu \,\lambda\,\gamma\,\beta\,\left(\beta^{2}-\gamma^{2}\right)+2\,\mu\,\nu\, \alpha\,\beta\,\left(\alpha^{2}+\beta^{2}\right)\] \[\quad+2\,\lambda\,\nu\,\gamma\,\alpha\,\left(\alpha^{2}-\gamma^{2 }\right)+\left(-\lambda^{2}+\mu^{2}+\nu^{2}\right)\left(-\alpha^{2}\beta^{2}+ \gamma^{2}\alpha^{2}+\gamma^{2}\beta^{2}\right)\]
has total degree \(8\) in the variables \(\cos u_{2}\), \(\sin u_{2}\), and corresponds to the polynomial \(g\) introduced in [15] with Groebner bases theory. For this reason, generically, there is no polynomial of smaller degree giving all the \(u_{2}\) components of the critical points of \(d^{2}\).
We now explain the procedure to reduce the problem to the computation of the roots of a univariate polynomial. We set
\[\mathfrak{g}(x,y)=g(u_{2}),\]
where
\[x=\cos u_{2},\qquad y=\sin u_{2}.\]
We find that
\[\mathfrak{g}(x,y)=\sum_{j=0}^{6}g_{j}(x)y^{j}\]
for some polynomial coefficients \(g_{j}\) such that
\[\deg g_{0}=\deg g_{1}=\deg g_{2}=6,\] \[\deg g_{3}=5,\quad\deg g_{4}=4,\quad\deg g_{5}=3,\quad\deg g_{6}=2.\]
Then, we consider the polynomial system
\[\begin{cases}\mathfrak{g}(x,y)=0,\\ x^{2}+y^{2}-1=0.\end{cases} \tag{15}\]
Using relations
\[y^{2k}=(1-x^{2})^{k},\qquad y^{2k+1}=y(1-x^{2})^{k},\qquad k\in\mathbb{N},\]
obtained from the second equation in (15), we can substitute \(\mathfrak{g}\) in system (15) with
\[\tilde{\mathfrak{g}}(x,y)=a(x)y+b(x),\]
where
\[a(x) =g_{1}(x)+(1-x^{2})g_{3}(x)+(1-x^{2})^{2}g_{5}(x),\] \[b(x) =g_{0}(x)+(1-x^{2})g_{2}(x)+(1-x^{2})^{2}g_{4}(x)+(1-x^{2})^{3}g_{ 6}(x).\]
We can also write
\[a(x) =a_{0}(x)+x^{2}a_{2}(x)+x^{4}a_{4}(x),\] \[b(x) =b_{0}(x)+x^{2}b_{2}(x)+x^{4}b_{4}(x)+x^{6}b_{6}(x),\]
with
\[a_{0} =g_{1}+g_{3}+g_{5}, a_{2} =-g_{3}-2g_{5}, a_{4} =g_{5},\] \[b_{0} =g_{0}+g_{2}+g_{4}+g_{6}, b_{2} =-g_{2}-2g_{4}-3g_{6}, b_{4} =g_{4}+3g_{6}, b_{6} =-g_{6}.\]
Note that \(a\) and \(b\) have degree \(7\) and \(8\), respectively. We eliminate \(y\) from system
\[\begin{cases}\tilde{\mathfrak{g}}(x,y)=0,\\ x^{2}+y^{2}-1=0\end{cases}\]
by computing the resultant \(\mathfrak{u}(x)\) of the two polynomials with respect to \(y\), and obtain
\[\mathfrak{u}(x)=\left|\begin{array}{ccc}a(x)&0&1\\ b(x)&a(x)&0\\ 0&b(x)&x^{2}-1\end{array}\right|=a^{2}(x)(x^{2}-1)+b^{2}(x), \tag{16}\]
which is a univariate polynomial of degree \(16\).
Each of the real roots \(x\) of \(\mathfrak{u}\), with \(|x|\leq 1\), is substituted into the equation \(\tilde{\mathfrak{g}}(x,y)=0\) to get the value of \(y\). Finally, we evaluate \(\alpha,\beta,\gamma,\mu,\nu\) at the computed pairs \((x,y)\) and solve system (12) by computing the values of \(\cos u_{1}\) and \(\sin u_{1}\) from (14) and (13), respectively.
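The elimination just described can be sketched with NumPy's polynomial utilities; in the fragment below the function name `build_u` is ours, and the coefficient arrays of \(g_{0},\ldots,g_{6}\) (lowest degree first) are assumed to be available from the expansion of \(\mathfrak{g}(x,y)\).

```python
# Illustrative sketch of the elimination step: from the coefficient arrays of
# g_0(x), ..., g_6(x) build a(x), b(x) and u(x) = a(x)^2 (x^2 - 1) + b(x)^2 of (16).
from numpy.polynomial import polynomial as P

def build_u(g):
    """g is a sequence of 7 coefficient arrays; g[j] holds g_j (low degree first)."""
    w = [1.0, 0.0, -1.0]                                  # 1 - x^2
    a = P.polyadd(P.polyadd(g[1], P.polymul(w, g[3])),
                  P.polymul(P.polypow(w, 2), g[5]))
    b = P.polyadd(P.polyadd(g[0], P.polymul(w, g[2])),
                  P.polyadd(P.polymul(P.polypow(w, 2), g[4]),
                            P.polymul(P.polypow(w, 3), g[6])))
    u = P.polyadd(P.polymul(P.polymul(a, a), [-1.0, 0.0, 1.0]),   # a^2 (x^2 - 1)
                  P.polymul(b, b))                                # + b^2
    return a, b, u                                        # u has degree 16 generically
```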
### Finding the roots of \(\mathfrak{u}\) with Chebyshev's polynomials
To compute the roots of the polynomial \(\mathfrak{u}(x)\) in a numerically stable way, we need to express \(\mathfrak{u}\) in a basis ensuring that the roots are well-conditioned functions of its coefficients. This can be achieved using Chebyshev's polynomials [18] in place of the standard monomial basis.
In the monomial basis we have
\[\mathfrak{u}(x)=\sum_{j=0}^{n}p_{j}x^{j}, \tag{17}\]
for some coefficients \(p_{j}\). The same polynomial can be written as
\[\mathfrak{u}(x)=\sum_{j=0}^{n}c_{j}T_{j}(x), \tag{18}\]
where \(T_{j}\) are Chebyshev's polynomials, recursively defined by
\[T_{0}(x)=1,\qquad T_{1}(x)=x,\qquad T_{j+1}=2xT_{j}-T_{j-1},\ j=1,\ldots,n-1, \tag{19}\]
which are a basis for the vector space of polynomials of degree at most \(n\). The coefficients \(c_{j}\) are obtained from the \(p_{j}\) as follows. Setting
\[X=(1,x,x^{2},\ldots,x^{n})^{t},\qquad Y=(T_{0}(x),T_{1}(x),\ldots,T_{n}(x))^{t}\]
we have
\[AX=Y, \tag{20}\]
with
\[A=\left[\begin{array}{cccc}a_{00}&0&\ldots&0\\ a_{10}&a_{11}&\ddots&\vdots\\ \vdots&&\ddots&0\\ a_{n0}&a_{n1}&\ldots&a_{nn}\end{array}\right],\]
where the integer coefficients \(a_{ij}\) are determined from relations (19). We invert \(A\) by the following procedure. Define
\[\tilde{A}=\left[\begin{array}{cccc}1&0&\ldots&0\\ \frac{a_{10}}{a_{11}}&1&\ddots&\vdots\\ \vdots&&\ddots&0\\ \frac{a_{n0}}{a_{nn}}&\frac{a_{n1}}{a_{nn}}&\ldots&1\end{array}\right], \qquad\tilde{Y}=\left(\frac{T_{0}}{a_{00}},\frac{T_{1}}{a_{11}},\ldots,\frac{ T_{n}}{a_{nn}}\right)^{t}.\]
Equation (20) becomes
\[\tilde{A}X=\tilde{Y},\]
with
\[\tilde{A}=I+N,\]
where \(N^{n}=0\), that is \(N\) is a nilpotent matrix of order \(n\). Relation
\[(I+N)(I-N+N^{2}-N^{3}+\ldots+(-1)^{n-1}N^{n-1})=I,\]
implies that the inverse of \(\tilde{A}\) is
\[\tilde{A}^{-1}=I+\sum_{j=1}^{n-1}(-1)^{j}N^{j}. \tag{21}\]
Let us introduce the vectors
\[P=(p_{0},p_{1},\ldots,p_{n})^{t},\qquad C=(c_{0},c_{1},\ldots,c_{n})^{t}\]
made by the coefficients of the polynomials in (17), (18), and the diagonal matrix
\[D=\mbox{diag}\{a_{00}^{-1},a_{11}^{-1},\ldots,a_{nn}^{-1}\}.\]
From (20) and (21) we can write
\[X=\tilde{A}^{-1}DY,\]
so that
\[\mathfrak{u}(x)=C^{t}Y=P^{t}X=(P^{t}\tilde{A}^{-1}D)Y.\]
Therefore, the relation between the coefficients \(c_{j}\) and \(p_{j}\) is given by
\[C=(D\tilde{A}^{-t})P.\]
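A minimal sketch of this change of basis is given below; solving the triangular system \(A^{t}C=P\) is mathematically equivalent to the formula \(C=(D\tilde{A}^{-t})P\) above, and `numpy.polynomial.chebyshev.poly2cheb` performs the same conversion. The function names are illustrative.

```python
# Illustrative sketch of the monomial-to-Chebyshev conversion of Section 4.1.
import numpy as np

def cheb_in_monomial_basis(n):
    """Matrix A of (20): row j holds the monomial coefficients of T_j, built from (19)."""
    A = np.zeros((n + 1, n + 1))
    A[0, 0] = 1.0
    if n >= 1:
        A[1, 1] = 1.0
    for j in range(1, n):
        A[j + 1, :] = -A[j - 1, :]          # -T_{j-1}
        A[j + 1, 1:] += 2.0 * A[j, :-1]     # + 2 x T_j
    return A

def monomial_to_chebyshev(p):
    """Chebyshev coefficients c_j of (18) from the monomial coefficients p_j of (17)."""
    p = np.asarray(p, dtype=float)
    A = cheb_in_monomial_basis(len(p) - 1)
    return np.linalg.solve(A.T, p)          # A^t C = P, with A lower triangular
```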
Searching for the roots of \(\mathfrak{u}(x)\) corresponds to computing the eigenvalues of an \(n\times n\) matrix \(\mathscr{C}\), called _colleague matrix_[8]. We use the form of the colleague matrix described in [5]:
\[\mathscr{C}=\frac{1}{2}\left[\begin{array}{ccccc}0&1&0&\ldots&0\\ 1&0&\ddots&&\vdots\\ 0&\ddots&\ddots&1&0\\ \vdots&\ddots&1&0&\sqrt{2}\\ 0&\ldots&0&\sqrt{2}&0\end{array}\right]-\frac{1}{2c_{n}}\left[\begin{array}[] {c}1\\ 0\\ \vdots\\ 0\end{array}\right]\left[\begin{array}{ccccc}c_{n-1}&c_{n-2}&\ldots&c_{1}& \sqrt{2}c_{0}\end{array}\right]. \tag{22}\]
The computation of the roots of a polynomial using the colleague matrix and a backward stable eigenvalue algorithm, such as the QR algorithm, is backward stable, provided that the 2-norm of the polynomial is moderate (see [18]).
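A sketch of the corresponding root finder is given below: it assembles the colleague matrix of (22), assuming \(n\geq 2\) and \(c_{n}\neq 0\), and returns its eigenvalues; `numpy.polynomial.chebyshev.chebroots` can serve as an independent cross-check, since it uses its own colleague matrix.

```python
# Illustrative sketch: roots of u(x) as the eigenvalues of the colleague matrix
# (22), given the Chebyshev coefficients c_0, ..., c_n (n >= 2, c_n != 0).
import numpy as np

def colleague_matrix(c):
    c = np.asarray(c, dtype=float)
    n = len(c) - 1
    M = np.zeros((n, n))
    for k in range(n - 1):
        M[k, k + 1] = M[k + 1, k] = 1.0
    M[n - 2, n - 1] = M[n - 1, n - 2] = np.sqrt(2.0)   # last off-diagonal pair
    M *= 0.5
    # rank-one correction on the first row: c_{n-1}, ..., c_1, sqrt(2) c_0
    first_row = np.concatenate((c[n - 1:0:-1], [np.sqrt(2.0)*c[0]]))
    M[0, :] -= first_row / (2.0*c[n])
    return M

def roots_from_chebyshev(c):
    # cross-check: numpy.polynomial.chebyshev.chebroots(c) should give the same set
    return np.linalg.eigvals(colleague_matrix(c))
```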
## 5 True anomalies and trigonometric polynomials
The same steps described in Section 4 can be applied to look for the critical points of the squared distance function expressed in terms of the true anomalies \(f_{1}\), \(f_{2}\). Note that using true anomalies allows us to deal with both bounded and unbounded trajectories [20, 10].
We write the system
\[\nabla d^{2}(f_{1},f_{2})=\mathbf{0}\]
as
\[\begin{cases}\alpha\cos f_{1}+\beta\sin f_{1}+\gamma=0,\\ \kappa\cos^{2}f_{1}+\lambda\sin f_{1}\cos f_{1}+\mu\cos f_{1}+\nu\sin f_{1}+ \kappa=0,\end{cases} \tag{23}\]
where
\[\alpha =p_{1}(1+e_{2}\cos f_{2})(K\sin f_{2}-M(e_{2}+\cos f_{2}))+p_{2}e _{1}e_{2}\sin f_{2},\] \[\beta =p_{1}(1+e_{2}\cos f_{2})(L\sin f_{2}-N(e_{2}+\cos f_{2})),\] \[\gamma =p_{2}e_{2}\sin f_{2},\] \[\kappa =-p_{2}e_{1}(L\cos f_{2}+N\sin f_{2}),\] \[\lambda =p_{2}e_{1}(K\cos f_{2}+M\sin f_{2}),\] \[\mu =-p_{2}(1+e_{1}^{2})(L\cos f_{2}+N\sin f_{2}),\] \[\nu =p_{1}e_{1}(1+e_{2}\cos f_{2})+p_{2}(K\cos f_{2}+M\sin f_{2}).\]
We also set
\[\tilde{\kappa} =p_{2}(L\cos f_{2}+N\sin f_{2}),\] \[\tilde{\lambda} =p_{2}(K\cos f_{2}+M\sin f_{2}),\]
so that
\[\kappa=-e_{1}\tilde{\kappa}=\frac{e_{1}}{1+e_{1}^{2}}\mu,\qquad\lambda=e_{1} \tilde{\lambda},\qquad\nu=p_{1}e_{1}(1+e_{2}\cos f_{2})+\tilde{\lambda}.\]
Inserting relation
\[\sin f_{1}=-\frac{1}{\beta}(\alpha\cos f_{1}+\gamma) \tag{24}\]
into \(\cos^{2}f_{1}+\sin^{2}f_{1}-1=0\) and into the second equation of (23), we obtain
\[\begin{cases}(\alpha^{2}+\beta^{2})\cos^{2}f_{1}+2\alpha\gamma\cos f_{1}+\gamma ^{2}-\beta^{2}=0,\\ (\beta\kappa-\alpha\lambda)\cos^{2}f_{1}+(\beta\mu-\lambda\gamma-\alpha\nu) \cos f_{1}+\beta\kappa-\gamma\nu=0.\end{cases} \tag{25}\]
As in Section 4, we consider the Sylvester matrix of the two polynomials in (25)
\[\mathscr{T}=\left[\begin{array}{cccc}\alpha^{2}+\beta^{2}&0&\beta\kappa- \alpha\lambda&0\\ 2\alpha\gamma&\alpha^{2}+\beta^{2}&\beta\mu-\lambda\gamma-\alpha\nu&\beta \kappa-\alpha\lambda\\ \gamma^{2}-\beta^{2}&2\alpha\gamma&\beta\kappa-\gamma\nu&\beta\mu-\lambda \gamma-\alpha\nu\\ 0&\gamma^{2}-\beta^{2}&0&\beta\kappa-\gamma\nu\end{array}\right]\]
and define
\[\mathscr{H}(f_{2})=\det\mathscr{T},\]
that we are able to factorize. In particular, we can write
\[\mathscr{H}(f_{2})=(1+e_{2}\cos f_{2})^{2}\beta^{2}h(f_{2}),\]
where
\[\begin{split} h(f_{2})&=\tilde{\beta}^{4}\xi^{2}\mu^{2}(4 \eta^{2}-1)+2\tilde{\beta}^{3}\xi\mu\left[\lambda\gamma+\alpha\nu-2\eta(\alpha \lambda+\gamma\nu)\right]\\ &\quad+\tilde{\beta}^{2}(\alpha^{2}-\gamma^{2})\left[\lambda^{2}- \nu^{2}+\mu^{2}(4\eta^{2}-1)\right]\\ &\quad-2\mu\tilde{\beta}\tilde{\alpha}^{2}\xi^{2}\left[\tilde{ \alpha}(\lambda\eta-\nu)-3\eta e_{1}^{3}p_{1}\gamma\right]+\mu^{2}\tilde{ \alpha}^{2}\left[\gamma(1-2\eta e_{1})-\eta\tilde{\alpha}\xi\right]^{2}\\ &\quad-2\mu\tilde{\beta}p_{1}e_{1}(1-e_{1}^{2})\eta\gamma^{2} \left[3e_{1}\tilde{\alpha}\xi-\gamma(1-e_{1}^{2})\right]-(\alpha^{2}-\gamma^{ 2})(\nu\tilde{\alpha}+p_{1}e_{1}^{2}\gamma)^{2},\end{split} \tag{26}\]
with
\[\begin{split}\eta&=\frac{e_{1}}{1+e_{1}^{2}},\qquad \qquad\qquad\qquad\qquad\qquad\xi=1+e_{2}\cos f_{2},\\ \tilde{\alpha}&=p_{1}[K\sin f_{2}-M(e_{2}+\cos f_{2}) ],\qquad\qquad\tilde{\beta}=p_{1}[L\sin f_{2}-N(e_{2}+\cos f_{2})].\end{split}\]
We can show that \(h(f_{2})\) has degree \(8\) in \((\cos f_{2},\sin f_{2})\). The related computations are displayed in Appendix B.
Let us set
\[\mathfrak{h}(x,y)=h(f_{2}),\]
where
\[x=\cos f_{2},\qquad y=\sin f_{2}.\]
We find that
\[\mathfrak{h}(x,y)=\sum_{j=0}^{6}h_{j}(x)y^{j}\]
for some polynomial coefficients \(h_{j}\) such that
\[\begin{split}\deg h_{0}&=8,\quad\deg h_{1}=7,\quad \deg h_{2}=6,\\ \deg h_{3}&=5,\quad\deg h_{4}=4,\quad\deg h_{5}=3, \quad\deg h_{6}=2.\end{split}\]
Then, we consider the system
\[\begin{cases}\mathfrak{h}(x,y)=0,\\ x^{2}+y^{2}-1=0.\end{cases} \tag{27}\]
Proceeding as in Section 4 we can substitute \(\mathfrak{h}(x,y)\) with
\[\tilde{\mathfrak{h}}(x,y)=\mathfrak{a}(x)y+\mathfrak{b}(x), \tag{28}\]
with
\[\mathfrak{a}(x) =a_{0}(x)+x^{2}a_{2}(x)+x^{4}a_{4}(x),\] \[\mathfrak{b}(x) =b_{0}(x)+x^{2}b_{2}(x)+x^{4}b_{4}(x)+x^{6}b_{6}(x),\]
where
\[a_{0} =h_{1}+h_{3}+h_{5}, a_{2} =-h_{3}-2h_{5}, a_{4} =h_{5},\] \[b_{0} =h_{0}+h_{2}+h_{4}+h_{6}, b_{2} =-h_{2}-2h_{4}-3h_{6}, b_{4} =h_{4}+3h_{6}, b_{6} =-h_{6}.\]
We apply resultant theory to eliminate the dependence on \(y\) as in Section 4 and obtain a univariate polynomial \(\mathfrak{v}\) of degree 16. The real roots of \(\mathfrak{v}\) with absolute value \(\leq 1\) correspond to the values of \(\cos f_{2}\) we are searching for. We compute \(\sin f_{2}\) from (28) and substitute \(\cos f_{2}\) and \(\sin f_{2}\) in (25). Finally, \(\cos f_{1}\) and \(\sin f_{1}\) are found by solving (25) and using (24).
### Angular shifts
An angular shift can also be applied to the method presented in this section. If we define the new shifted angle \(v_{2}\) by
\[v_{2}=f_{2}-s_{2},\]
for some \(s_{2}\in[0,2\pi)\), the coefficients of the polynomial (26) written in terms of \(v_{2}\) are derived following the computations of Appendix C. Then, following a procedure analogous to that of Section 5, we find the values of \(v_{2}\) and shift back to get \(f_{2}\). Finally, we can also apply an angular shift to the angle \(f_{1}\) when solving system (25). Defining the shifted angle as
\[v_{1}=f_{1}-s_{1},\]
for \(s_{1}\in[0,2\pi)\), system (25) becomes
\[\begin{cases}A\cos v_{1}+B\sin v_{1}+C=0,\\ D\cos^{2}v_{1}+E\sin v_{1}\cos v_{1}+F\cos v_{1}+G\sin v_{1}+H=0,\end{cases} \tag{29}\]
where
\[A =\alpha\cos s_{1}+\beta\sin s_{1},\] \[B =\beta\cos s_{1}-\alpha\sin s_{1},\] \[C =\gamma,\] \[D =\kappa\cos^{2}s_{1}-\kappa\sin^{2}s_{1}+2\lambda\sin s_{1}\cos s _{1},\] \[E =\lambda\cos^{2}s_{1}-\lambda\sin^{2}s_{1}-2\kappa\sin s_{1}\cos s _{1},\] \[F =\mu\cos s_{1}+\nu\sin s_{1},\] \[G =\nu\cos s_{1}-\mu\sin s_{1},\] \[H =\kappa\sin^{2}s_{1}-\lambda\sin s_{1}\cos s_{1}+\kappa,\]
with \(\alpha,\beta,\gamma,\kappa,\lambda,\mu,\nu\) defined at the beginning of this section.
## 6 Numerical tests
We have developed Fortran codes for each of the methods presented in this paper. We denote these methods by (OE, OES, TE, TEC, TT, TTS); see Table 1. Moreover, we denote by (OT) the method presented in [10]. Numerical tests have been carried out on pairs of bounded trajectories to compare the different methods.
Taking the NEA catalogue available at [https://newton.spacedys.com/neodys/](https://newton.spacedys.com/neodys/), we applied these methods to compute the critical points of the squared distance between each NEA and the Earth, and between all possible pairs of NEAs. We applied a few simple checks to detect errors in the results:
* Weierstrass check (W): for each pair of trajectories we have to find at least one maximum and one minimum point;
* Morse check (M): for each pair of trajectories, let \(N\) be the total number of critical points, and \(M\) and \(m\) be the number of maximum and minimum points, respectively. Then (assuming \(d^{2}\) is a Morse function) we must have \(N=2(M+m)\);
* Minimum distance check (\(d_{\min}\)): we sample the two trajectories with \(k\) uniformly distributed points each (we used \(k=10\)), and compute the distance between each pair of points. We check that the minimum value of \(d\) computed through this sampling is greater than the value of \(d_{\min}\) obtained with our methods.
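A minimal sketch of these checks is given below. It takes any callable `d2(u1, u2)` implementing the squared distance and the list of computed critical points; the classification of each point through a finite-difference Hessian is our own shortcut for illustration and is not the procedure used in the Fortran codes, and the function names are ours.

```python
# Illustrative sketch of the three consistency checks of Section 6.
import numpy as np

def classify(d2, u1, u2, h=1e-4):
    """Classify a critical point of d^2 via a finite-difference Hessian."""
    fxx = (d2(u1 + h, u2) - 2*d2(u1, u2) + d2(u1 - h, u2)) / h**2
    fyy = (d2(u1, u2 + h) - 2*d2(u1, u2) + d2(u1, u2 - h)) / h**2
    fxy = (d2(u1 + h, u2 + h) - d2(u1 + h, u2 - h)
           - d2(u1 - h, u2 + h) + d2(u1 - h, u2 - h)) / (4*h**2)
    if fxx*fyy - fxy**2 < 0:
        return "saddle"
    return "min" if fxx > 0 else "max"

def run_checks(d2, crit, k=10):
    types = [classify(d2, u1, u2) for (u1, u2) in crit]
    n_min, n_max = types.count("min"), types.count("max")
    weierstrass_ok = n_min >= 1 and n_max >= 1                  # check (W)
    morse_ok = len(crit) == 2*(n_min + n_max)                   # check (M)
    # check (d_min): a coarse k x k sampling must not beat the computed minimum
    d_min = min(np.sqrt(d2(u1, u2)) for (u1, u2) in crit)
    grid = np.linspace(0.0, 2*np.pi, k, endpoint=False)
    sampled = min(np.sqrt(d2(u1, u2)) for u1 in grid for u2 in grid)
    return weierstrass_ok, morse_ok, sampled >= d_min
```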
For each method a small percentage of cases fail due to some of the errors above. However, in our tests, for each pair of orbits, at least one method passes all three checks.
The angular shifts (see Sections 3.1, 5.1) resolve the majority of the errors detected for the methods of Sections 3 and 5 without shift (OE, TT). Applying a shift could also resolve most of the errors detected by the method of Section 4 (TE, TEC).
Some data on the detected errors for each method are reported in Table 1, where we show the percentages of cases failing each of the three checks described above. We note that the NEA catalogue contains 31,563 NEAs with bounded orbits (as of February 25, 2023). Therefore, for the NEA-Earth test we are considering 31,563 pairs of orbits, while for the NEA-NEA test the total number of pairs is 498,095,703. From Table 1 we see that the method TE is improved by the use of Chebyshev's polynomials (TEC). In the same way, the methods OE, TT are improved by applying angular shifts in case of detected errors (OES, TTS). Indeed, the method TTS turns out to be the most reliable for the computation of \(d_{\min}\).
\begin{table}
\begin{tabular}{l l|c c c|c c c}
\multicolumn{2}{c|}{**algorithm**} & \multicolumn{3}{c|}{**NEA – Earth**} & \multicolumn{3}{c}{**NEA – NEA**} \\
 & & W & M & \(d_{\min}\) & W & M & \(d_{\min}\) \\ \hline
Ord poly, true anom & (OT) & 0 & 0 & 0 & \(4\cdot 10^{-6}\) & \(4\cdot 10^{-6}\) & \(4\cdot 10^{-6}\) \\
Ord poly, ecc anom & (OE) & 0.0095 & 0.0221 & 0 & 0.0008 & 0.0633 & 0.0005 \\
Ord poly, ecc anom, shift & (OES) & 0 & 0 & 0 & \(1.1\cdot 10^{-5}\) & \(1.7\cdot 10^{-5}\) & \(4\cdot 10^{-6}\) \\
Trig poly, ecc anom & (TE) & 0 & 0.5732 & 0 & 0.0004 & 0.5234 & 0.0013 \\
Trig poly, ecc anom, Cheb & (TEC) & 0 & 0.0095 & 0 & 0.0003 & 0.0216 & 0.0003 \\
Trig poly, true anom & (TT) & 0 & 0.0253 & 0 & 0.0086 & 0.0408 & 0.0025 \\
Trig poly, true anom, shift & (TTS) & 0 & 0 & 0 & \(1.3\cdot 10^{-5}\) & 0.0006 & 0 \\ \hline
\end{tabular}
\end{table}
Table 1: Percentages of detected errors with each method applied to the computation of all critical points between all NEAs and the Earth and between all pairs of NEAs.
Two additional ways to check in particular the computation of \(d_{\rm min}\) are discussed below.
### Reliability test for \(d_{\rm min}\)
Although all the presented methods allow us to find all the critical points of \(d^{2}\), we are particularly interested in the correct computation of the minimum distance \(d_{\rm min}\). For this reason, we introduce two different tests to check whether the computed values of \(d_{\rm min}\) are reliable.
The first test is based on the results of [13], where the authors found optimal upper bounds for \(d_{\rm min}\) when one orbit is circular. Let us denote with \({\cal A}_{1}\) and \({\cal A}_{2}\) the two trajectories. Assume that \({\cal A}_{2}\) is circular with orbital radius \(r_{2}\), and call \(q_{1},e_{1},i_{1},\omega_{1}\) the pericenter distance, eccentricity, inclination and argument of pericenter of \({\cal A}_{1}\). Moreover, set
\[{\cal C}=[0,1]\times[0,\pi/2],\qquad{\cal D}=[0,q_{\rm max}]\times[0,\pi/2], \tag{30}\]
where we used \(q_{\rm max}=1.3\), which is the maximum perihelion distance of near-Earth objects. Then, for each choice of \((q_{1},\omega_{1})\in{\cal D}\) we have
\[\max_{(e_{1},i_{1})\in{\cal C}}d_{\rm min}=\max\{r_{2}-q_{1},\delta(q_{1}, \omega_{1})\}, \tag{31}\]
where \(\delta(q_{1},\omega_{1})\) is the distance between \({\cal A}_{1}\) and \({\cal A}_{2}\) with \(e_{1}=1,i_{1}=\pi/2\):
\[\delta(q,\omega)=\sqrt{(\xi-r_{2}\sin\omega)^{2}+\left(\frac{\xi^{2}-4q^{2}}{4 q}+r_{2}\cos\omega\right)^{2}}, \tag{32}\]
with \(\xi=\xi(q,\omega)\) the unique real solution of
\[x^{3}+4q(q+\cos\omega)x-8r_{2}q^{2}\sin\omega=0.\]
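For completeness, the bound (31)-(32) can be evaluated with a few lines of code; the sketch below uses illustrative function names and assumes \(q>0\) and \(0<\omega\leq\pi/2\), solving the cubic for \(\xi\) with `numpy.roots` and keeping its real root.

```python
# Illustrative sketch of the optimal bound (31)-(32) for a circular second orbit.
import numpy as np

def delta_circular(q, omega, r2):
    """delta(q, omega) of eq. (32); xi is the real root of the cubic above."""
    roots = np.roots([1.0, 0.0, 4*q*(q + np.cos(omega)), -8*r2*q**2*np.sin(omega)])
    xi = roots[np.abs(roots.imag) < 1e-9].real[0]
    return np.sqrt((xi - r2*np.sin(omega))**2
                   + ((xi**2 - 4*q**2)/(4*q) + r2*np.cos(omega))**2)

def max_dmin_circular(q, omega, r2=1.0):
    """Right-hand side of eq. (31): the maximum of d_min over (e_1, i_1)."""
    return max(r2 - q, delta_circular(q, omega, r2))
```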
We compare this optimal bound with the maximum values of \(d_{\rm min}\) computed with the method OE, for a grid of values in the \((q_{1},\omega_{1})\) plane. The results are reported in Figure 1. Here we see that the maximum values of \(d_{\rm min}\) obtained through our computation appear to lie on the grey surface corresponding to the graph of \(\max_{\cal C}d_{\rm min}(q_{1},\omega_{1})\) defined in (31). This test confirms the reliability of our computations. Similar checks were successful with all the methods of Table 1.
To test our results also in case of two elliptic orbits, we consider the following bound introduced in [11] for the nodal distance \(\delta_{\rm nod}\) defined below. Let
\[r_{+}=\frac{q_{1}(1+e_{1})}{1+e_{1}\cos\omega_{1}^{u}}, r_{-}=\frac{q_{1}(1+e_{1})}{1-e_{1}\cos\omega_{1}^{u}},\] \[r_{+}^{\prime}=\frac{q_{2}(1+e_{2})}{1+e_{2}\cos\omega_{2}^{u}}, r_{-}^{\prime}=\frac{q_{2}(1+e_{2})}{1-e_{2}\cos\omega_{2}^{u}},\]
where \(q_{2}\), \(e_{2}\) are the pericenter distance and eccentricity of \({\cal A}_{2}\), and \(\omega_{1}^{u}\), \(\omega_{2}^{u}\) are the mutual arguments of pericenter (see [11]).
We introduce the ascending and descending nodal distances
\[d_{\rm nod}^{+}=r_{+}^{\prime}-r_{+},\qquad d_{\rm nod}^{-}=r_{-}^{\prime}-r_ {-}.\]
The (minimal) nodal distance \(\delta_{\rm nod}\) is defined as
\[\delta_{\rm nod}=\min\bigl{\{}|d_{\rm nod}^{+}|,|d_{\rm nod}^{-}|\bigr{\}}. \tag{33}\]
Set
\[\mathcal{C}^{\prime}=[0,1]\times[0,\pi].\]
For each choice of \((q_{1},\omega_{1})\in\mathcal{D}\), defined as in (30), we have
\[\max_{(e_{1},\omega_{2}^{\omega})\in\mathcal{C}^{\prime}}\delta_{\mathrm{nod}}= \max\bigl{\{}u_{\mathrm{int}}^{\omega},u_{\mathrm{ext}}^{\omega},u_{\mathrm{ link}}^{\omega}\bigr{\}}, \tag{34}\]
where, denoting by \(Q_{2}\) the apocenter distance of \(\mathcal{A}_{2}\) and by \(p_{2}=q_{2}(1+e_{2})\) its conic parameter,
\[u_{\mathrm{int}}^{\omega}(q,\omega)=p_{2}-q,\]
\[u_{\mathrm{ext}}^{\omega}(q,\omega)=\min\Bigl{\{}\frac{2q}{1-\cos\omega}- \frac{p_{2}}{1-\hat{\xi}_{*}^{\prime}},\ \frac{2q}{1+\cos\omega}-q_{2}\Bigr{\}},\]
\[u_{\mathrm{link}}^{\omega}(q,\omega)=\min\left\{Q_{2}-\frac{q(1+\hat{e}_{*}) }{1+\hat{e}_{*}\cos\omega},\frac{2q}{1-\cos\omega}-q_{2}\right\},\]
with
\[\hat{\xi}_{*}^{\prime}=\min\{\xi_{*}^{\prime},e_{2}\},\qquad\xi_{*}^{\prime} (q,\omega)=\frac{4q\cos\omega}{p_{2}\sin^{2}\omega+\sqrt{p_{2}^{2}\sin^{4} \omega+16q^{2}\cos^{2}\omega}},\]
and
\[\hat{e}_{*}=\max\bigl{\{}0,\min\{e_{*},1\}\bigr{\}},\] \[e_{*}(q,\omega)=\frac{2(p_{2}-q(1-e_{2}^{2}))}{q(1-e_{2}^{2})+ \sqrt{q^{2}(1-e_{2}^{2})^{2}+4p_{2}\cos^{2}\omega(p_{2}-q(1-e_{2}^{2}))}}.\]
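The bound (34) can be evaluated directly from these formulas; the sketch below uses an illustrative function name, assumes \(0<\omega<\pi\) so that no denominator vanishes, and reproduces the expressions as written, for parameter values where the square roots are real.

```python
# Illustrative sketch of the bound (34) on the nodal distance for two ellipses.
import numpy as np

def max_nodal_distance(q, omega, q2, e2):
    p2 = q2*(1 + e2)                        # conic parameter of A_2
    Q2 = q2*(1 + e2)/(1 - e2)               # apocenter distance of A_2
    co, so = np.cos(omega), np.sin(omega)
    xi_star = 4*q*co / (p2*so**2 + np.sqrt(p2**2*so**4 + 16*q**2*co**2))
    xi_hat = min(xi_star, e2)
    e_star = 2*(p2 - q*(1 - e2**2)) / (q*(1 - e2**2)
             + np.sqrt(q**2*(1 - e2**2)**2 + 4*p2*co**2*(p2 - q*(1 - e2**2))))
    e_hat = max(0.0, min(e_star, 1.0))
    u_int = p2 - q
    u_ext = min(2*q/(1 - co) - p2/(1 - xi_hat), 2*q/(1 + co) - q2)
    u_link = min(Q2 - q*(1 + e_hat)/(1 + e_hat*co), 2*q/(1 - co) - q2)
    return max(u_int, u_ext, u_link)
```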
Figure 1: Comparison between the graph of \(\max_{(e_{1},i_{1})}d_{\min}(q,\omega)\) defined in (31) and the maximum values of \(d_{\min}\) computed with the algorithm of Section 3 for a grid of values of \((q_{1},\omega_{1})\).
We compare the computed values of \(d_{\min}\) with the bound (34) on the maximum nodal distance. The results are displayed in Figure 2 where, for four different values of \(e_{2}\), the grey surface represents the bound of [11], while the black dots correspond to the maximum value of \(d_{\rm min}\) for a grid in the \((q_{1},\omega_{1})\) plane computed with the method OE. Since the value of \(\delta_{\rm nod}\) is always greater than or equal to the value of \(d_{\rm min}\), for the test to be satisfied, we need all the black dots to fall below or lie on the grey surface. From Figure 2 we see that this is indeed what happens.
Similar checks done with the methods OT, OE, OES, TT, TTS were successful.
## 7 The planar case
Let us consider the case of two coplanar conics parametrized by the true anomalies \(f_{1}\), \(f_{2}\). Then, \((f_{1},f_{2})\) is a critical point of \(d^{2}\) iff \(d^{2}(f_{1},f_{2})=0\) or the tangent vectors
\[\boldsymbol{\tau}_{1}(f_{1})=\frac{\partial\mathcal{X}_{1}}{\partial f_{1}}, \qquad\boldsymbol{\tau}_{2}(f_{2})=\frac{\partial\mathcal{X}_{2}}{\partial f _{2}}\]
to the first and second conic at \(\mathcal{X}_{1}(f_{1})\) and \(\mathcal{X}_{2}(f_{2})\), respectively, are parallel. If one trajectory, say the second one, is circular, then the tangent vector \(\boldsymbol{\tau}_{2}\) is orthogonal to the position vector \(\mathcal{X}_{2}\) for any value of \(f_{2}\). Therefore, to find critical points that do not correspond to trajectory intersections, it is enough to look for values of \(f_{1}\) such that
Figure 2: Comparison of the maximum MOID obtained with the method of Section 3 and the bound on the nodal distance found in [11]. These plots were drawn using values of \(e_{2}\) equal to \(0.2\) (top-left), \(0.3\) (top-right), \(0.4\) (bottom-left) and \(0.5\) (bottom-right).
\(\mathbf{r}_{1}\cdot\boldsymbol{\tau}_{1}=0\). By symmetry, we can write
\[\mathcal{X}_{1}=\begin{pmatrix}r_{1}\cos f_{1}\\ r_{1}\sin f_{1}\end{pmatrix},\qquad\text{with}\quad r_{1}=\frac{p_{1}}{1+e_{1} \cos f_{1}},\]
that is we can assume \(\omega_{1}=0\). Thus, up to a multiplicative factor, we have
\[\boldsymbol{\tau}_{1}=\begin{pmatrix}-\sin f_{1}\\ \cos f_{1}+e_{1}\end{pmatrix},\]
so that
\[\mathbf{r}_{1}\cdot\boldsymbol{\tau}_{1}=\frac{p_{1}e_{1}\sin f_{1}}{1+e_{1} \cos f_{1}}=0\]
is satisfied iff \(f_{1}=0,\pi\). Therefore, in general, we have the four critical points
\[(\bar{f}_{1},\bar{f}_{2})=(0,0),(0,\pi),(\pi,0),(\pi,\pi).\]
We may have at most two additional critical points that correspond to trajectory intersections, see [12, Sect. 7.1]. In conclusion, the maximum number of critical points with a circular and an elliptic trajectory in the planar case is \(6\).
We consider now the case of two ellipses. The position vectors can be written as
\[\mathcal{X}_{1}=\begin{pmatrix}r_{1}\cos f_{1}\\ r_{1}\sin f_{1}\end{pmatrix},\qquad\text{with}\quad r_{1}=\frac{p_{1}}{1+e_{1} \cos f_{1}}\]
and
\[\mathcal{X}_{2}=\begin{pmatrix}r_{2}\cos(f_{2}+\omega_{2})\\ r_{2}\sin(f_{2}+\omega_{2})\end{pmatrix},\qquad\text{with}\quad r_{2}=\frac{p_ {2}}{1+e_{2}\cos f_{2}}.\]
Up to a multiplicative factor, we have
\[\boldsymbol{\tau}_{1}=\begin{pmatrix}-\sin f_{1}\\ \cos f_{1}+e_{1}\end{pmatrix},\qquad\boldsymbol{\tau}_{2}=\begin{pmatrix}-\sin (f_{2}+\omega_{2})-e_{2}\sin\omega_{2}\\ \cos(f_{2}+\omega_{2})+e_{2}\cos\omega_{2}\end{pmatrix}.\]
The critical points that do not correspond to trajectory intersections are given by the values of \(f_{1},f_{2}\) such that \(\boldsymbol{\tau}_{1},\boldsymbol{\tau}_{2}\) are parallel and both orthogonal to \(\mathcal{X}_{2}-\mathcal{X}_{1}\). These two conditions lead to the system
\[(\cos f_{1}+e_{1})[\sin(f_{2}+\omega_{2})+e_{2}\sin\omega_{2}]-\sin f_{1}[\cos(f_{2}+\omega_{2})+e_{2}\cos\omega_{2}]=0, \tag{35}\]
\[(\cos f_{1}+e_{1})[r_{2}\sin(f_{2}+\omega_{2})-r_{1}\sin f_{1}]-\sin f_{1}[r_{2}\cos(f_{2}+\omega_{2})-r_{1}\cos f_{1}]=0. \tag{36}\]
Multiplying (35) by \(r_{2}\) and subtracting (36) we get
\[p_{1}e_{1}(1+e_{2}\cos f_{2})\sin f_{1}+p_{2}e_{2}(1+e_{1}\cos f_{1})(\cos f_{ 1}\sin\omega_{2}-\sin f_{1}\cos\omega_{2}+e_{1}\sin\omega_{2})=0. \tag{37}\]
Equations (35) and (37) can be written as
\[\alpha\cos f_{2}+\beta\sin f_{2}+\alpha e_{2}=0, \tag{38}\]
\[\mu\cos f_{2}+\delta=0, \tag{39}\]
where
\[\alpha =\sin(\omega_{2}-f_{1})+e_{1}\sin\omega_{2},\] \[\beta =\cos(\omega_{2}-f_{1})+e_{1}\cos\omega_{2},\] \[\mu =p_{1}e_{1}e_{2}\sin f_{1},\] \[\delta =p_{1}e_{1}\sin f_{1}+p_{2}e_{2}(1+e_{1}\cos f_{1})(\cos f_{1} \sin\omega_{2}-\sin f_{1}\cos\omega_{2}+e_{1}\sin\omega_{2}).\]
\begin{table}
\begin{tabular}{c c c c c} \hline \(q\) & \(e\) & \(i\) & \(\Omega\) & \(\omega\) \\ \hline
0.16582 & 0.84577 & 0 & 0 & 9.09466 \\
1 & 0.2 & 0 & 0 & 10 \\ \hline \end{tabular}
\end{table}
Table 2: Cometary orbital elements of a pair of coplanar elliptic orbits giving 10 critical points. Angles are in degrees.
Figure 3: Level curves of the squared distance for the case reported in Table 2. The position of the critical points is highlighted: saddle points are represented by black asterisks, while the red and blue crosses correspond to maximum and minimum points, respectively.
\begin{table}
\begin{tabular}{c c|c|c} \hline \(u_{1}\) & \(u_{2}\) & \(d\) & Type \\ \hline
116.0625325 & 153.9899286 & 0.0000000 & MINIMUM \\
243.6382848 & 203.6865581 & 0.0000000 & MINIMUM \\
179.8948964 & 178.9198966 & 0.4845432 & SADDLE \\
1.6247542 & 2.0946456 & 0.8341185 & MINIMUM \\
24.0090191 & 38.3799855 & 0.8401907 & SADDLE \\
334.2162041 & 317.5202237 & 0.8445898 & SADDLE \\
324.5270438 & 126.4762243 & 1.6264123 & SADDLE \\
34.8254033 & 231.0377067 & 1.6334795 & SADDLE \\
0.9077692 & 180.7796090 & 1.6658557 & MAXIMUM \\
179.9346562 & 358.9929507 & 2.9845260 & MAXIMUM \\ \hline \end{tabular}
\end{table}
Table 3: Critical points and critical values for the pair of orbits displayed in Table 2.
From (38) we obtain
\[\sin f_{2}=-\frac{\alpha}{\beta}(\cos f_{2}+e_{2}),\]
which is replaced into relation \(\cos^{2}f_{2}+\sin^{2}f_{2}-1=0\) to give
\[(\alpha^{2}+\beta^{2})\cos^{2}f_{2}+2\alpha^{2}e_{2}\cos f_{2}+e_{2}^{2}\alpha^ {2}-\beta^{2}=0. \tag{40}\]
Moreover, after replacing in (40)
\[\cos f_{2}=-\frac{\delta}{\mu},\]
which follows from (39), we obtain
\[(\alpha^{2}+\beta^{2})\delta^{2}-2e_{2}\alpha^{2}\delta\mu+\mu^{2}(e_{2}^{2} \alpha^{2}-\beta^{2})=0. \tag{41}\]
Since
\[\alpha^{2}+\beta^{2}=1+e_{1}^{2}+2e_{1}\cos f_{1},\]
the trigonometric polynomial in (41) has, in general, degree 5 in \(\cos f_{1}\), \(\sin f_{1}\). Therefore, we cannot have more than 10 critical points which do not correspond to trajectory intersections. It follows that the maximum number of critical points of \(d^{2}\) (including intersections) for two elliptic orbits in the planar case is at most 12. However, we remark that this bound has never been reached in our numerical tests, where we found at most 10 critical points, which we conjecture to be the maximum number. This conjecture adds a new question to Problem 8 in [1].
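A minimal numerical sketch of this construction is given below (the function name is ours): it locates the sign changes of the degree-5 trigonometric polynomial (41) on a fine grid in \(f_{1}\), refines them by bisection, and recovers \(f_{2}\) from (39) and (38). Crossing points (\(d=0\)) and the degenerate cases \(\mu=0\) or \(\beta=0\) are not handled here.

```python
# Illustrative sketch of the planar case of Section 7 (two ellipses): critical
# points that do not correspond to trajectory crossings, from equations (38)-(41).
import numpy as np

def planar_critical_points(p1, e1, p2, e2, om2, n_grid=2000):
    def coeffs(f1):
        al = np.sin(om2 - f1) + e1*np.sin(om2)
        be = np.cos(om2 - f1) + e1*np.cos(om2)
        mu = p1*e1*e2*np.sin(f1)
        de = (p1*e1*np.sin(f1) + p2*e2*(1 + e1*np.cos(f1))
              * (np.cos(f1)*np.sin(om2) - np.sin(f1)*np.cos(om2) + e1*np.sin(om2)))
        return al, be, mu, de
    def F(f1):                                  # left-hand side of (41)
        al, be, mu, de = coeffs(f1)
        return (al**2 + be**2)*de**2 - 2*e2*al**2*de*mu + mu**2*(e2**2*al**2 - be**2)
    grid = np.linspace(0.0, 2*np.pi, n_grid + 1)
    points = []
    for a, b in zip(grid[:-1], grid[1:]):
        fa, fb = F(a), F(b)
        if fa == 0.0 or fa*fb > 0.0:
            continue
        for _ in range(60):                     # plain bisection
            m = 0.5*(a + b)
            if fa*F(m) <= 0.0:
                b = m
            else:
                a, fa = m, F(m)
        f1 = 0.5*(a + b)
        al, be, mu, de = coeffs(f1)
        cf2 = -de/mu                            # from (39)
        if abs(cf2) > 1.0 + 1e-9:               # spurious root of (41)
            continue
        sf2 = -al/be*(cf2 + e2)                 # from (38)
        points.append((f1, np.arctan2(sf2, cf2) % (2*np.pi)))
    return points
```

Applied to the elements of Table 2 (after converting \(q,e\) to the conic parameter \(p=q(1+e)\), taking for \(\omega_{2}\) the difference of the two arguments of pericenter, and converting degrees to radians), this sketch should recover the non-crossing critical points listed in Table 3.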
In Table 2 we report a set of orbital elements giving 10 critical points. In Figure 3 we draw the level curves of \(d^{2}\) as a function of the eccentric anomalies \(u_{1},u_{2}\), highlighting the position of each critical point: we use an asterisk for saddle points, and crosses for local extrema. Finally, the critical points, the corresponding values of \(d\) and their type (minimum, maximum, saddle) are displayed in Table 3.
## 8 Conclusions
In this work we investigate different approaches for the computation of the critical points of the squared distance function \(d^{2}\), with particular attention to the minimum values. We focus on the case of bounded trajectories. Two algebraic approaches are used: the first employs ordinary polynomials, the second trigonometric polynomials. In both cases we detail all the steps needed to reduce the problem to the computation of the roots of a univariate polynomial of minimal degree (namely 16, in the general case). The different methods are compared through numerical tests using the orbits of all the known near-Earth asteroids. We also perform some reliability tests of the results, which make use of known optimal bounds on the orbit distance. Finally, we improve the theoretical bound on the number of critical points in the planar case, and refine the related conjecture.
## 9 Acknowledgments
We wish to thank Leonardo Robol for his useful comments and suggestions. The authors have been partially supported through the H2020 MSCA ETN Stardust-Reloaded, Grant Agreement n. 813644. The authors also acknowledge the project MIUR-PRIN 20178CJA2B "New frontiers of Celestial Mechanics: theory and applications" and the GNFM-INdAM (Gruppo Nazionale per la Fisica Matematica).
## Appendix A Coefficients of the shifted polynomials for the method with ordinary polynomials and eccentric anomalies
The coefficients of system (10) are
\[\tilde{\alpha} =\big{(}2(A_{1}-A_{3})\cos s_{1}\sin s_{1}-A_{10}\sin s_{1}-A_{11} \cos s_{1}+A_{12}\sin s_{1}+A_{8}\cos s_{1}\big{)}z^{4}\] \[\quad+\big{(}-8(A_{1}-A_{3})\cos^{2}s_{1}+2A_{10}\cos s_{1}-2A_{1 1}\sin s_{1}\] \[\quad-2A_{12}\cos s_{1}+2A_{8}\sin s_{1}+4(A_{1}-A_{3})\big{)}z^ {3}-12(A_{1}-A_{3})\sin s_{1}\cos s_{1}z^{2}\] \[\quad+\big{(}8(A_{1}-A_{3})\cos^{2}s_{1}+2A_{10}\cos s_{1}-2A_{1 1}\sin s_{1}-2A_{12}\cos s_{1}\] \[\quad+2A_{8}\sin s_{1}-4(A_{1}-A_{3})\big{)}z+2(A_{1}-A_{3})\cos s _{1}\sin s_{1}\] \[\quad+A_{10}\sin s_{1}+A_{11}\cos s_{1}-A_{12}\sin s_{1}-A_{8} \cos s_{1},\] \[\tilde{\beta} =(-2A_{7}\cos s_{1}+2A_{9}\sin s_{1})\,z^{4}+\left(-4A_{7}\sin s _{1}-4A_{9}\cos s_{1}\right)z^{3}\] \[\quad+\left(-4A_{7}\sin s_{1}-4A_{9}\cos s_{1}\right)z+2A_{7}\cos s _{1}-2A_{9}\sin s_{1},\] \[\tilde{\gamma} =\big{(}2(A_{1}-A_{3})\cos s_{1}\sin s_{1}+A_{10}\sin s_{1}-A_{1 1}\cos s_{1}+A_{12}\sin s_{1}-A_{8}\cos s_{1}\big{)}z^{4}\] \[\quad+\big{(}-8(A_{1}-A_{3})\cos^{2}s_{1}-2A_{10}\cos s_{1}-2A_{1 1}\sin s_{1}\] \[\quad-2A_{12}\cos s_{1}-2A_{8}\sin s_{1}+4(A_{1}-A_{3})\big{)}z^ {3}-12(A_{1}-A_{3})\sin s_{1}\cos s_{1}z^{2}\] \[\quad+\big{(}8(A_{1}-A_{3})\cos^{2}s_{1}-2A_{10}\cos s_{1}-2A_{1 1}\sin s_{1}-2A_{12}\cos s_{1}\] \[\quad-2A_{8}\sin s_{1}-4(A_{1}-A_{3})\big{)}z+2(A_{1}-A_{3})\cos s _{1}\sin s_{1}\] \[\quad-A_{10}\sin s_{1}+A_{11}\cos s_{1}-A_{12}\sin s_{1}+A_{8} \cos s_{1},\] \[\tilde{A} =(A_{7}\sin s_{1}+A_{9}\cos s_{1}-A_{13})\,z^{2}+\left(-2A_{7} \cos s_{1}+2A_{9}\sin s_{1}\right)z\] \[\quad-A_{7}\sin s_{1}-A_{9}\cos s_{1}-A_{13},\] \[\tilde{B} =\big{(}2A_{10}\cos s_{1}+2A_{8}\sin s_{1}-2A_{14}-4(A_{4}-A_{6} )\big{)}z^{2}+\left(4A_{10}\sin s_{1}-4A_{8}\cos s_{1}\right)z\] \[\quad-2A_{10}\cos s_{1}-2A_{8}\sin s_{1}-2A_{14}-4(A_{4}-A_{6}),\] \[\tilde{D} =\big{(}2A_{10}\cos s_{1}+2A_{8}\sin s_{1}-2A_{14}+4(A_{4}-A_{6} )\big{)}z^{2}+\left(4A_{10}\sin s_{1}-4A_{8}\cos s_{1}\right)z\] \[\quad-2A_{10}\cos s_{1}-2A_{8}\sin s_{1}-2A_{14}+4(A_{4}-A_{6}).\]
## Appendix B Factorization of \(\mathscr{H}(f_{2})\)
Let
\[\mathscr{T}=\left[\begin{array}{cccc}\alpha^{2}+\beta^{2}&0&\beta\kappa- \alpha\lambda&0\\ 2\alpha\gamma&\alpha^{2}+\beta^{2}&\beta\mu-\lambda\gamma-\alpha\nu&\beta\kappa -\alpha\lambda\\ \gamma^{2}-\beta^{2}&2\alpha\gamma&\beta\kappa-\gamma\nu&\beta\mu-\lambda \gamma-\alpha\nu\\ 0&\gamma^{2}-\beta^{2}&0&\beta\kappa-\gamma\nu\end{array}\right]\]
and define
\[\mathscr{H}(f_{2})=\det\mathscr{T}.\]
**Proposition 2**.: _We can extract the factor \(\beta^{2}(1+e_{2}\cos f_{2})^{2}\) from \(\det\mathscr{T}\)._
Proof.: We first prove that we can extract the factor \(\beta^{2}\).
Noting that
\[\kappa=\mu\eta,\qquad\eta=\frac{e_{1}}{1+e_{1}^{2}}, \tag{42}\]
we consider
\[\mathscr{T}=\left[\begin{array}{cccc}\alpha^{2}+\beta^{2}&0&\beta\mu\eta-\alpha \lambda&0\\ 2\alpha\gamma&\alpha^{2}+\beta^{2}&\beta\mu-\lambda\gamma-\alpha\nu&\beta\mu\eta- \alpha\lambda\\ \gamma^{2}-\beta^{2}&2\alpha\gamma&\beta\mu\eta-\gamma\nu&\beta\mu-\lambda \gamma-\alpha\nu\\ 0&\gamma^{2}-\beta^{2}&0&\beta\mu\eta-\gamma\nu\end{array}\right].\]
We can write \(\det\mathscr{T}\) as a sum of terms where the only one that is independent on \(\beta\) is
\[\left|\begin{array}{cccc}\alpha^{2}&0&-\alpha\lambda&0\\ 2\alpha\gamma&\alpha^{2}&-\lambda\gamma-\alpha\nu&-\alpha\lambda\\ \gamma^{2}&2\alpha\gamma&-\gamma\nu&-\lambda\gamma-\alpha\nu\\ 0&\gamma^{2}&0&-\gamma\nu\end{array}\right|,\]
which is equal to \(0\), as previously proved at the beginning of Proposition 1. The terms that are linearly dependent on \(\beta\) are given by
\[\left|\begin{array}{cccc}\alpha^{2}&0&-\alpha\lambda&0\\ 2\alpha\gamma&\alpha^{2}&-\lambda\gamma-\alpha\nu&\beta\mu\eta\\ \gamma^{2}&2\alpha\gamma&-\gamma\nu&\beta\mu\\ 0&\gamma^{2}&0&\beta\mu\eta\end{array}\right|,\qquad\left|\begin{array}{ ccccc}\alpha^{2}&0&\beta\mu\eta&0\\ 2\alpha\gamma&\alpha^{2}&\beta\mu&-\alpha\lambda\\ \gamma^{2}&2\alpha\gamma&\beta\mu\eta&-\lambda\gamma-\alpha\nu\\ 0&\gamma^{2}&0&-\gamma\nu\end{array}\right|,\]
and their sum is equal to \(0\), because the two determinants are opposite. Therefore, \(\mathscr{H}(f_{2})\) consists only of terms of degree higher than \(1\) in \(\beta\). Thus, we can write
\[\mathscr{H}(f_{2})=\mathfrak{D}_{1}+\mathfrak{D}_{2}+\mathfrak{D}_{3}+ \mathfrak{D}_{4}+\mathfrak{D}_{5},\]
where
\[\mathfrak{D}_{1}=\left|\begin{array}{cccc}\beta^{2}&0&\beta\mu\eta&0\\ 0&\beta^{2}&\beta\mu&\beta\mu\eta\\ -\beta^{2}&0&\beta\mu\eta&\beta\mu\\ 0&-\beta^{2}&0&\beta\mu\eta\end{array}\right|,\]
\[\mathfrak{D}_{2}=\left|\begin{array}{cccc}\beta^{2}&0&\beta\mu\eta&0\\ 0&\beta^{2}&\beta\mu&-\alpha\lambda\\ -\beta^{2}&0&\beta\mu\eta&-\lambda\gamma-\alpha\nu\\ 0&-\beta^{2}&0&-\gamma\nu\end{array}\right|+\left|\begin{array}{cccc}\beta ^{2}&0&-\alpha\lambda&0\\ 0&\beta^{2}&-\lambda\gamma-\alpha\nu&\beta\mu\eta\\ -\beta^{2}&0&-\gamma\nu&\beta\mu\\ 0&-\beta^{2}&0&\beta\mu\eta\end{array}\right|,\]
\[\mathfrak{D}_{3}=\left|\begin{array}{cccc}\beta^{2}&0&-\alpha\lambda&0\\ 0&\beta^{2}&-\lambda\gamma-\alpha\nu&-\alpha\lambda\\ -\beta^{2}&0&-\gamma\nu&-\lambda\gamma-\alpha\nu\\ 0&-\beta^{2}&0&-\gamma\nu\end{array}\right|+\left|\begin{array}{cccc}\alpha ^{2}&0&\beta\mu\eta&0\\ 2\alpha\gamma&\beta^{2}&\beta\mu&\beta\mu\eta\\ \gamma^{2}&0&\beta\mu\eta&\beta\mu\\ 0&-\beta^{2}&0&\beta\mu\eta\end{array}\right|\]
\[\mathfrak{D}_{4}= \left|\begin{array}{cccc}\alpha^{2}&0&\beta\mu\eta&0\\ 2\alpha\gamma&\beta^{2}&\beta\mu&-\alpha\lambda\\ \gamma^{2}&0&\beta\mu\eta&-\lambda\gamma-\alpha\nu\\ 0&-\beta^{2}&0&-\gamma\nu\end{array}\right|+\left|\begin{array}{cccc}\alpha ^{2}&0&-\alpha\lambda&0\\ 2\alpha\gamma&\beta^{2}&-\lambda\gamma-\alpha\nu&\beta\mu\eta\\ \gamma^{2}&0&-\gamma\nu&\beta\mu\\ 0&-\beta^{2}&0&\beta\mu\eta\end{array}\right|\] \[+\left|\begin{array}{cccc}\beta^{2}&0&\beta\mu\eta&0\\ 0&\alpha^{2}&\beta\mu&-\alpha\lambda\\ -\beta^{2}&2\alpha\gamma&\beta\mu\eta&-\lambda\gamma-\alpha\nu\\ 0&\gamma^{2}&0&-\gamma\nu\end{array}\right|+\left|\begin{array}{cccc}\beta ^{2}&0&-\alpha\lambda&0\\ 0&\alpha^{2}&-\lambda\gamma-\alpha\nu&\beta\mu\eta\\ -\beta^{2}&2\alpha\gamma&-\gamma\nu&\beta\mu\\ 0&\gamma^{2}&0&\beta\mu\eta\end{array}\right|,\] \[\mathfrak{D}_{5}= \left|\begin{array}{cccc}\alpha^{2}&0&-\alpha\lambda&0\\ 2\alpha\gamma&\beta^{2}&-\lambda\gamma-\alpha\nu&-\alpha\lambda\\ \gamma^{2}&0&-\gamma\nu&-\lambda\gamma-\alpha\nu\\ 0&-\beta^{2}&0&-\gamma\nu\end{array}\right|+\left|\begin{array}{cccc}\beta ^{2}&0&-\alpha\lambda&0\\ 0&\alpha^{2}&-\lambda\gamma-\alpha\nu&-\alpha\lambda\\ -\beta^{2}&2\alpha\gamma&-\gamma\nu&-\lambda\gamma-\alpha\nu\\ 0&\gamma^{2}&0&-\gamma\nu\end{array}\right|\] \[+\left|\begin{array}{cccc}\alpha^{2}&0&\beta\mu\eta-\alpha \lambda&0\\ 2\alpha\gamma&\alpha^{2}&\beta\mu-\lambda\gamma-\alpha\nu&\beta\mu\eta-\alpha \lambda\\ \gamma^{2}&2\alpha\gamma&\beta\mu\eta-\gamma\nu&\beta\mu-\lambda\gamma-\alpha \nu\\ 0&\gamma^{2}&0&\beta\mu\eta-\gamma\nu\end{array}\right|.\]
We have
\[\mathfrak{D}_{1} =\beta^{6}\mu^{2}(4\eta^{2}-1),\] \[\mathfrak{D}_{2} =2\beta^{5}\mu\left[\lambda\gamma+\alpha\nu-2\eta(\alpha\lambda+ \gamma\nu)\right],\] \[\mathfrak{D}_{3} =\beta^{4}(\alpha^{2}-\gamma^{2})\left[\lambda^{2}-\nu^{2}+\mu^{2 }(4\eta^{2}-1)\right],\] \[\mathfrak{D}_{4} =-2\beta^{3}\mu\left[\gamma^{3}\lambda-\alpha^{3}\nu+\eta(\alpha ^{3}\lambda-3\alpha\gamma^{2}\lambda+3\alpha^{2}\gamma\nu-\gamma^{3}\nu) \right],\] \[\mathfrak{D}_{5} =\beta^{2}\mu^{2}\left[\alpha\gamma-\eta(\alpha^{2}+\gamma^{2}) \right]^{2}-\beta^{2}(\alpha^{2}-\gamma^{2})(\lambda\gamma-\alpha\nu)^{2}.\]
**Remark 1**.: \(\mathscr{H}(f_{2})/\beta^{2}\) _is a trigonometric polynomial of degree 10 in \((\cos f_{2},\sin f_{2})\)._
Then, we show that \(\xi^{2}=(1+e_{2}\cos f_{2})^{2}\) is a factor of \(\mathscr{H}(f_{2})/\beta^{2}\). For this purpose, using the definitions of \(\alpha\), \(\beta\), \(\nu\), \(\lambda\), \(\tilde{\alpha}\), \(\tilde{\beta}\), \(\tilde{\lambda}\) given in Section 5, we write
\[\alpha =\xi\tilde{\alpha}+e_{1}\gamma, \tag{43}\] \[\beta =\xi\tilde{\beta},\] \[\lambda =e_{1}\tilde{\lambda},\] (44) \[\nu =p_{1}e_{1}\xi+\tilde{\lambda}. \tag{45}\]
The factor \(\beta^{2}\) can still be extracted from \((\mathfrak{D}_{1}+\mathfrak{D}_{2}+\mathfrak{D}_{3})/\beta^{2}\); therefore, since \(\beta=\xi\tilde{\beta}\), also \(\xi^{2}\) is a factor of this polynomial. Consider now \(\mathfrak{D}_{4}/\beta^{2}\) and write it as
\[\mathfrak{D}_{4}/\beta^{2}=6\mu\beta\alpha\gamma(\gamma\lambda-\alpha\nu)-2 \mu\beta\left[\lambda(\gamma^{3}+\eta\alpha^{3})-\nu(\alpha^{3}+\eta\gamma^{3} )\right].\]
Noting that
\[\gamma\lambda-\alpha\nu=-\xi(\nu\tilde{\alpha}+p_{1}e_{1}^{2}\gamma) \tag{46}\]
and
\[\gamma^{3}\left[\lambda(1+\eta e_{1}^{3})-\nu(e_{1}^{3}+\eta)\right]=-\gamma^{ 3}(1+\eta e_{1}^{3})\xi p_{1}e_{1}^{2},\]
where we used the expressions of \(\alpha\), \(\lambda\), \(\nu\) in (43), (44), (45) and the definition of \(\eta\) in (42), we prove that \(\xi^{2}\) factors \(\mathfrak{D}_{4}/\beta^{2}\).
Finally, using relation (46) and
\[\alpha\gamma-\eta(\alpha^{2}+\gamma^{2})=\xi\tilde{\alpha}\left[\gamma(1-2 \eta e_{1})-\xi\eta\tilde{\alpha}\right],\]
we show that also \(\mathfrak{D}_{5}/\beta^{2}\) contains the factor \(\xi^{2}\).
The trigonometric polynomial
\[h(f_{2}) =\mathscr{H}(f_{2})/(\beta^{2}\xi^{2})\] \[=\tilde{\beta}^{4}\xi^{2}\mu^{2}(4\eta^{2}-1)+2\tilde{\beta}^{3} \xi\mu\left[\lambda\gamma+\alpha\nu-2\eta(\alpha\lambda+\gamma\nu)\right]\] \[\quad+\tilde{\beta}^{2}(\alpha^{2}-\gamma^{2})\left[\lambda^{2}- \nu^{2}+\mu^{2}(4\eta^{2}-1)\right]\] \[\quad-2\mu\tilde{\beta}\tilde{\alpha}^{2}\xi^{2}\left[\tilde{ \alpha}(\lambda\eta-\nu)-3\eta e_{1}^{3}p_{1}\gamma\right]+\mu^{2}\tilde{ \alpha}^{2}\left[\gamma(1-2\eta e_{1})-\xi\eta\tilde{\alpha}\right]^{2}\] \[\quad-2\mu\tilde{\beta}p_{1}e_{1}(1-e_{1}^{2})\eta\gamma^{2} \left[3e_{1}\tilde{\alpha}\xi-\gamma(1-e_{1}^{2})\right]-(\alpha^{2}-\gamma^{ 2})(\nu\tilde{\alpha}+p_{1}e_{1}^{2}\gamma)^{2}\]
is of degree \(8\) in \((\cos f_{2},\,\sin f_{2})\).
## Appendix C Angular shift for trigonometric polynomials
Let
\[p(x,y)=\sum_{(i,j)\in\mathcal{K}}p_{i,j}x^{i}y^{j},\]
with \(x=\cos u,y=\sin u\) and \(\mathcal{K}\subset\mathbb{N}\times\mathbb{N}\) (non-negative \(2\)-index integers) be a trigonometric polynomial. We wish to write \(p(x,y)\) in terms of the variables \((z,w)\), where
\[z=\cos v,\quad w=\sin v,\qquad v=u-\alpha,\quad\alpha\in\mathbb{R}.\]
Writing \(c_{\alpha}\), \(s_{\alpha}\) for \(\cos\alpha\), \(\sin\alpha\), respectively, we have
\[x=c_{\alpha}z-s_{\alpha}w,\qquad y=s_{\alpha}z+c_{\alpha}w,\]
so that we obtain
\[q(z,w) =p(c_{\alpha}z-s_{\alpha}w,s_{\alpha}z+c_{\alpha}w)\] \[=\sum_{(i,j)\in\mathcal{K}}p_{i,j}\left[\sum_{h=0}^{i}\left( \begin{array}{c}i\\ h\end{array}\right)(c_{\alpha}z)^{h}(-s_{\alpha}w)^{i-h}\right]\left[\sum_{k=0 }^{j}\left(\begin{array}{c}j\\ k\end{array}\right)(s_{\alpha}z)^{k}(c_{\alpha}w)^{j-k}\right]\] \[=\sum_{(i,j)\in\mathcal{K}}p_{i,j}\sum_{\ell=0}^{i+j}\sum_{h+k= \ell}\left(\begin{array}{c}i\\ h\end{array}\right)\left(\begin{array}{c}j\\ k\end{array}\right)(-1)^{i-h}(c_{\alpha})^{h+j-k}(s_{\alpha})^{i-h+k}z^{h+k}w ^{i-h+j-k}\] \[=\sum_{(i,j)\in\mathcal{K}}p_{i,j}\sum_{\ell=0}^{i+j}\sum_{h=\max \{\ell-j,0\}}^{\min\{\ell,i\}}\left(\begin{array}{c}i\\ h\end{array}\right)\left(\begin{array}{c}j\\ \ell-h\end{array}\right)(-1)^{i-h}(c_{\alpha})^{2h+j-\ell}(s_{\alpha})^{i-2h+ \ell}z^{\ell}w^{i+j-\ell}.\]
If
\[\mathcal{K}=\{0,1,\ldots,m\}\times\{0,1,\ldots,n\},\]
introducing the coefficients
\[C_{i,j,\ell}=\sum_{h=\max\{\ell-j,0\}}^{\min\{\ell,i\}}\left(\begin{array}[] {c}i\\ h\end{array}\right)\left(\begin{array}{c}j\\ \ell-h\end{array}\right)(-1)^{i-h}(c_{\alpha})^{2h+j-\ell}(s_{\alpha})^{i-2h+ \ell},\]
we can write
\[q(z,w) =\sum_{i=0}^{m}\sum_{j=0}^{n}p_{i,j}\sum_{\ell=0}^{i+j}C_{i,j,\ell}z^{\ell}w^{i+j-\ell}=\sum_{r=0}^{m+n}\sum_{i=\max\{r-n,0\}}^{\min\{r,m\}}p_{i,r-i}\sum_{\ell=0}^{r}C_{i,r-i,\ell}z^{\ell}w^{r-\ell}\] \[=\sum_{r=0}^{m+n}\sum_{\ell=0}^{r}q_{\ell,r-\ell}z^{\ell}w^{r-\ell},\]
where
\[q_{\ell,r-\ell}=\sum_{i=\max\{r-n,0\}}^{\min\{r,m\}}p_{i,r-i}\,C_{i,r-i,\ell}.\]
|
2302.11168 | Influence of Gender Composition in Pedestrian Single-File Experiments | Various studies address the question of what factors are relevant to the
course of the fundamental diagram in single-file experiments. Some indicate
that there are differences due to group composition when gender is taken into
account. For this reason, further single-file experiments with homogeneous and
heterogeneous group compositions were conducted. A Tukey HSD test was performed
to investigate whether there are differences between the mean of velocity in
different density ranges. A comparison of different group compositions shows
that the effect of gender can only be seen, if at all, in a small density
interval. Regression analyses were also conducted to determine whether, at high
densities, the distance between individuals depends on the gender of the
neighboring pedestrians and to establish what human factors have an effect on
the velocity. An analysis of the distances between individuals at high
densities indicates that there is no effect of the gender of the neighboring
pedestrians. Taking into account additional human factors in a regression
analysis does not improve the model. | Sarah Paetzke, Maik Boltes, Armin Seyfried | 2023-02-22T06:20:50Z | http://arxiv.org/abs/2302.11168v1 | # Influence of Gender Composition in Pedestrian Single-File Experiments
###### Abstract
Various studies address the question of what factors are relevant to the course of the fundamental diagram in single-file experiments. Some indicate that there are differences due to group composition when gender is taken into account. For this reason, further single-file experiments with homogeneous and heterogeneous group compositions were conducted. A Tukey HSD test was performed to investigate whether there are differences between the mean of velocity in different density ranges. A comparison of different group compositions shows that the effect of gender can only be seen, if at all, in a small density interval. Regression analyses were also conducted to determine whether, at high densities, the distance between individuals depends on the gender of the neighboring pedestrians and to establish what human factors have an effect on the velocity. An analysis of the distances between individuals at high densities indicates that there is no effect of the gender of the neighboring pedestrians. Taking into account additional human factors in a regression analysis does not improve the model.
Keywords: Pedestrian dynamics, single-file movement, culture, gender effect, regression analysis
## 1 Introduction
In recent years, a number of studies have shown that fundamental diagrams of various geometrical settings such as stairs [1; 2], single-file experiments [3; 4; 5], corridors [6; 7; 8; 9; 10], or crossings [11; 12] vary [13; 14; 15; 16; 17]. However, it is not only the spatial structure that creates differences. When we look more closely at the specific structure, it becomes clear that there are also variations depending
on the experiment setup. The type of flow such as uni-, bi-, or multidirectional streams, human factors such as age, gender, height, and culture [3, 18, 19, 20, 21, 22, 23, 24, 25, 26], or external factors such as restricted visibility [27], different height adjustments due to smoke [28], motivation or instruction [18], rhythm or background music [29, 30], or properties of human movement such as step length and frequency [31, 32, 33, 34, 35, 36, 37] all affect the fundamental diagram.
The question of what factors are relevant to the course of the fundamental diagram in single-file experiments is not yet clear. It is also difficult to compare experiments partly owing to the combination of different human factors and partly because the measurement methods or the experimental scenarios vary, too. For instance, for one experiment, there might only be data in the low-density range whereas another experiment might also have data for the high-density range. Furthermore, it should be noted that often the problem arises that the fundamental diagrams represent a group that is homogeneous in one factor but different in terms of other factors. This problem was discussed in more detail in [38] where single-file school experiments were studied to analyze how human factors affect the fundamental diagram of pedestrian dynamics.
With respect to the effect of gender, the results of some existing studies can be summarized as follows. Subaih et al. [20] have shown that for densities higher than 1.0 m\({}^{-1}\) group compositions homogeneous in gender lead to higher speeds than a heterogeneous group composition with alternating order. But a comparison with data from other cultures and different ages raises the question of what other factors also need to be considered. In [21], using the data from the experiments introduced in [20], Subaih et al. have shown that the headway to the front and to the back is important, too. This result suggests that the arrangement by gender has an effect on the distances between pedestrians and must be taken into account in modeling the speed-density relation. While these findings indicate a significant contribution of gender, in Paetzke et al. [38] it was still concluded that gender could be neglected. This analysis is based on a multiple linear regression from experiments with heterogeneous group compositions.
To analyze these contradictory findings, further single-file experiments are performed for the present study. Four different group compositions, female, male, gender alternating, and gender random order are considered to investigate the following three hypotheses derived from the studies to date [20, 21, 38].
1. The speed-density relation depends on the gender composition of the group of test persons.
2. At high densities, the distance between individuals depends on the gender of the neighboring pedestrians.
3. The inclusion of additional human factors that were not previously included such as the weight, the exact height, and the gender of the previous pedestrian improves the multiple linear regression model developed in [38].
For the first hypothesis, the question is whether there are differences within the density-velocity relation between the mean values of the homogeneous and heterogeneous pedestrian group compositions when gender is taken into account. This has been tested in seven
density intervals using the Tukey HSD test. For the second hypothesis, simple linear regression analysis is used to determine whether there are differences between the group compositions at high densities. For the third hypothesis, a multiple linear regression analysis is performed with different human factors.
The second section of this paper describes the experimental setup, the measurement methods, the data preparation, and the experiments that are compared. Section 3 deals with the results and analysis of the hypotheses. A comparison of two different experiments based on the group composition is carried out and the regression analysis, which includes simple and multiple linear regression, is performed. The conclusions are presented in the last section and further research is proposed.
## 2 Materials and Methods
### Experimental setup
The subject of the present study is a one-dimensional single-file experiment performed within an experimental series [39] of the projects CroMa and CrowdDNA at the Mitsubishi Electric Halle in Dusseldorf, Germany in 2021. The oval path has a central line of total length \(l=14.97\) m and a width of \(w=0.8\) m. The middle radius is \(1.65\) m while the straight sections are \(2.3\) m long. The two measurement areas are highlighted in the background in the sketch on the left in Figure 1.
When the pedestrians' trajectories are projected into the central line of the oval, the complexity of the system is reduced to one dimension and, consequently, only the change in movement direction is taken into consideration [40]. Four different group compositions were considered in the experiment. Two homogeneous group compositions with respect to gender were chosen, male (m) and female (f). A third group is a heterogeneous group
Figure 1: Single-file experiment performed at the Mitsubishi Electric Halle in Düsseldorf, Germany in 2021. Figure a) shows the oval path with the lengths of the central line, the width of the oval path, the middle radius, the length of the straight sections, and two measurement areas highlighted in the background. Figure b) shows the experiment from above with the students wearing green caps with personal ID codes on top.
composition where the male and female participants are arranged in an alternating order m, f, m, f, etc. A fourth group is also a heterogeneous group composition, but with a random structure, as the male and female pedestrians were randomly distributed in the oval, for example, m, m, f, m, f, f, etc. With both homogeneous group compositions, ten experimental runs at different global densities were performed. The global density is adjusted by \(N\), the number of persons situated in the oval. For the runs, we chose \(N=4,N=8,N=16,N=20,N=24,N=32,N=36\), and \(N=40\). In the two experiments with heterogeneous group compositions, all test subjects participate at least once in a run at each density. This is not the case in the two homogeneous runs. The densities of the experiments with the heterogeneous group compositions correspond to the densities in the experiments with the homogeneous group compositions, but there are now a total of 25 different experimental runs instead of ten. Global densities \(\rho_{gl}=N/l\) for these cases were \(\rho_{gl}\in[0.27,2.67]\) m\({}^{-1}\).
For all parts, the runs have a duration of between two and three minutes. Two minutes were chosen for runs with \(N<32\) because, up to this density, the pedestrians have moved a long distance in the oval in the time considered. The test persons were instructed to walk behind each other without haste or overtaking. In total, 80 different pedestrians participated in the experiment with an equal ratio of male and female pedestrians. They all wear green caps with personal ID codes on top. These codes are used to extract the trajectories of different participants in several experimental scenarios and to assign personal information to a participant such as gender, age, shoulder width, weight, and height [41, 42].
Table 1 shows a detailed overview of the mean values and the standard errors for age, height, weight, and shoulder width for the four different groups: female, gender alternating, gender random order, and male. The average age of the participants is between 26 and 28 years, their average heights range between 1.70 m and 1.83 m, and their average weights are between 76.26 kg and 92.24 kg and, lastly, their average shoulder widths are between 0.43 m and 0.49 m.
### Measurement methods
The individual velocity, the Voronoi tessellation, and density are calculated on the basis of the one-dimensional trajectories obtained by tracking the head from the video recording.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & **Female** & **Gender alternating** & **Gender random order** & **Male** \\ \hline \(\overline{age}\pm\sigma\) in \(years\) & \(27.12\pm 8.11\) & \(27.74\pm 6.03\) & \(25.97\pm 5.14\) & \(26.42\pm 4.92\) \\ \(\overline{height}\pm\sigma\) in \(m\) & \(1.70\pm 0.08\) & \(1.75\pm 0.09\) & \(1.77\pm 0.11\) & \(1.83\pm 0.07\) \\ \(\overline{weight}\pm\sigma\) in \(kg\) & \(76.26\pm 21.16\) & \(92.24\pm 20.61\) & \(80.95\pm 20.88\) & \(88.74\pm 26.08\) \\ \(\overline{shoulder\ width}\pm\sigma\) in \(m\) & \(0.43\pm 0.03\) & \(0.45\pm 0.04\) & \(0.46\pm 0.05\) & \(0.49\pm 0.03\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The columns show a detailed overview of the mean values and the standard errors for age, height, weight, and shoulder width for the different groups: female, gender alternating, gender random order, and male.
For this case, \(x_{i}(t)\) describes the position of individual \(i\) at time \(t\). Pedestrian \(i+1\) is walking directly in front of and pedestrian \(i-1\) directly behind person \(i\). The Voronoi distance \(d_{V_{i}}(t)\) of pedestrian \(i\) at time \(t\) is calculated by
\[d_{V_{i}}(t)=\frac{1}{2}\cdot(x_{i+1}(t)-x_{i-1}(t))\, \tag{1}\]
which is half of the distance between the centers of the heads \(x_{i+1}(t)\) and \(x_{i-1}(t)\). The density is calculated by \(\rho_{i}(t)=\frac{1}{d_{V_{i}}(t)}\). The individual velocity is calculated by
\[v_{i}(t)=\frac{x_{i}(t+\frac{\Delta t}{2})-x_{i}(t-\frac{\Delta t}{2})}{\Delta t}. \tag{2}\]
As explained in [38], the value \(\Delta t=0.8\) s is a good assumption. The intended direction and negative velocities of the pedestrians are also included. Both straight sections of the oval are used as measurement areas.
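The following sketch illustrates how these quantities can be computed from the head trajectories; it is a minimal illustration only, assuming the 1D positions along the central line are stored frame by frame and a video frame rate of 25 fps (the actual value is not stated here), and the function names are ours, not the authors' code.

```python
import numpy as np

FPS = 25    # assumed video frame rate (frames per second)
DT = 0.8    # time interval for the velocity in seconds, as in Eq. (2)

def voronoi_distance(x_prev, x_next):
    """Eq. (1): half the distance between the heads of follower i-1 and leader i+1."""
    return 0.5 * (x_next - x_prev)

def density(x_prev, x_next):
    """Individual density as the inverse of the Voronoi distance."""
    return 1.0 / voronoi_distance(x_prev, x_next)

def velocity(x_i, frame, dt=DT, fps=FPS):
    """Eq. (2): central-difference velocity of pedestrian i over dt seconds."""
    half = int(round(dt * fps / 2))
    return (x_i[frame + half] - x_i[frame - half]) / dt

# x[i] holds the 1D positions (in m) of pedestrian i along the central line, frame by frame:
# v = velocity(x[i], frame=500); rho = density(x[i - 1][500], x[i + 1][500])
```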
### Data processing
For the various experimental runs, only the data in a steady state are considered. The range was determined by the CUSUM algorithm [43]. To ensure the independence between two successive measurement values, such as for the velocity, autocorrelation was used to determine one value for the time gaps to be considered in an experimental run. On average, the time gap between these measurement values is about 1.38 seconds. For each group composition, approximately 3,000 data points are considered for the analysis.
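A possible implementation of this subsampling step is sketched below; it assumes a velocity time series per run after the CUSUM-based steady-state cut and uses the lag at which the normalized autocorrelation drops below a threshold (1/e here, an assumption on our part) as the time gap between retained values.

```python
import numpy as np

def decorrelation_lag(series, threshold=1 / np.e):
    """Smallest lag (in frames) at which the normalized autocorrelation falls below threshold."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]
    below = np.where(acf < threshold)[0]
    return int(below[0]) if below.size else len(x)

def subsample(series, fps=25):
    """Keep only values separated by at least the decorrelation time (about 1.38 s here)."""
    lag = max(decorrelation_lag(series), 1)
    return np.asarray(series)[::lag], lag / fps
```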
### Experiments in comparison
The single-file experiments in Dusseldorf, Germany, performed for the present study, are compared with the single-file experiments conducted by Subaih et al. [20, 44] at the Arab American University in Palestine. Therefore, the data from Subaih's study, already including the velocity and density, are used. In this section, only selected features of the experiment conducted by Subaih et al. are described. For further details, we refer to [20]. In Subaih's experiment, the oval path has a central line of total length \(l=17.30\) m and a width of \(w=0.6\) m, indicated by markings on the floor. The straight sections are 3.15 m long. In total, 47 different pedestrians participated in the experiment with 26 female and 21 male students. Their heights are within the range of 1.52 m to 1.84 m. On average, the height of the men is 1.75 m and that of the women 1.61 m. Their age is between 18 and 23 years. For the homogeneous group compositions including both male (UM) and female (UF), the number of persons situated in the oval is \(N=14\) and \(N=20\). For the heterogeneous group composition with a gender alternating order (UX), there are also \(N=24\) and \(N=30\). The global densities are 0.81 m\({}^{-1}\), 1.16 m\({}^{-1}\), 1.38 m\({}^{-1}\) and 1.73 m\({}^{-1}\), respectively. Compared to the experiments in Dusseldorf, Germany, the participants in Palestine are younger and shorter than the pedestrians in Germany. In both experiments, the experimental scenarios of the homogeneous group compositions in terms
of gender were performed first. Furthermore, the same measurement methods are used in both experiments.
## 3 Results and Analysis
### Comparison of group compositions for the experiments
performed in Germany
In order to check the first hypothesis, we conducted an analysis for different density intervals to see whether there are systematic differences in the velocity between homogeneous and heterogeneous group compositions with respect to gender. First, a visual comparison was performed. Figure 2 shows a density vs. velocity fundamental diagram for the groups female, male, gender alternating, and gender random order in Germany with binned data so that the trend and possible differences can be seen more clearly.
The data suggest that the question of whether the fundamental diagrams correspond or not depends on which density interval is considered. Up to a density of about 1.15 m\({}^{-1}\), there seems to be no systematic variation in equality and inequality between the group compositions. For densities larger than 1.15 m\({}^{-1}\) and smaller than 2.0 m\({}^{-1}\), it could be assumed that the velocity is higher for homogeneous group compositions. For densities higher than 2 m\({}^{-1}\), the course of the individual curves looks very similar. For a more detailed analysis, the mean values of the velocity are compared for each group composition in seven small density intervals \([0.15,0.25]\), \([0.55,0.65]\), \([0.85,0.95]\), \([1.05,1.15]\)
Figure 2: Fundamental diagram of density vs. velocity for the groups female, male, gender alternating and gender random order in Germany with binned data within 0.1 intervals.
\([1.25,1.35]\), \([2.05,2.15]\), and \([3.05,3.15]\), which are highlighted in grey in the fundamental diagram in Figure 3. First, the Kolmogorov-Smirnov test was conducted to compare the velocity distributions of all group compositions; it indicated differences between almost all distributions in all intervals. Therefore, a statistical test, the Tukey HSD (honest significant difference) test, was then performed to check whether the means are significantly different from each other. Here, all group compositions are directly compared pairwise. The test takes into account that the sample sizes are approximately equal. If the Tukey test yields a p-value larger than 0.05, the means of the group compositions compared are considered equal.
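A sketch of this per-interval comparison is given below; it assumes a tidy table with columns density, velocity, and group, and uses statsmodels' pairwise_tukeyhsd as one standard implementation of the Tukey HSD test (the tooling actually used by the authors is not stated).

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

INTERVALS = [(0.15, 0.25), (0.55, 0.65), (0.85, 0.95), (1.05, 1.15),
             (1.25, 1.35), (2.05, 2.15), (3.05, 3.15)]

def tukey_per_interval(df):
    """Pairwise Tukey HSD of the mean velocity between group compositions per density interval."""
    results = {}
    for lo, hi in INTERVALS:
        sel = df[(df["density"] >= lo) & (df["density"] <= hi)]
        if sel["group"].nunique() < 2:
            continue
        results[(lo, hi)] = pairwise_tukeyhsd(sel["velocity"], sel["group"], alpha=0.05)
    return results

# df = pd.read_csv("single_file_measurements.csv")   # columns: density, velocity, group
# for interval, res in tukey_per_interval(df).items():
#     print(interval); print(res.summary())           # reject=False means 'significantly equal'
```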
Table 2 shows the mean values and the standard deviation for the velocity of each group in Germany and Palestine in all seven intervals. The results of the experiments performed in Palestine by Subaih et al. will be discussed in the comparison in the next subsection.
For every interval, group comparisons where the p-value is larger than 0.05, and whose mean values are therefore equal, are shown in the same color. In the first four intervals in the range from 0.15 m\({}^{-1}\) to 1.15 m\({}^{-1}\), both the group with the highest speed and the group compositions that are rated as equal by the Tukey test change from interval to interval. First, the two means are significantly equal for female and gender random order. In the second interval, the test results show equality between male, gender alternating, and gender random order, then between female, male, and gender random order and, finally, in the fourth interval, between female and gender random order as well as male and gender alternating. Consequently, there is no systematic equality or inequality between different group compositions in the low-density regime. In the density range between 1.25 m\({}^{-1}\) and 1.45 m\({}^{-1}\), the mean values of the homogeneous group compositions are equal to
Figure 3: Fundamental diagram of density vs. velocity for the groups female, male, gender alternating, and gender random order in Germany with binned data that represent only the mean values of the velocity. Furthermore, the individual intervals \([0.15,0.25]\), \([0.55,0.65]\), \([0.85,0.95]\), \([1.05,1.15]\), \([1.25,1.35]\), \([2.05,2.15]\), and \([3.05,3.15]\) are highlighted in grey areas.
or higher than those of the heterogeneous group compositions. Between \(1.45\) m\({}^{-1}\) and approximately \(2.15\) m\({}^{-1}\), the two means are significantly equal for all group compositions except gender random order. For the group gender random order, the velocity is the lowest of all groups. At a density of \(2.15\) m\({}^{-1}\) and above, there is equality for all group compositions.
In addition to Table 2, the boxplots in Figure 4 also illustrate the results of the Tukey test. The left-hand figure a) shows the first interval \([0.15,0.25]\) at which there is only
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \([0.15,0.25]\) & \([0.55,0.65]\) & \([0.85,0.95]\) & \([1.05,1.15]\) & \([1.25,1.35]\) & \([2.05,2.15]\) & \([3.05,3.15]\) \\ \hline Female, Germany & \(1.07\pm 0.04\) & \(1.10\pm 0.05\) & \(0.62\pm 0.14\) & \(0.58\pm 0.12\) & \(0.41\pm 0.14\) & \(0.10\pm 0.07\) & \(0.06\pm 0.06\) \\ \hline Male, Germany & \(1.17\pm 0.09\) & \(1.11\pm 0.10\) & \(0.61\pm 0.19\) & \(0.48\pm 0.23\) & \(0.44\pm 0.21\) & \(0.11\pm 0.09\) & \(0.06\pm 0.08\) \\ \hline Gender alternating, Germany & \(1.17\pm 0.16\) & \(1.14\pm 0.15\) & \(0.74\pm 0.24\) & \(0.43\pm 0.22\) & \(0.30\pm 0.19\) & \(0.10\pm 0.07\) & \(0.05\pm 0.06\) \\ \hline Gender random order, Germany & \(1.26\pm 0.14\) & \(1.11\pm 0.07\) & \(0.54\pm 0.30\) & \(0.55\pm 0.15\) & \(0.35\pm 0.16\) & \(0.07\pm 0.07\) & \(0.04\pm 0.06\) \\ \hline \hline Female, Palestine & & & \(1.14\pm 0.01\) & \(0.80\pm 0.11\) & \(0.68\pm 0.07\) & & \\ \hline Male, Palestine & & & \(0.94\pm 0.20\) & \(0.81\pm 0.13\) & \(0.74\pm 0.13\) & & \\ \hline Gender alternating, Palestine & & & \(1.00\pm 0.18\) & \(0.70\pm 0.14\) & \(0.50\pm 0.18\) & \(0.11\pm 0.05\) & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean values and the standard deviation for the velocity (\(\overline{v}\pm\sigma\)) in seven different density intervals for different group compositions in Germany and Palestine. Equal colors in an interval indicate equality between the corresponding groups.
Figure 4: Boxplots for the velocity for different group compositions in Germany. Equal letters above the boxes indicate equality of the mean values of the velocity between the corresponding groups. The first interval \([0.15,0.25]\) is represented by a) and the last one \([3.05,3.15]\) by b).
equality between the male and gender alternating groups, and the right-hand one b) shows the last interval \([3.05,3.15]\), at which all group compositions are equal. This equality or inequality is indicated by the letters above the boxes; the same letters represent equality between group compositions. In addition, the boxplots show, for the individual group compositions, the minimum and maximum values, the median within the box, and the lower and upper quartiles as the boundaries of the box. The darker the color of the box, the higher the median.
### Comparison with the experiments performed in Palestine
In this section, data from the experiments in Palestine [20, 44] are compared with those in Dusseldorf, Germany [45]. First, the binned data are plotted in a fundamental diagram up to a density of 2.5 m\({}^{-1}\) (see Figure 5). The left-hand diagram illustrates the values for Germany and the right-hand one those for Palestine.
A visual comparison based on Figure 5 shows that in the density interval of [0.6, 1.5] m\({}^{-1}\), the mean velocity is higher in the experiments performed in Palestine than in Germany. For densities higher than 1.5 m\({}^{-1}\), the heterogeneous group in Palestine approaches the mean speed of the groups in Germany. When we only consider the data from Palestine, we see that the heterogeneous group composition in Palestine is faster for densities less than 0.8 m\({}^{-1}\). With increasing density, the homogeneous group compositions of males and females show higher means than the heterogeneous group composition.
To enable us to compare the data of the experiments in Germany and Palestine based on a statistical test, the Tukey HSD test was used again. The corresponding mean values and the standard deviation of the velocity are shown in Table 2. For the experiments performed in Palestine, only data for the intervals \([0.85,0.95]\) m\({}^{-1}\) to \([2.05,2.15]\) m\({}^{-1}\) are available. Up to a density of approximately 1.0 m\({}^{-1}\), the means are significantly equal for all group compositions in Palestine, as well as for gender alternating in Germany, and for the group compositions of female, male, and gender random order in Germany.
Figure 5: Relation of density and velocity for the experiments in a) Germany and in b) Palestine by binned data of the mean values of the velocity for different group compositions.
In the interval \([2.05,2.15]\) m\({}^{-1}\), the two means are significantly equal between the German group compositions and between all group compositions in Palestine. Above a density of 1.25 m\({}^{-1}\), the heterogeneous group composition in Palestine approaches the group compositions in Germany. Only in the interval \([1.25,1.35]\) m\({}^{-1}\) are the homogeneous group compositions equal in Palestine and in Germany and have a higher mean value of the velocity than the heterogeneous groups in the corresponding countries. In other density intervals, there are no significant differences in Palestine and no systematic differences in Germany. Figure 6 shows the results of the Tukey test for the comparison between Germany and Palestine in the interval of \([1.25,1.35]\) m\({}^{-1}\).
### Gender differences in distances of neighboring pedestrians
A simple linear regression analysis for the speed-distance relation of each individual \(i\) was conducted to determine whether the distance between individuals depends on the gender of the neighboring pedestrians at high densities. The model is:
\[v_{i,l}=\beta_{0_{i}}+\beta_{1_{i}}\cdot(d_{V})_{i,l}+\epsilon_{i,l}\, \tag{3}\]
where \(l=1,...,n_{i}\) and \(n_{i}\) is the number of individual observations. \(v_{i,l}\) is the individual velocity, the predicted variable, \(\beta_{0_{i}}\) is the intercept, \(\beta_{1_{i}}\) is the regression coefficient, \((d_{V})_{i,l}\) is the Voronoi distance as an independent variable, and \(\epsilon_{i,l}\) describes the random experimental error. For a good adjustment, the values \(\beta_{0_{i}}\) and \(\beta_{1_{i}}\) need to be estimated. This results in the following equation:
\[\hat{v}_{i,l}=\hat{\beta}_{0_{i}}+\hat{\beta}_{1_{i}}\cdot(d_{V})_{i,l}. \tag{4}\]
Figure 6: Boxplots for the velocity for different group compositions in Germany and Palestine in the interval of \([1.25,1.35]\) m\({}^{-1}\). The same letters above the boxes indicate equality for the mean values of the velocity between the corresponding group compositions.
Transforming formula (4) for \(\hat{v}_{i,l}=0\) yields the minimum Voronoi distance for each individual:
\[(d_{V})_{i,min}=-\frac{\hat{\beta}_{0_{i}}}{\hat{\beta}_{1_{i}}}. \tag{5}\]
\(\hat{\beta}_{1_{i}}\) can be interpreted as the reaction time for acceleration and braking.
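The per-individual fit of Eq. (3) and the derived quantities of Eq. (5) can be sketched as follows, assuming arrays of Voronoi distances and velocities per pedestrian (illustrative names; least squares via numpy.polyfit, not the authors' code).

```python
import numpy as np

def fit_individual(d_voronoi, velocity):
    """Fit v = b0 + b1 * d_V for one pedestrian and return (d_min, b1), cf. Eqs. (3)-(5)."""
    b1, b0 = np.polyfit(d_voronoi, velocity, deg=1)   # slope first, then intercept
    d_min = -b0 / b1                                   # Eq. (5): distance at zero velocity
    return d_min, b1

# per_person = {pid: (d_array, v_array), ...}          # one entry per pedestrian
# fits = {pid: fit_individual(d, v) for pid, (d, v) in per_person.items()}
# d_mins = np.array([f[0] for f in fits.values()]); print(d_mins.mean(), d_mins.std())
```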
In Table 3, the columns provide an overview of the mean values and the standard deviation for \((d_{V})_{i,min}\) and \(\hat{\beta}_{1_{i}}\) for the four different group compositions in Germany. Here, the values appear to be more or less identical.
In addition to obtaining the data shown in Table 3, we also compared the different values for \((d_{V})_{i,min}\) and \(\hat{\beta}_{1_{i}}\) using a statistical Tukey test. For \((d_{V})_{i,min}\) and \(\hat{\beta}_{1_{i}}\), for each group composition, the p-value is larger than 0.05 so the means are significantly equal for all four group compositions female, male, gender alternating, and gender random order in Germany.
Figure 7 illustrates the results for \((d_{V})_{i,min}\) in a) and for \(\hat{\beta}_{1_{i}}\) in b) with boxplots based on the Tukey test for different group compositions female, male, gender alternating, and gender random order in Germany. The same letters above the boxes indicate equality of the mean values between the corresponding groups. As the letters
\begin{table}
\begin{tabular}{c c c c c} \hline & **Female** & **Male** & **Gender** & **Gender random** \\ & & & **alternating** & **order** \\ \hline \((d_{V})_{i,min}\pm\sigma\) & \(0.31\pm 0.07\) & \(0.34\pm 0.08\) & \(0.34\pm 0.08\) & \(0.36\pm 0.08\) \\ \(\overline{\hat{\beta}_{1_{i}}}\pm\sigma\) & \(0.96\pm 0.22\) & \(0.95\pm 0.23\) & \(0.94\pm 0.18\) & \(0.90\pm 0.21\) \\ \hline \end{tabular}
\end{table}
Table 3: The columns provide an overview of the mean values and standard error for \((d_{V})_{i,min}\) and \(\hat{\beta}_{1_{i}}\) for the different group compositions.
Figure 7: Boxplots based on a Tukey test for \((d_{V})_{i,min}\) in a) and for \(\hat{\beta}_{1_{i}}\) in b) for different group compositions of female, male, gender alternating, and gender random order in Germany. The same letters above the boxes indicate equality for the mean values between the corresponding groups.
are all the same, this indicates that there is equality between the means. As also shown in the previous analysis in this subsection 3.3, it is confirmed that there are no differences between the group compositions at high density. Consequently, the second hypothesis, namely that at high densities the distance between individuals depends on the gender of the neighboring pedestrians, cannot be confirmed.
### Human factors in fundamental diagrams
In [38], the multiple linear regression analysis showed that the headway has the most significant effect on the velocity, while other human factors such as gender only have a small effect or can be neglected.1 In this section, we will determine whether taking additional human factors into consideration leads to a more sensitive model. For this purpose, additional factors such as the weight, age, exact height, and gender of the previous pedestrian are taken into account.
Footnote 1: Further details on the procedure and the structure of the model can also be found in the publication cited.
Accordingly, in the new model, the dependence of the velocity on the Voronoi distance, height, gender, age, weight, and the gender of the previous pedestrian is studied. The variable _gender.prev_ is used for the latter, and, for all other individual effects, for example, motivation, attention, or excitement, as described in [38], the variable _alloence_ is used. It was taken into account that there could be strong correlations between certain human factors. A measurement of the correlation of the factors considered shows obvious dependencies such as the correlation of gender and body height (\(p=0.66\)), gender and shoulder width (\(p=0.71\)), or weight and shoulder width (\(p=0.75\)). In addition, in this analysis one model is sufficient for all four groups, female, male, gender random order, and gender alternating, as the previous results did not show any significant differences between the groups. Furthermore, the new model was applied to a low density, \(\rho_{gl}\leq 0.75\), and a high density, \(\rho_{gl}>0.75\). Taking into account the results of the previous sections, the low-density region is the one in which there is no systematic pattern of equality and inequality between the mean values of the velocities of the different group compositions.
First, the model evaluation using Akaike's Information Criterion (AIC) is applied to the model introduced in [38]. Step by step, it was decided what factors should be considered in order to obtain the best possible model with the fewest factors without degrading the model. The AIC procedure indicates that gender, age, height, and weight can be omitted. Accordingly, the resulting model is as follows:
\[\text{Model: }v_{m}=\beta_{0}+\beta_{1}\cdot d_{V_{m}}+\beta_{2}\cdot gender. prev_{m}+\sum_{i=1}^{N}\beta_{3i}\cdot alloence_{m}+\varepsilon_{m}\;, \tag{6}\]
where \(m=1,...,n\) and \(n\) is the number of all observations of all individuals, \(v_{m}\) is the velocity, and \(\text{alloence}_{m}=1\) for all \(m\) belonging to individual \(i\) and \(0\) for all other \(m\). \(\beta_{3i}\) is an individual coefficient across all measurement points for each pedestrian. The new model in (6) is applied to study the effect of the variables Voronoi distance \(d_{V}\), gender.prev, and alloence on the velocity. The ANOVA table shows that the p-values for all variables are less than 0.05. This means that all variables considered
in the new model have an effect on the individual velocity. Figure 8 shows the result using pie charts for low and high density and for a combination of these.
Again, the headway has the greatest effect on the velocity, followed by all other unknown individual effects. The effect of the gender of the pedestrian in front of the person observed is less than 1% and can therefore be neglected. Furthermore, the model is better suited to lower densities than to high densities because the velocity is affected more by the Voronoi distance at a low density. At a high density, the effect of the Voronoi distance is decreased, since variations in the range of millimeters occur due to the swaying of the heads. With regard to the third hypothesis, it can be concluded that this hypothesis is false: the analysis provides the same model as before in [38]. Strictly speaking, taking into account the gender of the pedestrian in front changes the model. However, since the effect is so small, less than 1%, and therefore negligible, the model is effectively identical.
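A sketch of how a model of the form of Eq. (6) can be fitted and decomposed with an ANOVA table is given below; it assumes a DataFrame with columns v, d_voronoi, gender_prev, and person_id, where the per-person dummy plays the role of the alloence terms, and uses statsmodels' formula API (an illustration under these assumptions, not the authors' code).

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def fit_eq6(df):
    """Velocity explained by headway, gender of the predecessor, and a per-person dummy."""
    model = smf.ols("v ~ d_voronoi + C(gender_prev) + C(person_id)", data=df).fit()
    return model, anova_lm(model, typ=1)

# df = pd.read_csv("observations.csv")   # columns: v, d_voronoi, gender_prev, person_id, density
# for name, part in [("low", df[df.density <= 0.75]), ("high", df[df.density > 0.75])]:
#     model, table = fit_eq6(part)
#     print(name); print(table)          # sum-of-squares shares correspond to the pie charts
```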
## 4 Conclusions
The analysis of the single-file experiments conducted by Subaih et al. [20] in Palestine showed that in a density range around 1.0 m\({}^{-1}\) and above, the velocity of groups that are homogeneous with respect to gender is higher than that of heterogeneous groups. In order to test whether this result could be reproduced, further single-file experiments with homogeneous and heterogeneous group compositions were performed in Germany.
The comparison of different group compositions with respect to gender in different density intervals in the experiment with test persons from Germany could be summarized as follows. At low densities, the comparison shows no systematic variation, neither with respect to the equality of the mean values of the speed between the group compositions nor with respect to the question of which group is faster. Only in a small density interval around 1.4 m\({}^{-1}\) do homogeneous groups differ from the heterogeneous groups and show a
Figure 8: Different effects on the individual velocity based on the ANOVA table in pie charts for a low density, \(\rho_{gl}\leq 0.75\), a high density, \(\rho_{gl}>0.75\), and for a combination of low and high density.
higher velocity. Between densities of 1.5 and 2.15 m\({}^{-1}\), the differences lessen and the mean value of the speed of the homogeneous groups approaches the mean of the heterogeneous gender alternating group. The gender random order group is the slowest group in this interval. Finally, at high densities, the mean values of the velocities of all groups are equal.
In comparison to the results of the experiments performed in Palestine, there is a certain correspondence but not in every detail. No systematic variation between the homogeneous and heterogeneous groups can be observed for densities lower than 0.8 m\({}^{-1}\). The difference between homogeneous and heterogeneous groups around a density of 1.4 m\({}^{-1}\) could be reproduced, but this is less pronounced in the experiments performed in Germany. Moreover, the velocity in the density interval from 1 m\({}^{-1}\) to 1.5 m\({}^{-1}\) is higher in Palestine than in Germany. For higher densities, only data for the heterogeneous group in Palestine are available and the mean speed of this group approaches the mean of the German groups.
Therefore, the first hypothesis, namely that the speed-density relation depends on the gender composition of the group of test persons, is proven to be correct. However, a closer look shows its weak relevance to the experiment in Germany. The difference can only be seen in a narrow density interval and it is small. It cannot be ruled out, however, that the relevance of the effect is stronger in other cultures.
With respect to the relevance of the effect, it should be noted that the verification of the hypothesis depends on the test method as well as on the data preparation. Obviously, the size of the binning intervals has an effect on the data. Depending on the size of the individual density intervals selected, the systematic variation described above could no longer be seen with the different test methods; this was already the case for intervals of 0.2. Methods other than the Tukey test, such as the t-test, give similar results. However, tests with a high sensitivity, such as the Kolmogorov-Smirnov test, lead to no correspondence of the velocity distributions in almost all density intervals and would lead to a rejection of the first hypothesis. For all density intervals, the differences between the mean speeds of the different group compositions are smaller than the standard deviation.
For the second hypothesis, a simple linear regression analysis was performed. We used these results to derive the values for the minimal distance \((d_{V})_{i,min}\) and for the reaction time \(\hat{\beta}_{1_{i}}\). A comparison of the mean values of \((d_{V})_{i,min}\) and \(\hat{\beta}_{1_{i}}\) using the Tukey HSD test shows that there are no discernible differences between the four group compositions female, male, gender alternating, and gender random order in Germany. Thus, it can be verified that there are no discernible differences between the reaction times of the different group compositions, and the second hypothesis, namely that at high densities the distance between individuals depends on the gender of the neighboring pedestrians, cannot be confirmed.
Finally, the hypothesis that the inclusion of additional human factors not previously included, such as the weight, the exact height, and the gender of the neighboring pedestrian, improves the multiple linear regression model developed in [38] is considered. It can be concluded that this hypothesis is false: the analysis provides the same model. Strictly speaking, taking into account the gender of the pedestrian in front makes a difference to the model. However, since the effect is so small, less than 1%, and therefore negligible, the model is effectively identical.
For further research, the factor of culture could be further investigated. The reason for this is that when comparing the data between Germany and Palestine, we see that the velocity is higher in Palestine. It is unclear where this difference comes from, so more data from other cultures are needed. For Palestine, no data in the higher density range are available to date, so no further conclusions can be derived about the further course of the velocity at present.
###### Acknowledgements.
This study is based on a one-dimensional single-file experiment that was performed in 2021 at the Mitsubishi Electric Halle in Dusseldorf, Germany. Many thanks to Mohcine Chraibi, Anna Sieben, and Mira Beermann, who all helped with the conceptualization of the study and with the implementation of the experiments.
The publication costs are funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) with grant number 49111148. The experiments were financially supported by the German Federal Ministry of Education and Research (BMBF) within the project CroMa (Crowd Management in Verkehrsinfrastrukturen / Crowd Management in transport infrastructures) under grant number 13N14530 to 13N14533 and by the European Union's Horizon 2020 research and innovation program within the project CrowdDNA under grant agreement number 899739.
### Ethics Statement
Informed consent was obtained from all subjects involved in the study.
The data including videos and trajectories used for this study are publicly available in the following archive [45].
###### Author Contributions
Conceptualization, S.P.; methodology, S.P.; software, S.P.; validation, S.P., M.B. and A.S.; formal analysis, S.P.; investigation, S.P.; data curation, S.P.; writing--original draft preparation, S.P.; writing--review and editing, M.B. and A.S.; visualization, S.P.; supervision, M.B. and A.S. All authors have read and agreed to the published version of the manuscript.
|